The Internet of Things has created a series of new ethical dilemmas that call for a renewed look at the rules and regulations guiding companies as they embrace and implement IoT solutions, including smart devices and apps that gather data from consumers.
The issues of automation, data privacy, individual rights, and security all play a role in this emerging set of guidelines, but before we regulate, we need more time to understand the ins and outs of these larger questions.
Who is responsible for the behavior of automated devices?
Let’s say an automated car is driving down a roadway, and a pedestrian steps into its path. To avoid the pedestrian, the car must crash into an obstacle that might injure or kill its occupant. If it does not take evasive action, it may seriously injure or kill the pedestrian.
What does the car do? How does it decide? In the aftermath, who is responsible for that choice? Is it the car’s occupant and owner, or the manufacturer who programmed the intelligence in the vehicle to weigh choices?
Yes, this is an extreme example, but it illustrates the ethical dilemma that anyone automating devices and processes must face: what if the smart device has to make a tough choice? Neither option is a good one, but a choice must be made.
This could arise in smart cars, manufacturing, smart homes, and many more applications. There is no clear line yet about how these decisions should be weighed, and there are no regulations in place.
It’s likely these regulations will arise as situations develop and are legally tested. In the meantime, a baseline must be established.
What data is private and what can be shared? With whom?
This issue has already arisen in some ways. In one homicide case, lawyers demanded the transcripts from an “always listening” Alexa device. In another investigation, the FBI asked Apple to hack an iPhone and retrieve data it wanted as evidence.
If you as an individual put one of these devices in your home, are you authorizing constant surveillance? Can what you said in the privacy of your own home be used against you, or do you have an expectation of privacy?
Apple answered these questions by simply telling the FBI no. The process was not as simple as just “hacking an iPhone”: to bypass the phone’s native security, a new operating system would need to be created. The FBI eventually got into the phone anyway, but without Apple’s help. The tech company has decided that its users’ privacy overrides law enforcement’s need to know.
Amazon answered in a similar manner, but before the case could get far, the suspect handed over the records voluntarily, and Amazon then complied.
In these cases, it seems like the tech companies have decided that customers should have an expectation of privacy, and while the law has not been fully tested, it seems they will get their wish.
These, however, are the more extreme cases. What about app data and personal information, including location, voluntarily shared with Google and other services? Once we agree to the terms and conditions of location services, exactly what are we making public?
Workout apps like Strava and Map My Run/Ride can easily reveal a user’s location at various times through their social media feeds and the information they have shared there, at least if they have chosen to make it public. Even “private” or “friends only” posts can be discovered through a sneaky friend request or by gaining access to an authorized account.
Medical data, health data, and data gathered from devices or apps like Progressive Snapshot all fall into question here. Can your speeding or texting-while-driving habits be reported to law enforcement in the name of safety? Or is that information exclusively for use by your insurance provider?
The gray area here is perhaps larger than the black or the white. We all want to be protected whenever possible from others who might cause us harm. However, how far can we go to gather that data? This area is likely to be a topic for courts for years to come.
Community Good vs. Individual Rights
Is it good for us to know the possible causation of certain diseases or behaviors based on lifestyle and common choices? When a phone or app malfunctions, is it in the interest of all users of that device to have that data? Certainly.
However, if an individual does not want to share personal data or other information about themselves with software developers, companies, or even government entities, can we compel them to do so?
In most cases the answer is no, but not in all. If a person is infected with a potentially contagious disease, or is a known criminal or predator guilty of certain crimes, it is of course in the interest of public safety to have that information available.
Even when it comes to simple apps like Siri, is it worth it to give up some privacy to let the system learn from you and become a better digital assistant? What of Apple’s new feature in Safari preventing cross-site tracking, making some tools marketers have been using for years obsolete?
If individuals saw that sharing some data would be better for everyone overall, would they be more inclined to do so?
The questions surrounding the ethics of the Internet of Things are far more numerous than any answers we have so far. Until more time passes and more of these issues have been tested, both legally and morally, they will likely remain unanswered. It will take time to build a new code of conduct around these issues, and we can all play a role in those discussions.