“There are just too many things we have to think about every day, too many new things we have to learn. New styles, new information, new technology, new terminology … But still, no matter how much time passes, no matter what takes place in the interim, there are some things we can never assign to oblivion, memories we can never rub away. They remain with us forever, like a touchstone.” – Haruki Murakami, “Kafka on the Shore”
According to a recent Microsoft report, users of digital assistants, such as Amazon’s Alexa and Apple’s Siri, continue to weigh convenience over potential privacy concerns. Eighty percent report being satisfied with the utility these devices provide; only about half that share (41%) are concerned about the security of the data the devices collect. As I have written previously, this is a tradeoff every one of us must weigh in the digital age, and there are no right or wrong answers. But there are informed and uninformed decisions, and I suspect these survey respondents did not fully consider how the data they feed into their virtual assistants could be used. Once your data is fed into algorithms, the chain of ownership is broken and you lose control. For children, that is an unfair burden as well as a potential security risk.
Even if you trust the companies that collect your data and the algorithms that analyze and apply it, there remains the risk of hackers gaining access to it. We may understand what a criminal hacking group wants with our credit card numbers and identity information, but no one can be sure what harm will result from the exploitation of AI analysis of our behavior, our biometrics, and other sensitive information. State actors could use these details to steal secrets, interfere in elections, or manipulate or blackmail officials. Repressive states are already using advanced hacking tools to target dissidents and other groups.
There are also more immediate reasons to be wary of digital assistants. The business model behind them depends on continual improvements in their accuracy and intelligence. The data that comes in is assimilated into the machine-learning algorithm, helping the AI avoid past mistakes and make better predictions. In a recurring loop, the consumer reaps the benefits of the technology while continuously providing material for its improvement. Around the time of the Microsoft report cited above, it was revealed that Amazon has a dedicated team of employees who listen to voice recordings made by Alexa in order to train the software to perform better. Amazon’s response was to emphasize that it reviews only “an extremely small sample” of recordings to improve the customer experience. Disturbingly, however, even users who opted out of having their recordings fed back into the system could still be subject to this manual review process. This month Facebook also admitted paying contractors to listen to and transcribe Messenger conversations. Then Apple confessed that human employees were listening to Siri recordings, reportedly including users having sex. In other words, the information you give to a digital assistant isn’t just going into the black box of an AI. It could very well be replayed by other human beings, possibly in a way that is traceable back to you.
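To make that loop concrete, here is a deliberately simplified sketch in Python. It is not any vendor’s actual pipeline; the toy model, the sampling rate, and all names are illustrative assumptions. It shows only the pattern described above: the model serves every request, a small random sample of interactions is queued for human review, and the reviewed interactions are folded back into the training data.

```python
import random

REVIEW_SAMPLE_RATE = 0.01  # the "extremely small sample" routed to human reviewers


class ToyTranscriber:
    """A stand-in for a speech model; real systems are vastly more complex."""

    def __init__(self):
        self.examples = []  # (audio, transcript) pairs absorbed so far

    def transcribe(self, audio):
        # A real model would run inference here; we just echo a guess.
        return f"best guess for {audio!r}"

    def fit(self, new_examples):
        # Once a clip is absorbed here, it shapes all future predictions
        # and can no longer be cleanly separated back out of the model.
        self.examples.extend(new_examples)


def handle_request(audio, model, review_queue):
    transcript = model.transcribe(audio)  # the user gets their answer
    if random.random() < REVIEW_SAMPLE_RATE:
        review_queue.append((audio, transcript))  # a human may replay this clip
    return transcript


model = ToyTranscriber()
review_queue = []
for clip in ["play some jazz", "what's the weather tomorrow"]:
    handle_request(clip, model, review_queue)

# Periodically, reviewer-corrected interactions become training data:
model.fit(review_queue)
```

The point of the sketch is the one-way door in `fit`: a user’s clip stops being a discrete, deletable record and becomes part of the model’s behavior.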
We tend to be more concerned about humans gaining access to our data, but what about the algorithms themselves? The issue is not limited to one particular class of products, such as digital assistants; it ties into far broader ethical questions about AI. The bits of information we cede to intelligent machines may enter as discrete bundles, but they then cease to be identifiable, subsumed by the network. It is no longer merely a matter of having rights and regulations in place that give consumers ownership of their data. Today, our information is perpetually being incorporated into complex algorithms, often ones that are no longer transparent even to the engineers who built them.
Our individual data points help to strengthen and expand these systems, but in the process we lose ownership of them. In the case of AI, there is no way to change one’s mind and reclaim possession of one’s digital property. These considerations are not purely philosophical. Take the European Union’s General Data Protection Regulation (GDPR), passed last year: very tangible problems of implementation immediately arise. How do regulators enforce the “right to be forgotten” if the data in question has already been incorporated into an AI’s learning process?
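A toy example makes the implementation problem vivid. Assume, purely hypothetically, a “model” that is nothing more than an average computed over user records. Deleting a stored record is one line of code; erasing its influence from the derived parameter requires recomputing, in effect retraining, on whatever data remains. For a modern neural network with millions of entangled parameters, that retraining is enormously costly, which is exactly the regulator’s dilemma.

```python
import statistics

# Hypothetical user records; the "model" is just their average.
records = {"alice": 2.0, "bob": 9.0, "carol": 4.0}
model_parameter = statistics.mean(records.values())  # 5.0 -- bob is now "in" the model

# The easy half of the right to be forgotten: delete the stored record.
del records["bob"]

# The hard half: the trained parameter still reflects bob's data.
print(model_parameter)  # still 5.0 -- the model "remembers" bob

# Only retraining on the remaining data actually forgets him.
model_parameter = statistics.mean(records.values())
print(model_parameter)  # 3.0
```

An average is a deliberately degenerate model, but the asymmetry it shows is real: deletion from storage and deletion from a trained system are entirely different operations.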
We might also consider the concrete implications for specific groups, such as the generation growing up in the midst of this technology. As described in a must-read Wired article, they will face difficulties no previous generation has encountered. For one, they will have extensive digital records documenting their lives from childhood onward, trailing behind them to every job interview and loan application, not to mention every prospective date.
Digital assistants and other AIs do not explain to children or teenagers the long-lasting consequences of sharing their data, as a human might patiently do. They capture the information with indifference to such considerations. The information they collect can then be plugged into algorithms that dictate important social outcomes. A student caught on video cheating, for instance, might continue to be remembered and penalized for that misstep well into adulthood, prompting a vicious cycle of underachievement and further infractions. Presumably, we would like to build societies that allow children the space to make mistakes and learn from them. And yet, by putting vast power in the hands of AI, we set the stage for the opposite: a world in which past errors become indelible marks on a person’s record and limit the opportunities for self-improvement.
As we think about these issues, we must balance, as always, the incredible capabilities of these technologies with their corresponding downsides. We should continue to improve AI systems to better serve our goals. As these machines become smarter and more efficient, they will become ever more valuable partners in helping us build vibrant and prosperous societies.
The problems discussed above are serious, and they require urgent attention. We cannot find the right solutions if we fail to keep our fundamental values front and center in the discussion. Regulation must aim to preserve a commitment to individual privacy, updated for the digital age. (That battle has been going on at least since the invention of the telephone, as I’ll explore in my next blog post.) The tremendous power of AI lies in finding connections in data that are invisible to the human mind, so there will inevitably be unintended consequences.
Along with keeping pace with technological advances, solutions must now be global in scope, to match the scale of the technology they address. And they must strive to preserve these rights without blocking the path of continued technological progress. As a parent of young children, I am far more excited than afraid that they are growing up at this revolutionary time. But to ensure that they reap the benefits of the powerful digital tools at their fingertips, we must recognize the long-term consequences of those tools, especially for those too young to have a voice.