From countering bias in computer vision algorithms to using AI in military drones, companies are increasingly looking at artificial intelligence through the lens of ethics.
Artificial intelligence is having a growing impact on our health, education, security, and economy. As it spreads, companies are increasingly being forced to reckon with the ethics of using AI, from how to source unbiased training data for their algorithms to whether human-sounding voice assistants need to identify themselves as non-human.
In some cases, AI is even a matter of life and death. In March, an Uber vehicle in autonomous mode struck and killed a woman crossing the street in Tempe, Arizona — the first fatal accident involving an autonomous vehicle and a pedestrian.
Uber immediately suspended all of its self-driving pilot programs, only resuming them in mid-July. The incident raised questions about the safety and ethical consequences of relying on artificial intelligence.
Uber is certainly not the only company developing AI that has to contend with the ethics of its inventions. According to the CB Insights AI deals tracker, approximately $41B has been invested in AI startups across ~5,000 deals over the last five years, and those figures are only growing.
With more money than ever flowing into AI, ethicists are calling for transparency, accountability, and fairness in the development of AI across nearly every industry.
Ethics and AI are colliding
Discussions around AI and ethics are on the rise.
News mentions of AI and ethics increased ~5,000% from 2014 to 2018, topping 250 in Q3’18.
AI is disrupting critical industries and giving rise to new ethical considerations.
According to CB Insights data, the top three industries that have seen the most AI deals in the last five years are healthcare, security, and fintech. And players in these top AI industries are the subjects of intense ethics debates.
Healthcare
In June 2018, Babylon Health announced that an AI algorithm scored higher than humans on a written test used to certify physicians in the United Kingdom. The Royal College of General Practitioners, the UK’s professional body for family doctors, protested the idea that we should trust AI with our health.
Someday soon, doctors will have to weigh the ethical consequences of an AI-driven misdiagnosis, asking who will take responsibility: the doctor, or the machine?
Notably, the FDA recently signaled that it is taking a fast-track approval strategy for AI-based medical devices.
Security
Google has been under intense internal and external pressure to defend its role in developing AI software for the US Department of Defense. Under Project Maven, Google has been building AI image recognition software for US military drones.
Google employees and outsiders have raised ethical concerns that Google’s AI technology could be used to kill people. Google will continue to work publicly with the DOD until the Project Maven contract expires in 2019.
Fintech
Mizuho Financial Group in Japan says it will use AI to replace 19,000 people, about a third of its workforce, by 2027. There is also growing worry in fintech that bias baked into the code of credit-risk algorithms could lead to creditworthy customers being denied loans on the basis of race, gender, religion, or other factors.
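To make that concern concrete, below is a minimal sketch, in Python, of the kind of disparate-impact check fintech teams are being urged to run on credit models. The column names, the toy data, and the four-fifths threshold are illustrative assumptions, not a description of any lender’s actual system.

```python
import pandas as pd

def approval_rates(decisions: pd.DataFrame, group_col: str) -> pd.Series:
    # Share of applicants the model approved, broken out by a protected attribute.
    return decisions.groupby(group_col)["approved"].mean()

def disparate_impact_ratio(decisions: pd.DataFrame, group_col: str) -> float:
    # Lowest group approval rate divided by the highest; values below ~0.8
    # (the informal "four-fifths rule") are a common red flag that the model
    # may be disadvantaging a protected group.
    rates = approval_rates(decisions, group_col)
    return rates.min() / rates.max()

# Hypothetical usage, with a model's approve/deny decisions already recorded:
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "gender":   ["f", "f", "m", "m", "f", "m", "f", "m"],
})
print(disparate_impact_ratio(decisions, "gender"))  # 0.25 / 0.75 ≈ 0.33
```

A ratio that far below 0.8 would not prove discrimination on its own, but it is the kind of measurable signal that ethicists argue lenders should be required to monitor and explain.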
Unprecedented ethics challenges
As Google increasingly turns itself into an AI company, it faces new ethics debates.
Google’s recent unveiling of Duplex, an AI assistant that makes phone calls and sounds and interacts like a real human, has already sparked ethics debates over whether Duplex needs to identify itself as an AI when speaking to real people.
Google is also working on Project Dragonfly, a censored search engine in China. The search engine could reportedly fully block certain results for searches such as “freedom of information” or “peaceful protest.” Google employees signed a letter protesting the work, stating:
“[Project Dragonfly] raises urgent moral and ethical issues… Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.”
Google is even looking to hire an investigations analyst on its Trust and Safety team to assess the company’s ethical machine learning practices.
Many other major technology companies are also grappling with ethical questions as they sell products and services to the US military and intelligence community. Amazon, for example, is perhaps one of the nation’s most important defense contractors.
Amazon Web Services (AWS) has a contract with the US government called Secret Region, making AWS the first and only commercial cloud provider to serve workloads across the full range of government data classifications, including Unclassified, Sensitive, Secret, and Top Secret.
In addition to tech giants, young companies are also entering the AI and ethics discussion.
The startup AI Foundation raised $10M earlier this month to develop an AI system called Reality Defender. The system uses machine learning to identify fake news and malicious digital content meant to deceive people online. As these kinds of offerings become more widespread, AI companies seeking to monitor content on the internet will have to prove that they are doing so ethically.
To learn more about how AI changes the game when it comes to fake content, check out our brief on the future of information warfare.
What’s next for AI and ethics?
When it comes to AI and ethics, this is only the beginning.
The above examples are early instances of ethics intersecting with the development of artificial intelligence, and we have only begun to experience the potential pitfalls of failing to address ethics in AI.
New and even thornier ethics debates are forming around the use of AI to police borders, predict crimes, score creditworthiness, and more.
As these technologies develop, AI and human rights will become more entwined. In turn, human rights groups will up their focus on ethics in AI.
This year, Amnesty International, the world’s largest grassroots human rights organization, stated:
“Private actors should promote the development and use of these [AI] technologies to help people more easily exercise and enjoy their human rights.”
AI developers and tech companies will increasingly be challenged to identify risks, ensure transparency, enforce oversight, and hold themselves accountable for the ethics of the AI they build.