'Human Rights' May Help Shape Artificial Intelligence in 2019
Jan 11, 2019 — Atlanta, GA
Ethics and accountability will be among the most significant challenges for artificial intelligence (AI) in 2019, according to a survey of researchers at Georgia Tech’s College of Computing.
Responding to an email query about AI developments expected in 2019, most of the researchers – whether discussing machine learning (ML), robotics, data visualization, natural language processing, or other facets of AI – touched on the growing importance of recognizing the needs of people in AI systems.
“In 2019, I hope we will see AI researchers and practitioners start to frame the debate about proper and improper uses of artificial intelligence and machine learning in terms of human rights,” said Associate Professor Mark Riedl.
[RELATED: Is AI Coming For My Job?]
“More and more, interpretability and fairness are being recognized as critical issues to address to ensure AI appropriately interacts with society,” said Ph.D. student Fred Hohman.
Taking on algorithmic bias
Questions about the rights of end users of AI-enabled services and products are becoming a priority, but Riedl said more is needed.
“Companies are making progress in recognizing that AI systems may be biased in prejudicial ways. [However,] we need to start talking about the next step: remedy. How do people seek remedy if they believe an AI system made a wrong decision?” said Riedl.
Assistant Professor Jamie Morgenstern sees algorithmic bias as an ongoing concern in 2019 and gave banking as an example of an industry that may be in the news for its algorithmic decision-making.
“I project that we’ll have more high-profile examples of financial systems that use machine learning having worse rates of lending to women, people of color, and other communities historically underrepresented in the ‘standard’ American economic system,” Morgenstern said.
[RELATED: Researchers Working To Improve Fairness in the ML Pipeline]
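Morgenstern's concern can be made concrete with a simple audit: compare a model's approval rates across demographic groups. Below is a minimal sketch in Python; the decisions, group labels, and the demographic-parity measure are illustrative assumptions, not data or code from any real lending system.

```python
# Minimal demographic-parity audit of a lending model's decisions.
# All data here are hypothetical and for illustration only.

def approval_rates(decisions, groups):
    """Return the fraction of approvals for each demographic group."""
    totals, approvals = {}, {}
    for decision, group in zip(decisions, groups):
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(decision)
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical model outputs: True means the loan was approved.
decisions = [True, False, True, True, False, False, True, False]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]

rates = approval_rates(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.67, 'B': 0.40} (rounded)
print(f"demographic parity gap: {gap:.2f}")   # 0.27
```

A large gap is only a first-pass warning sign; the fairness literature defines many competing criteria, and which one is appropriate depends on the application.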
In recent years, corporate responses to cases of bias have been hit or miss, but Assistant Professor Munmun De Choudhury said 2019 may see a shift in how tech companies balance their shareholders’ interests with those of their customers and society.
“[Companies] will be increasingly subject to governmental regulation and will be forced to come up with safeguards to address misuse and abuse of their technologies, and will even consider broader partnerships with their market competitors to achieve this. For some corporations, business interests may take a backseat to ethics until they regain customer trust,” said De Choudhury.
Working toward more transparency
One way companies can regain that trust is by sharing their algorithms with the public, the researchers said.
“Developers tend to walk around feeling objective because ‘it’s the algorithm that is determining the answer.’ Moving forward, I believe that the algorithms will have to be increasingly ‘inspectable’ and developers will have to explain their answers,” said Executive Associate Dean and Professor Charles Isbell.
Ph.D. student Yuval Pinter agreed. In the coming year, “[I] think we will see [researchers developing] techniques and tests that can help us better understand what’s going on in the actual wiring of our very fancy machine learning models.
“This is not only out of curiosity but also because [laws and regulations] in various countries are starting to require that algorithmic decision-making programs be able to explain why they are doing what they are doing,” said Pinter.
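One family of techniques Pinter alludes to treats the model as a black box and probes it from the outside: perturb one input feature at a time and measure how often the decision changes. The sketch below illustrates the idea with a stand-in model and made-up features; it is an assumption-laden toy, not any particular system’s method.

```python
import random

# Stand-in "black box": in practice this would be a trained model
# whose internal wiring is hard to read directly.
def model(income, debt, age):
    return 1 if income - 2 * debt > 0 else 0

# Hypothetical applicant records: (income, debt, age).
data = [(50, 10, 30), (20, 15, 45), (80, 5, 52), (30, 20, 23)]

def permutation_effect(feature_index, trials=100):
    """Fraction of decisions that flip when one feature is shuffled."""
    random.seed(0)
    baseline = [model(*row) for row in data]
    flips = total = 0
    for _ in range(trials):
        column = [row[feature_index] for row in data]
        random.shuffle(column)
        for i, row in enumerate(data):
            perturbed = list(row)
            perturbed[feature_index] = column[i]
            flips += int(model(*perturbed) != baseline[i])
            total += 1
    return flips / total

for i, name in enumerate(["income", "debt", "age"]):
    print(f"{name}: {permutation_effect(i):.2f}")
```

In this toy, shuffling age never flips a decision while income and debt do; surfacing exactly that kind of evidence is what an ‘inspectable’ system could offer a regulator.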
Regents’ Professor Ron Arkin believes that these concerns are becoming more central precisely because artificial intelligence will continue to grow in importance in our everyday lives.
[RELATED: Who's Behind the Wheel?]
“Despite continued hype and omnipresent doomsayers, panic and fear over the growth of AI and robotics should begin to subside in 2019 as the benefits to people’s lives become more apparent to the world.
“However, I expect to see lawyers jumping into the fray, so we may also see lawsuits determining policy for self-driving cars [and other applications] more so than government regulation or the [legislative] system,” said Arkin.
Albert Snedeker, Communications Manager