A new report from the Rockefeller Foundation highlights the need for a deeper conversation around the responsible use of artificial intelligence (AI) and the rules and regulations needed to harness its power for good.
The report, AI+1: Shaping our integrated future (95 pages, PDF), includes essays from fourteen technologists, philosophers, economists, lawyers, artists, and philanthropists who participated in an October 2019 convening focused on how artificial intelligence can create a better future for humanity. The essays present diverse perspectives centered on three themes: that AI is more than a technology, reflecting the values as well as the ethical lapses of any system in which it is embedded; that it should be used responsibly to support human goals rather than the market-driven, profit-making applications that dominate today; and that the self-regulation that currently prevails is inadequate and should be replaced by a rule-making framework characterized by transparency, access to meaningful information, and a willingness to expose harm.
Contributors to the report include Amir Baradaran, founder of iBEGOO; Tim Davies, director of social justice-focused consultancy Practical Participation; Maya Indira Ganesh, a technology researcher and writer; Nils Gilman, vice president of programs at the Berggruen Institute; Claudia Juech, founding CEO and a board member of the Cloudera Foundation and former associate vice president at the Rockefeller Foundation; Hilary Mason, data scientist in residence at Accel Partners, former general manager of machine learning at Cloudera, and founder and CEO of Fast Forward Labs; Sarah Newman, senior researcher and principal at metaLAB (at) Harvard and a Berkman Klein Center for Internet & Society fellow; Tim O'Reilly, CEO and chair of O'Reilly Media; Jake Porway, CEO of DataKind; Marietje Schaake, international policy director at Stanford University's Cyber Policy Center; Katarzyna Szymielewicz, a lawyer specializing in human rights and technology; Stefaan Verhulst, co-founder and chief of research and development at the GovLab at New York University's Tandon School of Engineering; Richard Whitt, a corporate strategist and technology policy attorney; and Andrew Zolli, vice president of global impact initiatives at Planet.
"Already, researchers have shown how they can use AI to reduce racist speech online, resolve conflicts, counter domestic violence, detect and counter depression, and encourage greater compassion, among many other ailments of the human soul," writes Zolli. "Though still in their infancy, these tools will help us not only promote greater well-being but also demonstrate to the AIs that observe human nature just how elastic human nature is. Indeed, if we don't use AI to encourage the better angels of our nature, these algorithms may come to encode a dimmer view and, in a reinforcing feedback loop, embolden our demons by default."