Humans Are Ultimately Behind AI, Responsible for Every Decision
From quantifying happiness to the varying perceptions of “good” and “fair” across cultures, I’m still thinking, weeks later, about the topics discussed at the AI World Conference, and about what I learned from following Anthony Scriffignano, Dun & Bradstreet’s Chief Data Scientist, principally his participation in the State of International AI Initiatives Executive Roundtable with David Bray, Benji Sucher, Kazuo Yano, and Nazli Choucri.
AI World 2018 was packed with more than 3,000 people. It was three stimulating days of AI immersion, featuring fascinating presentations, compelling conversations, and a few logistical challenges that come with managing a film crew on site. But from everything I heard, observed, and learned, I’ve narrowed down the experience to one overarching theme: human control.
By this I mean the common misconception that AI will take over humanity, subverting free will and human control. This prevalent public fear (stoked by the imagery Hollywood repeatedly feeds us) filters up into lawmakers’ policy discussions and into the disagreements among AI experts over which standardized goals, the ones deemed “best” for people, the technology should speed us toward.
The truth is, the rise of AI is pushing us to tackle the same challenge we’ve faced through 3,000 years of conflict and war: disagreement over what’s “right” or “good” for humanity.
It’s nothing new. But now with AI, there’s a technology that can help communities get to an end goal much faster, in a multitude of ways. And for someone who’s watching such changes play out right in front of them, it can be scary to place ever more trust in technology and give up the comforting feeling of control.
If you missed AI World this year, this glimpse into the conference may offer a chance to think more about the future of AI (and its implications) and spur your own insightful conversations with laymen and experts alike.
Spoiler alert: AI technology has been given a huge responsibility due to its great power to change the world. But humans are ultimately behind it, responsible for every decision.
Questions to think about in 2019:
1. How do you define what’s “fair” and “good” for you and your community?
In a general sense, AI is outcome-oriented: the net result takes priority over the processes and actions taken to achieve it. If that’s true, then we are primarily driven to standardize our goals, which leads to the question: Is there a shared outcome all humans (and all countries) can align on?
“We’re using words like ‘fairness’ and ‘good,’ and these are words that we want to believe are universally understood in the same way,” Scriffignano said. “And research shows that they’re not. We have to be so careful with this … we’re a long way away from what seems obvious in the actual technology we’re talking about.”
2. How do we ensure that AI endeavors don’t get out of hand – and humans don’t lose control?
The power AI holds to disrupt the world has caused many to fear it, blame it, judge it, and try to stop it. But the more measured approach is to learn how to use and control it.
The recurring theme at AI World was that the responsibility (in the form of credit or blame) comes down to the people designing and using the tech – not the tech itself.
Everything that goes into AI is programmed by humans – and we have more control than is popularly imagined. There is, however, a lot of complexity that we need to account for in AI’s stew of if/then actions, signals, and triggers – and a consequent need for meticulous testing and risk assessment.
Scriffignano noted, “We are increasingly creating scenarios where we develop systems that have the ability to produce their own capability that’s not always completely explainable to us. While it’s not a thing that has an intelligence unto itself, it can behave in a non-deterministic way that we didn’t anticipate when we created it.”
“We have to resist the temptation to label it [AI] as good or bad, or right or wrong,” Scriffignano summarized. Any tool – a GPS system, a car, Google’s search engine – can be used for good or ill. “If you push forward technology for good, and someone else used it for bad, who’s responsible?”
3. How involved should government be in AI’s regulation?
At the “State of International AI Initiatives” executive roundtable talk, the panel of speakers agreed:
Governance is necessary. But whether it’s carried out by governments per se or by other bodies in the future, it’s not the job of scientists or corporations to manage matters of fairness or justice. They’re focused on a different job: advancing technology for use by the people.
The roundtable also suggested that we might have more freedom in the AI era than we now think, and that we’ll move away from a prescriptive approach, which slows problem-solving. A freer approach allows for more creative solutions, meaning end goals reached faster and in a variety of ways that could ultimately give people more control.
It was also noted that divergent views will always surface and slow progress. But how do we find common ground for the common good? A goal established by a government may not be a goal shared by all the people.
“We don’t need regulation that tries to put all AI in a box,” Scriffignano noted.
Many questions remain. How can government help (rather than hinder) the evolution of AI during these early stages? How can we ensure no government monopolizes control over the technology? And how will those laws impact people’s progress with the technology, its testing and application, and the positive impact it could have?
4. What’s next? How will we progress with AI in 2019?
We should strive to focus on outcomes more than on the specific technology. The larger the impact a technology is perceived to have on people (and the less control we think we have over it), the more regulation follows.
But think about regulating the “Internet,” or “crypto,” or “digital.” The definitions of each technology need to align, and be understood, before we can enforce rules around it that people will follow.
“AI is so poorly defined. We have enough trouble doing that with data,” Scriffignano said. He explained that we shouldn’t necessarily focus on regulating a tool’s capabilities or design; we should instead regulate what people do with it.
In our next article, we’ll get a little more technical and explore how the questions we ask and the terminology we use around AI are often misleading and erroneous. And as you can imagine, this produces unintended and undesirable results.
To learn more, watch the video of the panel discussion that inspired this brief article: State of International AI Initiatives Executive Roundtable