
Can Government Keep Up with Artificial Intelligence? (written for NOVA Next/PBS)


Aug. 10, 2017

When Amazon debuted same-day delivery service in Boston, it seemed a promising alternative to poorly stocked, overpriced, or low-quality supermarkets in neighborhoods like Roxbury. But the service didn’t extend there, even though Amazon delivered to residents on all sides of the neighborhood.

Amazon was assailed for overlooking the comparatively lower-income Boston neighborhood, and the company, in its defense, said customer data and delivery logistics played into the decision. Amazon didn’t overlook just Roxbury, either. According to a Bloomberg report, in Chicago, New York, Boston, Atlanta, and other cities, black residents were half as likely to live in same-day delivery areas as white residents, despite paying the same $99 membership fee.

Amazon was criticized in 2016 for offering different services to lower-income neighborhoods despite charging the same membership rate.

Today, data collected on individuals, combined with artificially intelligent systems, are used to roll out new services, populate newsfeeds, target advertisements, determine healthcare treatment plans, and even inform court rulings. Yet most of us don’t really know how these tools work or how they arrive at their decisions, and we don’t know how to interpret or validate many of the algorithms that power these systems. Without that knowledge, vast swaths of our economy could become black boxes, unable to be scrutinized for legality or fairness.

That concern was on the minds of industry experts, researchers, psychologists, lawyers, and activists last month at AI Now 2017, a workshop hosted by the AI Now Initiative. Speakers at the conference not only highlighted the challenges AI presents but also discussed the ways in which effective governance could alleviate them.

Vanita Gupta, Nicole Wong, Terrell McSweeny, and Julie Brill discussed issues of governance last month at AI Now 2017 in Cambridge, Massachusetts.

“There is no possible way to have some omnibus AI law,” says Ryan Calo, a professor of law and co-director of the Tech Policy Lab at the University of Washington. “But rather we want to look at the ways in which human experience is being reshaped and start to ask what law and policy assumptions are broken.”

Approximating Intelligence

The question of regulating AI is being asked now, after decades of slow progress, because recent technological advances have been rapid. “AI” can refer to anything from the machine learning that identifies faces in Facebook photos, to the natural language processing behind Google Translate, to the proprietary algorithms some courts use to determine criminal sentences. All of these methods take in data about the existing world and output behaviors or predictions. Machine learning, for example, uses a computer’s ability to recognize patterns in large sets of data to make accurate, generalizable predictions about new data.
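To make that idea concrete, here is a minimal sketch of the learn-from-data, predict-on-new-data loop, written with the scikit-learn library. The dataset and feature values are hypothetical and purely illustrative; this is not how Facebook, Google Translate, or any sentencing algorithm actually works.

```python
# A minimal sketch of the pattern-recognition idea described above,
# using scikit-learn. The numbers below are made up for illustration.
from sklearn.linear_model import LogisticRegression

# Existing data the model "learns" patterns from:
# each row is a set of measurements, each label a known category.
X_train = [[5.1, 3.5], [4.9, 3.0], [6.7, 3.1], [6.3, 2.5]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # find patterns in the existing data

# Generalize: predict the category of a data point the model has never seen.
print(model.predict([[6.0, 2.9]]))   # e.g. [1]
```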

AI’s recent advance is due in part to the massive amounts of data now available to researchers. We provide information voluntarily on our newsfeeds, and passively as we move through public spaces with license-plate scanners and CCTVs, as we click through websites, and as we use GPS and other mobile apps that share our data with third-party services. According to Georgetown Law’s Center on Privacy and Technology, one in two American adults is in a law enforcement face recognition network, a system that is broadly unregulated.

AI has the potential to increase our productivity, reduce our tendency to make biased decisions, and improve safety. But many people are concerned about the questions AI leaves unanswered. On the surface, using data to influence purchases or shape newsfeeds may seem a trivial business necessity, but as people rely more heavily on AI, the decisions those algorithms make will become increasingly consequential. Social media services like Facebook and Twitter can selectively disseminate information and sway public opinion, a capability that became a topic of debate after the 2016 U.S. presidential election. And when automated algorithms and machine learning are used to determine credit scores, delivery services, and insurance premiums, the effects can be far more personal.

We passively give up data through almost every digital interaction.

Collecting enormous swaths of data can speed the development of AI, but it can also compromise people’s privacy. People are not always aware of how their data is being collected and used or who owns it, and given the pace at which AI is progressing, companies themselves don’t necessarily know how they’ll be using the data in the future. That can lead to companies and governments requesting overly broad consent from consumers and citizens. “How am I supposed to give you notice to get your consent when I don’t know what I want your data for?” Nicole Wong, a deputy White House chief technology officer (CTO) under President Obama, said at AI Now.

Yet without policy to regulate AI, companies are free to use data as they see fit. They could sell that data, nudge consumers toward products or services they may not need, or exclude certain segments of the population from services that would not benefit the company. In 2014, Facebook conducted experiments that manipulated users’ moods by altering the tone of their newsfeeds, raising alarm around the internet. And certain car insurance companies, ProPublica has reported, have charged an average of 30% more for premiums in ZIP codes with higher concentrations of minorities than in whiter neighborhoods with similar accident costs.

Planning for the Future

There are legitimate concerns over the extent to which government should regulate the use of AI. Excessive or inappropriate policies can slow the adoption of new technologies or fail to address the true challenges of AI. A 2016 government report, Preparing for the Future of Artificial Intelligence, noted that for some advances, like autonomous vehicles and drones, regulation could soon become more of a bottleneck than the development of the technology itself.

Former White House deputy CTO Ed Felten thinks the government can work to prevent this. “One way to do it is to create policies that are designed by looking at the big picture rather than being very closely tailored to the current state of technology,” he says. “As an example, rather than dictating which technologies should be used in certain settings, it makes more sense to have a performance standard,” much as car safety regulations specify that bumpers must survive crashes at certain speeds rather than dictate how the bumper is built. Car companies can then adopt new technologies without running afoul of regulations.

“If we don’t have an edge in AI, we are not well positioned to make policy around it.”

In May of 2016, the Obama administration announced an interagency working group to learn more about the benefits and risks of artificial intelligence. By the end of 2016, the group had presented multiple reports on preparing for the future of AI and on topics relating to big data, investment in AI, and data security, culminating in 23 recommendations for how the government, industry, and research sectors should work together as part of an extensive national plan to prepare our society and economy.

The reports emphasized the role of government in monitoring the safety and fairness of AI’s applications, supporting basic research, and providing a plan for workers left unemployed by automation. Members of the Office of Science and Technology Policy (OSTP), who directly advise the president, thought about how the government can appropriately support the development of AI while also encouraging researchers to create systems that are open, transparent, and understandable and that work effectively with people.

“If we don’t have an edge in AI, we are not well positioned to make policy around it,” Calo says. “The U.S. has had an incredibly disproportionate impact on internet governance because we invented it here and we were the first to commercialize.”

“The White House and particularly the OSTP really did the right thing at the right time,” he adds. “They heard from industry and academics and others in civil society, found out what people were worried about, found out what people are hopeful about, and thought a lot about what the role of government is and what is possible for government to do.”

At AI Now 2017, current FTC Commissioner Terrell McSweeny called for government to work with civil society to start defining how we can use AI. “When are we going to say, ‘That’s a choice that humans are going to make; that’s a choice machines are going to make’?” she said. Wong says a first step for government could involve drafting a checklist for judging AI tools.

Not everyone in the current administration has marked AI as a priority. While experts contributing to the 2016 report expect that significant changes to the labor market will occur in the next 10 to 20 years, Treasury Secretary Steven Mnuchin thinks these changes will not occur for another 50 to 100 years. “It’s not even on my radar screen,” Mnuchin told Axios in an interview.

Overall government involvement in the conference has fallen off, too. At AI Now, current and former members of the government reflected on the changes they have observed. The 2016 conference was co-hosted by the OSTP as the final workshop in a series of four on the challenges and opportunities of AI, and numerous White House officials were present. This year, no acting White House officials attended, and many executive branch positions remain vacant.

Others aren’t ready to write off government involvement. Wong has mixed feelings about the stance of the current administration. “It’s early to judge the administration on AI,” she said in July. “I have friends still there getting positive signals…but there are lots of signals that are not fabulous,” she added. “There is also a dismissiveness of regulation and of ethical guidance.”

Calo agrees. “The current administration is not necessarily hostile to artificial intelligence, it’s just that it’s not as planful,” the professor says. “The Obama administration was characterized both by a need to plan but also a real recognition of how powerful AI could be and how important it is. I don’t really see that in this administration.”

The same neural networks that can interpret and classify images can also generate them based on key terms and existing images.

We reached out to the OSTP for comment, and the office declined, saying its policy lead for AI is out of the office for an extended period of time. Officially, the roles of director, previously held by John Holdren, and chief technology officer, previously held by Megan Smith, have not been filled. Much of the expertise on AI previously resided in the Office of the Chief Technology Officer and in the National Security Division, and most of those individuals have left and not been replaced. In fact, the only political appointee to the OSTP is Michael Kratsios, a former aide to Trump who previously worked at Thiel Capital and Clarium Capital Management. Since no other key appointments have been made, Kratsios is serving as OSTP’s deputy chief technology officer and acting OSTP director.

Operating without technical and scientific expertise in the White House risks stalling previous momentum on AI policy, Calo and Felten say. Both emphasize the need for technical expertise in the government. “Government needs to make the effort to make sure they have the people and the processes in place to be well informed about the state of technology and where it’s going,” Felten says.

“Expertise is absolutely the first step,” Calo says. Without it, “the government has to rely on other stakeholders, and those stakeholders will have their own interests. Sometimes, if the government doesn’t have adequate expertise, it won’t act because it will be paralyzed. And other times they will take industry’s word for something and act too quickly—and then have a problem.”

“Often you just don’t get the benefit of some innovation because the government doesn’t have the expertise to evaluate whether it’s safe and so they just say no.”

While the OSTP may be in flux, President Trump has renewed the charter of the National Science and Technology Council, which brings together multiple agencies to advance the President’s science initiatives. The NSTC subcommittee on Machine Learning and Artificial Intelligence, which was formed in May of 2016 and coordinated the reports on AI, was recently renewed through January of 2018.

Taking a Stand

Even if necessary expertise does not exist in the White House, other branches of government may fill the void. Senator Maria Cantwell (D-WA) is working with Calo to propose a federal advisory committee on AI for the U.S. Senate. Congress and the Department of Transportation are picking up the pace on driverless cars and drones. “Today, federal agencies, states, and courts tackle AI piecemeal. This has some advantages, including making room for experimentation. But some argue we should coordinate our response. And I personally argue that we need a government repository of expertise,” Calo says.

Where the executive branch of the U.S. federal government may have paused on AI, other governments continue to move forward. Last summer, the European Union approved the General Data Protection Regulation, which provides a basic “right to explanation,” restricting decisions made without any human intervention and giving people the right to contest decisions made solely by AI. The United Kingdom has developed a national surveillance camera strategy to support safety and technology development while still protecting the privacy of the public. Despite that progress, government officials like Tony Porter, the UK’s Surveillance Camera Commissioner, who authored the surveillance strategy, note that government is struggling to keep up with the pace of technological development.

Recognizing this gap, speakers at the AI Now conference asked, if the federal government is unable or unwilling to lead the charge, will the research community and industry do so instead? “The private sector will need to step up,” said Vanita Gupta, former principal deputy assistant attorney general.

The answer, it seems, is maybe. “You need both government-led and industry-led initiatives, and both are happening,” Felten says. Industry leaders and researchers are approaching these challenges through technological advancements, research, and advocacy for government and public involvement. The pace of adoption means that the rules of usage are being set now; if policy can’t keep up, practice will establish the de facto norms and rules.

Industry members have recognized that AI poses broad challenges that require team efforts, and they are collaborating through large collectives. The AI Now Initiative and the ACLU jointly announced the opening of an interdisciplinary New York-based research center to explore AI’s social impacts, specifically rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. Elon Musk, CEO of Tesla and SpaceX, along with others like the investor Peter Thiel and Reid Hoffman, former CEO of LinkedIn, has launched a nonprofit called OpenAI, tasked with exploring the safety of artificial general intelligence. Corporations like Amazon, Google, Facebook, IBM, and Microsoft have founded the Partnership on AI to develop best practices and address the opportunities and challenges of AI technologies.

Still, Calo cautions against relying solely on industry. “There’s a long history of big, new industries coming together and coming up with a set of ethical rules of conduct that are later invalidated for being anticompetitive,” he says.

There are many unknowns about AI, but academics, civil servants, and industry members are starting to think about the future.

Meanwhile, some industry leaders have started taking matters of policy into their own hands. At the National Governors Association meeting last month, Musk called for a regulatory body to guide the development of AI, urging governors to be “proactive in regulation instead of reactive” to correct what he views as the government’s lack of insight into AI. “By the time we are reactive in AI regulation, it’s too late,” he told MIT Technology Review.

He’s not the only industry leader to take a stand. Eric Horvitz, a technical fellow and managing director at Microsoft Research, testified before a Senate Commerce subcommittee in November 2016. In his testimony, Horvitz presented recommendations for supporting innovation and confronting the challenges of AI that bore a resemblance to those laid out by the OSTP in 2016, such as promoting public research investment, developing guidelines for AI, and creating frameworks for citizen data access.

Meanwhile, there is active research in the AI community on transparency, accountability, and fairness, Felten says. Academics like Ryan Calo are also taking up the charge: yesterday Calo published a roadmap for AI policy based on his work co-hosting AI workshops with the government and federal agencies.

One proposal, put forth by Latanya Sweeney, a professor of government at Harvard University and former chief technologist for the FTC, is to create consumer and safety testing like we have for physical products. “Maybe we need something like that for some AIs,” she said. “The algorithms have to be able to be transparent, tested, and have some kind of warranty or be available for inspection,” she added. “I want the people we elect controlling those norms, not the technology itself.”
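Sweeney’s proposal doesn’t specify what such testing would look like, but one hedged sketch of a possible audit is below: comparing an algorithm’s favorable-outcome rates across demographic groups. The decisions, groups, and 0.8 threshold here are illustrative; the threshold borrows the “four-fifths rule” used in U.S. employment guidelines as one common benchmark.

```python
# A hypothetical sketch of one kind of algorithmic audit: compare how often
# an automated system produces a favorable outcome (1) for each group.

def favorable_rate(decisions):
    """Fraction of decisions that were favorable (1 = approved)."""
    return sum(decisions) / len(decisions)

# Hypothetical decisions produced by some automated system, split by group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

ratio = favorable_rate(group_b) / favorable_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

# The four-fifths rule flags ratios below 0.8 for closer inspection.
if ratio < 0.8:
    print("Flag for review: outcomes differ substantially across groups.")
```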

As AI becomes more tightly woven into society, new rules will emerge. And that may come with many promising changes. “AI can be transparent and analyzable in a way that is unlike decisions made in a human brain,” Felten says. “If we take advantage of those opportunities, we actually can have systems that are more accountable and make decisions that are better justified than the ones we have now.”
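One limited illustration of the analyzability Felten describes: unlike a human decision-maker, a simple trained model can be opened up and its learned weights read directly. The model, feature names, and data below are hypothetical; real deployed systems are far larger and harder to interpret.

```python
# Inspecting a trained model's weights as a (limited) form of transparency.
from sklearn.linear_model import LogisticRegression

X = [[30, 1], [45, 0], [25, 1], [52, 0]]   # hypothetical [age, prior_claims]
y = [0, 1, 0, 1]                            # hypothetical outcomes

model = LogisticRegression().fit(X, y)

# Each weight shows how much that input pushes the decision up or down,
# something a reviewer can examine in a way a human judgment cannot be.
for name, weight in zip(["age", "prior_claims"], model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```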
