AI panel urges US to boost tech skills amid China’s rise
An artificial intelligence commission led by former Google CEO Eric Schmidt is urging the U.S. to boost its AI skills to counter China, including by pursuing “AI-enabled” weapons – something that Google itself has shied away from on ethical grounds.
Schmidt and current executives from Google, Microsoft, Oracle and Amazon are among the 15 members of the National Security Commission on Artificial Intelligence, which released its final report to Congress on Monday.
“To win in AI we need more money, more talent, stronger leadership,” Schmidt said Monday.
The report says that machines that can “perceive, decide, and act more quickly” than humans and with more accuracy are going to be deployed for military purposes — with or without the involvement of the U.S. and other democracies. It warns against unchecked use of autonomous weapons but expresses opposition to a global ban.
It also calls for “wise restraints” on the use of AI tools such as facial recognition that can be used for mass surveillance.
“We have to develop technology that preserves our Western values, but we have to be prepared for a world in which not everyone is doing that,” said Andrew Moore, a commissioner and the head of Google Cloud AI.
The group has the ear of top lawmakers from both parties, but has attracted criticism for including many members who work for tech companies with big government contracts, and who thus have a lot at stake in federal rules on emerging technology.
The report calls for a “White House-led strategy” to defend against AI-related threats, to set standards on how intelligent machines can be used responsibly and to boost U.S. research and development to maintain the nation’s technological advantage over China.
“We believe we are one or two years ahead of China, not five or 10,” Schmidt told the Senate Armed Services Committee last week. He clarified Monday that he was expressing his personal opinions and not necessarily those of the commission.
It’s not yet clear whether President Joe Biden’s administration is on board with the commission’s approach. The administration is still awaiting confirmation of a new director for the White House Office of Science and Technology Policy, which Biden has elevated to a Cabinet-level position.
“AI policy tends to be very bipartisan,” said Michael Kratsios, who was U.S. chief technology officer under President Donald Trump and led a push to pump more resources into AI development across federal agencies. The greatest imperative, he said, is that “the next great AI technologies are developed in the West.”
One big difference between the two administrations is likely to be the approach to building AI talent. The commission recommends a more open immigration policy than what Trump favored.
Congress formed the AI panel in 2018 and appointed 12 of its 15 commissioners, with the others picked by Trump’s Defense and Commerce secretaries. A judge later compelled the commission to make its meetings and records more accessible to the public after a civil liberties group, the Electronic Privacy Information Center, challenged its secrecy.
It’s been led by Schmidt, who was Google’s CEO and later the executive chairman of its parent company Alphabet. He previously helped lead the Defense Innovation Board, which advises the Pentagon on new technology.
That brought some conflict in 2018 when Google backed out of Project Maven, a U.S. military initiative using AI-based computer vision technology to analyze drone footage in conflict zones. The company, responding to internal activism from employees, also pledged not to use AI in any weapons-related applications.
“I did not agree with the Google decisions on Maven,” Schmidt told senators last week, calling it an “aberration” compared to the tech industry as a whole, where he says there are plenty of companies that want to work with the military. He said AI and machine vision systems are particularly good at “watching for things,” which is something the military spends a lot of time doing.
The commission also includes executives like Safra Catz, the CEO of software giant Oracle, and Amazon’s incoming CEO, Andy Jassy, who currently runs its cloud computing division, as well as top AI experts at Microsoft and Google. All four companies have competed against each other for federal cloud computing contracts. The representatives from Microsoft and Google joined other members in approving the final report Monday, but abstained from the section relating to government partnerships with the private sector.
By excluding human rights groups and rank-and-file tech experts, the commission has found it easier to frame the policy issue as a “democracy versus authoritarianism” competition with China while skirting more difficult topics, such as the use of AI technologies on the U.S.-Mexico border, said Jack Poulson, a former Google researcher who now directs the industry watchdog Tech Inquiry.
“The nominal reason to have these tech CEOs on these committees is they’re experts in the technology. But they’re also, subject to shareholder requirements, acting in the interests of their companies,” Poulson said. “They don’t want significant regulation or antitrust enforcement.”
The government-industry partnership may be important for the U.S. and its allies to help set standards for the responsible use of AI, said Megan Lamberth, a research associate at the Center for a New American Security.
“AI has the potential to really transform not only how militaries fight wars, but how economies operate and how societies and people interact with each other,” Lamberth said. “If there’s a gap in leadership, another country is going to fill that void.”
The American Civil Liberties Union said in a statement Monday that the commission made useful recommendations but it should have gone further by establishing civil rights protections now, before AI systems are widely deployed by intelligence agencies and the military.
The commission asked Congress to make new laws requiring federal agencies to conduct human rights assessments of new AI systems used on Americans. But it didn’t recommend the binding surveillance limits sought by activists.