OpenAI, Google and other companies sign White House pledge over AI risks

Over his 50-year political career in Washington, President Biden has witnessed a slew of innovations, including the invention of the cellphone, the first Mac computer, the World Wide Web and social media.

But rapid advances in artificial intelligence, highlighted by the recent release of ChatGPT, have stunned the seasoned president.

“We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years,” he said. “That has been an astounding revelation to me, quite frankly.”

To meet the moment, the Biden White House on Friday took its most ambitious step to date to address the risks of artificial intelligence, announcing that seven of the most influential companies building AI have agreed to a voluntary pledge to mitigate the dangers of the emerging technology. The move escalates the White House’s involvement in an increasingly urgent debate over AI regulation.

Biden also said he was going to work with both parties to develop “appropriate” AI legislation, throwing the weight of the White House behind bipartisan efforts in Congress to craft AI rules. Washington policymakers face growing pressure from consumer advocates and AI ethicists to craft new laws governing the technology, but previous congressional efforts to regulate Silicon Valley have been derailed by industry lobbying, partisan squabbles and competing priorities.

Biden also said the administration is developing an executive order focused on AI. A senior White House official, who spoke on the condition of anonymity to discuss the pledge, said that the administration was reviewing the role of AI across government agencies and said it was a “high priority” for Biden. The person shared few specific details about the executive order or a timeline for when it would be released.

In the White House pledge, the companies — which include Google, Amazon, Microsoft, Meta and ChatGPT-maker OpenAI — vowed to allow independent security experts to test their systems before they are released to the public and committed to sharing data about the safety of their systems with the government and academics.

The firms also pledged to develop tools to alert the public when an image, video or text is created by artificial intelligence, a method known as “watermarking.”

In addition to the tech giants, several newer businesses at the forefront of AI development signed the pledge, including Anthropic and Inflection. (Amazon founder Jeff Bezos owns The Washington Post. Interim CEO Patty Stonesifer sits on Amazon’s board.)

Enforcement of the pledge would largely fall to the Federal Trade Commission, which has emerged as the federal government’s top tech industry watchdog. Breaking from a public commitment can be considered a deceptive practice, which would run afoul of existing consumer protection law, according to an FTC official, who spoke on the condition of anonymity to discuss the agency’s thinking on enforcement.

However, the pledge does not include specific deadlines or reporting requirements — and the mandates are outlined in broad language — which could complicate regulators’ efforts to hold the companies to their promises.

During a speech about the pledge, Biden said the commitments would help the industry “fulfill its fundamental obligation to Americans” to develop secure and trustworthy technology.

“These commitments are real, and they are concrete,” he said.

White House signals support for AI legislation

In the absence of new laws, the pledge marks the Biden White House’s strongest attempt to date to implement guardrails for developers working in the field. Yet policymakers and consumer advocates warned that Friday’s pledge should be just the beginning of the White House’s work to address AI safety, pointing to tech companies’ checkered history of keeping their safety and security commitments.

“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said Jim Steyer, the founder and CEO of the advocacy group Common Sense Media, in a statement.

Many of the top executives who signed the pledge — including Microsoft President Brad Smith, Inflection AI CEO Mustafa Suleyman and Meta President of Global Affairs Nick Clegg — attended Biden’s speech at the White House. After a reporter asked the company leaders if they were real or AI, Biden entered and told attendees in the Roosevelt Room, “I am the AI.”

Several of the signers have already publicly committed to actions similar to those in the White House’s pledge. Before OpenAI rolled out its GPT-4 system widely, it brought in a team of outside professionals to probe the system for weaknesses, a process known as “red-teaming.” Google has said in a blog post that it is developing watermarking, which companies and policymakers have touted as a way to address concerns that AI could supercharge misinformation.

The White House official said the agreement would lead to higher standards across the industry.

“This is going to be pushing the envelope on what companies are doing and raising the standards for safety, security and trust of AI,” the person said.

Despite broad concerns about the growing power and influence of the tech sector, Congress has not passed comprehensive regulation of Silicon Valley, and the Biden administration has attempted to use voluntary pledges as a stopgap measure. Nearly two years ago, the Biden administration sought public commitments from major tech companies to improve their cybersecurity practices at a similar White House summit.

Tech executives on Friday morning reiterated their commitments to the White House in statements and blog posts. Anna Makanju, OpenAI’s vice president of global affairs, said the pledge contributes “specific and concrete practices” to the global debate over AI laws. Amazon spokesman Tim Doyle said the company was “committed” to collaborating with the White House and other policymakers to advance responsible AI.

The White House announcement follows Biden and Vice President Harris’s recent flurry of AI meetings with top tech executives, researchers, consumer advocates and civil liberties groups.

There is a bevy of proposals in Congress to regulate AI, and key bipartisan measures are probably months away. Senate Majority Leader Charles E. Schumer (D-N.Y.) has formed a bipartisan group to work on AI legislation, which has spent the summer seeking briefings with top AI experts.

Schumer said in a statement Friday that he planned to work closely with the Biden administration and would build upon its actions. He said the AI framework he recently introduced will “strengthen and expand” the president’s pledge.

“To maintain our lead, harness the potential, and tackle the challenges of AI effectively requires legislation to build and expand on the actions President Biden is taking today,” he said.

Meanwhile, government agencies are evaluating ways that they can use existing laws to regulate artificial intelligence. The Federal Trade Commission has opened an extensive probe into ChatGPT-maker OpenAI, sending the company a demand for documents about the data security practices of its product and instances in which it has made false statements.

Washington’s counterparts in Brussels have been moving more aggressively on AI regulation. The European Union is negotiating its AI Act, which policymakers expect to become law by the end of this year, though it likely will not be enforceable until two years after that. In the interim, European officials have been seeking similar voluntary commitments from tech companies through the “AI Pact,” a public pledge to begin preparing for the law.

Gerrit De Vynck contributed to this report.
