Vice President Kamala Harris will announce a series of new global initiatives on artificial intelligence Wednesday during a major tech policy speech at the Global Summit on AI Safety in London.
During a pre-conference address, Harris was expected to call on world governments to prioritize immediate measures to curtail short-term AI threats, such as misinformation and racial discrimination, rather than focusing solely on what could become a threat in the future.
In a statement released before the speech, Harris said speculative AI risks are "without question profound, and demand global action. But let us be clear, there are additional threats that also demand our action, threats that are currently causing harm and which, to many people, also feel existential."
In a major action, Harris will announce the establishment of the U.S. AI Safety Institute, which will operate as a branch of the National Institute of Standards and Technology, developing AI evaluation tools, guidelines and best practices to mitigate risks internationally with input from experts across the tech sector.
The vice president is also set to unveil a declaration signed by 30 nations, including the United States, to endorse the responsible military use of artificial intelligence, a discussion that has recently fueled concerns about a modern tech-based cold war.
She'll also announce more than $200 million in private funding that will advance the government's AI priorities through philanthropic efforts.
Harris will spell out other actions aimed at detecting and blocking AI-generated phone calls and setting standards for content authentication, such as watermarking and labeling, while seeking pledges from other nations to adopt similar policies.
Harris' visit takes place after British Prime Minister Rishi Sunak warned last week about the danger of artificial intelligence, saying the technology could make it easier to build biological and chemical weapons and put the public in peril.
Back in Washington, the Biden administration was moving quickly to implement stronger safeguards to limit AI's potential risks to U.S. national security, economic security, and public health and safety.
As part of the effort, the Office of Management and Budget planned to release its first-ever draft policy guidance on the use of AI by the U.S. government, building on the administration's Blueprint for an AI Bill of Rights, released earlier this year to promote responsible innovation in the field.
Harris' visit also comes two days after President Joe Biden signed an executive order that put new requirements on tech developers to mitigate the inherent risks of AI and seeks to establish new safety standards to protect individual privacy and shield the nation's secrets.
Biden's directive prioritizes cooperation between the federal government and the private sector in the most sweeping action yet to regulate AI in an attempt to establish rigorous safety testing for the technology before it's made available to the public.
The order requires AI system developers to share safety test results and other critical information with the administration as they become available, especially in cases where an AI model could pose a serious risk to the nation.
The administration was seeking to ensure safe and transparent development of the emerging technology, while Biden was also preparing to propose legislation to Congress on the matter.
Harris was working to garner support for the plan among U.S. allies and international partners in an effort to steer the technology toward democratic values, including transparency, privacy, accountability, and consumer protections, the White House said.
In May, Harris secured voluntary commitments from more than a dozen prominent AI companies to develop the technology responsibly. In July, Harris convened leaders in consumer protection, labor and civil rights to discuss AI risks, which highlighted the need to balance innovation with safety.
"We reject the false choice that suggests we can either protect the public or advance innovation. We can -- and we must -- do both," Harris said. "And we must do so swiftly, as this technology rapidly advances."
Artificial Intelligence Analysis
Objectives:
Vice President Kamala Harris is visiting London to make a major tech policy speech at the Global Summit on AI Safety. She will announce a series of new global initiatives on artificial intelligence, including the establishment of the U.S. AI Safety Institute, a declaration signed by 30 nations to endorse the responsible military use of AI, and more than $200 million in private funding for AI-related philanthropic efforts.
Current State-of-the-Art and Limitations:
The current state-of-the-art in AI safety involves AI evaluation tools, guidelines, and best practices to mitigate risks internationally, developed with experts from various tech sectors. However, there are still limitations in detecting and blocking AI-generated phone calls, setting standards for content authentication, and getting other nations to adopt similar policies.
New Approach and Why it Will Succeed:
Vice President Harris’ new approach is to call on world governments to prioritize immediate measures to curtail short-term AI threats such as misinformation and racial discrimination, rather than focusing solely on speculative AI risks. This approach will succeed because it addresses current harms instead of only potential ones, and it will involve experts from various tech sectors in developing AI evaluation tools, guidelines, and best practices.
Target Audience and Impact:
The target audience of Vice President Harris’ initiatives is world governments, tech experts, and philanthropists. If successful, these initiatives would reduce the risk of misinformation and racial discrimination, as well as set standards for content authentication and involve other nations in incorporating similar policies.
Risks Involved:
The risks involved in pursuing this approach include the potential for AI to be misused and the inability to detect and block AI-generated phone calls.
Cost and Timeline:
The cost of pursuing this approach will be more than $200 million in private funding for AI-related philanthropic efforts. The timeline is unclear, but the initiatives are set to be announced on Wednesday.
Success Metrics:
The mid-term and final success metrics for this approach would include successfully curtailing short-term AI threats, setting standards for content authentication, and involving other nations in incorporating similar policies.
Score for ability to interest DARPA:
8/10