Homeland Security’s AI safety board is a who’s who of tech CEOs

Microsoft CEO Satya Nadella came on stage during OpenAI CEO Sam Altman's keynote to talk about the future of their partnership. (Image: Tiernan Ray)

The US has enlisted some of the most prominent tech CEOs to help the Department of Homeland Security (DHS) safely deploy AI across critical infrastructure.

Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman, and Alphabet CEO Sundar Pichai are among the more than 20 people who will now sit on the Artificial Intelligence Safety and Security Board, the DHS announced on Friday. Anthropic CEO Dario Amodei, IBM CEO Arvind Krishna, and Nvidia CEO Jensen Huang will also serve on the board.

“The Board will develop recommendations to help critical infrastructure stakeholders, such as transportation service providers, pipeline and power grid operators, and internet service providers, more responsibly leverage AI technologies,” the department said in a statement. “It will also develop recommendations to prevent and prepare for AI-related disruptions to critical services that impact national or economic security, public health, or safety.”

Also: US, UK join forces on AI safety and testing AI models

Tech CEOs make up the vast majority of the 22 seats on the board, which also counts Delta Air Lines CEO Ed Bastian and Occidental Petroleum CEO Vicki Hollub among its members. Seattle Mayor Bruce Harrell and Maryland Governor Wes Moore also have seats, as do a handful of AI scholars.

The board appointments come at a critical time for AI deployment. With a never-ending supply of new AI technologies and a steady stream of eye-popping updates to existing AI tools, governments around the globe need to be prepared not only to take advantage of the technology, but also to address how it could be used against them.

Also: Google survey: 63% of IT and security pros believe AI will improve corporate cybersecurity

The US government has been especially bullish on driving AI understanding and innovation, most recently through agreements with the Japanese government and companies in both countries to develop new AI technologies. In a sign that it is also concerned with AI security, the Biden Administration earlier this year announced that all federal agencies must have a plan in place to ensure their use of AI is secure, fair, and equitable. Agencies that fail to deliver such a policy by December 2024 could be barred from using AI.

The Artificial Intelligence Safety and Security Board takes the US government's efforts one step further by enlisting help to strengthen and safeguard critical infrastructure. Indeed, the US government has warned for years that foreign adversaries could try to target critical infrastructure. Malicious actors can easily turn AI tools against the US, so the government wants help from the companies that develop those tools to mitigate the risks.

“Artificial Intelligence is a transformative technology that can advance our national interests in unprecedented ways. At the same time, it presents real risks — risks that we can mitigate by adopting best practices and taking other studied, concrete actions,” DHS Secretary Alejandro Mayorkas said in a statement. “I am grateful that such accomplished leaders are dedicating their time and expertise to the Board to help ensure our nation’s critical infrastructure — the vital services upon which Americans rely every day — effectively guards against the risks and realizes the enormous potential of this transformative technology.”

Joining the board is also useful to the companies themselves. Not only can they use their positions to promote their products and broader government investment in AI, but they also gain a seat at the table with a government agency that clearly needs their help. We'll be watching to see exactly how they use their new roles.
