Information security is one of the most vital challenges facing businesses, governments, and casual web users alike as cyberattacks grow more sophisticated with each passing day. However, a new initiative by the Biden Administration might just kick cybersecurity into high gear – thanks to AI and a healthy dose of competitive hacking.
The program takes the form of a two-year-long competition called the “AI Cyber Challenge” (AIxCC), with the aim of using artificial intelligence to help safeguard the country’s most essential software.
“President Biden has been clear,” Arati Prabhakar, Director of the White House Office of Science and Technology Policy, said during a press briefing. “AI is the most powerful technology of our time, and we have to get it right for the American people.”
Spearheaded by the Defense Advanced Research Projects Agency (DARPA), the competition – announced today ahead of DEF CON 31 in Las Vegas, which runs from August 10 through August 13 – calls on participants nationwide to identify and remedy software vulnerabilities using the kind of powerful large language models (LLMs) that power OpenAI's ChatGPT and Google's Bard AI.
“This competition will be a clarion call for all kinds of creative people in organizations to bolster the security of critical software that American families and businesses and all of our society relies on,” Prabhakar said.
Well-established AI giants like Anthropic, Google, Microsoft, and OpenAI have joined the initiative and will contribute expertise and access to advanced AI hardware for competitors. The contest, with prizes nearing $20 million, seeks to generate novel solutions for fortifying the computer code that powers so much of our modern digital infrastructure.
“There’s no magic one shots that will secure the nation,” said Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technology. “Instead, defense always has to be one step ahead. We see the promise of AI in enabling defense to be one step ahead.”
How the contest will work
The AIxCC initiative is intended to find and fix software vulnerabilities in critical national infrastructure, like electric grids, transportation networks, and public utility and healthcare systems, and the competition promises multi-million dollar rewards for the most effective software security solutions using advanced AI technologies.
The contest will feature a preliminary round in Spring 2024, after which the top 20 teams advance to the semifinals at DEF CON 2024. The top five from this semifinal will then move on to the finals at DEF CON 2025, with the top three finishers securing significant cash rewards.
The Open Source Security Foundation (OpenSSF) will serve as an advisor to challenge participants, and its role includes ensuring that winning code is promptly applied to safeguard critical American software infrastructure.
AI security improvements will quickly make their way into everyone’s PC, but it’s not a guaranteed fix
While the competition’s focus right now is on very large, national security-adjacent networks and software systems (it’s why DARPA is involved, after all), that doesn’t mean the benefits of this competition will be limited to better protection for hospitals and the military.
Consider the internet, another DARPA-backed initiative: originally designed to help universities share research more easily, it unexpectedly grew into the all-encompassing system we all use today. In the same way, the work AIxCC competitors put into hardening critical network infrastructure against attacks should quickly spread through the entire information security ecosystem. That will in turn make the best VPNs and best antivirus software even more effective against emerging threats, many of which will use AI themselves to find vulnerabilities to exploit.
Given the multi-round, multi-year nature of the competition, we should expect to see the benefits of new AI discoveries filtering down to the broader public pretty quickly, though it won’t all happen overnight.
What’s more, most cybersecurity failures don’t come from so-called zero-day exploits – newly discovered vulnerabilities that no one knew existed until an attacker used them against a network or computer. More often, breaches happen because users fail to apply existing fixes for known problems, leaving themselves open to attack. In those cases, all the AI in the world can’t help you if you click on email links you shouldn’t be clicking on.
Still, improved security is always a positive, and if new AI models can help us do that, then it’s definitely something to celebrate.