Vanderbilt and Oak Ridge partner on AI research and development for national security

Vanderbilt University and Oak Ridge National Laboratory have announced a new partnership focused on artificial intelligence research to develop technologies for national security, as U.S. universities continue to invest more in machine learning research and education programs.

According to a news release, Vanderbilt and Oak Ridge will build on complementary research and development efforts to create “science-based” AI assurance methods that ensure AI-enabled systems used for national security can function in “challenging and contested environments.”

In addition, the partnership will test and evaluate the resilience and performance of AI systems, which can often be vulnerable to cyberattacks by adversaries. The partnership was announced at last week’s Tennessee Valley Corridor 2024 Summit in Nashville.

“We are excited to partner with Oak Ridge National Laboratory to ensure that AI-enabled programs are safe, accurate and reliable, when it has never been more imperative to do so,” Vanderbilt University Chancellor Daniel Diermeier said in a public statement. “This radical collaboration among our best researchers and one of the nation’s premier national laboratories will address these crucial challenges head-on. We look forward to the great work we will do together.”

Padma Raghavan, Vanderbilt’s vice provost for research and innovation and chief research officer, told Tennessee Firefly that the partnership's inaugural project will involve "training" AI models that will allow the U.S. Air Force to adopt and make use of AI-based technologies and autonomous vehicles on the battlefield.

Raghavan said that currently available AI models often “lack the required security, robustness and dependability for the U.S. military to confidently deploy AI-based technologies in many mission-relevant applications.” She added that experts from Vanderbilt’s Institute for Software Integrated Systems and ORNL’s Center for AI Security Research will work together to develop the training, testing and evaluation methods for the AI models that will allow the U.S. Air Force to fully utilize autonomous vehicles.

“While recent tech advances in AI illustrate its broad potential, real-world systems that integrate AI, sensors and software need to be both dependable and secure, especially when it comes to national security. Universities have a crucial role to play but advancing the science and technology and applying them to military scenarios require cross-sector collaboration,” she told Tennessee Firefly. “The Vanderbilt partnership with Oak Ridge National Lab (ORNL) will address some of our nation’s most pressing defense challenges by bringing together our complementary expertise.”

According to the announcement, Vanderbilt’s basic and applied research in emerging technologies and cybersecurity — particularly at the Vanderbilt Institute for Software Integrated Systems — has recently helped to create a foundation for AI assurance research. Furthermore, Oak Ridge recently established the Center for Artificial Intelligence Security Research, or CAISER, to address emerging AI threats, as well as to train and test large AI models.

“With ORNL’s unique expertise and capabilities in computing and AI security, we can train, test, analyze and harden AI models using massive datasets,” ORNL Director Stephen Streiffer said. “Working in close cooperation with Vanderbilt, I look forward to advancing the Defense Department’s deployment of AI-based systems for national defense.”

The announcement said Department of Defense officials believe that autonomous vehicles could be a “game-changer for the U.S. military.” It noted that the initial focus will be on autonomous vehicles like the AI-enabled X-62A VISTA that recently took Air Force Secretary Frank Kendall for a flight featuring simulated threats and combat maneuvers without human intervention.

“The growth in AI applications is breathtaking—most notably in the commercial marketplace, but increasingly in the national defense space as well. While all users of AI are concerned about security and trust of these systems, none is more concerned than the Department of Defense, which is actively developing processes to ensure their appropriate use,” Mark Linderman, chief scientist at the Air Force Research Laboratory Information Directorate, said in a statement.
