White House Takes Important Step to Accelerate AI Adoption

The new White House Executive Order on artificial intelligence is the latest recognition that AI is the most transformative technology of our time, like the steam engine, electricity, and the internet before it.

My company’s work in recent years with federal agencies, cloud service providers, partners, and researchers has helped us appreciate the enormous potential of this technology. It holds the promise to save lives through improved healthcare and disaster relief, strengthen our cyber defenses, accelerate scientific breakthroughs, and cut billions in maintenance costs and waste.

We know AI will have a profound impact, because many companies have already adopted it and consumers are already benefiting from it. A McKinsey global survey of 2,000 companies found that nearly half have incorporated at least one AI capability in their business processes. Nearly 80 percent of those adopters say they derived significant or moderate value from doing so.

After more than 30 years working in the public sector, I’ve never seen government leaders more motivated to adopt a new technology. I’d like to share my perspective on why the Executive Order is so important.

AI is the Future, and the Future is Now

It’s hard to overstate AI’s importance: It’s the biggest commercial opportunity in the world right now, according to a recent PwC report. It could contribute up to $15 trillion to the global economy by 2030 -- more than the current output of China and India combined. The World Economic Forum believes it will add almost 60 million net new jobs by 2022.

The Executive Order directs the federal government to prioritize investment in AI research, make more data sets available to researchers, and prepare our workforce with STEM education and data science training. The EO makes it clear that the government must take a leadership role in AI for the safety, security and prosperity of our citizens.

Here are just a few examples where AI could have a profound impact in the public sector:

  • Healthcare. Nearly two million Americans die each year from disease. AI offers the potential to detect and diagnose illness sooner and help doctors prescribe more personalized treatments. My company, NVIDIA, is working with doctors and research teams at Stanford University, the Mayo Clinic, the National Institutes of Health, Massachusetts General Hospital, and healthcare institutions around the world to apply AI to advance medical image analysis and improve diagnosis and treatments. Such work could have huge implications for patients at the VA and hospitals around the country.
  • Cybersecurity. The 2019 National Intelligence Strategy states that cyber attacks “will pose increasing threats to U.S. security, including critical infrastructure, public health and safety, economic prosperity, and stability.” Some intelligence leaders believe cybersecurity threats are on par with dangers like terrorism and weapons of mass destruction. We’re working with Booz Allen Hamilton to develop faster AI-powered cybersecurity systems and train federal employees in AI and machine learning to thwart such threats in real time.
  • Transportation. Approximately 40,000 Americans die each year in traffic accidents. Derek Kan, the Department of Transportation’s Under Secretary for Policy, said that the DOT has access to vast amounts of data that could help reduce fatalities. Autonomous vehicles provide enormous amounts of information on roadway and operating conditions, which can help transportation experts assess and reduce crash risks. The agency, in its recent Automated Vehicles 3.0 report, wrote, “The U.S. DOT sees a bright future for automation technology and great potential for transforming our surface transportation system for the better, toward a future with enhanced safety, mobility, and economic competitiveness across all transportation modes.” NVIDIA is working with hundreds of partners to make safe self-driving vehicles a reality, building high-performance, energy-efficient systems for inside the vehicle and in the data center.
  • Disaster Relief. Natural disasters such as hurricanes and wildfires cost the U.S. a record $306 billion in 2017. In the critical hours and days following a disaster, real-time intelligence for first responders can save lives and ease suffering. AI has incredible potential to help: a NASA engineer developed a neural network that was nearly 98 percent accurate in detecting wildfires using MODIS, a satellite-based imaging system. CrowdAI applied machine learning to DigitalGlobe satellite imagery to map the extent of property damage from the deadly 2018 Camp Fire in Northern California. Such an approach can help residents, FEMA, and insurance companies quickly assess the damage. NVIDIA is working with Carnegie Mellon University and Lockheed Martin to apply AI to coordinate and accelerate humanitarian assistance and disaster relief.
  • Defense platform sustainment costs. Maintenance costs are a huge challenge for the Defense Department, typically accounting for 50 percent or more of a major platform’s total lifecycle cost. Annual sustainment costs for the Army, Air Force, and Navy are about $160 billion, according to a 2018 report from the Office of the Under Secretary of Defense. AI could help lower these costs by predicting maintenance issues, but the challenge is to glean insights from terabytes of data, such as drone video and ultrasound scans. Baker Hughes, a GE company, is using GPUs to develop predictive maintenance capabilities for oil rigs one million times faster than current systems. Such systems could also be applied to DOD platforms such as bases, ships, and aircraft.
  • Waste, fraud, and abuse. Federal agencies made an estimated $141 billion in improper payments in fiscal 2017, according to a recent GAO report. Companies like Capital One are applying AI to fraud prevention with tools such as interactive alerts, easy fraud reporting, and the ability to lock cards in real time. PayPal used AI to cut fraud in half, saving more than $2 billion each year. Similar initiatives can be applied to reduce government fraud and waste.

Such initiatives won’t be easy. Every agency and organization faces challenges in harnessing and implementing AI. That’s why it’s so important that the government play a leadership role to facilitate research, development, adoption, and training. Government can make an important contribution in at least three areas.

First, data is key to AI. Suzette Kent, Federal CIO, speaking in October at the NVIDIA GPU Technology Conference in Washington, said that the U.S. has some of the most valuable data in the world. She said the U.S. must make the right investments in collecting, curating and making data sets available so that federal agencies can harness critical insights. Suzette’s leadership is clearly having an impact here.

Second, training is vital. The government employs thousands of engineers, supported by tens of thousands more at government contractors. Many of them have access to valuable data but are not able to glean insights from it. The government should take steps to train its engineers in deep learning and data science. In addition, the U.S. needs to train the next generation by encouraging an aggressive STEM education program. Today’s high school and college students need access to data science and machine learning training now.

Third, we need to improve our computing infrastructure. Teaching a computer to make predictions from data requires massive compute horsepower. We need to modernize federal, university, and research systems to give engineers and data scientists more powerful tools to develop and implement AI capabilities. The government needs to deploy a robust hardware infrastructure that will enable agencies to support this Executive Order. These systems can be either in the cloud or on premises, and they are already available from OEMs like IBM, Dell, Hewlett Packard Enterprise, and others.

Fueling the AI Revolution

Since researchers began using NVIDIA GPUs to train AI models eight years ago, our technology has been essential in fueling today’s AI revolution. Late last year, a Department of Energy supercomputer broke an AI record -- achieving a billion billion operations per second -- using 27,648 NVIDIA GPUs for research on climate change.

In addition to our work with the DOE, we work with dozens of federal agencies, including DARPA, DOT, NASA, NIH, NIST, NOAA, and NSF, as well as with government contractors and university research teams, to help them understand, adopt, and deploy GPU computing systems. We’ve also been providing technical input to governmental research organizations on how to advance computing architecture in the post-Moore’s Law era.

Given our vantage at the nexus of AI and government, we know that harnessing this transformative technology will be critical for our health, security, and prosperity. We look forward to helping agencies implement the Executive Order to improve services and save taxpayer money.

Greg Estes

VP Corporate Marketing and Developer Programs at NVIDIA

NVIDIA’s news yesterday about partnering with #Microsoft, #RedHat, and #Ericsson to bring supercomputing capabilities to the edge will also be germane to the public sector, and GTC-DC will be a great opportunity to share these important new collaborations.

Tim Hartman

Chief Executive Officer at GovExec

Thank you for leading this important discussion on the AI opportunity for government, Anthony Robbins. NVIDIA is doing great work to develop the capabilities our country will need in the digital era.

Anthony - Great article that shows #Nvidia’s position on AI. It will be an exciting #GTC next month with #Raytheon, #Lockheed, #GeneralDynamics, #NASA, #DOE, #HHS, #DOD, and so many others.

Tony, good article. Our OpenPOWER systems partnership is going strong across the U.S., and people are already scaling many AI applications on our massive AI supercomputer. A great example is the Exascale Deep Learning for Climate Analytics work a Berkeley Lab team completed on Summit: extracting pixel-level masks of extreme weather patterns using a variant of the DeepLabv3+ deep image segmentation network implemented in TensorFlow. Harnessing the performance of the TensorFlow back end and the improved data feeding mechanisms of the tf.data API, they achieved up to 40 teraflops on a single NVIDIA V100 GPU. By optimizing the Horovod distributed training framework, the training algorithm, and the data feeding pipeline executed asynchronously on the IBM POWER9 host processors, they were able to scale the distributed training efficiently to the full Summit HPC system. The code delivered a sustained throughput of 325.8 PF/s and a parallel efficiency of 90.7% in single precision on 4,560 Summit nodes, i.e. 27,360 V100 GPUs. By taking advantage of the FP16 Tensor Cores, a half-precision version of the DeepLabv3+ network achieved peak and sustained throughputs of 1.13 EF/s and 999.0 PF/s, respectively.
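For readers who want to see how the pieces described in that comment fit together, here is a minimal sketch of the pattern: a TensorFlow segmentation model fed through the tf.data API, scaled out with Horovod data parallelism, and run in mixed precision to use the V100 Tensor Cores. It is only an illustration under stated assumptions, not the actual climate-analytics code: the tiny encoder-decoder network is a stand-in for DeepLabv3+, the data is synthetic, and the script name train_sketch.py is hypothetical. It would be launched with something like horovodrun -np 4 python train_sketch.py, one process per GPU.

```python
# Minimal sketch (illustrative only): TensorFlow + tf.data + Horovod data-parallel
# training in mixed precision. The tiny encoder-decoder below is a stand-in for
# DeepLabv3+, and the "satellite" data is synthetic random noise.
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker process to one local GPU.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# FP16 compute with FP32 master weights, so convolutions can hit the Tensor Cores.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

# Synthetic image patches and per-pixel binary masks (placeholder for real data).
images = np.random.rand(256, 128, 128, 3).astype('float32')
masks = np.random.randint(0, 2, size=(256, 128, 128, 1)).astype('float32')

dataset = (tf.data.Dataset.from_tensor_slices((images, masks))
           .shard(hvd.size(), hvd.rank())   # each rank trains on its own slice
           .shuffle(256)
           .batch(16)
           .prefetch(tf.data.AUTOTUNE))     # overlap host-side feeding with GPU compute

# Toy encoder-decoder segmentation network (stand-in for DeepLabv3+).
inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(inputs)
x = tf.keras.layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)
x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)
x = tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding='same', activation='relu')(x)
outputs = tf.keras.layers.Conv2D(1, 1, activation='sigmoid', dtype='float32')(x)
model = tf.keras.Model(inputs, outputs)

# Scale the learning rate with the number of workers, then let Horovod average
# gradients across all ranks on every step.
optimizer = hvd.DistributedOptimizer(
    tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size(), momentum=0.9))
model.compile(loss='binary_crossentropy', optimizer=optimizer)

model.fit(
    dataset,
    epochs=2,
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],  # sync initial weights
    verbose=1 if hvd.rank() == 0 else 0)  # only rank 0 prints progress
```

Scaling the learning rate with the number of workers and broadcasting the initial weights from rank 0 are the two standard pieces that keep this kind of data-parallel training consistent as it scales out.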
