America’s Digital Defenses Are Failing—but AI Can Save Them
In November 1988, the Morris worm—an experimental computer program written by a curious graduate student—unintentionally crippled the early Internet and exposed for the first time the serious consequences of poorly designed software. Nearly 40 years later, the world still runs on fragile code riddled with the same kinds of flaws and defects. Amid frequent news reports about hacks and leaks, a key truth is often overlooked: the United States does not have a cybersecurity problem. It has a software quality problem. The multibillion-dollar cybersecurity industry largely exists to compensate for insecure software.
The impact of persistent weaknesses in U.S. software is playing out in real time. Since at least 2021, for instance, hackers connected to China's Ministry of State Security and People's Liberation Army have exploited the same types of flaws that the Morris worm feasted on decades ago. These groups—referred to as Salt Typhoon and Volt Typhoon—have taken advantage of unpatched systems, poorly secured routers, and devices built for connectivity rather than resilience to infiltrate telecommunications networks, transportation systems, and power utilities. And just this year, Russian Federal Security Service hackers exploited an unpatched flaw in networking devices to compromise thousands of routers and switches connected to U.S. infrastructure. As more institutions, from hospitals to ports, rely on software to function, unsafe code is a growing threat to the United States.
These vulnerabilities endure because software vendors face few incentives to prioritize security. It remains cheaper and faster to shift the costs of insecurity downstream to customers. And because much of the code that underpins critical infrastructure is decades old, rewriting it securely has long been too expensive and time-consuming to make business sense.
But new tools, including increasingly powerful artificial intelligence, are emerging that could fix these software problems across entire digital ecosystems. This could spell the end of cybersecurity as we currently know it—and make the United States much less vulnerable as a result. The window to take advantage of new technology is closing, however, as U.S. adversaries, too, are looking to use AI to enhance their cyberattack capabilities. Now is the time for U.S. government agencies, large companies, and investors to work together to fundamentally shift economic incentives and use AI to improve the United States' digital defenses. Cyberspace will never be completely safe. But the cybersecurity market as it currently exists does not have to be a permanent feature of the digital age. A better and more secure approach to software is within reach.
MISALIGNED MARKETS
In the popular narrative, hackers—whether they are individual rogue actors, state-sponsored groups, or teams backed by criminal syndicates—are mysterious and clever, deviously exploiting careless employees and misconfigured servers. But most intrusions do not succeed because attackers wield exotic cyberweapons. They succeed because widely deployed technology products ship with well-known and preventable defects.
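To make the pattern concrete, consider the buffer overflow, the same class of defect the Morris worm exploited in 1988 and one that still appears routinely in deployed products. The minimal C sketch below is purely illustrative rather than code from any real product: the unsafe version copies input into a fixed-size buffer without checking its length, and the fix is a single bounded call.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: a minimal sketch of the buffer-overflow class
     * of defect the Morris worm exploited in 1988, not code from any
     * real product. */

    /* Unsafe: strcpy() copies until it finds a terminating zero and
     * ignores the size of the destination. Input longer than 15
     * characters overwrites adjacent memory, which an attacker can use
     * to hijack the program. */
    void greet_unsafe(const char *input) {
        char buf[16];
        strcpy(buf, input);          /* no bounds check */
        printf("Hello, %s\n", buf);
    }

    /* Safer: snprintf() is told the destination size and truncates
     * rather than overflowing. */
    void greet_safe(const char *input) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", input);
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_unsafe("world");       /* fits, so nothing breaks here */
        greet_safe("a string much longer than sixteen bytes");
        return 0;
    }

Flaws of exactly this shape, catalogued for decades, are what the cybersecurity industry exists to contain after the fact.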
The core issue is economic, not technological. Most buyers have no practical way to judge whether the software they purchase is secure. They must take vendors at their word, which creates little incentive for the designers or sellers of software to invest in protections that customers cannot see or measure. As a result, software vendors compete on aspects that are more obvious to buyers: lower prices, getting their products to market first, and convenient functionalities such as one-click integrations with other systems or easy remote access. But focusing on these features often comes at the expense of adequate safeguards against cyberthreats. Market forces simply do not incentivize prioritizing security in the design process.
This has led to the rise of the cybersecurity aftermarket—a sprawling ecosystem of antivirus systems, detection capabilities, firewalls, and much more—which essentially provides bolt-on solutions to address software insecurities. And although the cybersecurity industry has evolved into an impressive community of talented innovators, its interventions are necessarily rearguard actions. Cybersecurity systems limit the damage of malware that should never have been able to spread, clean up breaches that should never have occurred, and fix flaws that should never have existed.
Software companies also deprioritize security in their product design because they are rarely held liable for security failures. In the United States, there is no enforceable baseline standard for what security protections software must have, nor are there penalties for insecure software, essentially making unsafe design a rational business choice. When catastrophic breaches occur, software companies create patches rather than redesign the product to be more secure. This is largely because the party that suffers is the customer. Until these companies are held liable and regulators enforce standards, exploitable code will remain the foundation of digital infrastructure. It is cheaper and faster for vendors to ship unsafe products and let customers and cybersecurity teams shoulder the burden of guarding their weak points.
These perverse incentives must be shifted. Prevention would be much better than a set of inconsistently applied and sometimes flimsy cures. U.S. cyberspace is porous, and only the creators and sellers of software can change outcomes at scale. An individual user cannot make encryption mandatory; a vendor can. A hospital cannot rewrite a commercial application to avoid data corruption; a vendor can. A city cannot secure the code that runs its water system; a vendor can. Responsibility must sit at the point of production, not of consumption.
NEW TECHNOLOGY, NEW POSSIBILITIES
For decades, even well-intentioned software companies could not justify efforts to create more secure products. It was both too costly and too challenging. But increasingly powerful artificial intelligence is changing this calculus. New technology holds the promise of cheaply and effectively producing new, safer code and fixing the weaknesses of old code, which would drastically reduce the cyber-risk to global digital infrastructure.
Artificial intelligence is already reshaping software engineering. Major technology firms report that AI systems generate roughly a quarter of their code, a share that could rise above 80 percent within the next five years. Relying on AI to fix software has some risks: because AI models are learning from decades of imperfect human code, they could reproduce the same vulnerabilities that plague current software. But these systems not only learn from existing code; they also learn from every known flaw and attempted fix. Trained purposefully on secure coding standards and continuously refined through feedback, AI can correct human errors, not perpetuate them. Over time, AI will produce more secure code than any human developer can.
AI systems could also be used to repair defects in widely used software. From 2023 to 2025, the Defense Advanced Research Projects Agency ran an AI Cyber Challenge to test whether AI systems could autonomously find and patch software flaws. The results were impressive. Leading models identified the majority of the vulnerabilities that the organizers had seeded in the code and even discovered previously unknown weaknesses—doing in minutes, and at a fraction of the cost, what takes expert teams days or weeks. Private firms such as Google, Meta, and Microsoft, which build software that Americans use every day, are taking similar steps, using AI systems to detect and fix vulnerabilities in their own products.
Most important, AI will help tackle the hardest challenge in securing software: aging code that supports much of the world's digital infrastructure. Major software tools, including word processors and email systems, and critical sectors, such as banking and transportation, all run on software written decades ago. The design choices made when this code was written have left deeply rooted weaknesses. Rewriting all this legacy code to be more secure has, until recently, been prohibitively expensive and risky—the digital equivalent of trying to rebuild an airplane midflight. But as AI becomes increasingly powerful, it offers the promise of millions of agents simultaneously reading, understanding, and transforming insecure code at scale. AI is thus an economic breakthrough as well as a technical one: it makes the painstaking work of rewriting the world's insecure software affordable.
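To give a sense of what such a transformation looks like, the deliberately simplified C sketch below, written for illustration rather than drawn from any real system, shows the character of the rewrite: legacy code that passes input directly to printf contains a classic format-string flaw, and the corrected version treats the input strictly as data while leaving behavior otherwise unchanged. Automated rewriting applies changes of this mechanical kind across millions of lines.

    #include <stdio.h>

    /* A deliberately simplified sketch of the kind of mechanical
     * rewrite an automated system can perform on legacy code; real
     * tools work across millions of lines, but the character of the
     * change is similar. */

    /* Before: a classic format-string flaw. If `msg` carries user
     * input, embedded specifiers such as %n can let an attacker read
     * or corrupt memory. */
    void log_message_legacy(const char *msg) {
        printf(msg);                 /* input used as the format string */
    }

    /* After: a fixed format string means the input is only ever
     * treated as data. Behavior on benign input is unchanged. */
    void log_message_rewritten(const char *msg) {
        printf("%s", msg);
    }

    int main(void) {
        log_message_legacy("service started\n");    /* harmless literal */
        log_message_rewritten("service started\n");
        return 0;
    }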
Critics will argue that AI will only weaken cybersecurity by arming attackers with faster, stealthier, and more adaptive tools. This risk is real: AI will enable adversaries to automate their offensive strategies at the same time as defensive systems automate detection and response. But the more profound impact of AI should be upstream, in prevention rather than reaction. By helping build software that eliminates the vulnerabilities that adversaries seek to exploit, AI can address the root cause of cyber insecurity.
The implications of AI for cyberdefense are transformative. Instead of purchasing an endless cascade of products to compensate for defective software, organizations could pay for software that is measurably more secure out of the box—and for AI-assisted tools that maintain its defenses automatically. Programmers will no longer write code that needs fixing later; AI assistants will help them create systems with security built in from the start. Before new products go live, automated checks could scan for weak spots—just as cars are crash-tested before reaching the road. Old, fragile systems will be continuously modernized, removing dangerous flaws that attackers exploit today. As this becomes the norm, the entire model of cybersecurity will change. Security will become a standard feature of digital life, not a costly add-on.
FROM CHAOS TO COHERENCE
But these changes are not guaranteed to happen. Ending the cybersecurity aftermarket by creating high-quality software at scale requires ambitious leaders willing to take bold action. Incremental tweaks won’t close the gap between today’s fragile, defect-ridden ecosystem and a future in which software is designed to be secure. Governments, technology vendors, customers, and investors must take steps to align incentives for producers and consumers, accelerate innovation, and make security a visible feature of software.
Most critically, the AI systems that will help secure software must themselves be built securely. AI models can be manipulated through corrupted training data; they can make unpredictable decisions that even their creators can’t fully explain; they may depend on software components sourced from untrustworthy suppliers; and they can introduce entirely new weaknesses that adversaries can exploit, such as the potential to trick an AI model into revealing sensitive data. Rolling out AI-enabled features as quickly as possible, without first ensuring they are secure, would only repeat the same mistakes that created today’s fragile digital ecosystem.
The White House AI Action Plan, released in July, acknowledges these challenges, calling for security, transparency, and accountability to be built into AI systems from the outset. Achieving these goals will require cooperation between the public and private sectors, including the establishment of shared testing environments to rigorously evaluate the safety of AI-enabled systems before they are deployed. It will also require mechanisms to verify the provenance of AI models (that is, who created, trained, and modified them) and to audit how models and their training data perform over time. Clear guardrails on the development, deployment, and use of AI systems should be designed to prevent abuse while preserving room for innovation. California's new AI accountability law, enacted in September, provides one model for how to establish transparency and risk-assessment obligations that could shape a coherent national approach.
At the same time, to shift market incentives, policymakers and industry leaders must work together to create clear, standardized benchmarks that make software products’ security features visible to buyers. Just as customers can evaluate cars through crash-test ratings, appliances through energy-efficiency labels, or food through nutrition labels, buyers should have a similar ability to assess how the software products that Americans rely on every day are built. Consumers should know whether basic protections such as secure authentication are turned on by default, how quickly security flaws are fixed, and whether products give customers the tools to detect and recover from intrusions.
The foundations of such efforts exist. In January 2025, the Biden administration launched the U.S. Cyber Trust Mark, a label to certify that Internet-connected devices such as smart home products meet standard cybersecurity criteria. Similar to how Energy Star labels verify products’ energy efficiency and encourage consumers to buy efficient appliances, the Cyber Trust Mark will help market forces reward companies that invest in security. But the program should be expanded. All software products, not just Internet-connected devices, should have transparent labels to enable a race toward security as the default.
Regulators also have a responsibility to ensure that the burden for software vulnerabilities falls on vendors, not buyers. Cybersecurity regulation, however, has largely evolved sector by sector. As a result, pipelines, railways, financial services, and communications systems are each governed by different standards, which are enforced by different agencies. The patchwork of overlapping requirements drives superficial compliance rather than genuine risk reduction. A better approach, as recommended by the Cyberspace Solarium Commission—a bipartisan panel of lawmakers, former officials, and industry leaders established by Congress in 2019—would be to focus on the software that underpins every sector, not the sector itself, by establishing a clear liability framework to hold software producers responsible when negligent design or development practices lead to security failures. Because software now forms the foundation of nearly every major institution, establishing such a framework at the level of software would realign economic incentives toward building safer products and shift accountability from end users to those best positioned to prevent harm: the makers of the code itself.
In addition to improving software liability, harmonizing remaining sector-specific cybersecurity rules is critical. Conflicting mandates leave companies navigating inconsistent demands from multiple regulators. A more effective model would consolidate leadership under the Office of the National Cyber Director, an entity created by Congress in 2021 to coordinate national cybersecurity policy, empowering a single body to drive strategy, determine priorities, and ensure policy coherence rather than regulatory chaos. Giving the ONCD the mandate to set the agenda—and the resources to enforce it—would make it the government's strongest driver of systemic software security.
The ONCD could also facilitate the adoption of more secure software by fixing the federal government's software procurement process. The U.S. federal government is the single largest buyer of software on the planet. Yet more than four years after the Department of Homeland Security submitted secure software standards for inclusion in the Federal Acquisition Regulation, the corpus of processes governing public procurement, there is still no finalized rule requiring vendors to attest to secure development practices. The FAR process—designed more for paperclips than patches—is too slow for a domain in which threats evolve by the minute. Federal procurers could take a lesson from the bank JPMorgan Chase, which recently issued an open letter to its software suppliers setting clear expectations that their products should prioritize security over rushing features to market. Early reports suggest that its vendors are responding by strengthening development practices and offering greater transparency—a demonstration of how purchasing power can drive accountability upstream.
THE FUTURE IS NOW
The consequences of America’s failure to solve its software quality problem are becoming more severe. Power grids, hospitals, pipelines, ports, and financial networks now run almost entirely on software, leaving them exposed to escalating risks of corruption and disruption. Companies and regulators can continue treating software insecurity as a fact of nature—reacting to breaches, layering on patches, and blaming users—or they can make security the default setting.
Despite growing awareness of the problem among both policymakers and businesses, progress toward effective solutions has been limited. Some agencies have advanced initiatives encouraging software companies to build security into their products, but there is still no binding national framework. The issue rarely ranks as a political priority—neither in Congress, which has yet to pass sweeping liability or design mandates, nor in the executive branch, which oscillates between rhetoric and restraint. Meanwhile, powerful industry lobbies and major technology firms continue to resist reforms that might raise costs or slow the race for feature-rich product releases. But the Trump administration’s continued focus on harnessing the federal government’s vast procurement power—and on expanding initiatives promoting more secure design for both traditional and AI-enabled software—offers a potential inflection point. If coupled with clear standards and enforcement mechanisms, these efforts could begin to transform security from an afterthought into a market expectation.
Making real progress is now possible—and is crucial to U.S. defense. The Pentagon spends billions each year on cybersecurity operations and employs tens of thousands of personnel to defend vulnerable software systems. Reducing preventable software flaws would free up resources and personnel to focus on deterring and disrupting adversaries, for instance by expanding offensive cyber-capabilities to impose costs on those who target U.S. critical infrastructure. That would shift the balance of power from constant defense to proactive deterrence.
These changes will also unleash the full potential of the cybersecurity sector, which is increasingly inseparable from AI itself. The future of cyber work lies not in reacting to yesterday’s breaches but in engineering trust into the fabric of digital life by securing the algorithms, data, and infrastructure that power the global economy. Human expertise and machine intelligence will work in concert to strengthen critical systems, safeguard innovation, and preserve the United States’ technological edge.
Cybersecurity should no longer mean a permanent defensive struggle. It should mean the deliberate design of a safer digital world. With the right policies, the right incentives, and the right application of AI, the United States can finally move from defending the past to securing the future. This is how cybersecurity truly ends: not with perfect protection but with systems resilient enough to withstand disruption.