You actually can just go build defensive tech
Building defensive tech is a great opportunity to reduce AI risk when there is little political appetite to pass meaningful AI policy.
The vibes in the AI policy world right now feel different than they did a few years ago.
In 2023, it was possible to do things like hold an “AI safety summit” and get important people to sign statements acknowledging that future AI systems might cause catastrophic harm.
2025 is different.
Despite extremely capable AI now seeming far closer than it did a few years ago, the recent AI Action Summit merely resulted in a mealy-mouthed statement that said nothing about catastrophic risks. Pundits have noticed, too: Anton Leicht writes that “the 2023 playbook for safety policy is obsolete”. Tyler Cowen famously opined that the AI safety movement is dead (though of course, he notes that “the opportunity to make AI more safe is only just beginning”).
It would be a mistake to underestimate how suddenly political appetite for safety measures could shift if, say, an AI was caught doing something sketchy in the wild. But in any case, there are still a ton of productive, safety-enhancing things to do even when appetite for enacting new policy is low.
One category that stands out to me in particular is that of accelerating defensive technologies. Call this “hardening the broader world”, or “d/acc”, or “def/acc”, or “resilience and adaptation”, or “differential technological development”, or… You get the point.
Why a bunch of AI policy people are into defensive technology
A number of AI policy people — myself included, if you haven’t been able to tell already — are excited about defensive technologies. One reason is that such technologies intervene directly on the channels through which AIs (or humans using AIs) might cause a bunch of damage, rather than on the AI systems themselves.
Biology is an example of a channel that many AI policy people are worried about. If a frontier AI system is capable of walking someone with basic biology skills through the A to Z of creating a biological weapon, we had better hope that there are a number of other defensive measures that stop this from actually happening in practice. By the time some frontier AI company has created a system with this capability, Pandora’s box has been opened; it’s only a matter of time, possibly a short one, before someone trains a similarly capable system and releases it on the internet.

I think there’s a substantial chance that a system capable of walking someone through how to make a bioweapon (meaningfully beyond what they’d be able to do with other tools like search engines) will be developed in 2025. If I’m right, then we should really hit the gas pedal on building defensive technologies such as next-gen personal protective equipment and germicidal UV lights.
Examples of defensive technologies
Here are some more examples of defensive technologies, aimed at other threats that advanced AI may soon pose to society:
Proof-of-identity technologies: There might be a bunch of AI agents running around the internet soon. Given this, we’d benefit from technologies that make it easier to reliably distinguish whether some online agent is human or AI (for instance, “IDs for AI agents”). Even if they don’t work in real time, such technologies make it easier to conduct digital “forensics” so we can tell who is responsible if something goes wrong. For instance, if an AI goes rogue and manages to order some dangerous biological materials, it would be great if we could quickly tell who trained and deployed this system.
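To make this concrete, here is a rough sketch of what that kind of attribution could look like: each deployed agent holds a signing key that its deployer registers somewhere, and the agent signs its outbound actions so that investigators can later work out which agent (and thus which deployer) was responsible. The registry, agent names, and actions below are illustrative assumptions on my part, not a description of any existing standard.

```python
# Illustrative sketch of "IDs for AI agents": agents sign their actions with a
# registered key so that records can later be attributed during forensics.
import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

REGISTRY = {}  # hypothetical registry: agent_id -> public key, set at deployment


def register_agent(agent_id: str) -> Ed25519PrivateKey:
    """Deployer generates a key pair and registers the public half."""
    key = Ed25519PrivateKey.generate()
    REGISTRY[agent_id] = key.public_key()
    return key


def sign_action(agent_id: str, key: Ed25519PrivateKey, action: dict) -> dict:
    """Agent signs a canonical record of what it is about to do."""
    payload = json.dumps(
        {"agent": agent_id, "timestamp": time.time(), "action": action},
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": key.sign(payload)}


def attribute(record: dict) -> str:
    """Forensics: check which registered agent produced this record."""
    for agent_id, public_key in REGISTRY.items():
        try:
            public_key.verify(record["signature"], record["payload"])
            return agent_id
        except InvalidSignature:
            continue
    return "unknown"


key = register_agent("example-lab/agent-7")
record = sign_action("example-lab/agent-7", key, {"order": "synthesis request #123"})
print(attribute(record))  # -> "example-lab/agent-7"
```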
Cybersecurity technologies: Nothing groundbreaking here — cyber systems that are harder to hack would be nice for a variety of reasons. Tools like automatic software verification/bug-finding would be helpful for making code more secure, in turn making it harder for AI agents to seize control of digital infrastructure, exfiltrate their weights, or succeed at large-scale cyberattacks.
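For a toy flavor of the bug-finding side of this, here is a property-based test (using the Hypothesis library) that automatically searches for inputs violating a stated invariant. The buggy helper is made up for illustration; AI-assisted verification tools would aim to do this kind of search at far greater scale and depth.

```python
# Toy automated bug-finding: Hypothesis searches for inputs that break an
# invariant we care about (here: truncated output should still be valid UTF-8).
from hypothesis import given, strategies as st


def truncate_utf8(data: bytes, limit: int) -> bytes:
    """Buggy helper: naive byte truncation can split a multi-byte character."""
    return data[:limit]


@given(st.text(), st.integers(min_value=0, max_value=32))
def test_truncated_output_is_still_valid_utf8(s, limit):
    truncated = truncate_utf8(s.encode("utf-8"), limit)
    truncated.decode("utf-8")  # Hypothesis quickly finds inputs where this raises
```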
“AI for epistemics” technologies: Rapid AI progress will force decision makers to, well, make decisions in light of significant uncertainty and time pressure. Amongst the chaos, we’d benefit a lot from tools that enhance human judgment. Examples include AI forecasting systems that predict future AI progress (and its implications), AI research assistants that find and synthesize evidence from a variety of sources, and verification systems that help identify when other AI systems (or humans) are providing misleading information.
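One tiny building block in this space is forecast aggregation: pooling probability estimates from several models (or people) using the geometric mean of odds, a standard aggregation rule. The numbers below are made-up placeholders, and real “AI for epistemics” tools would obviously go far beyond this.

```python
# Illustrative forecast pooling via the geometric mean of odds.
import math


def pool_forecasts(probs: list[float]) -> float:
    """Aggregate probabilities by taking the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probs]
    pooled_odds = math.prod(odds) ** (1 / len(odds))
    return pooled_odds / (1 + pooled_odds)


# e.g. three systems' estimates of some milestone being hit within a year
print(round(pool_forecasts([0.30, 0.55, 0.45]), 3))
```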
Cooperation technologies: Better tools for global cooperation could help us navigate the rapid technological and geopolitical changes AI might trigger. These include infrastructure for executing assurance contracts, AI systems trained to help social groups identify common ground during deliberation, or negotiation systems where each party deploys an AI delegate to bargain on their behalf.
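As a toy model of the assurance-contract idea: parties pledge conditionally, and the pledges only bind if total commitments reach a threshold; otherwise everyone is released. The sketch below captures the coordination logic only, and isn’t meant to describe any particular platform or protocol.

```python
# Toy assurance contract: pledges bind only if the collective goal is reached.
from dataclasses import dataclass, field


@dataclass
class AssuranceContract:
    goal: float                                  # commitment needed for the deal to bind
    pledges: dict = field(default_factory=dict)  # party -> amount pledged

    def pledge(self, party: str, amount: float) -> None:
        self.pledges[party] = self.pledges.get(party, 0.0) + amount

    def settle(self) -> dict:
        """Bind all pledges if the goal is met; otherwise refund everyone."""
        if sum(self.pledges.values()) >= self.goal:
            return {"bound": True, "contributions": dict(self.pledges)}
        return {"bound": False, "contributions": {}}


contract = AssuranceContract(goal=100.0)
contract.pledge("party_a", 60.0)
contract.pledge("party_b", 50.0)
print(contract.settle())  # the goal is met, so the pledges bind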
Few people dislike defensive technologies
The above approaches might be less leveraged than well-crafted policy interventions that tackle catastrophic risks head-on.
However, they’re probably far more tractable than attempts to pass policy when political appetite is minimal. I’m unaware of any powerful interest groups that vociferously oppose efforts to build better biosurveillance tools, for instance (some people might even want to give you money to build these technologies). On the other hand, I’m aware of a number of extraordinarily powerful interest groups that will fight tooth and nail to block efforts to pass meaningful AI policy.
The upshot is that defensive technologies can often be developed without requiring consensus from a bunch of competing stakeholders: for better or for worse, you really can just go build stuff! And gosh, who doesn’t like building?
To repeat myself: I don’t really buy that the anti-safety vibes are more than a temporary thing. Vibes are transitory; they will shift again, and again, and again, from now through the intelligence explosion. Political appetite will surely grow as more people start to “feel the AGI”, and perhaps quite rapidly.
Until then though, building defensive technologies — and perhaps more importantly, building a large coalition of excited stakeholders to rally behind the foundational idea — seems like a solid bet for a number of people to make.
Also worth noting: there is an open funding round for people trying to make hardware-enabled mechanisms (HEMs) for AI verification happen: https://www.longview.org/request-for-proposals-on-hardware-enabled-mechanisms-hems-for-ai-verification/
I'd be excited to see more work done on this.