The Second Amendment and the Politics of Dangerous Tools
Why danger alone is not a sufficient argument for monopoly over powerful technologies.
Every time a genuinely powerful technology shows up, the discourse follows the same script. Voice cloning enables fraud → ban it. Crypto enables scams → ban it. Open-source AI can be misused → restrict it at the model layer. We look at the worst-case use of a new capability and then reason backward from there to prohibition.
I get why. Sometimes the worst case is really bad. But I keep noticing that the underlying logic has shifted in a way that people don't fully acknowledge. We've started treating danger itself as evidence that ordinary people shouldn't be trusted with a capability. And I think that instinct is worth questioning more today than ever before.
The American constitutional tradition does not start from the premise that dangerous capabilities should be monopolized by institutions because they're dangerous. It starts from a much harder premise: that free people will sometimes have access to things that can be misused, and the government's job is to govern use without destroying legitimate value.
The Second Amendment is the clearest expression of that idea. And probably the most uncomfortable one.
I want to be precise about what I mean. I'm not making a legal argument. The Second Amendment doesn't literally govern software, and "AI is basically a gun" is not a serious position. Guns, encryption, crypto, and AI each carry different risk profiles, different externalities, and different social uses.
But the Second Amendment isn't just a law. It's a philosophical stance. It encodes a political judgment about the relationship between citizens and power that matters way beyond gun policy: a free society doesn't respond to every dangerous capability by taking it away from citizens. Sometimes you decide the capability is too bound up with autonomy and distributed power to hand exclusively to the state.
Firearms are dangerous by design, and nobody can honestly argue otherwise. They are used for self-defense, sport, and deterrence, but also for violence. And the American settlement has been, for centuries, that widespread possession is preferable to a world where only the government has meaningful force. You can disagree with that bargain. But it reflects a constitutional instinct that liberty sometimes means living with citizens who have access to dangerous tools.
I think that instinct should at least inform how we think about modern tech.
Take encryption. In the 1990s, the US government literally classified strong encryption as a munition. The Crypto Wars. Phil Zimmermann got investigated by the FBI for publishing PGP. The logic was: bad actors will use this to hide from law enforcement, so civilians shouldn't have it, or should only have weakened versions of it.
That concern was not wrong. Criminals do benefit from encryption. Terrorists, child predators, hostile states: all of them use it.
But so does literally everyone else. Encryption now protects every bank transaction, every software update, every private message, every company's source code, every dissident's communications. The entire operational integrity of the internet depends on it. If we'd responded to encryption by banning it, or worse, by mandating backdoors, we wouldn't be safer. We'd just be more brittle, more surveilled, and more dependent on institutions we have no reason to fully trust.
The right answer was not "pretend there's no risk." It was to absorb the risk and govern around it. Accept that bad actors would exploit the tool, and build enforcement mechanisms that didn't require destroying the tool for everyone else.
Same thing applies, in different ways, to crypto and open source AI. Both are primarily associated in public debate with their worst users. Crypto gets reduced to fraud, scams, and sanctions evasion. AI gets reduced to deepfakes and zero-day exploitation. Those dangers exist, but the leap from "real danger" to "therefore broad prohibition" is exactly the move that deserves more skepticism.
Here's the thing I think people miss: a society that only permits safe tools will eventually only permit tools whose use can be centrally monitored, centrally revoked, or centrally approved. This isn't hypothetical; just look at Europe. The EU AI Act creates a licensing regime where the default is prohibition until you prove compliance. The bureaucratic apparatus required to demonstrate that your AI system is safe enough to exist is so extensive that it functionally guarantees only large, well-lawyered incumbents can play. Which looks less like safety regulation and more like the EU's actual comparative advantage: generating compliance moats for established firms while calling it consumer protection.
None of this means every tool should ship without constraints. That's a straw man, and I'm not interested in it. It's perfectly reasonable to impose liability rules, access controls, auditing requirements, identity verification, and export restrictions. The real question is where you put the burden of proof.
Do you start from the assumption that citizens can use powerful tools unless there's a strong case for restricting them? Or do you start from the assumption that institutions should control powerful tools unless citizens can prove they deserve access?
The Second Amendment matters to this debate not because it resolves questions about AI or crypto. It matters because it reminds us that the American tradition has never been purely safety-maxing. It has frequently accepted risk as the price of dispersing power. It has often preferred a world where citizens have consequential capabilities over one where those capabilities are concentrated in governments, banks, or large firms.
When a new technology shows up, we should be asking better questions than "what's the worst thing someone could do with this?" We should ask: what legitimate freedom does this enable? Who gains power if access is restricted? Can you punish misuse without banning the tool? What dependency are we creating if only big institutions are allowed to use it?
The deepest lesson of the Second Amendment isn't that every dangerous tool is sacred. It's that danger alone is not a sufficient argument for monopoly. A republic has to decide, again and again, whether it trusts its citizens with power.