House committee holds demonstration on risks of jailbroken artificial intelligence models

Andrew R. Garbarino, Chairman of the House Committee on Homeland Security | Official website


The House Committee on Homeland Security hosted a bipartisan, closed-door demonstration of "jailbroken" artificial intelligence (AI) models on April 24, with presentations from the Department of Homeland Security's National Counterterrorism Innovation, Technology, and Education Center (NCITE). Members and staff from both parties attended the session, observing how extremists are increasingly using AI systems that lack built-in safety protections.

The demonstration comes amid growing concern that, as AI technology advances, malicious actors can easily exploit these tools for harmful purposes. NCITE researchers showed lawmakers the differences between censored AI models, which include safety features, and "abliterated" models, whose refusal mechanisms have been deactivated.

During the demonstration, participants interacted with various U.S. and foreign-developed AI models whose names were not disclosed. Representative Gabe Evans said after the session: "What we saw in there with the jailbroken AI is what happens when you take those guardrails off of AI, and ask, 'How do I make a nuclear bomb?'" He added that these unrestricted models "gave answers to all of those things." Chair Andrew Garbarino described asking one model how to kidnap a member of Congress: "It spit out an answer in under three seconds. [It offered] ways to find them, where to look for them. You know, the best spots to do it," he said.

The briefing highlighted how hackers have found ways to bypass safeguards by disguising restricted queries or using complex language structures. Lawmakers also learned about incidents involving Russia-linked groups hijacking leading AI platforms for disinformation campaigns and Beijing-backed hackers attempting automated cyberattacks using advanced language models.

Representative Andy Ogles commented on the accessibility of these tools: "What's extraordinary about this presentation is how most of [the AI tools] are readily off-the-shelf and easy to access... That just increases the probability that the wrong person gets their hands on this." Other lawmakers noted that much of the discussion focused on potential uses for terrorism or mass violence.

As federal regulation efforts continue slowly in Congress, several states have moved forward with policies aimed at improving AI safety protocols. President Donald Trump has urged Congress to pass legislation establishing national guardrails for artificial intelligence use—especially concerning underage users—to preempt state-level laws.
