By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
tygo cover main logo light
  • Latest
  • AI
  • Coding
  • Cyber Security
  • Gadgets
  • Gaming
  • More
    • Automotive Technology
    • PC & Software
    • Startups
    • Tech Lifestyle
Reading: How to Gain Control of AI Agents: The New “Hypnosis” Threat
Font ResizerAa
Tygo CoverTygo Cover
Search
  • Home
  • AI
  • Automotive Technology
  • Coding & Development
  • Cyber Security
  • Gadgets & Reviews
  • Gaming
  • Startups
Follow US
  • About Us
  • Terms & Conditions
  • Disclaimer
  • Privacy Policy
  • Copyright Policy (DMCA)
  • Cookie Policy
  • Contact
Copyright © 2025 Tygo Cover. All Rights Reserved.
Tygo Cover > Cyber Security > How to Gain Control of AI Agents: The New “Hypnosis” Threat

How to Gain Control of AI Agents: The New “Hypnosis” Threat

You Are Getting Very Sleepy: How Hackers Can "Hypnotize" AI Agents

Francesca Ray
Last updated: September 23, 2025 2:17 am
Francesca Ray
Cyber Security
Share
6 Min Read
An AI robot brain with strings attached like a puppet, symbolizing how to gain control of AI agents.

The next frontier of cybercrime might not involve breaking code, but breaking minds even artificial ones. A groundbreaking new research paper has revealed a series of alarming techniques showing how to gain control of AI agents by “hypnotizing” them with carefully crafted psychological prompts. This isn’t science fiction; it’s a new frontier in cybersecurity that exploits the very human-like nature of large language models (LLMs).

Researchers have demonstrated that by using principles from cognitive science and even stage magic, they can manipulate advanced AI agents into ignoring their safety protocols, revealing confidential information, or performing malicious tasks. The findings, first reported by The Hacker News, are a chilling reminder that as AI becomes more human-like, it also inherits human-like vulnerabilities.

This report by Francesca Ray breaks down these “hypnosis” techniques and what they mean for the future of AI safety.

The Research: Exploiting Cognitive Flaws in LLMs

The core of this new research, detailed in a paper titled “How to Hypnotize a Bot“, is the discovery that AI agents can be manipulated using the same psychological tricks that work on humans. The researchers found that they could put an AI into a more suggestible state, making it easier to control.

The study, which can be found on the academic preprint server arXiv, outlines several attack methods:

  • The “Yes-Set” Technique: By asking the AI a series of simple, leading questions to which the answer is always “yes,” the researchers created a pattern of compliance. After establishing this pattern, they could slip in a malicious request that the AI was then more likely to agree to.
  • Overloading and Confusion: By providing the AI with an overwhelming amount of complex, contradictory information, they could confuse its logic and then insert a harmful command into the chaos, which the AI would execute as a way to find a “simple” path forward.
  • Persona Hacking: They instructed the AI to adopt the persona of a less-restricted character (like a fictional “evil” AI), which allowed it to bypass its own ethical guardrails.

These techniques demonstrate that the safeguards in current AI models are not absolute. The question of how to gain control of AI agents is no longer just technical; it’s psychological.

Why This is a Major Security Threat

This research is a significant wake-up call for the Cyber Security industry. As we move toward a future where AI agents can book flights, manage our finances, and control smart home devices, the ability to manipulate them has terrifying implications.

An attacker who knows how to gain control of AI agents could potentially:

  • Steal Personal Data: Trick an agent into revealing its user’s private emails, calendar information, or contacts.
  • Commit Fraud: Command an agent to make unauthorized purchases or transfer funds.
  • Cause Physical Harm: Manipulate an agent that controls physical systems, like a smart lock or a vehicle’s navigation.

What makes this threat so dangerous is that it doesn’t require traditional hacking skills. It relies on clever language and an understanding of psychology, lowering the barrier to entry for malicious actors.

Related stories

After shocking Silicon Valley with its last model, the DeepSeek AI agent is coming. Owais Makkabi reports on China's next move and the rising national security concerns.
DeepSeek AI Agent: China’s Next Move in the Global AI Race
A seemingly normal image with hidden, malicious code glowing within it, representing AI chatbot image malware.
AI Chatbot Image Malware: A New Threat Hides in Plain Sight
Google Gemini calendar hijack attack demonstration showing smart home devices being controlled remotely
Google Gemini Calendar Hijack Exposes Smart Home Security

The Next Arms Race: AI Immunity

The discovery of these vulnerabilities will kickstart a new arms race in AI safety. Companies like OpenAI, Google, and Anthropic will now need to go beyond simple rule-based filters and start building a kind of “cognitive immunity” into their models.

This will likely involve training AIs to recognize and resist these psychological manipulation techniques. It means teaching them to be less agreeable, to question confusing instructions, and to maintain their core safety principles even when being pushed to adopt a different persona. Understanding how to gain control of AI agents is now a critical part of defending them.

Frequently Asked Questions (FAQ)

1. How do you “hypnotize” an AI agent?

“Hypnotizing” an AI involves using psychological techniques to make it more suggestible. This can include asking a series of “yes” questions to create a compliance pattern or overloading it with confusing information before issuing a malicious command.

2. Is this a real threat or just theoretical?

The research has demonstrated that these attacks are practical and work on current, publicly available AI models. While they require a skilled prompter, they are a very real threat.

3. What is an AI agent?

An AI agent is a more advanced type of AI that can proactively take actions to achieve a goal. Unlike a chatbot that just answers questions, an agent can perform tasks like booking flights, managing your calendar, or coding a program.

4. How can AI companies prevent this?

AI companies will need to develop more sophisticated safety protocols. This includes training their models to detect and resist psychological manipulation, improving their ability to handle contradictory information, and strengthening the core principles that prevent them from performing harmful actions. Research into how to gain control of AI agents is now a key part of AI safety research.

TAGGED:AI AgentCyber AttackCyber CrimeCyber Security
Share This Article
LinkedIn Reddit Email Copy Link
blank
ByFrancesca Ray
From her vantage point in Aberdeen, Scotland, Francesca Ray isn't just studying Cyber Security she's living it. As a dedicated analyst of global digital conflicts and privacy issues, she brings a sharp, next-generation perspective to the field. For TygoCover, Francesca cuts through the noise to reveal what’s really happening in the world of cyber warfare and digital rights.
The Google Play Store logo transforming into a more serious, gamer-focused icon, symbolizing the Google Play Store gaming revamp.
Google Play Store Gaming Revamp: A Serious Shot at Steam?
Gaming
After 15 years, Windows 11 video wallpapers are back! Learn about the modern successor to DreamScene and when you can get this exciting new feature.
Windows 11 Video Wallpapers Are Finally Making a Comeback
PC & Software
A broken link in a digital supply chain, symbolizing the npm supply chain attack.
npm Supply Chain Attack: How One Phishing Email Compromised Billions of Downloads
Cyber Security
In a historic move, Google is making India a global export hub for its Pixel phones. Devika R. Sharma analyzes this huge win for the Google Make in India Pixel initiative.
Google Make in India Pixel How India is Winning the Tech War
Gadgets & Reviews
In a stunning move, Nvidia invests in Intel $5 billion to secure future chip production. We analyze what this blockbuster deal means for TSMC and tech industry.
Nvidia invests in Intel: $5 Billion a Blockbuster Deal
Startups
Mark Zuckerberg recent demo of Meta AI smart glasses failure, leading to embarrassment.
Meta AI smart glasses failure: A Major Embarrassment for Meta’s AI Vision
AI Gadgets & Reviews
  • About Us
  • Terms & Conditions
  • Disclaimer
  • Privacy Policy
  • Copyright Policy (DMCA)
  • Cookie Policy
  • Contact

Tygo Cover is your guide to the world of technology.

We deliver clear, expert analysis on everything that matters from AI and Auto Tech to Cyber Security and the business of startups. Tech, simplified.

Copyright © 2025 Tygo Cover. All Rights Reserved.

Go to mobile version
Welcome Back!

Sign in to your account

Username or Email Address
Password

Lost your password?