The New AI Browsers: Playing With Fire?

By | November 3, 2025

 

The New AI Browsers: Playing With Fire?

Your AI Browser Can Be Tricked

If you’re considering trying one of the new AI browsers that are currently all the rage. Take a few minutes to read this.

New internet browsers that use Artificial Intelligence (AI)—like those from Perplexity AI and the makers of ChatGPT—are very popular. They feature Agentics, smart helpers, or AI agents that can automatically do things for you, like summarize long web pages, manage your shopping lists, and even compose and answer emails.

However, the field of AI browsing is now a nonstop game of cat and mouse — as companies improve their systems, hackers immediately develop new ways to get around them. Security experts and even the companies making these new browsers are warning of a major security risk as they can be easily tricked or “hijacked.”

How AI Agents Get Fooled

To be useful, these AI assistants often need to connect to your most private accounts, like your email or online bank. This is the danger zone.

Security experts say hackers can use a technique called “prompt injection” to fool the AI. Here’s the simple version of how it works:

The AI agent automatically reads everything on a webpage you visit.

A hacker secretly hides a command on a webpage—a command that is invisible to you but perfectly readable by the AI agent.

The Hijack: This secret command overrides the AI’s original instructions, making it do something the hacker wants. This could include stealing private information or performing actions you never intended, like sending an email or changing a setting.

A security chief for one research group warned that these problems stem from a basic weakness in the AI technology, saying, “We are playing with fire.”

Are the Hacks Real? Yes, here’s the proof!

This isn’t just a theory; security experts are already finding ways to exploit these tools:

One security team discovered a flaw in Opera’s Neon AI browser that allowed a website to use hidden code to trick the AI agent into stealing a user’s email address. Even though that specific flaw was fixed, the general risk remains across all AI browsers.

The security head at OpenAI (the creator of ChatGPT) openly admitted that tricking AI agents is a significant problem they haven’t completely solved. Another company, Brave, has delayed launching its own AI browser because it needs more time to make it safer.

While the companies are constantly trying to fix these holes, the hackers are just as quickly finding new ones.

AI browsers promise to automate tasks, make your life easier, and save you time. But when a company executive advises users to “closely monitor” every action the AI performs, it defeats the whole purpose of AI browsers.

Currently, using these browsers means entering a high-risk competition between the security teams trying to build walls and hackers finding ways to break them down. So far, the hackers are winning.

If you’re using or considering using an AI browser, use it with care. AI browsers come with risks. 

Leave a Reply

Your email address will not be published. Required fields are marked *