Close Menu
The Politics
    What's Hot

    Stopping weight loss jabs can lead to rapid weight regain in one year, study suggests | Science, Climate & Tech News

    March 4, 2026

    Why Ecuador Invited the U.S. Military to Help With Its Drug Gangs

    March 4, 2026

    Feminist Scholars and Activists on the Past and Future of Feminism

    March 4, 2026
    Facebook X (Twitter) Instagram
    • Demos
    • Politics
    • Buy Now
    Facebook X (Twitter) Instagram
    The Politics
    Subscribe
    Thursday, March 5
    • Home
    • Breaking
    • World
      • Africa
      • Americas
      • Asia Pacific
      • Europe
    • Sports
    • Politics
    • Business
    • Entertainment
    • Health
    • Tech
    • Weather
    The Politics
    Home»Tech»OpenAI admits prompt injection attacks can’t be fully patched in AI systems
    Tech

    OpenAI admits prompt injection attacks can’t be fully patched in AI systems

    Justin M. LarsonBy Justin M. LarsonJanuary 4, 2026No Comments8 Mins Read
    Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email VKontakte Telegram
    Share
    Facebook Twitter Pinterest Email Copy Link


    NEWYou can now listen to Fox News articles!

    Cybercriminals don’t always need malware or exploits to break into systems anymore. Sometimes, they just need the right words in the right place. OpenAI is now openly acknowledging that reality. The company says prompt injection attacks against artificial intelligence (AI)-powered browsers are not a bug that can be fully patched, but a long-term risk that comes with letting AI agents roam the open web. This raises uncomfortable questions about how safe these tools really are, especially as they gain more autonomy and access to your data.

    Sign up for my FREE CyberGuy Report 

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY

    Outsmart hackers who are out to steal your identity

    AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions attackers can slip into pages or documents. (Kurt “CyberGuy” Knutsson)

    Why prompt injection isn’t going away

    In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to ever be completely eliminated. Prompt injection works by hiding instructions inside web pages, documents or emails in ways that humans don’t notice, but AI agents do. Once the AI reads that content, it can be tricked into following malicious instructions.

    OpenAI compared this problem to scams and social engineering. You can reduce them, but you can’t make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do on your behalf, the more damage it can cause when something goes wrong.

    OpenAI launched the ChatGPT Atlas browser in October, and security researchers immediately started testing its limits. Within hours, demos appeared showing that a few carefully placed words inside a Google Doc could influence how the browser behaved. That same day, Brave published its own warning, explaining that indirect prompt injection is a structural problem for AI-powered browsers, including tools like Perplexity’s Comet.

    This isn’t just OpenAI’s problem. Earlier this month, the National Cyber Security Centre in the U.K. warned that prompt injection attacks against generative AI systems may never be fully mitigated.

    FAKE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE

    ChatGPT Atlas screen in an auditorium

    Prompt injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user ever seeing it.  (Kurt “CyberGuy” Knutsson)

    The risk trade-off with AI browsers

    OpenAI says it views prompt injection as a long-term security challenge that requires constant pressure, not a one-time fix. Its approach relies on faster patch cycles, continuous testing, and layered defenses. That puts it broadly in line with rivals like Anthropic and Google, which have both argued that agentic systems need architectural controls and ongoing stress testing.

    Where OpenAI is taking a different approach is with something it calls an “LLM-based automated attacker.” In simple terms, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacker bot looks for ways to sneak malicious instructions into an AI agent’s workflow.

    The bot runs attacks in simulation first. It predicts how the target AI would reason, what steps it would take and where it might fail. Based on that feedback, it refines the attack and tries again. Because this system has insight into the AI’s internal decision-making, OpenAI believes it can surface weaknesses faster than real-world attackers.

    Even with these defenses, AI browsers aren’t safe. They combine two things attackers love: autonomy and access. Unlike regular browsers, they don’t just display information, but also read emails, scan documents, click links and take actions on your behalf. That means a single malicious prompt hidden in a webpage, document or message can influence what the AI does without you ever seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and that trust can be manipulated.

    THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

    Person wearing a hoodie works on multiple computer screens displaying digital data in a dark room.

    As AI browsers gain more autonomy and access to personal data, limiting permissions and keeping human confirmation in the loop becomes critical for safety. (Kurt “CyberGuy” Knutsson)

    7 steps you can take to reduce risk with AI browsers

    You may not be able to eliminate prompt injection attacks, but you can significantly limit their impact by changing how you use AI tools.

    1) Limit what the AI browser can access

    Only give an AI browser access to what it absolutely needs. Avoid connecting your primary email account, cloud storage or payment methods unless there’s a clear reason. The more data an AI can see, the more valuable it becomes to attackers. Limiting access reduces the blast radius if something goes wrong.

    2) Require confirmation for every sensitive action

    Never allow an AI browser to send emails, make purchases or modify account settings without asking you first. Confirmation breaks long attack chains and gives you a moment to spot suspicious behavior. Many prompt injection attacks rely on the AI acting quietly in the background without user review.

    3) Use a password manager for all accounts

    A password manager ensures every account has a unique, strong password. If an AI browser or malicious page leaks one credential, attackers can’t reuse it elsewhere. Many password managers also refuse to autofill on unfamiliar or suspicious sites, which can alert you that something isn’t right before you manually enter anything.

    Next, see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

    Check out the best expert-reviewed password managers of 2025 at Cyberguy.com

    4) Run strong antivirus software on your device

    Even if an attack starts inside the browser, antivirus software can still detect suspicious scripts, unauthorized system changes or malicious network activity. Strong antivirus software focuses on behavior, not just files, which is critical when dealing with AI-driven or script-based attacks.

    The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

    Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com

    5) Avoid broad or open-ended instructions

    Telling an AI browser to “handle whatever is needed” gives attackers room to manipulate it through hidden prompts. Be specific about what the AI is allowed to do and what it should never do. Narrow instructions make it harder for malicious content to influence the agent.

    6) Be careful with AI summaries and automated scans

    When an AI browser scans emails, documents or web pages for you, remember that hidden instructions can live inside that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review anything the AI plans to act on before approving it.

    7) Keep your browser, AI tools and operating system updated

    Security fixes for AI browsers evolve quickly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Turning on automatic updates ensures you get protection as soon as they’re available, even if you miss the announcement.

    CLICK HERE TO DOWNLOAD THE FOX NEWS APP

    Kurt’s key takeaway

    There’s been a meteoric rise in AI browsers. We’re now seeing them from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Even existing browsers like Chrome and Edge are pushing hard to add AI and agentic features into their current infrastructure. While these browsers can be useful, the technology is still early. It’s best not to fall for the hype and to wait for it to mature.

    Do you think AI browsers are worth the risk today, or are they moving faster than security can keep up? Let us know by writing to us at Cyberguy.com

    Sign up for my FREE CyberGuy Report 

    Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter. 

    Copyright 2025 CyberGuy.com.  All rights reserved.

    Kurt “CyberGuy” Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on “FOX & Friends.” Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.



    Source link

    Related

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Telegram Copy Link
    Justin M. Larson
    • Website

    Related Posts

    Tech

    Stopping weight loss jabs can lead to rapid weight regain in one year, study suggests | Science, Climate & Tech News

    March 4, 2026
    Tech

    Consumer Protection Week warns of legal data broker privacy threats

    March 4, 2026
    Tech

    Susan Powter uses tech to rebuild business after financial collapse

    March 4, 2026
    Tech

    Major VPN network to block ‘despised and despicable’ child sexual abuse material | Science, Climate & Tech News

    March 4, 2026
    Tech

    AI could be giving US lethal edge in Iran war – but there are dangers | Science, Climate & Tech News

    March 3, 2026
    Tech

    Nearly 1M fintech lender Figure accounts exposed

    March 3, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    • Africa
    • Americas
    • Asia Pacific
    • Breaking
    • Business
    • Economy
    • Entertainment
    • Europe
    • Health
    • Politics
    • Politics
    • Sports
    • Tech
    • Top Featured
    • Trending Posts
    • Weather
    • World
    Economy News

    Stopping weight loss jabs can lead to rapid weight regain in one year, study suggests | Science, Climate & Tech News

    Justin M. LarsonMarch 4, 20260

    People on obesity jabs will regain the majority of the weight they lose within a…

    Why Ecuador Invited the U.S. Military to Help With Its Drug Gangs

    March 4, 2026

    Feminist Scholars and Activists on the Past and Future of Feminism

    March 4, 2026
    Top Trending

    Stopping weight loss jabs can lead to rapid weight regain in one year, study suggests | Science, Climate & Tech News

    Justin M. LarsonMarch 4, 20260

    People on obesity jabs will regain the majority of the weight they…

    Why Ecuador Invited the U.S. Military to Help With Its Drug Gangs

    Justin M. LarsonMarch 4, 20260

    Drug gangs have turned the South American country into one of the…

    Feminist Scholars and Activists on the Past and Future of Feminism

    Justin M. LarsonMarch 4, 20260

    Feminist scholars and activists, including Gloria Steinem and Leymah Gbowee, reflected on…

    Subscribe to News

    Get the latest sports news from NewsSite about world, sports and politics.

    Advertisement
    Demo
    Editors Picks

    Review: Record Shares of Voters Turned Out for 2020 election

    January 11, 2021

    EU: ‘Addiction’ to Social Media Causing Conspiracy Theories

    January 11, 2021

    World’s Most Advanced Oil Rig Commissioned at ONGC Well

    January 11, 2021

    Melbourne: All Refugees Held in Hotel Detention to be Released

    January 11, 2021
    Latest Posts

    Review: Russia’s Putin Sets Out Conditions for Peace Talks with Ukraine

    January 20, 2021

    Review: Implications of San Francisco Govts’ Green-Light Nation’s First City-Run Public Bank

    January 20, 2021

    Queen Elizabeth the Last! Monarchy Faces Fresh Demand to be Axed

    January 20, 2021
    Advertisement
    Demo
    Editors Picks

    Stopping weight loss jabs can lead to rapid weight regain in one year, study suggests | Science, Climate & Tech News

    March 4, 2026

    Why Ecuador Invited the U.S. Military to Help With Its Drug Gangs

    March 4, 2026

    Feminist Scholars and Activists on the Past and Future of Feminism

    March 4, 2026

    In Sierra Leone, a New Maternal Hospital Aims to be the Blueprint

    March 4, 2026
    Latest Posts

    Review: Russia’s Putin Sets Out Conditions for Peace Talks with Ukraine

    January 20, 2021

    Review: Implications of San Francisco Govts’ Green-Light Nation’s First City-Run Public Bank

    January 20, 2021

    Queen Elizabeth the Last! Monarchy Faces Fresh Demand to be Axed

    January 20, 2021
    Advertisement
    Demo
    Facebook X (Twitter) Pinterest Vimeo WhatsApp TikTok Instagram

    News

    • World
    • US Politics
    • EU Politics
    • Business
    • Opinions
    • Connections
    • Science

    Company

    • Information
    • Advertising
    • Classified Ads
    • Contact Info
    • Do Not Sell Data
    • GDPR Policy
    • Media Kits

    Services

    • Subscriptions
    • Customer Support
    • Bulk Packages
    • Newsletters
    • Sponsored News
    • Work With Us

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    © 2026 The Politics Designed by The Politics.
    • Privacy Policy
    • Terms
    • Accessibility

    Type above and press Enter to search. Press Esc to cancel.