The Politics
    Chat & Ask AI app exposed 300 million messages due to misconfiguration

By Justin M. Larson | February 5, 2026



    A popular mobile app called Chat & Ask AI has more than 50 million users across the Google Play Store and Apple App Store. Now, an independent security researcher says the app exposed hundreds of millions of private chatbot conversations online. 

    The exposed messages reportedly included deeply personal and disturbing requests. Users asked questions like how to painlessly kill themselves, how to write suicide notes, how to make meth and how to hack other apps. 

    These were not harmless prompts. They were full chat histories tied to real users.

    Sign up for my FREE CyberGuy Report
    Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.


Security researchers say Chat & Ask AI exposed hundreds of millions of private chatbot messages, including complete conversation histories tied to real users. (Neil Godwin/Getty Images)

    What exactly was exposed

    The issue was discovered by a security researcher who goes by Harry. He found that Chat & Ask AI had a misconfigured backend using Google Firebase, a popular mobile app development platform. Because of that misconfiguration, it was easy for outsiders to gain authenticated access to the app’s database. Harry says he was able to access roughly 300 million messages tied to more than 25 million users. He analyzed a smaller sample of about 60,000 users and more than one million messages to confirm the scope.

    The exposed data reportedly included:

    • Full chat histories with the AI
    • Timestamps for each conversation
    • The custom name users gave the chatbot
    • How users configured the AI model
    • Which AI model was selected

    That matters because many users treat AI chats like private journals, therapists, or brainstorming partners.

    How this AI app stores so much sensitive user data

    Chat & Ask AI is not a standalone artificial intelligence model. It acts as a wrapper that lets users talk to large language models built by bigger companies. Users could choose between models from OpenAI, Anthropic and Google, including ChatGPT, Claude and Gemini. While those companies operate the underlying models, Chat & Ask AI handles the storage. That is where things went wrong. Cybersecurity experts say this type of Firebase misconfiguration is a well-known weakness. It is also easy to find if someone knows what to look for.
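The researcher's report does not include the app's actual configuration, but the class of Firebase weakness experts describe usually comes down to overly broad security rules. As a purely hypothetical illustration (not Chat & Ask AI's real ruleset), a Firestore rule like the first block below grants read access to any signed-in user, and because Firebase supports anonymous sign-in, "any signed-in user" can mean anyone who downloads the app or calls the API:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // INSECURE (hypothetical): any authenticated user, including an
    // anonymous one, can read every document in the database,
    // which would include other users' chat histories.
    match /{document=**} {
      allow read: if request.auth != null;
    }

    // SAFER PATTERN: scope access to the owner's own documents,
    // e.g. chats stored under /users/{userId}/chats/{chatId}.
    // (The path layout here is an assumption for illustration.)
    match /users/{userId}/chats/{chatId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

The key difference is the `request.auth.uid == userId` check, which ties each document to the account that owns it instead of treating "logged in" as equivalent to "authorized."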

    We reached out to Codeway, which publishes the Chat & Ask AI app, for comment, but did not receive a response before publication.


The exposed database reportedly included timestamps, model settings and the names users gave their chatbots, revealing far more than isolated prompts. (Elisa Schu/Getty Images)

    Why this matters to everyday users

    Many people assume their chats with AI tools are private. They type things they would never post publicly or even say out loud. When an app stores that data insecurely, it becomes a gold mine for attackers. Even without names attached, chat histories can reveal mental health struggles, illegal behavior, work secrets and personal relationships. Once exposed, that data can be copied, scraped and shared forever.


Because the app handled data storage itself, a simple Firebase misconfiguration made sensitive AI chats accessible to outsiders, according to the researcher. (Edward Berthelot/Getty)

    Ways to stay safe when using AI apps

    You do not need to stop using AI tools to protect yourself. A few informed choices can lower your risk while still letting you use these apps when they are helpful.

    1) Be mindful of sensitive topics

    AI chats can feel private, especially when you are stressed, curious, or looking for answers. However, not all apps handle conversations securely. Before sharing deeply personal struggles, medical concerns, financial details, or questions that could create legal risk if exposed, take time to understand how the app stores and protects your data. If those protections are unclear, consider safer alternatives such as trusted professionals or services with stronger privacy controls.

    2) Research the app before installing

    Look beyond download counts and star ratings. Check who operates the app, how long it has been available and whether its privacy policy clearly explains how user data is stored and protected.

    3) Assume conversations may be stored

    Even when an app claims privacy, many AI tools log conversations for troubleshooting or model improvement. Treat chats as potentially permanent records rather than temporary messages.

    4) Limit account linking and sign-ins

    Some AI apps allow you to sign in with Google, Apple, or an email account. While convenient, this can directly connect chat histories to your real identity. When possible, avoid linking AI tools to primary accounts used for work, banking, or personal communication.

    5) Review app permissions and data controls

    AI apps may request access beyond what is required to function. Review permissions carefully and disable anything that is not essential. If the app offers options to delete chat history, limit data retention, or turn off syncing, enable those settings.

    6) Use a data removal service

    Your digital footprint extends beyond AI apps. Anyone can find personal details about you with a simple Google search, including your phone number, home address, date of birth and Social Security number. Marketers buy this information to target ads. In more serious cases, scammers and identity thieves breach data brokers, leaving personal data exposed or circulating on the dark web. Using a data removal service helps reduce what can be linked back to you if a breach occurs.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren't cheap, but neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It's what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

    Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.


    Kurt’s key takeaways

AI chat apps are moving fast, but security is still lagging behind. This incident shows how a single configuration mistake can expose millions of deeply personal conversations. Until stronger protections become standard, you need to treat AI chats with caution and limit what you share. The convenience is real, but so is the risk.

    Do you assume your AI chats are private, or has this story changed how much you are willing to share with these apps? Let us know your thoughts by writing to us at Cyberguy.com.



    Copyright 2026 CyberGuy.com. All rights reserved.

Kurt “CyberGuy” Knutsson is an award-winning tech journalist with a deep love of technology, gear and gadgets that make life better. His contributions for Fox News & FOX Business begin mornings on “FOX & Friends.” Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.


