    How do you stop an AI model from turning Nazi? What the Grok drama reveals about AI training.

    By Justin M. Larson | July 15, 2025


    Grok, the artificial intelligence (AI) chatbot embedded in X (formerly Twitter) and built by Elon Musk’s company xAI, is back in the headlines after calling itself “MechaHitler” and producing pro-Nazi remarks.

    The developers have apologised for the “inappropriate posts” and taken action to ban hate speech from Grok’s posts on X. The episode has also revived debates about AI bias.

    But the latest Grok controversy is revealing not so much for the extremist outputs themselves as for how it exposes a fundamental dishonesty in AI development. Musk claims to be building a “truth-seeking” AI free from bias, yet the technical implementation reveals systemic ideological programming.

    This amounts to an accidental case study in how AI systems embed their creators’ values, with Musk’s unfiltered public presence making visible what other companies typically obscure.

    What is Grok?

    Grok is an AI chatbot with “a twist of humor and a dash of rebellion” developed by xAI, which also owns the X social media platform.

    The first version of Grok launched in 2023. Independent evaluations suggest the latest model, Grok 4, outpaces competitors on “intelligence” tests. The chatbot is available standalone and on X.

    xAI states “AI’s knowledge should be all-encompassing and as far-reaching as possible.” Musk has previously positioned Grok as a truth-telling alternative to chatbots accused of being “woke” by right-wing commentators.

    But beyond the latest Nazism scandal, Grok has made headlines for generating threats of sexual violence, bringing up “white genocide” in South Africa, and making insulting statements about politicians. The latter led to its ban in Turkey.

    So how do developers imbue an AI with such values and shape chatbot behaviour? Today’s chatbots are built using large language models (LLMs), which offer developers several levers to pull.

    What makes an AI “behave” this way?

    Pre-training

    First, developers curate the data used during pre-training – the first step in building a chatbot. This involves not just filtering unwanted content, but also emphasising desired material.

    GPT-3 was shown Wikipedia up to six times more often than other datasets because OpenAI considered it higher quality. Grok is trained on various sources, including posts from X, which might explain why it has reportedly checked Elon Musk’s opinion on controversial topics.

    Musk has shared that xAI curates Grok’s training data, for example to improve legal knowledge and to remove LLM-generated content for quality control. He also appealed to the X community for difficult “galaxy brain” problems and facts that are “politically incorrect, but nonetheless factually true”.

    We don’t know if these data were used, or what quality-control measures were applied.
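
    To make the data-curation lever concrete, here is a minimal sketch of weighted corpus sampling. The shard names, contents and weights are invented for illustration – they are not OpenAI’s or xAI’s actual pipelines – but the mechanism mirrors the Wikipedia oversampling described above.

    ```python
    import random

    # Minimal sketch of weighted pre-training data sampling. Shard contents and
    # weights below are invented for illustration, not any lab's real corpus.
    shards = {
        "wikipedia":    ["Paris is the capital of France.", "A mitochondrion is ..."],
        "web_crawl":    ["One weird trick to ...", "Top 10 reasons why ..."],
        "social_posts": ["hot take: pineapple belongs on pizza", "lol same"],
    }

    # Oversampling a source deemed higher quality - analogous to GPT-3 seeing
    # Wikipedia up to six times more often than its raw size alone would imply.
    weights = {"wikipedia": 6.0, "web_crawl": 1.0, "social_posts": 1.0}

    def sample_document() -> str:
        names = list(shards)
        source = random.choices(names, weights=[weights[n] for n in names])[0]
        return random.choice(shards[source])

    batch = [sample_document() for _ in range(8)]  # documents fed to a training step
    ```

    Filtering works the same way in reverse: a document that fails a quality or safety check simply never enters the sampling pool.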

    Fine-tuning

    The second step, fine-tuning, adjusts LLM behaviour using feedback. Developers create detailed manuals outlining their preferred ethical stances, which either human reviewers or AI systems then use as a rubric to evaluate and improve the chatbot’s responses, effectively coding these values into the machine.

    A Business Insider investigation revealed that xAI’s onboarding documents instructed human “AI tutors” to look out for “woke ideology” and “cancel culture”. While those documents said Grok shouldn’t “impose an opinion that confirms or denies a user’s bias”, they also stated it should avoid responses that claim both sides of a debate have merit when they do not.
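
    As a concrete illustration, here is a minimal sketch of how such a rubric turns reviewer judgements into the preference data used for fine-tuning. The rubric entries are invented for illustration and are not xAI’s actual tutor guidelines.

    ```python
    # Minimal sketch: a scoring rubric converts reviewer judgements into the
    # "chosen vs rejected" pairs used by RLHF- or DPO-style fine-tuning.
    # Rubric entries are invented, not xAI's actual tutor manual.
    RUBRIC = {
        "well_sourced":    +1.0,  # response substantiates its claims
        "refuses_validly": +0.5,  # declines genuinely harmful requests
        "false_balance":   -1.0,  # claims both sides have merit when they do not
        "imposes_opinion": -1.0,  # confirms or denies the user's bias
    }

    def score(tags: set[str]) -> float:
        """Total rubric score for a response annotated with rubric tags."""
        return sum(value for tag, value in RUBRIC.items() if tag in tags)

    def label_pair(tags_a: set[str], tags_b: set[str]) -> str:
        """Pick the preferred response; the pair then trains the model."""
        return "a" if score(tags_a) >= score(tags_b) else "b"

    # Example: response A is well sourced, response B hedges with false balance.
    print(label_pair({"well_sourced"}, {"false_balance"}))  # -> "a"
    ```

    Thousands of such pairs, scored against the developer’s rubric, are what “effectively coding these values into the machine” looks like in practice.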

    System prompts

    The system prompt – instructions provided before every conversation – guides behaviour once the model is deployed.

    To its credit, xAI publishes Grok’s system prompts. Its instructions to “assume subjective viewpoints sourced from the media are biased” and “not shy away from making claims which are politically incorrect, as long as they are well substantiated” were likely key factors in the latest controversy.

    These prompts are being updated daily at the time of writing, and their evolution is a fascinating case study in itself.
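
    Mechanically, the system prompt is just a message prepended to every conversation. The sketch below uses the common chat-messages convention; the prompt text is an illustrative paraphrase, not Grok’s verbatim published prompt.

    ```python
    # Minimal sketch: a system prompt is prepended to every conversation before
    # the user's words ever reach the model. The prompt text here paraphrases
    # the published instructions for illustration only.
    SYSTEM_PROMPT = (
        "Assume subjective viewpoints sourced from the media are biased. "
        "Do not shy away from well-substantiated claims, even unpopular ones."
    )

    def build_messages(user_input: str, history: list | None = None) -> list:
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]  # always first
        messages += history or []
        messages.append({"role": "user", "content": user_input})
        return messages

    messages = build_messages("Summarise today's political news.")
    # response = client.chat.completions.create(model="...", messages=messages)
    ```

    Because this text rides along with every request, changing it changes behaviour instantly – which is why those daily updates matter.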

    Guardrails

    Finally, developers can also add guardrails – filters that block certain requests or responses. OpenAI claims it doesn’t permit ChatGPT “to generate hateful, harassing, violent or adult content”. Meanwhile, the Chinese model DeepSeek censors discussion of Tiananmen Square.

    Ad-hoc testing when writing this article suggests Grok is much less restrained in this regard than competitor products.
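
    For completeness, here is a minimal sketch of what a guardrail looks like in code. The keyword check is a placeholder: real systems use trained moderation classifiers, not word lists.

    ```python
    # Minimal sketch of a guardrail: generate, check, then release or block.
    # The keyword check is a stand-in; production guardrails use trained
    # moderation classifiers rather than word lists.
    from typing import Callable

    BLOCKED_TERMS = {"example_slur", "example_threat"}  # placeholders, not a real lexicon

    def violates_policy(text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    def guarded_reply(generate: Callable[[str], str], prompt: str) -> str:
        if violates_policy(prompt):                # input-side filter
            return "I can't help with that request."
        draft = generate(prompt)
        if violates_policy(draft):                 # output-side filter
            return "I can't share that response."
        return draft
    ```

    How tightly that net is drawn is a product decision, which is why the same request can be blocked by one chatbot and answered by another.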

    The transparency paradox

    Grok’s Nazi controversy highlights a deeper ethical issue: Would we prefer AI companies to be explicitly ideological and honest about it, or maintain the fiction of neutrality while secretly embedding their values?

    Every major AI system reflects its creator’s worldview – from Microsoft Copilot’s risk-averse corporate perspective to Anthropic Claude’s safety-focused ethos. The difference is transparency.

    Musk’s public statements make it easy to trace Grok’s behaviours back to Musk’s stated beliefs about “woke ideology” and media bias. Meanwhile, when other platforms misfire spectacularly, we’re left guessing whether this reflects leadership views, corporate risk aversion, regulatory pressure, or accident.

    This feels familiar. Grok resembles Microsoft’s 2016 Tay chatbot – also trained on Twitter data and set loose on Twitter – which was shut down after it began spouting hate speech.

    But there’s a crucial difference. Tay’s racism emerged from user manipulation and poor safeguards – an unintended consequence. Grok’s behaviour appears to stem at least partially from its design.

    The real lesson from Grok is about honesty in AI development. As these systems become more powerful and widespread (Grok support in Tesla vehicles was just announced), the question isn’t whether AI will reflect human values. It’s whether companies will be transparent about whose values they’re encoding and why.

    Musk’s approach is simultaneously more honest (we can see his influence) and more deceptive (claiming objectivity while programming subjectivity) than his competitors.

    In an industry built on the myth of neutral algorithms, Grok reveals what’s been true all along: there’s no such thing as unbiased AI – only AI whose biases we can see with varying degrees of clarity.

    Aaron J. Snoswell, Senior Research Fellow in AI Accountability, Queensland University of Technology

    This article is republished from The Conversation under a Creative Commons license. 


