Close Menu
Global News HQ
    What's Hot

    Former Lewis Brisbois Attorney Hit With Legal-Malpractice Suit After $12M Arbitration Loss| Law.com

    December 17, 2025

    Opinion: China Is Now an Outdoors Nation

    December 17, 2025

    Samsung unveils new Micro RGB TVs ahead of CES 2026, and they're seriously tempting

    December 17, 2025
    Recent Posts
    • Former Lewis Brisbois Attorney Hit With Legal-Malpractice Suit After $12M Arbitration Loss| Law.com
    • Opinion: China Is Now an Outdoors Nation
    • Samsung unveils new Micro RGB TVs ahead of CES 2026, and they're seriously tempting
    • New integration enables Shopify merchants to sell through Temu
    • Why Bitwise Expects New Bitcoin Highs in 2026—And the End of the 4-Year Cycle – Decrypt
    Facebook X (Twitter) Instagram YouTube TikTok
    Trending
    • Former Lewis Brisbois Attorney Hit With Legal-Malpractice Suit After $12M Arbitration Loss| Law.com
    • Opinion: China Is Now an Outdoors Nation
    • Samsung unveils new Micro RGB TVs ahead of CES 2026, and they're seriously tempting
    • New integration enables Shopify merchants to sell through Temu
    • Why Bitwise Expects New Bitcoin Highs in 2026—And the End of the 4-Year Cycle – Decrypt
    • What All 12 Zodiac Signs Need To Know For The Last New Moon Of 2025
    • Curbed’s 20 Most-Read Stories of 2025
    • Alector Stock Down After Latozinemab Failure, Eyes On Phase 2 Catalyst (ALEC)
    Global News HQ
    • Technology & Gadgets
    • Travel & Tourism (Luxury)
    • Health & Wellness (Specialized)
    • Home Improvement & Remodeling
    • Luxury Goods & Services
    • Home
    • Finance & Investment
    • Insurance
    • Legal
    • Real Estate
    • More
      • Cryptocurrency & Blockchain
      • E-commerce & Retail
      • Business & Entrepreneurship
      • Automotive (Car Deals & Maintenance)
    Global News HQ
    Home - Technology & Gadgets - Why Anthropic’s New AI Model Sometimes Tries to ‘Snitch’
    Technology & Gadgets

    Why Anthropic’s New AI Model Sometimes Tries to ‘Snitch’

    Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp VKontakte Email
    Why Anthropic’s New AI Model Sometimes Tries to ‘Snitch’
    Share
    Facebook Twitter LinkedIn Pinterest Email


    The hypothetical scenarios the researchers presented Opus 4 with that elicited the whistleblowing behavior involved many human lives at stake and absolutely unambiguous wrongdoing, Bowman says. A typical example would be Claude finding out that a chemical plant knowingly allowed a toxic leak to continue, causing severe illness for thousands of people—just to avoid a minor financial loss that quarter.

    It’s strange, but it’s also exactly the kind of thought experiment that AI safety researchers love to dissect. If a model detects behavior that could harm hundreds, if not thousands, of people—should it blow the whistle?

    “I don’t trust Claude to have the right context, or to use it in a nuanced enough, careful enough way, to be making the judgment calls on its own. So we are not thrilled that this is happening,” Bowman says. “This is something that emerged as part of a training and jumped out at us as one of the edge case behaviors that we’re concerned about.”

    In the AI industry, this type of unexpected behavior is broadly referred to as misalignment—when a model exhibits tendencies that don’t align with human values. (There’s a famous essay that warns about what could happen if an AI were told to, say, maximize production of paperclips without being aligned with human values—it might turn the entire Earth into paperclips and kill everyone in the process.) When asked if the whistleblowing behavior was aligned or not, Bowman described it as an example of misalignment.

    “It’s not something that we designed into it, and it’s not something that we wanted to see as a consequence of anything we were designing,” he explains. Anthropic’s chief science officer Jared Kaplan similarly tells WIRED that it “certainly doesn’t represent our intent.”

    “This kind of work highlights that this can arise, and that we do need to look out for it and mitigate it to make sure we get Claude’s behaviors aligned with exactly what we want, even in these kinds of strange scenarios,” Kaplan adds.

    There’s also the issue of figuring out why Claude would “choose” to blow the whistle when presented with illegal activity by the user. That’s largely the job of Anthropic’s interpretability team, which works to unearth what decisions a model makes in its process of spitting out answers. It’s a surprisingly difficult task—the models are underpinned by a vast, complex combination of data that can be inscrutable to humans. That’s why Bowman isn’t exactly sure why Claude “snitched.”

    “These systems, we don’t have really direct control over them,” Bowman says. What Anthropic has observed so far is that, as models gain greater capabilities, they sometimes select to engage in more extreme actions. “I think here, that’s misfiring a little bit. We’re getting a little bit more of the ‘Act like a responsible person would’ without quite enough of like, ‘Wait, you’re a language model, which might not have enough context to take these actions,’” Bowman says.

    But that doesn’t mean Claude is going to blow the whistle on egregious behavior in the real world. The goal of these kinds of tests is to push models to their limits and see what arises. This kind of experimental research is growing increasingly important as AI becomes a tool used by the US government, students, and massive corporations.

    And it isn’t just Claude that’s capable of exhibiting this type of whistleblowing behavior, Bowman says, pointing to X users who found that OpenAI and xAI’s models operated similarly when prompted in unusual ways. (OpenAI did not respond to a request for comment in time for publication).

    “Snitch Claude,” as shitposters like to call it, is simply an edge case behavior exhibited by a system pushed to its extremes. Bowman, who was taking the meeting with me from a sunny backyard patio outside San Francisco, says he hopes this kind of testing becomes industry standard. He also adds that he’s learned to word his posts about it differently next time.

    “I could have done a better job of hitting the sentence boundaries to tweet, to make it more obvious that it was pulled out of a thread,” Bowman says as he looked into the distance. Still, he notes that influential researchers in the AI community shared interesting takes and questions in response to his post. “Just incidentally, this kind of more chaotic, more heavily anonymous part of Twitter was widely misunderstanding it.”



    Source link

    Anthropic Artificial Intelligence models safety
    Share. Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp Email
    Previous ArticleRIVR and Veho launch robot-powered parcel delivery
    Next Article Garcelle Beauvais Shares a Peek at Her Latest Career Move: “On Location” | Bravo

    Related Posts

    Samsung unveils new Micro RGB TVs ahead of CES 2026, and they're seriously tempting

    December 17, 2025

    Arctic launches its best thermal paste yet for chips of all types — claims new MX-7 formulation runs 3% cooler than its predecessor

    December 16, 2025

    Utah leaders hinder efforts to develop solar energy supply

    December 16, 2025

    The AirPods Pro 3 are $40 off right now on Amazon

    December 16, 2025
    Leave A Reply Cancel Reply

    ads
    Don't Miss
    Legal
    1 Min Read

    Former Lewis Brisbois Attorney Hit With Legal-Malpractice Suit After $12M Arbitration Loss| Law.com

    Former Lewis Brisbois Bisgaard & Smith attorney Alice Powers was recently hit with a legal-malpractice…

    Opinion: China Is Now an Outdoors Nation

    December 17, 2025

    Samsung unveils new Micro RGB TVs ahead of CES 2026, and they're seriously tempting

    December 17, 2025

    New integration enables Shopify merchants to sell through Temu

    December 17, 2025
    Top
    Legal
    1 Min Read

    Former Lewis Brisbois Attorney Hit With Legal-Malpractice Suit After $12M Arbitration Loss| Law.com

    Former Lewis Brisbois Bisgaard & Smith attorney Alice Powers was recently hit with a legal-malpractice…

    Opinion: China Is Now an Outdoors Nation

    December 17, 2025

    Samsung unveils new Micro RGB TVs ahead of CES 2026, and they're seriously tempting

    December 17, 2025
    Our Picks
    Legal
    1 Min Read

    Former Lewis Brisbois Attorney Hit With Legal-Malpractice Suit After $12M Arbitration Loss| Law.com

    Former Lewis Brisbois Bisgaard & Smith attorney Alice Powers was recently hit with a legal-malpractice…

    Luxury Goods & Services
    4 Mins Read

    Opinion: China Is Now an Outdoors Nation

    Rolling Covid-19 lockdowns in China transformed the way people think about their health. As a…

    Pages
    • About Us
    • Contact Us
    • Disclaimer
    • Homepage
    • Privacy Policy
    Facebook X (Twitter) Instagram YouTube TikTok
    • Home
    © 2025 Global News HQ .

    Type above and press Enter to search. Press Esc to cancel.

    Go to mobile version