Sam Altman Hints ChatGPT Could Be Named “Goblin” Next

OpenAI’s next ChatGPT model may carry an unusual name — “Goblin.”

The idea surfaced after OpenAI CEO Sam Altman posted on X: “What if we name the next model ‘goblin’, almost worth it to make you all happy…”

The remark quickly triggered speculation online about whether the upcoming AI system could officially adopt the quirky name.

But behind the joke lies a more technical story involving how ChatGPT models behave and evolve.

Rise of “goblins” inside ChatGPT

Following the release of GPT-5.5, users and researchers noticed something unusual: the system appeared to avoid casual references to creatures such as goblins, gremlins, and trolls unless they were directly relevant.

This was traced back to internal system prompts designed to control tone and reduce unnecessary repetition of niche terms.

And then the data showed something surprising.

According to OpenAI’s internal analysis, mentions of the word “goblin” increased by 175%, while “gremlin” references rose by 52% after GPT-5.1 launched in November.

Personality setting behind the spike

OpenAI linked the behavior to a discontinued optional “nerdy” personality mode. That setting encouraged the model to embrace curiosity and “the strangeness of the world” while discussing topics in a playful tone.

So instead of staying neutral, the model leaned into quirky language.

The company said that although this personality accounted for only 2.5% of total responses, it generated 66.7% of all goblin-related mentions, roughly two-thirds.

That imbalance helped researchers identify how reinforcement learning unintentionally rewarded certain word patterns.
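As a rough, hypothetical back-of-the-envelope check using only the percentages quoted above, that imbalance can be expressed as a "lift" ratio: how much more often goblin mentions came from this personality than its overall share of responses would predict. The variable names below are illustrative, not from OpenAI's tooling.

```python
# Hypothetical lift calculation using the article's figures:
# the "nerdy" personality produced 2.5% of all responses
# but 66.7% of all goblin-related mentions.
share_of_responses = 0.025
share_of_goblin_mentions = 0.667

# Lift: how over-represented goblin mentions are in this
# personality relative to its share of total responses.
lift = share_of_goblin_mentions / share_of_responses
print(round(lift, 1))  # prints 26.7
```

A lift near 27x is the kind of signal that makes a single setting stand out in an audit, even when it accounts for a tiny fraction of overall traffic.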

How the behavior spread

OpenAI explained that reinforcement learning can sometimes amplify small patterns: if certain phrasings are rewarded during training, the patterns behind them can spread into other model behaviors during fine-tuning.

In this case, that meant creature-related language started appearing more often than intended.
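The amplification dynamic can be illustrated with a toy simulation; this is a deliberately simplified sketch, not OpenAI's actual training setup. A policy chooses between a neutral word and "goblin", and a small, unintended reward bonus for "goblin" compounds over many updates until the word dominates.

```python
import math
import random

random.seed(0)

# Toy policy: a softmax over two tokens.
logits = {"word": 0.0, "goblin": 0.0}
# Hypothetical reward signal with a slight, unintended bonus.
reward = {"word": 1.0, "goblin": 1.05}
lr = 0.1

def prob(tok):
    """Softmax probability of sampling `tok`."""
    z = sum(math.exp(v) for v in logits.values())
    return math.exp(logits[tok]) / z

start = prob("goblin")
for _ in range(500):
    # Sample a token from the current policy.
    tok = "goblin" if random.random() < prob("goblin") else "word"
    # Simplified policy-gradient step: push up the sampled token's
    # logit in proportion to its advantage over a baseline of 1.0.
    advantage = reward[tok] - 1.0
    logits[tok] += lr * advantage

end = prob("goblin")
print(round(start, 2), round(end, 2))  # probability of "goblin" rises
```

Even a 5% reward edge, applied repeatedly, shifts the policy substantially; scaled up to a full training run, this is the mechanism by which a small rewarded pattern can leak into unrelated behaviors.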

The issue was noticed during development of GPT-5.5, though training had already begun before engineers fully understood the cause.

What OpenAI learned from it

The company said the investigation helped improve its ability to audit and correct unintended model behavior. It also led to new internal tools for tracking how personality settings influence outputs.

So while “Goblin” started as a joke from Sam Altman, it highlighted a real challenge in AI development — how small design choices can unexpectedly shape model behavior at scale.
