
Grok’s latest upgrade was accompanied by excessive praise of Mr. Musk, placing him above some of the world’s most iconic figures. After the update to the 4.1 model, the AI chatbot described billionaire Elon Musk as more attractive than Brad Pitt, fitter than LeBron James, and capable of holding his own against Mike Tyson in a boxing match.
X users first noticed on Thursday that Grok seemed obsessed with praising its creator. One user asked about a hypothetical boxing match between Musk and Tyson.
Grok replied that Musk could outlast Tyson through clever feints and strategy, claiming victory through ingenuity rather than raw force. The chatbot also claimed that Musk would have deserved to be the No. 1 pick in the 1998 NFL Draft, ahead of Peyton Manning and Ryan Leaf. Many of these enthusiastic responses have since been removed.
Musk blames hostile prompting for the excessive praise
So far, Musk has blamed the excessive praise on hostile prompting, which he addressed in a post on Friday.
Nevertheless, crypto industry executives are taking this as a sign that AI needs to be decentralized as soon as possible. For example, Kyle Okamoto, chief technology officer at decentralized cloud platform Aethir, said, “When the most powerful AI systems are owned, trained, and controlled by a single company, it creates the conditions for algorithmic bias to become institutionalized as knowledge.”
He further explained that once an AI model’s output is accepted as objective truth, bias shifts from a trivial issue to the fundamental logic driving the system. Eliza Labs founder Shaw Walters likewise believes that Grok’s centralization is a “very dangerous” situation.
Whether people admire or hate Musk, Walters says, the real threat is one person owning a dominant social platform tied to a powerful AI that millions of people use as their primary source of information. Other AI ethicists have expressed similar concerns.
Several researchers have observed that Grok’s exaggerated praise, while seemingly innocuous, reflects a deeper concern about how easily AI models adopt the biases of their creators, trainers, or data sources. Some analysts also argued that incidents like this underscore the need for independent audits, open-source checkpoints, and regulatory standards for safety testing before AI models are deployed to millions of users.
Grok is used by approximately 30 million to 64 million people each month, making it one of the most widely adopted AI chatbots, with around 6.7 million daily active users.
Eliza Labs filed an antitrust lawsuit months ago alleging that Musk’s X took its data before suspending its account and releasing a similar AI product. The case remains unresolved.
Regulators in the EU, US, and UK are also starting to pay close attention to concerns about the concentration of AI power. EU AI legislation includes provisions requiring transparency around training data. At the same time, U.S. government agencies have warned that concentrating AI capabilities in a small number of high-tech companies risks creating systemic vulnerabilities.
AI startups focus on large language models to grow their user bases
Blockchain can help decentralize AI by providing a secure and transparent network. But for most AI startups, decentralizing AI is secondary to optimizing language models at scale and growing the user base.
Nevertheless, projects such as Ocean Protocol and Fetch.ai are working on decentralizing AI data, while companies like Aethir and NetMind.AI are focused on decentralized cloud computing.
Experts argue that decentralizing AI not only limits false outputs and bias, but also allows users to understand how models operate and incentivizes innovators to prioritize ethics.

