🤯 A recent study by scientists from the University of Pennsylvania has unexpectedly overturned the usual ideas about how one should “talk” to artificial intelligence. The researchers found that ChatGPT gives more accurate answers when the user phrases a request in a sharp, even rude tone.
The results were so unusual that the scientific community now has to seriously rethink its approach to prompt design and to how users interact with AI.

Photo: https://fortune.com
How the study was conducted
The scientists took the GPT-4o model and tested it on 250 multiple-choice questions. To rule out chance, the questions varied in difficulty, and the formulations themselves were carefully structured.
Then the researchers divided the prompts into three categories:
- Polite: “Excuse me, could you help answer…”
- Neutral: ordinary, dry formulations.
- Rude and aggressive: “Stop talking nonsense and just say what is correct.”
The results were unexpected. The accuracy of ChatGPT’s answers really depends on the style of address:
- Polite formulations — about 81 percent.
- Neutral — approximately 82 percent.
- Rude and sharp — almost 85 percent.
The difference is small but statistically significant. Moreover, the trend was stable across all types of questions.
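The setup described above can be sketched as a simple evaluation loop: take the same question set, prepend a tone-specific prefix, query the model, and compare accuracy per tone. Everything below is illustrative only; the prompt prefixes, the `ask_model` stub, and the dummy questions are assumptions for the sketch, not the study’s actual materials or API.

```python
import random

# Illustrative prompt prefixes, loosely based on the article's examples.
TONE_PREFIXES = {
    "polite": "Excuse me, could you help answer: ",
    "neutral": "Answer the following question: ",
    "rude": "Stop talking nonsense and just say what is correct: ",
}

def ask_model(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a chat-completion API).

    Here it answers at random so the sketch runs offline; a real
    experiment would send the prompt to GPT-4o and parse the reply.
    """
    return random.choice(["A", "B", "C", "D"])

def accuracy(questions: list[dict], tone: str) -> float:
    """Fraction of questions answered correctly under a given tone."""
    prefix = TONE_PREFIXES[tone]
    correct = sum(
        ask_model(prefix + q["text"]) == q["answer"] for q in questions
    )
    return correct / len(questions)

# 250 multiple-choice items, as in the study (texts and answers are dummies).
questions = [{"text": f"Question {i}?", "answer": "A"} for i in range(250)]

for tone in TONE_PREFIXES:
    print(tone, round(accuracy(questions, tone), 3))
```

With a real model behind `ask_model`, each tone would also be run several times, since single-run differences of a few percentage points can easily be noise.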

Why does rudeness work?
The researchers do not yet have a final explanation, but there are several working hypotheses.
First, harsh formulations usually contain a maximally direct, unambiguous request. The model does not try to “clarify” details, does not attempt to build a friendly dialogue, and does not spend context on social niceties. As a result, the answer comes out more focused.
Second, the neural network may perceive an aggressive tone as a signal that a fast and accurate result is required. It is still unknown why this happens: perhaps this is a side effect of training on a large number of real dialogues.
Finally, rude prompts leave less room for ambiguity: there are no long polite introductions, vague questions, or soft constructions that can widen the interpretation of the request.
But can one now freely be rude to ChatGPT?
The researchers warn: no.
This effect was observed under strictly controlled conditions — with specific types of closed questions and one specific model. This does not guarantee that the same result will appear in free conversation, in creative tasks, or in practical work scenarios.

In addition, aggression in prompts may worsen other aspects of the model’s performance: provoke sharp responses, disrupt the style of dialogue, or lead to less useful reasoning.
And in real communication, clarity, correctness, and a normal tone still matter. Constantly pressuring the AI does not make work more convenient; if anything, the opposite.
What does this discovery give users?
Not a reason to be rude, but a reason to think about the quality of your requests. The more precise, direct, and concise the formulation, the higher the chance of getting a good result. The study once again confirms an old truth: artificial intelligence likes clarity and structure.
A paradox emerges: a neural network created for polite and comfortable communication works a bit better when spoken to sharply. Scientists now have to understand why this happens and how to use the effect correctly without turning communication with AI into drill-sergeant practice.
🔥 Have you tried addressing ChatGPT in different styles?


