Do you have to be polite to AI?

AI Tools & Products · 9 min read · Article

Summary

The article explores the effectiveness of various communication strategies when interacting with AI chatbots, revealing that common beliefs about politeness and prompting may not yield consistent results.

Why It Matters

Understanding how to effectively communicate with AI is crucial as these technologies become more integrated into daily life. Misconceptions about prompting can lead to ineffective use of AI tools, impacting productivity and user experience.

Key Takeaways

  • Politeness may not consistently improve AI responses.
  • Effective communication with AI relies more on clarity than on specific word choices.
  • Cultural differences can affect how AI interprets politeness.
  • Research on AI prompting is ongoing and results can vary significantly.
  • AI models are constantly evolving, making past research quickly outdated.

By Thomas Germain
Image: Serenity Strull/BBC

From being polite to pretending you're on Star Trek, the advice you get about talking to chatbots can be truly bizarre, and totally useless. Here's what actually works.

When a group of researchers decided to test whether "positive thinking" made AI chatbots more accurate, it led to some surprising results. As they asked various chatbots questions, they tried calling the AIs "smart", encouraged them to think carefully and even ended their questions with "This will be fun!" None of it made a consistent difference, but one technique stood out: when they made an artificial intelligence pretend it was on Star Trek, it got better at basic maths. Beam me up, I guess.

People have all sorts of bizarre strategies to get better responses from large language models (LLMs), the AI technology behind tools like ChatGPT. Some swear AI does better if you threaten it, others think chatbots are more cooperative if you're polite, and some people ask the robots to role-play as experts in whatever subject they're working on. The list goes on. It's all part of the mythology around "prompt engineering" or "context engineering" – different ways to construct instructions to make AI deliver better results.

Here's the thing: experts tell me that a lot of accepted wisdom about prompting AI simply doesn't work. In some cases, it could even be dangerous. But the way you talk to an AI does matter, and some techniques ...
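For readers who want to try this kind of comparison themselves, the experiment described above amounts to sending the same question wrapped in different framings and comparing the answers. Below is a minimal sketch in Python of how you might build such prompt variants; the exact phrasings and the `build_messages` helper are illustrative assumptions, not the researchers' actual wording or tooling.

```python
# Illustrative sketch: constructing the kinds of prompt variants the
# researchers compared (plain, politeness, encouragement, role-play).
# The wordings below are assumptions for demonstration only.

QUESTION = "What is 17 * 24?"

PROMPT_VARIANTS = {
    "plain": QUESTION,
    "politeness": f"Please, if you don't mind: {QUESTION} Thank you!",
    "encouragement": f"You are smart. Think carefully. {QUESTION} This will be fun!",
    "role_play": (
        "You are the ship's computer on the USS Enterprise. "
        f"Answer precisely: {QUESTION}"
    ),
}

def build_messages(variant: str) -> list[dict]:
    """Wrap one variant in the chat-message format most LLM APIs expect."""
    return [{"role": "user", "content": PROMPT_VARIANTS[variant]}]

# Each variant would be sent to the model many times, and the answers
# scored for accuracy, to see whether the framing makes any difference.
for name in PROMPT_VARIANTS:
    print(f"{name}: {build_messages(name)[0]['content']}")
```

The point of the sketch is only that the framing, not the underlying question, is what changes between runs; as the article notes, most of these framings made no consistent difference.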

