
Why OpenAI’s “Strawberry” Is a Game Changer

by Andrew Best
3 min read

What if the next leap in AI could redefine the way we understand machine intelligence? Read on to discover how OpenAI’s “Strawberry” aims to push large language models forward with advanced reasoning and careful planning. From fixing simple errors to improving overall accuracy, this model is designed to overcome the limitations of current LLMs and bring us a step closer to AGI.

This is a big step closer to AGI.

Sam Altman recently tweeted this picture of a strawberry.

[Image: Sam Altman’s picture of a strawberry]

Pretty much everyone (including me) believes this is a cryptic tweet about the upcoming release of OpenAI’s “Strawberry”.

What is Strawberry?

Strawberry is a code name for OpenAI’s secret model that is capable of advanced reasoning.

Note: Strawberry was formerly known as Q* (Q-Star).

Why is “Strawberry” a big deal?

LLMs (large language models) have been very impressive at many tasks, but they have also failed badly in other ways.

I wrote about the most shocking mistake ChatGPT makes.

Basically, ChatGPT fails when you ask it the simple question:

How many r’s are in the word “strawberry”?

It is shocking that it gets this simple question wrong, but it really does.
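To see why this trips up a language model, it helps to remember that counting letters is trivial for ordinary code. LLMs tend to struggle here because they process text as tokens, not individual characters. A minimal illustration:

```python
# An LLM sees "strawberry" as one or two tokens, not ten separate
# characters, which is why letter-counting questions trip it up.
# Plain code has no such problem.
word = "strawberry"
print(word.count("r"))  # 3
```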

OpenAI’s “Strawberry” will be able to get a question like this correct.

This is because it will be capable of advanced reasoning.

Some people say it will be “good at math”.

One problem with LLMs currently is they just spit out the first answer that comes to mind.

For example, I just asked ChatGPT to write a paragraph with exactly 42 words.

It gave me a paragraph with only 40 words.

The problem is that in order to do this task correctly, you need to perform some sort of reasoning.

If you ask a human to do this, they will start writing a couple of sentences and then see how many words they have so far.

Let’s say they have 32 words after 2 sentences.

They will then play around with a new sentence until they get one with exactly 10 words.

It is impossible to just start writing and hope that you land on exactly 42 words.

This is because you can’t just stop a sentence wherever you want to.

You need to have some type of planning.

“Strawberry” should be able to do this type of task.

Instead of just writing out the answer immediately, it might do the type of reasoning “in the background” that I’m describing.

Once it gets a paragraph with 42 words, it will count the words in the background to double-check, and then finally post the answer for us to see.

This will take more time and energy to do, but this is where this “advanced reasoning” stuff is heading.
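The draft-count-revise loop described above can be sketched in a few lines of code. This is only a hypothetical illustration of the general technique, not OpenAI’s actual method; `llm_generate` stands in for any function that calls a model with a prompt and returns text:

```python
import re

TARGET = 42  # the exact word count we want

def word_count(text: str) -> int:
    """Count words the way a human proofreader would."""
    return len(re.findall(r"\b[\w'-]+\b", text))

def generate_with_check(llm_generate, max_attempts: int = 5) -> str:
    """Draft a paragraph, verify the word count, and revise until it matches."""
    prompt = f"Write a paragraph with exactly {TARGET} words."
    for _ in range(max_attempts):
        draft = llm_generate(prompt)
        n = word_count(draft)
        if n == TARGET:
            return draft  # verified in the background before showing the user
        # Feed the discrepancy back so the model can revise its draft.
        prompt = (f"Your draft had {n} words, not {TARGET}. "
                  f"Revise it to exactly {TARGET} words:\n{draft}")
    return draft  # best effort after max_attempts
```

Each extra round trip through `llm_generate` costs more compute, which is exactly the time-and-energy trade-off described above.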

If LLMs are not capable of this type of reasoning, then we can forget about AGI.

But if LLMs are able to do this type of mathematical reasoning and double-check their own answers before writing them down, then we could be a lot closer to AGI than most people think.

My personal thoughts on “Strawberry”

I believe that ChatGPT and other LLMs are already capable of this.

For example, there is no reason why OpenAI couldn’t program GPT-4o to run experiments in the background and double or even triple-check the answers before responding.

But this would be very expensive in terms of compute and energy costs.

This is all about getting these LLMs to do this efficiently.

Once the efficiency is high enough, OpenAI will release “Strawberry” to the world.

There is still a lot of secrecy surrounding Strawberry

Strawberry is supposed to be able to perform “planning” and “deep research”.

It will be able to search the internet, make a plan, and perform a series of tasks in the background, BEFORE coming up with a final answer.

I think this will make an enormous difference in the quality of output we get from LLMs.

The article originally appeared on Medium.


Andrew Best
Andrew Best is an expert in AI, an entrepreneur, and an educator. As the co-founder of AI Growth Guys, he helps businesses and individuals leverage AI to boost their online presence and increase revenue. He writes regularly on Medium about the latest in AI.

Ideas In Brief
  • The article explores how OpenAI’s “Strawberry” aims to enhance LLMs with advanced reasoning, overcoming limitations like simple errors and bringing us closer to AGI.
  • It investigates how OpenAI’s “Strawberry” might transform AI with its ability to perform in-depth research and validation, improving the reliability of AI responses.

