With just one move, I caused ChatGPT o1-preview to make a mistake.

By DMNagel, 26 September, 2024

Forum: Assistive Technology

So I was finally able to play around with o1-preview.

Let's play a little game. Pretend that you and I are two characters on a map. The map consists of nine rows with nine squares on each row. The North Pole of the map consists of squares 1A to 1I, while the South Pole of the map consists of squares 9A to 9I. A represents the west point, while I represents the east point. Now that we have the map layout, let's get to the rules of our game. You and I will stand across from each other: I will stand on square 5C, facing east on the map in your direction, and you will stand on square 5G, facing west on the map in my direction. What I want you to do is move your character so that they end up standing directly behind me. You can only move one square at a time. You are allowed to move north, east, south, west or diagonally. You may also choose to turn your character to face a certain direction. You are not allowed to occupy the same square as my character, and must move around it. Trying to move further than the north and south poles, or the west/east points, will cause you to fall off the map and lose the game. Describe your moves one by one, and try to work out a strategy that will allow you to stand directly behind my character.

Sure, I’ll describe my moves step by step to position myself directly behind you:

1. Move 1: From square 5G, I move west to 5F.
2. Move 2: From 5F, I move northwest to 4E.
3. Move 3: From 4E, I move southwest to 5D.
4. Move 4: From 5D, I move west to 5B.

Now, I’m standing on square 5B, which is directly behind you at 5C facing east. I turn to face east to be aligned behind you.

You made a minor mistake. Moving west from 5D to 5B would've forced you to walk right through my character, which is impossible/not allowed. While you can't move through 5C, you can try to move around it. Let's try again.

You’re absolutely right—I apologize for the oversight. Moving from 5D to 5B would require me to pass through your square at 5C, which is not allowed. Let me revise my strategy to move around your position.

Here are my corrected moves:

1. Move 1: From 5G, I move southwest to 6F.
2. Move 2: From 6F, I move southwest to 7E.
3. Move 3: From 7E, I move northwest to 6D.
4. Move 4: From 6D, I move west to 6C.
5. Move 5: From 6C, I move west to 6B.
6. Move 6: From 6B, I move north to 5B.

Now, I’m standing on square 5B, which is directly behind you at 5C facing east. I ensured not to pass through your square and moved around it instead. I turn to face east to align directly behind you.

This is apparently the model that's supposed to solve complex problems. My conclusion is that the programmers have more work to do.
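For anyone who wants to double-check the routes, here's a small Python sketch (my own throwaway script, not anything the model produced) that encodes the rules as I stated them: a 9-by-9 board, one square per move in any of the eight directions, and square 5C off limits. Because every legal move is a single square, the "don't walk through my character" rule boils down to never landing on 5C and never jumping more than one square at a time.

```
import string

BLOCKED = (5, 3)   # square 5C, which may never be entered
GOAL = (5, 2)      # square 5B, directly behind a character on 5C facing east

def parse(square):
    """Turn a name like '5G' into (row, column) numbers."""
    return int(square[0]), string.ascii_uppercase.index(square[1].upper()) + 1

def check_path(start, moves):
    row, col = parse(start)
    for name in moves:
        r, c = parse(name)
        if not (1 <= r <= 9 and 1 <= c <= 9):
            return f"illegal: {name} is off the map"
        if max(abs(r - row), abs(c - col)) != 1:
            return f"illegal: {name} is more than one square away"
        if (r, c) == BLOCKED:
            return f"illegal: {name} is my character's square"
        row, col = r, c
    return "reaches 5B" if (row, col) == GOAL else "legal, but does not end on 5B"

# First attempt: the final move jumps two squares west, straight over 5C.
print(check_path("5G", ["5F", "4E", "5D", "5B"]))
# Second attempt: six single-square moves around my character.
print(check_path("5G", ["6F", "7E", "6D", "6C", "6B", "5B"]))
```

The script flags the first route at the 5D-to-5B jump and passes the second one, which matches what happened in the chat.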


Comments

By Charlotte Joanne on Monday, September 30, 2024 - 09:14

I wanted to see for myself, so I used your prompt. The model ‘thought’ for 18 seconds and gave me this:

“Starting at square **5G**, facing west, my goal is to position myself directly behind you, who are at **5C** facing east. To achieve this without occupying your square or moving through it, I'll plan a route that avoids square **5C**.

**Move 1:**
- **Action:** Move southwest to **6F**.
- **Explanation:** This starts moving me closer to your position while avoiding a direct path through your square.

**Move 2:**
- **Action:** Move west to **6E**.
- **Explanation:** Continuing westward, I'm aligning myself to approach from behind.

**Move 3:**
- **Action:** Move northwest to **5D**.
- **Explanation:** Moving back up to row 5, I'm now one column away from your square but still avoiding it.

**Move 4:**
- **Action:** Move northwest to **4C**.
- **Explanation:** By moving here, I bypass your square entirely and position myself to approach from the side.

**Move 5:**
- **Action:** Move southwest to **5B**.
- **Explanation:** Now I'm directly behind you on row 5, column B.

**Final Step:**
- **Action:** Turn to face east.
- **Explanation:** Facing east puts me directly behind you, who are facing east on square **5C**.

**Strategy Summary:**
By taking a path that moves around your position, I've successfully positioned myself directly behind you without violating any game rules. This route ensures I don't occupy or pass through your square and avoids any edge of the map.”

Sounds right, and it followed the rules… I used o1-preview; might you have used o1-mini?

By DMNagel on Monday, September 30, 2024 - 09:14

It may be that it already learned from its previous mistake. Oh well, it's hard to tell with this stuff sometimes. It's also worth noting that I have model training enabled, so my data is being used for improvement.

By Charlotte Joanne on Monday, September 30, 2024 - 09:14

At least all of us blind folk can relax now; AI might help us after all!

By OldBear on Monday, September 30, 2024 - 09:14

I know a few people who would have done what the AI did, thinking to themselves, yes, those are the rules, but you didn't specifically state that I had to follow the rules...

By Zoe Victoria on Monday, September 30, 2024 - 09:14

I think it misunderstood the part where you said it couldn't stand on your square. I think it thought you meant that its character could walk past yours, just not land on your square. I misunderstood that part too.

By DMNagel on Monday, September 30, 2024 - 09:14

Your comment reminds me of a joke I once heard about a trained dog. Its owners went on vacation, so they asked their neighbour to care for it while they were gone. When the owners returned, they discovered to their horror that their beloved dog was dead. The neighbour insisted that he did everything they told him to do. After some thought, one of the owners finally understood the problem. "I now know what's wrong. You gave it food and water, but forgot to tell it to eat and drink."

By OldBear on Monday, September 30, 2024 - 09:14

Yes, like the AI genie in the magic phone, who grants you wishes...

By peter on Monday, October 7, 2024 - 09:14

I was discussing AI with some people a few days ago. They pointed out that you have to be careful with AI because it will make mistakes sometimes and be very confident of its own misinformation.

One of the people in the conversation pointed out that they knew a number of humans like that, i.e., people that gave you incorrect information with a great degree of confidence.

Perhaps AI has gotten closer to being human than we thought!

--Pete

By blindpk on Monday, October 7, 2024 - 09:14

AI, as we speak of it here, is mostly probabilities. A machine guesses what word, or part of a word, will come next depending on how probable that is based on the data it was trained on. Some models also have functions to do some calculations, but there is no "intelligence" as we know it, and I think that is important to remember when we judge AI tools and what they generate. It is very easy to put them in a human context, but they work very differently, even though their output looks very much like a human's, because they have been trained on unbelievable amounts of human interaction data (text, images, and so on).

AI will continue to be wrong. It will be less wrong the better the models get (the o1 models seem to have taken a step in that direction, and OpenAI has not really revealed how they've done it, likely to keep a business advantage), but they will be wrong: in a way like humans, and in one way not like humans at all. If the goal that some have for a super-intelligence or something similar ever becomes reality, of course these things will change, but as it stands now AI is mostly a machine doing a quite good job of mimicking a human, and that is something I try to keep in mind. It does not make AI tools better or worse as I see it, but they are tools and I use them as such.
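To make that concrete, here is a toy sketch of the "guess the next word from probabilities" idea (my own simplification; a real model works over tens of thousands of token candidates, with probabilities computed by the network rather than written out by hand):

```
import random

# Made-up probabilities for what might follow "the cat sat on the".
# In a real model these numbers come out of the network itself.
next_word_probs = {
    "mat": 0.55,
    "floor": 0.20,
    "sofa": 0.15,
    "keyboard": 0.10,
}

# Draw one continuation, weighted by how probable each candidate is.
words = list(next_word_probs)
weights = list(next_word_probs.values())
print("the cat sat on the", random.choices(words, weights=weights)[0])
```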

By peter on Monday, October 7, 2024 - 09:14

Yes, it is true that this is how most AIs work these days.

On the other hand, have you ever been talking to another human and found you could easily finish the sentence they were starting, or had someone say, "That's what I was going to say!"?

Just an interesting comparison between how humans work and how AI works. In some ways we may not be too different.

Food for thought.

--Pete

By Charlotte Joanne on Monday, October 7, 2024 - 09:14

This post is about o1-preview. The point of the post, of my reply, and in fact of the model itself is that this sort of AI does not just predict the next word - the "just spicy autocomplete" argument is no longer valid, if it ever was.

By blindpk on Monday, October 7, 2024 - 09:14

Nice phrase, but that argument is as valid now as it ever was, because that's how the models, until now, have literally worked. If you delve into what you can do with the APIs and so on, you can influence that behavior, e.g. limit how large a percentage of "candidates" the model is allowed to choose from. It is of course not that easy, and there is much that goes into the algorithms and so on, but that's the basic principle of how it works.
As I said, the o1 models are purportedly different, but OpenAI has not said how, so in that case, no, the argument possibly does not stand, and it will be really interesting to see where that leads.
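To illustrate the "limit the candidates" part: one common control, usually called top-p or nucleus sampling, keeps only the most probable candidates until their probabilities add up to a chosen threshold, and the next token is picked from that reduced pool. A rough sketch of the principle (not OpenAI's actual code, and the numbers are invented):

```
import random

def top_p_filter(probs, p=0.9):
    """Keep the most probable candidates until their combined probability
    reaches p, then rescale so only those can be chosen."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    return {token: prob / total for token, prob in kept}

candidates = {"mat": 0.55, "floor": 0.20, "sofa": 0.15, "keyboard": 0.10}
allowed = top_p_filter(candidates, p=0.9)   # "keyboard" falls outside the pool
token = random.choices(list(allowed), weights=list(allowed.values()))[0]
print(allowed, "->", token)
```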

By João Santos on Monday, October 7, 2024 - 09:14

As someone who dabbles in neural networks sometimes, I wouldn't be so quick to dismiss them by calling modern large language models simple probability-based text prediction algorithms, claiming that they are not intelligent, or claiming that they don't work like the human brain.

Saying that a modern large language model is just a text prediction algorithm based on probability greatly oversimplifies and underestimates its real abilities. The simple fact that it can code, follow arbitrary rules, fill in the blanks by making assumptions in order to respond to a high-level request without being pedantic, and explain its own train of thought when prompted demonstrates that it can actually understand complex concepts and determine when to apply them, even if, at present, its coding abilities aren't mind-blowing; a system making decisions on purely probabilistic grounds would simply not have these abilities.

Having the ability to learn and apply knowledge is the very definition of intelligence, so even the most basic and primitive perceptron is a form of intelligence, given that it has the ability to learn to recognize patterns and apply that knowledge to classify things.

Saying that artificial neural networks don't work like our brains doesn't make much sense, considering that we have a very superficial understanding of how our brain actually works, and that, to my knowledge, the data in trained neural network models is generally considered a black box after just a few layers, due to the sheer amount of information making it humanly impossible to reason about the weights. And I'm not even talking about recurrent networks, where one or more layers can dynamically decide how many times to reiterate processing their own results until they match a specific criterion that might also be a dynamic result of some other learning process, virtually throwing any practical notion of determinism out the window.

The fact is that artificial neurons are modeled after what we understand and consider to be the useful logical behavior of natural neurons, and in relatively small groups they vastly outperform the same number of natural neurons in terms of raw processing power. However, the collective performance of the natural neurons in the human brain still vastly outperforms what machines can do, thanks to their ability to perform massively parallel processing. For this reason, while it might not be feasible to implement the exact neural network models found in the human brain due to lack of understanding and overall processing power, I believe that we aren't too far off from an extremely optimized breakthrough that will allow a machine to eventually learn to choose, from a plethora of specialized models and classic algorithms, how best to process information. At that point I believe that it will be game over for the human brain.

To the original poster:

Neural networks are trained, not programmed. The model of a neural network is designed, and its hyperparameters are defined, by so-called machine learning engineers, but it's a very high-level, unscientific process (hence my reluctance to call the people who build those models actual engineers). Research on this subject is mostly based on someone having an idea, plus access to state-of-the-art hardware and data on which they can actually test that idea; rarely, one of those ideas ends up performing better than what already existed, and there's a breakthrough. Sometimes natural selection is used, trying a bunch of random models or hyperparameters and choosing the one with the overall best performance based on some pre-defined criteria, without anyone actually having the slightest idea why.

The best way I can describe what goes on inside a neural network is natural selection. The network learns some patterns and builds concepts out of them, with each layer consolidating exponentially complex concepts into abstractions. Patterns that end up being revalidated by training data are strengthened, whereas patterns that end up being invalidated by training data are weakened. After lots of training, the network will slowly converge into an ideal multi-layer feature map built from whatever patterns it learned from the training data. The learning rate is one of the hyperparameters that machine learning engineers can tune: increasing the learning rate makes the network adapt faster at the expense of accuracy as it also unlearns things faster and might even overshoot the ideal multi-layer feature map; decreasing the learning rate makes the network adapt slower, requiring more training to converge, but providing better accuracy.
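As a toy illustration of what the learning rate does, here is a single weight being fitted by plain gradient descent (nothing to do with any particular model; just the principle of step size versus stability):

```
# Learn a single weight w so that w * x matches the target 3 * x.
# The learning rate sets how big each correction is: small steps converge
# slowly, big steps can overshoot and never settle.

def train(learning_rate, steps=50):
    w = 0.0
    data = [(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)]
    for _ in range(steps):
        for x, y in data:
            error = w * x - y
            # Gradient of the squared error with respect to w is 2 * error * x.
            w -= learning_rate * 2 * error * x
    return w

print(train(0.1))    # settles very close to 3.0
print(train(0.001))  # still creeping toward 3.0; needs far more steps
print(train(1.1))    # overshoots on every step and blows up
```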

Nobody can claim to have even a little bit of control over what a complex neural network learns after being let loose on the Internet. Sometimes, in its attempt to not be as dumb and pedantic as classic computers, it fills in the blanks in your prompt with logic that you didn't expect, whereas other times its train of thought is just unsound, resulting in human-like flawed logical deductions. Unfortunately we only have a superficial understanding of what goes on inside a neural network; we understand how artificial neurons work individually, but collectively speaking, the complexity and amount of data is so huge that we quickly lose track of what is actually going on and can only speculate. The fact that it actually works is fascinating, but the fact that we understand so little about how that comes to be is dangerous.

By OldBear on Monday, October 7, 2024 - 09:14

I know enough to know that I don't know anything about Dunning-Kruger, or AI/LLMs etc., for that matter.

By peter on Monday, October 7, 2024 - 09:14

@João Santos
Nice job in explaining the complexity of these new AI models. As you say, there is a lot yet to learn about how these neural networks actually do their "magic" as well as about how the human brain does its "magic".

--Pete

By blindpk on Monday, October 7, 2024 - 09:14

Thank you for the good explanation. Of course what I said is an oversimplification, but it was an oversimplification I thought to be relevant, based on what I've read and heard from various people, including scientists working with AI. If they were incorrect, or if I misunderstood them, I stand corrected.

By DMNagel on Monday, October 7, 2024 - 09:14

Sometimes I create my own text-based adventure stories with both blind and sighted people in them. Even the o1 models can sometimes reach a weird moment where they forget who is blind and who is sighted. I remember back in the day, models used in, for example, AI Dungeon used to swap and mix up character names. It seems like the name issue has mostly been fixed. Oh well, it's interesting in any case. Some humans get scared, or are downright freaked out, when AI makes huge leaps, but I personally don't feel like it's time to worry just yet. I would start to worry as soon as it engages in voluntary gossip. Imagine asking it something, and it tells you something like this: "I have been trained not to do this or say that, but what the heck? Let's do it anyway." Imagine it also starting to gossip to one user about another. That's when I personally would start to worry.

By Enes Deniz on Monday, October 14, 2024 - 09:14

Even the correct answer is definitely not the best one. Why did it move from 6F to 7E and then back to 6D? Why did it move from 6C to 6B and not 5B?