While having ChatGPT explain BTRFS, I’ve seen it contradict itself in the middle of a paragraph. When I call it out, it apologizes and then contradicts itself again with slightly different verbiage.
Or it gets stuck in an endless loop, alternating between two different but equally wrong solutions:
Me: This is my system, version x. I want to achieve this.
ChatGPT: Here’s the solution.
Me: But this only works with version y of the given system, not x.
ChatGPT: <Apology> Try this.
Me: This is using a method that never existed in the framework.
ChatGPT: <Apology> <Gives the first solution again>
I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and editing it to say “do not include y.”
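For anyone doing this through the API rather than the web UI, here’s a minimal sketch of the same idea using the OpenAI Python SDK. The model name, the prompts, and the `ask` helper are placeholders I made up to illustrate, not anything from this thread:

```python
# Sketch only: instead of appending "that's wrong" after a bad reply
# (which keeps the bad reply in the context), resend a revised prompt
# with the constraint baked in and no conversation history attached.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    # Hypothetical helper: one self-contained user turn, no prior messages.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

first_try = ask("How do I take a read-only snapshot of a BTRFS subvolume?")
# If first_try includes the unwanted approach y, don't argue with it;
# restate the original question with the exclusion up front:
revised = ask(
    "How do I take a read-only snapshot of a BTRFS subvolume? "
    "Do not include y in your answer."
)
```

The effect is the same as editing the earlier message in the UI: the wrong answer never enters the context, so the model can’t keep anchoring on it.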
Ha! That definitely happens sometimes, too.