• Downcount@lemmy.world
    1 year ago

    The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

    Or it gets stuck in an endless loop, alternating between two different but equally wrong solutions.

    Me: This is my system, version x. I want to achieve this.

    ChatGPT: Here’s the solution.

    Me: But this only works with version y of the given system, not version x.

    ChatGPT: <Apology> Try this.

    Me: This is using a method that never existed in the framework.

    ChatGPT: <Apology> <Gives first solution again>

    • UberMentch@lemmy.world
      1 year ago

      I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and saying “do not include y.”

    • BrianTheeBiscuiteer@lemmy.world
      1 year ago

      While explaining BTRFS, I’ve seen ChatGPT contradict itself in the middle of a paragraph. When I call it out, it apologizes and then contradicts itself again with slightly different verbiage.