Full image and other similar screenshots

  • apftwb@lemmy.world · 6 hours ago

    LLM exploits? X manipulating public opinions? X leveraging AI to manipulate public opinion? Israel/Palestine conflict? This post has everything.

  • BigDiction@lemmy.world · 6 hours ago

    If true, I’d expect Furkan to be upset, but I suppose he just respects the technology behind the algo 🤷

  • Robin@lemmy.world · 1 day ago

    Likely just hallucinations. For example, there is no way they would store a confidence score as a string.

    • decrochay@lemmy.ml · 12 hours ago

      It’s also possible that it retrieved the data from whatever sources it has access to (i.e., via tool calls) and then constructed the JSON based on its own schema. That is, the string value may not represent how the underlying data is stored, which wouldn’t be unusual or unexpected with LLMs.

      But it could definitely also just be hallucinations. I’m not certain, but since the schema looks consistent across these screenshots, it does seem like the schema may be pre-defined. (Even if that could be verified, though, it wouldn’t completely rule out hallucinations, since Grok could be hallucinating values into a pre-defined schema.)
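      The point about re-serialization can be sketched briefly. This is a hypothetical illustration, not Grok's actual pipeline: the field names and stored record below are invented, and the assumption is only that a model asked to "show the record as JSON" may coerce values into strings in its own schema, even when the backing store keeps them as numbers.

      ```python
      import json

      # Hypothetical backing record: confidence kept as a float in storage.
      stored_record = {"user_id": 12345, "confidence_score": 0.87}

      # What a model might emit after a tool call: every value coerced to a
      # string while building JSON against its own (or a prompted) schema.
      model_output = json.dumps({k: str(v) for k, v in stored_record.items()})

      parsed = json.loads(model_output)
      print(parsed["confidence_score"])                   # prints "0.87"
      print(isinstance(parsed["confidence_score"], str))  # prints True
      ```

      So a string-typed score in a screenshot tells you how the model chose to serialize the value, not necessarily how the underlying system stores it.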

    • geneva_convenience@lemmy.ml (OP) · 20 hours ago

      If these were hallucinations, which they very well could be, it would mean the model has learned this bias somewhere, indicating that Grok has either been programmed to derank Palestine content or has learned it by itself (less likely).

      It’s difficult to conceive of the AI making this up for no reason, and doing it so consistently across multiple accounts when asked the same question.