• onlinepersona@programming.dev · 6 days ago

    We’ve led the industry in building and adopting Rust

    Yeah, then you fired the team to pay the CEO a few million more.

  • vermaterc@lemmy.ml · 6 days ago

    Defenders finally have a chance to win, decisively

    I’m curious how it will turn out in the long term. Are we going to have safer software? Not only will defenders have a powerful tool, attackers will too. But at the same time, the number of bugs is finite… Can we, in theory, one day achieve literally zero bugs in a codebase?

    • brucethemoose@lemmy.world · 6 days ago (edited)

      It does seem advantageous to the defender.

      Another factor Mozilla didn’t mention (and that Anthropic wouldn’t like to emphasize) is that major LLMs are pretty similar. And their development is way more conservative than you’d think. They use similar architectures and formats, train from the same data, distill each other, further pollute the internet with the same output and so on. So if (for example) Mozilla red teams with Mythos, I’d posit it’s likely that attacker LLMs would find the same already-patched bugs, instead of something new.

      …So yeah. I’d wager Mozilla’s sentiment is correct.

    • Tinidril@midwest.social · 6 days ago

      Cybersecurity in general is going to get interesting. Breaking into protected systems often requires more patience than expertise. Attackers often get detected when they take shortcuts out of laziness or overconfidence. AI agents have unfathomable patience and attention to detail.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 6 days ago

        I don’t really agree with the attention to detail part from my experience. AI agents love to take shortcuts from what I’ve seen, and you have to pay a lot of attention to what they’re doing to make sure they do the right thing.

      • Pennomi@lemmy.world · 6 days ago

        They have attention to detail, just not the right details. It’s super easy for them to get lost in a never-ending train of tangents.

    • Nobody@lemmy.world (OP) · 6 days ago

      Not zero bugs, but it should help. A benefit for defenders is that they can run AI review on code before making it public or shipping it in a stable release.

  • kibiz0r@midwest.social · 5 days ago (edited)

    How many vulnerabilities would’ve been found if we had spent several million dollars on human security researchers though?

    • Nobody@lemmy.world (OP) · 6 days ago

      Mythos Preview is better at finding real vulnerabilities than existing public models and, for now, only a few have access to it.

      • utopiah@lemmy.ml · 5 days ago (edited)

        I’m aware (unfortunately) of the marketing claims, and even if they might be true, as you say it’s only ‘for now’. So if the edge in that arms race is temporary, especially when held by a company that leaked its own code just days ago, then I have a hard time understanding why ‘zero-days are numbered’: the title claims the dynamic itself is gone, and that’s not my understanding, especially if other models are only marginally worse than it (which is hard to prove with models; finding proper metrics is difficult).

        See the comment that shared https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims just a few hours ago, and that’s not even sophisticated.

        Anthropic and OpenAI have used this arms-race rhetoric multiple times before, and it worked. Their models are supposedly “too dangerous” to be released, so consequently they have to control access.

        It might be true, but so far what we have witnessed is that roughly equivalent models get released by others merely weeks or months later, sometimes open, and the “moat” never lasted long. So I’m questioning why it would be different this time.

    • Alex@lemmy.ml · 6 days ago

      If it’s finding valid vulnerabilities, then it’s just another tool like static analysis, fuzzers, and sanitizers. There definitely seems to be a difference in quality compared to the earlier generations that were behind the avalanche of sloppy reports.
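
      To make the comparison concrete, here is a minimal C sketch (illustrative only, not taken from any project discussed above) of the kind of memory bug that fuzzers and sanitizers already catch reliably; the file name and compile command are just examples:

      ```c
      #include <stdlib.h>
      #include <string.h>

      /* Classic heap-buffer-overflow: writes one byte past the allocation.
       * Build with AddressSanitizer to catch it at runtime, e.g.:
       *   clang -g -fsanitize=address overflow.c -o overflow && ./overflow
       * ASan aborts with a heap-buffer-overflow report at the memcpy below. */
      int main(void) {
          const char *msg = "hello";            /* 5 chars + NUL terminator = 6 bytes */
          char *buf = malloc(5);                /* one byte too small */
          if (!buf) return 1;
          memcpy(buf, msg, strlen(msg) + 1);    /* copies 6 bytes into a 5-byte buffer */
          free(buf);
          return 0;
      }
      ```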