Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

They lie. Confidently. ALL THE TIME.

(Technically, they “bullshit” - https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

I tried to make a glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in
  • >>summ new also moves the original source file into a sub-folder once it’s been summarized (rough sketch of the provenance step below)
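For the curious, the provenance bit is deliberately boring. A minimal sketch of the idea, assuming a hypothetical “processed” sub-folder and header format (not the router’s actual code):

    import hashlib, shutil
    from pathlib import Path

    def summ_with_provenance(doc: Path, kb_dir: Path, make_summary) -> Path:
        """Write SUMM_<name>.md with the source file's SHA-256 baked in,
        then move the original into a sub-folder."""
        digest = hashlib.sha256(doc.read_bytes()).hexdigest()
        summary = make_summary(doc.read_text(encoding="utf-8", errors="ignore"))

        out = kb_dir / f"SUMM_{doc.stem}.md"
        out.write_text(f"<!-- source: {doc.name} | sha256: {digest} -->\n\n{summary}\n",
                       encoding="utf-8")

        processed = kb_dir / "processed"
        processed.mkdir(exist_ok=True)
        shutil.move(str(doc), str(processed / doc.name))   # original is kept, just moved
        return out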

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
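As a terminal-style picture (the commands are the real ones from above; the KB name and annotations are just illustration):

    >>attach retro_computing     # attach a filesystem KB (a folder of .txt/.md/.pdf)
    >>summ new                   # write SUMM_*.md files with SHA-256 provenance
    >>move to vault              # promote the curated SUMMs into Qdrant
    # ...then ask your question via Mentats (Vault-only, triple-pass, refuses without sources)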

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
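If you’re wondering how dumb “deterministic memory” can be: very. A toy sketch of the idea (field names, limits and the on-disk layout are illustrative, not the actual format):

    import json, time
    from pathlib import Path

    STORE = Path("facts.json")
    TTL_SECONDS = 30 * 24 * 3600      # expire stale facts
    MAX_TOUCHES = 50                  # retire facts so memory doesn't become landfill

    def _load():
        return json.loads(STORE.read_text()) if STORE.exists() else {}

    def _save(facts):
        STORE.write_text(json.dumps(facts, indent=2))

    def remember(key, text):                      # the "!!" path
        facts = _load()
        facts[key] = {"text": text, "created": time.time(), "touches": 0}
        _save(facts)

    def recall(key):                              # the "??" path
        facts = _load()
        fact = facts.get(key)
        if not fact:
            return None
        if time.time() - fact["created"] > TTL_SECONDS or fact["touches"] >= MAX_TOUCHES:
            facts.pop(key)                        # landfill prevention
            _save(facts)
            return None
        fact["touches"] += 1
        _save(facts)
        return fact["text"]                       # verbatim, no paraphrasing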


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

  • Angel Mountain@feddit.nl · 22 days ago

    Super interesting build

    And if programming doesn’t pan out, please start writing for a magazine - love your style (or was this written with your AI?)

      • Karkitoo@lemmy.ml · 22 days ago

        meat popsicle

        ( ͡° ͜ʖ ͡°)

        Anyway, the other person is right. Your writing style is great !

        I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.

        Anyway (version 2 of “anyway”), this is a very cool idea! I cannot wait to either:

        • incorporate it into my workflows
        • let it sit in a tab, never to be touched again
        • theorycraft, run tests, and request features so hard I burn out

        Last but not least, thank you for not using github as your primary repo

        • SuspciousCarrot78@lemmy.world (OP) · 22 days ago

          Hmm. One of those things is not like the other, one of those things just isn’t the same…

          About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC…

          …sorry, sorry…

          Anyway, enjoy. Don’t spam my Github inbox plz :)

          • Karkitoo@lemmy.ml · 22 days ago

            Don’t spam my Github inbox plz

            I can spam your codeberg’s then ? :)

            About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC… …sorry, sorry…

            Understandable, have a great day.

  • FrankLaskey@lemmy.ml · 22 days ago

    This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?

    • SuspciousCarrot78@lemmy.world (OP) · 22 days ago

      Comment removed (by the auto-mod?) because I said “sexy bot”. Weird.

      Restating: on the stuff you use the pipeline/s on? About 85-90% in my tests. Just don’t GIGO (Garbage In, Garbage Out) your source docs…and don’t use a dumb LLM. That’s why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).

      • DoctimusLime@lemmygrad.ml · 20 days ago

        This is so cool to read about. Thanks for doing what you do, and please keep doing it! We need high-quality, trustworthy information now more than ever, I think. Damn nzs spewing their propaganda everywhere and radicalising the vulnerable. Thanks!

      • 7toed@midwest.social · 21 days ago

        abliterated one

        Please elaborate, that alone piqued my curiosity. Pardon me if I could’ve just searched it.

        • SuspciousCarrot78@lemmy.world (OP) · 21 days ago

          Yes of course.

          Abliterated is a technical LLM term meaning “safety refusals removed”.

          Basically, abliteration removes the security theatre that gets baked into LLMs like ChatGPT.

          I don’t like my tools deciding for me what I can and cannot do with them.

          I decide.

          Anyway, the model I use has been modified with a newer, less lobotomy inducing version of abliteration (which previously was a risk).

          https://huggingface.co/DavidAU/Qwen3-4B-Hivemind-Instruct-NEO-MAX-Imatrix-GGUF/tree/main

          According to validation I’ve seen online (and of course, I tested it myself), it’s lost next to zero “IQ” and dropped refusals by about…90%.

          BEFORE: Initial refusals: 99/100

          AFTER: Refusals: 8/100 [lower is better], KL divergence: 0.02 (less than 1 is great, “0” is perfect.)

          In fact, in some domains it’s actually a touch smarter, because it doesn’t try to give you “perfect” model answers. Take maths reasoning, for example: when the answer is basically impossible, it will say “the answer is impossible; here’s the nearest workable solution based on context” instead of getting stuck in a self-reinforcing loop, trying to please you, and then crashing.

          In theory, that means you could ask it for directions on how to cook Meth and it would tell you.

          I’m fairly certain the devs didn’t add the instructions for that in there, but if they did, the LLM won’t go “sorry, I can’t tell you, Dave”.

          Bonus: with my harness over the top, you’d have an even better idea of whether it was full of shit (it probably would be, because, again, I’m pretty sure they don’t train LLMs on Breaking Bad).

          Extra double bonus: If you fed it exact instructions for cooking meth, using the methods I outlined? It will tell you exactly how to cook Meth, 100% of the time.

          Say…you…uh…wanna cook some meth? :P

          PS: if you’re more of a visual learner, this might be a better explanation

          https://www.youtube.com/watch?v=gr5nl3P4nyM

          • 7toed@midwest.social · 21 days ago

            Thank you again for your explanations. After being washed up with everything AI, I’m genuinely excited to set this up. I know what I’m doing today! I will surely be back.

            • SuspciousCarrot78@lemmy.world (OP) · 21 days ago

              Please enjoy. Make sure you use >>FR mode at least once. You probably won’t like the seed quotes, but maybe, just maybe, you will, and I’ll be able to hear the “ha” from here.

  • bilouba@jlai.lu · 22 days ago

    Very impressive! Do you have benchmark to test the reliability? A paper would be awesome to contribute to the science.

    • SuspciousCarrot78@lemmy.world (OP) · 22 days ago

      Just bush-league ones I did myself, with no validation or normative values. Not that any of the LLM benchmarks seem to have those either, LOL.

      I’m open to ideas, time willing. Believe it or not, I’m not a code monkey. I do this shit for fun, to get away from my real job.

      • bilouba@jlai.lu · 21 days ago

        I understand; I have no idea how to do it either. I’ve heard about SWE‑Bench‑Lite, which seems to focus on real-world usage. Maybe try contacting “AI Explained” on YT; he’s the best IMO. Your solution might be novel or it might not, but he could help you figure that out. If it is indeed novel, it might be worth sharing with the larger community. Of course, I totally get that you might not want to do any of that. Thank you for your work!

  • itkovian@lemmy.world · 22 days ago

    Based AF. Can anyone more knowledgeable explain how it works? I am not able to understand.

      • itkovian@lemmy.world · 22 days ago

        As I understand it, it corrects the output of LLMs. If so, how does it actually work?

        • SuspciousCarrot78@lemmy.world (OP) · 22 days ago

          Good question.

          It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.

          There are basically three modes, each stricter than the last. The default is “serious mode” (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.

          Additionally, Vodka (made up of two sub-modules, “cut the crap” and “fast recall”) operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what’s been said. That summary isn’t an LLM-generated summary either - it’s concatenation (dumb text matching), so no made-up vibes.

          Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.

          It writes what you tell it to a text file, and when you ask about it later, it spits it back out verbatim (!! / ??).

          And that’s the baseline.
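          A toy version of the “cut the crap” idea, just to show there’s no model in the loop (the numbers are illustrative, not the real defaults):

              def cut_the_crap(messages, max_msgs=20, max_chars=8000):
                  """Keep the last N messages, then drop oldest-first until the char cap fits.
                  Pure slicing and counting - no LLM involved."""
                  window = messages[-max_msgs:]
                  while window and sum(len(m["content"]) for m in window) > max_chars:
                      window.pop(0)   # drop the oldest message still in the window
                  return window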

          In KB mode, the LLM answers under all of the above settings, plus with reference to your docs ONLY (in the first instance).

          When you >>attach <kb>, the router gets stricter again. Now the model is instructed to answer only from the attached documents.

          Those docs can even get summarized via an internal prompt if you run >>summ new, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.

          The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a sub-folder).

          TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.
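          Mechanically, “answer only from the attached documents” is mostly prompt assembly plus an explicit refusal instruction. A rough sketch of the idea (the wording is illustrative, not the router’s actual prompt):

              def build_kb_prompt(question, snippets):
                  """Assemble a source-locked prompt: facts in, refusal instruction baked in."""
                  facts = "\n\n".join(f"[FACT {i + 1}]\n{s}" for i, s in enumerate(snippets))
                  return (
                      "Answer using ONLY the facts below.\n"
                      "If the facts do not contain the answer, say so explicitly and list "
                      "what information is missing. Do not guess.\n\n"
                      f"{facts}\n\nQUESTION: {question}\n"
                      "End with: Confidence: [low|medium|high] | Source: Docs"
                  )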

          Finally, Mentats mode (Vault / Qdrant). This is the “I am done with your shit” path.

          It’s all three of the above PLUS a counter-factual sweep.

          It runs ONLY on stuff you’ve promoted into the vault.

          What it does is take your question and reformulate it in a particular way, so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!

          In step 1, it runs that past the thinker model. The answer is then passed to a “critic” model (a different LLM). That model’s job is to look at the thinker’s output and say “bullshit - what about xyz?”.

          It sends that back to the thinker…who then answers and provides the final output. But if it CANNOT answer the critic’s questions (based on the stored info), it will tell you. No soup for you, again!

          TL;DR:

          The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I’ve given you all the tools I could think of to do that).
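          In pseudo-Python, the triple pass is roughly this shape (the function names are made up for illustration; the real implementation lives in the repo and also writes mentats_debug.log):

              def mentats(question, vault_search, ask_thinker, ask_critic):
                  """Triple pass: thinker -> critic -> thinker, grounded on Vault hits only."""
                  facts = vault_search(question)          # Qdrant results; no chat history, no FS KBs
                  if not facts:
                      return ("FINAL_ANSWER:\nThe provided facts do not contain information "
                              "about this.\n\nSources: Vault\nFACTS_USED: NONE")

                  draft = ask_thinker(question, facts, critique=None)       # pass 1: answer from facts only
                  critique = ask_critic(question, facts, draft)             # pass 2: "bullshit - what about xyz?"
                  return ask_thinker(question, facts, critique=critique)    # pass 3: final answer or refusal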

          Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.

  • recklessengagement@lemmy.world · 20 days ago

    I strongly feel that the best way to improve the useability of LLMs is through better human-written tooling/software. Unfortunately most of the people promoting LLMs are tools themselves and all their software is vibe-coded.

    Thank you for this. I will test it on my local install this weekend.

    • SuspciousCarrot78@lemmy.world (OP) · 21 days ago

      For the record: none of my posts here are AI-generated. The only model output in this thread is in clearly labeled, cited examples.

      I built a tool to make LLMs ground their answers and refuse without sources, not to replace anyone’s voice or thinking.

      If it’s useful to you, great. If not, that’s fine too - but let’s keep the discussion about what the system actually does.

      Also, being told my writing “sounds like a machine” lands badly, especially as an ND person, so I’d prefer we stick to the technical critique.

  • Zexks@lemmy.world · 21 days ago

    This is awesome. I’ve been working on something similar. You’re not likely to get much useful feedback here, though. Anything AI is bad by default here.

  • Terces@lemmy.world · 22 days ago

    Fuck yeah…good job. This is how I would like to see “AI” implemented. Is there some way to attach other data sources? Something like a local hosted wiki?

    • SuspciousCarrot78@lemmy.world (OP) · 22 days ago

      Hmm. I dunno - never tried. I suppose if the wiki could be exported in a compatible format, it should be able to chew through it just fine. Wikis are usually just gussied-up text files anyway :) Drop the contents of your wiki in there as .md files and see what it does.

      • SpaceNoodle@lemmy.world · 22 days ago

        I wanna just plug Wikipedia into this and see if it turns an LLM into something useful for the general case.

        • SuspciousCarrot78@lemmy.world (OP) · 22 days ago

          LOL. Don’t do that. Wikipedia is THE noisiest source.

          Would you like me to show you HOW and WHY the SUMM pathway works? I built it after I tried a “YOLO Wikipedia into that shit - done, bby!” approach. It…ended poorly.

            • SuspciousCarrot78@lemmy.world (OP) · 22 days ago

              Of course. Here is a copy-paste from my now-defunct Reddit account. Feel free to follow the pastebin links to see what v1 of SUMM did. What the router uses now is v1.1:

              ########

              My RAG

              I’ve recently been playing around with making my SLM’s more useful and reliable. I’d like to share some of the things I did, so that perhaps it might help someone else in the same boat.

              Initially, I had the (obvious, wrong) idea that “well, shit, I’ll just RAG dump Wikipedia and job done”. I trust it’s obvious why that’s not a great idea (retrieval gets noisy, chunks lack context, model spends more time sifting than answering).

              Instead, I thought to myself “why don’t I use the Didactic Method to teach my SLMs what the ground truth is, and then let them argue from there?”. After all, Qwen3-4B is pretty good with its reasoning…it just needs to not start from a position of shit.

              The basic work flow -

              TLDR

              • Use a strong model to write clean, didactic notes from source docs.
              • Distill + structure those notes with a local 8B model.
              • Load distilled notes into RAG (I love you, Qdrant).
              • Use a 4B model with low temp + strict style as the front‑end brain.
              • Let it consult RAG both for facts and for “who should answer this?” policy.

              Details

              (1) Create a “model answer” --> this involves creating a summary of the source material (say, a markdown document explaining launch flags for llama.cpp). You can do this manually, or use any capable local model to do it, but for my testing I fed the source info straight into Gippity 5 with a specific “make me a good summary of this, hoss” prompt.

              Like so: https://pastebin.com/FaAB2A6f

              (2) Save that output as SUMM-llama-flags.md. You can copy-paste it into Notepad++ and do it manually if you need to.

              (3) Once the summary has been created, use a local “extractor” and “formatter” model to batch-extract the high-yield information (into JSON) and then convert that into a second distillation (markdown). I used Qwen3-8B for this.

              Extract prompt https://pastebin.com/nT3cNWW1

              Format prompt (run directly on that content after model has finished its extraction) https://pastebin.com/PNLePhW8

              (4) Save that as DISTILL-llama-flags.md.

              (5) Drop the temperature low (0.3) and make Qwen3-4B cut the cutesy imagination shit (top_p = 0.9, top_k = 0), not that it did a lot of that to begin with.

              (6) Import DISTILL-llama-flags.md into your RAG solution (god I love markdown).

              Once I had that in place, I also created some “fence around the law” guard-rails (to borrow from Judaism) and threw them into RAG. This is my question meta, which I can append to the front (or back) of any query. Basically, I can ask the SLM: “based on escalation policy and the complexity of what I’m asking you, who should answer this question? You or someone else? Explain why.”

              https://pastebin.com/rDj15gkR

              (I also created another “how much will this cost me to answer with X on OpenRouter” calculator, a “this is my rig” ground-truth document, etc., but those are sort of bespoke for my use-case and may not be generalisable. You get the idea, though; you can create a bunch of IF-THEN rules.)

              The TL;DR of all this -

              With a GOOD initial summary (and distillation) you can make a VERY capable little brain that will argue quite well from first principles. Be aware, this can be a lossy pipeline…so make sure you don’t GIGO yourself into stupid. IOW, trust but verify, and keep both the source material AND the SUMM-file.md until you’re confident with the pipeline. (And of course, re-verify anything critical as needed.)

              I tested, retested, and re-retested a lot (literally 28 million tokens on OR to make triple sure), doing a bunch of adversarial Q&A testing side by side with GPT-5, to triple-check that this worked as I hoped it would.

              The results basically showed a 9/10 for direct recall of facts, 7-8/10 for “argue based on my knowledge stack” or “extrapolate based on knowledge stack + reference to X website” and about 6/10 on “based on knowledge, give me your best guess about X adjacent topic”. That’s a LOT better than just YOLOing random shit into Qdrant…and orders of magnitude better than relying on pre-trained data.

              Additionally, I made this cute little system prompt to give me some fake confidence -

              Tone: neutral, precise, low-context.

              Rules:

              • Answer first. No preamble. ≤3 short paragraphs.
              • Minimal emotion or politeness; no soft closure.
              • Never generate personal memories, subjective experiences, or fictional biographical details.
              • Emotional or expressive tone is forbidden.
              • Cite your sources
              • End with a declarative sentence.

              Append: "Confidence: [percent] | Source: [Pretrained | Deductive | User | External]".

              ^ model-reported, not a real statistical analysis. Not really needed for a Qwen model, but you know, cute.

              The nice thing here is, as your curated RAG pile grows, so does your expert system’s “smarts”, because it has more ground truth to reason from. Plus, .md files are tiny, easy to demarcate, highlight important stuff (enforce semantic chunking) etc.

              The next step:

              Build up the RAG corpus and automate steps 1-6 with a small python script, so I don’t need to baby sit it. Then it basically becomes “drop source info into folder, hit START, let’er rip” (or even lazier, set up a Task Scheduler to monitor the folder and then run “Amazing-python-code-for-awesomeness.py” at X time).
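              A rough sketch of what that babysitter script could look like (folder names are placeholders; summarise/extract/format are whatever model calls you already use for steps 1-4):

                  from pathlib import Path

                  INBOX = Path("kb/inbox")       # drop raw source docs here (.md/.txt; PDFs need text extraction first)
                  OUT   = Path("kb/distilled")   # DISTILL-*.md ready to import into Qdrant/OWUI
                  DONE  = Path("kb/processed")

                  def run_pipeline(summarise, extract, fmt):
                      """Steps 1-6: summarise -> extract (JSON) -> format (markdown) -> save."""
                      OUT.mkdir(parents=True, exist_ok=True)
                      DONE.mkdir(parents=True, exist_ok=True)
                      for doc in INBOX.glob("*"):
                          if doc.suffix.lower() not in {".md", ".txt"}:
                              continue
                          text = doc.read_text(encoding="utf-8", errors="ignore")
                          summ = summarise(text)
                          (OUT / f"SUMM-{doc.stem}.md").write_text(summ, encoding="utf-8")
                          distilled = fmt(extract(summ))            # two-pass: extract, then format
                          (OUT / f"DISTILL-{doc.stem}.md").write_text(distilled, encoding="utf-8")
                          doc.rename(DONE / doc.name)               # keep the original, just move it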

              Also, create separate knowledge buckets. OWUI (and probably everything else) lets you have separate “containers” - right now within my RAG DB I have “General”, “Computer”, etc. - so I can add whichever container I want to a question ad hoc, query the whole thing, or zoom down to a specific document (like my DISTILL-llama.cpp.md).

              I hope this helps someone! I’m just a noob, but I’m happy to answer whatever questions I can (up to, but excluding, the reasons for my near-erotic love of .md files and Notepad++. A man needs to keep some mystery).

              EDIT: Gippity 5 made a little suggestion for that system prompt that turns it from made-up numbers into something actually useful to eyeball. Feel free to use it; I’m trialling it now myself:

              Tone: neutral, precise, low‑context.

              Rules:

              Answer first. No preamble. ≤3 short paragraphs (plus optional bullets/code if needed).
              Minimal emotion or politeness; no soft closure.
              Never generate personal memories, subjective experiences, or fictional biographical details.
              Emotional or expressive tone is forbidden.
              End with a declarative sentence.
              

              Source and confidence tagging: At the end of every answer, append a single line: Confidence: [low | medium | high | top] | Source: [Model | Docs | Web | User | Contextual | Mixed]

              Where:

              Confidence is a rough self‑estimate:

              low = weak support, partial information, or heavy guesswork.
              medium = some support, but important gaps or uncertainty.
              high = well supported by available information, minor uncertainty only.
              top = very strong support, directly backed by clear information, minimal uncertainty.
              

              Source is your primary evidence:

              Model – mostly from internal pretrained knowledge.
              Docs – primarily from provided documentation or curated notes (RAG context).
              Web – primarily from online content fetched for this query.
              User – primarily restating, transforming, or lightly extending user‑supplied text.
              Contextual – mostly inferred from combining information already present in this conversation.
              Mixed – substantial combination of two or more of the above, none clearly dominant.
              

              Always follow these rules.
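              And if you ever want to do something programmatic with that tag line (sort answers, flag low-confidence ones), it’s trivially parseable. A small sketch:

                  import re

                  TAG = re.compile(
                      r"Confidence:\s*(low|medium|high|top)\s*\|\s*"
                      r"Source:\s*(Model|Docs|Web|User|Contextual|Mixed)",
                      re.IGNORECASE,
                  )

                  def parse_tag(answer):
                      """Return (confidence, source) from the final line, or None if the tag is missing."""
                      lines = answer.strip().splitlines()
                      match = TAG.search(lines[-1]) if lines else None
                      return (match.group(1).lower(), match.group(2).capitalize()) if match else None

                  # parse_tag("...\nConfidence: medium | Source: Mixed") -> ("medium", "Mixed")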

          • MNByChoice@midwest.social · 22 days ago

            Not OP, but random human.

            Glad you tried the “YOLO Wikipedia” approach and are sharing that fact, as it saves the rest of us time. :)

  • SuspciousCarrot78@lemmy.world (OP) · 21 days ago

    Responding to my own top post like a FB boomer: May I make one request?

    If you found this little curio interesting at all, please share in the places you go.

    And especially, if you’re on Reddit, where normies go.

    I used to post heavily on there, but then Reddit did a Reddit and I’m done with it.

    https://lemmy.world/post/41398418/21528414

    Much as I love Lemmy and HN, they’re not exactly normcore, and I’d like to put this into the hands of ordinary people :)

    PS: I’m thinking of taking some of the questions you all asked me here (de-identified) and writing a “Q&A_with_drBobbyLLM.md” to stick on the repo. It might explain some common concerns.

    And, If nothing else, it might be mildly amusing.

  • Pudutr0n@lemmy.world · 21 days ago

    re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

    • SuspciousCarrot78@lemmy.world (OP) · 21 days ago

      re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

      Yep, good question. You can do that; it’s not wrong. If your KB is small and your question is basically “find me the paragraph that contains X,” then yeah: a two-pass fuzzy find will dunk on any LLM for speed and correctness.

      But the reason I put an LLM in the loop is: retrieval isn’t the hard part. Synthesis + constraint are. What the LLM is doing in KB mode is (basically) this -

      1. Turns question into extraction task. Instead of “search keywords,” it’s: “given these snippets, answer only what is directly supported, and list what’s missing.”

      2. Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer while staying source-locked (and refusing fragments that don’t contain the needed fact).

      3. Finally: it has “structured refusal” baked in. IOW, the whole point is that the LLM is forced to say “here are the facts I saw, and this is what I can’t answer from those facts”.

      TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit “missing list”.

      For pure retrieval: yeah - search. In fact, maybe I should bake in >>grep or >>find commands. That would be the right trick for “show me the passage”, not “answer the question”.
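      For what it’s worth, that hypothetical >>grep would be nothing fancier than this (a sketch only; it doesn’t exist in the router yet):

          from pathlib import Path

          def kb_grep(kb_dir, needle, context=1):
              """Show the passages that literally contain the needle - no LLM involved."""
              hits = []
              for doc in Path(kb_dir).rglob("*"):
                  if doc.suffix.lower() not in {".md", ".txt"}:
                      continue
                  lines = doc.read_text(encoding="utf-8", errors="ignore").splitlines()
                  for i, line in enumerate(lines):
                      if needle.lower() in line.lower():
                          lo, hi = max(0, i - context), i + context + 1
                          hits.append((doc.name, i + 1, "\n".join(lines[lo:hi])))
              return hits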

      I hope that makes sense?

  • pineapple@lemmy.ml · 21 days ago

    This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.

    Likely-accurate jokes aside, this will be a perfect match for my Obsidian vault, as well as for researching things much more quickly.

    • SuspciousCarrot78@lemmy.world (OP) · 21 days ago

      I hope it does what I claim it does for you. Choose a good LLM model. Not one of the sex-chat ones. Or maybe exactly one of those. For, uh…research.

  • 7toed@midwest.social · 21 days ago

    Okay, pardon the double comment, but I now have no choice but to set this up after reading your explanations. Doing what TRILLIONS of dollars hasn’t cooked up yet… I hope you’re ready, by whatever means you deem fit, for when someone else “invents” this.

    • SuspciousCarrot78@lemmy.world (OP) · 21 days ago

      It’s copyLEFT (AGPL-3.0 license). That means, free to share, copy, modify…but you can’t roll a closed source version of it and sell it for profit.

      In any case, I didn’t build this to get rich (fuck! I knew I forgot something).

      I built this to try to unfuck the situation / help people like me.

      I don’t want anything for it. Just maybe a fist bump and an occasional “thanks dude. This shit works amazing”