lidd1ejimmy@lemmy.ml to Memes@lemmy.ml, English · 10 months ago — Offline version of Chat GPT (lemmy.ml)
Ignotum@lemmy.world · 10 months ago — A 70b model taking 1.5 GB? So about 0.02 bytes (roughly 0.17 bits) per parameter? Are you sure you're not thinking of a heavily quantised and compressed 7b model or something? Ollama's llama3 70b is 40 GB from what I can find; that's a lot of DVDs.
NoiseColor@lemmy.world · 10 months ago — Ah yes, probably the smaller version, you're right. Still a very good LLM, better than GPT-3.
9point6@lemmy.world · 10 months ago — Less than half of a BDXL though! The dream still breathes.
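A quick back-of-the-envelope check of the numbers in this thread. The 1.5 GB and 40 GB figures come from the comments above; the DVD and BDXL capacities are standard single-layer disc sizes, and "GB" is taken as a decimal gigabyte throughout:

```python
import math

GB = 10**9  # decimal gigabyte, as disc capacities are usually quoted

# The 1.5 GB claim against a 70-billion-parameter model:
params_70b = 70 * 10**9
claimed_bytes = 1.5 * GB
bits_per_param = claimed_bytes * 8 / params_70b
print(f"{bits_per_param:.2f} bits per parameter")  # ≈ 0.17, far below any usable quantisation

# The ~40 GB Ollama llama3:70b download versus single-layer DVDs (4.7 GB each):
ollama_bytes = 40 * GB
dvd_bytes = 4.7 * GB
print(f"{math.ceil(ollama_bytes / dvd_bytes)} DVDs")  # 9 discs

# A BDXL disc holds 100 GB, so 40 GB is indeed less than half of one:
bdxl_bytes = 100 * GB
print(f"{ollama_bytes / bdxl_bytes:.0%} of a BDXL")  # 40%
```

Even a 1-bit quantisation of a 70b model would need around 8.75 GB before any compression, so the 1.5 GB figure only makes sense for a much smaller model.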