

Indeed, I think vulgarization of CVEs for a broader audience should start with requirements.




Few here seem to address the real issue: it does not work 100% of the time for you.
It might work for everybody else, but that doesn’t help you much. You have your setup, not theirs.
So… you need to investigate. When it works, great, but there is nothing to learn from that. When it fails though… can you find a pattern? Does it always fail after you have used something specific? Check https://lemmy.ml/post/46800646/25494455 which gives examples of potential failure points and journalctl logs. You can then check what failed, and if that yields nothing you at least know when it happened, so you can backtrack through other logs, e.g. dmesg.
The key takeaway is that when things do not behave as expected, you need to put a detective hat on and investigate:
journalctl or dmesg, typically files in /var/log/, grep, and other tools. You also have limited time, because the logs will, just like a real crime scene, get contaminated, rotated, or deleted. So… if you do encounter the problem, do not rush to the next task at hand; you would be wasting an opportunity to learn, and the window is vanishing.
TL;DR: grep logs


what about just using Debian? It’s a bit hassle
What hassle? Genuinely curious.


A lot of great advice here already, often clarifying that a computer that is not yours… is not yours.
What I would still add, though, is that you are NOT, and I’m very confident in saying this, the only one there, in your very school, asking that question. In fact I would argue MOST users have the exact same concerns, but they might not even be aware that alternatives exist.
So… do not push back against all this, or even just avoid it, alone. Find others who have similar problems and solve them together.
There might be a Linux User Group already; join it. If there isn’t one, consider starting it. It might be just you for a few weeks, even months, but at least you will dedicate time and space to improving YOUR situation. Chances are, though, that others, even if only curious at first, will check what you are up to, whether they can replicate it, etc.
Don’t stay isolated; move the needle for yourself first, in your corner, but be welcoming to others who are eager to contribute.
It’s a challenge, but it’s a fun one when you tackle it with others.


Try GNU Taler https://www.taler.net/


I’m aware (unfortunately) of the marketing claims, and even if they might be true, as you say it is only “for now”. So if it’s only temporary, part of that arms race, especially when held by a company that leaked its own code just days ago, then I have a hard time understanding why ‘zero-days are numbered’, because this title claims the dynamic itself is gone. That’s not my understanding, especially if other models are only marginally worse than it (which is hard to prove with models; finding proper metrics is difficult).
See the comment that shared https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims just a few hours ago, and that’s not even sophisticated.
Anthropic and OpenAI have used this arms race rhetoric multiple times before, and it worked. Their models are supposedly “too dangerous” to be released, so consequently they have to control access.
It might be true, but so far what we have witnessed is that roughly equivalent models get released by others mere weeks or months later, sometimes open, and the “moat” never lasted long, so I’m questioning why it would be different this time.


That doesn’t make sense. Don’t the attackers have the same tools?
IMHO the question depends on :
So… sure, Signal is not perfect, but if you can’t convince your family members to move to DeltaChat, it sure beats using WhatsApp, Telegram, etc.


I’ll simplify for the downvotes : “a De-Googled OS […] cannot compromise […] Google Maps”


"considering setting up a De-Googled OS as well, but there are a few things that I cannot compromise on:
Sorry but … is this a joke?


No sabotage required. It’s typically a poor tool with no strategy behind the so-called deployment. AI snake oil salesmen claimed that AI could boost productivity AND serve as a scapegoat to cull the workforce; shareholders demanded both, managers obliged, and now the shit show is everywhere with no gains in sight.


OK. I’ll just claim French privacy law is better than GDPR then. If you ask, I’ll just point you to French law. If you tell me that doesn’t help, I’ll call you a troll.
I mean honestly if that’s how you interact with people I’d rather just block you, I don’t need more noise in my life. Take care.


IMHO LLM usage isn’t coherent with independence. That being said, I wrote quite a bit on self-hosting LLMs. There are quite a few tools available, like ollama, itself relying on llama.cpp, that can both work locally and provide an API-compatible replacement for cloud services. As you suggested though, at home one typically doesn’t have the hardware, GPUs with 100+GB of VRAM, to run the state of the art. There is a middle ground, though, between the closed-source cloud behind an API key and open source at home on low-end hardware: running SOTA open models on cloud machines. It can be done on any cloud, but it’s much easier to start with dedicated hardware and tooling; HuggingFace is great for that, but there are several options.
TL;DR: closed cloud -> open models on cloud -> self-hosted provides a better path to independence, including training.
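As a sketch of the API-compatible part, assuming an ollama instance running locally on its default port 11434 (it exposes an OpenAI-style chat endpoint; the model name `llama3` is just an example):

```python
import json
from urllib.request import Request

# Local endpoint; a cloud client would send the exact same payload shape,
# so switching providers is mostly a base-URL change.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> Request:
    """Build an OpenAI-compatible chat request aimed at the local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(OLLAMA_URL, data=body,
                   headers={"Content-Type": "application/json"})

req = build_chat_request("llama3", "Summarize the GDPR in one sentence.")
print(req.full_url)
```

Actually sending it with `urllib.request.urlopen(req)` requires `ollama serve` to be running, so that step is left out here.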


Paying for DRM-free quality content https://www.defectivebydesign.org/guide/ and pirating the rest. Also promoting the concept of Big Content from Chokepoint Capitalism https://www.penguinrandomhouse.com/books/710957/chokepoint-capitalism-by-rebecca-giblin-and-cory-doctorow/


How is asking you to justify a position trolling? You are the one who claimed that Danish law is better than GDPR. I didn’t claim you lied or that the law elsewhere was better; I solely asked for proof. It’s not that I mistrust you, I just want to learn, and you saying it is so, without an actual comparison, is not enough. If you don’t want to help, that’s perfectly OK, you can just say so. It’s fine to say you prefer Danish products because they are better and to refuse to give proof that it’s the case. It won’t help me or others, though.
It’s the Privacy community on Lemmy, I bet others would love to learn too.


AI tools can find bugs faster than they can be patched
Not a security expert, but wasn’t that already the case? It feels like even before AI there were already far more bugs, security-related or not, sitting in backlogs than could be patched. That’s precisely why metrics like severity exist.


Bought a 2nd-hand Pixel 8 to put GrapheneOS on it, not sure if that counts. It feels old, more ecological, cheaper, and more private. Not sure how repairable it is, but in theory I can use it for up to 7 years, so hopefully by the time I need to repair it I won’t even want to.


Thanks, I skimmed through https://www.recordinglaw.com/world-laws/world-data-privacy-laws/denmark-data-privacy-laws/ but it’s quite difficult to do a “diff” between one and the other. From reading it I didn’t notice anything significantly better for my normal usage, but I’m not a lawyer. It also makes me wonder, if you have done the comparison, how do you know it’s not better than, say, another random EU country’s national modifications, e.g. Slovenia’s? Is there a “benchmark” somewhere that identifies which national changes are better?


Which Danish laws go beyond the GDPR?


Interesting, I didn’t see it in the documentation, so in case you haven’t documented it already: you can add your local instance as a search suggestion in Firefox on mobile and desktop. I use that with my own wiki, e.g. https://mastodon.pirateparty.be/@utopiah/116351732150481942
Also, how I would imagine it: it is the default search there and, if there is no hit, it falls back to a default search engine, e.g. DDG.