Literally posted a suggestion yesterday for a sports project to use https://developer.mozilla.org/en-US/docs/Web/API/Accelerometer so I definitely see reasons. It might be better with a permission prompt first, but still, it’s not without reason.
What makes you think they aren’t?
I participated in W3C workshops and data privacy was definitely part of most, if not all, discussions.
That being said, each browser vendor has its own strategy and opinion based on its business model and culture.


That’s positive indeed. After Signal, maybe it’s time we all add PQC to our SSH, HTTPS, etc.
In fact, if you are wondering, OpenSSL has supported PQC since 3.5, the current LTS, and Debian stable relies on it https://packages.debian.org/stable/openssl
So… you might already be PQC-ready. In fact, if you also run Debian on your server (or its exposed containers), maybe you already connected over HTTPS in a PQC-ready fashion.
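If you want to check your own machine, here is a quick probe sketch; the algorithm names to grep for (ML-KEM in OpenSSL, the mlkem/sntrup hybrids in OpenSSH) are assumptions and may vary with your exact builds:

```shell
# Probe local tools for post-quantum algorithms; every branch ends in an
# echo so the probe always exits 0 even when a tool is missing.
openssl version
openssl list -kem-algorithms 2>/dev/null | grep -i ml-kem \
  && echo "OpenSSL: PQC KEMs present" \
  || echo "OpenSSL: no PQC KEMs found (3.5+ needed)"
ssh -Q kex 2>/dev/null | grep -iE 'mlkem|sntrup' \
  && echo "OpenSSH: hybrid PQC key exchange present" \
  || echo "OpenSSH: no PQC kex found"
```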


Hey there, I wrote https://forum.techreclaimers.club/public/d/36-reclaming-for-kids and a few people replied. You might find it useful.


Interesting, I didn’t see it in the documentation, so in case you haven’t documented it already: you can add your local instance as a search suggestion in Firefox, on mobile and desktop. I use it for my own wiki, e.g. https://mastodon.pirateparty.be/@utopiah/116351732150481942
Also, how I would imagine it: default search goes there, and if there is no hit, fall back to a default search engine, e.g. DDG.
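For reference, one standard way to get an instance into Firefox’s search engines is an OpenSearch description document linked from the instance’s pages. A minimal sketch, with `wiki.example.local` as a hypothetical host and `/search?q=` as a hypothetical query path:

```xml
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>My Wiki</ShortName>
  <Description>Search my local wiki</Description>
  <Url type="text/html"
       template="https://wiki.example.local/search?q={searchTerms}"/>
</OpenSearchDescription>
```

Serve that XML and reference it from your pages with `<link rel="search" type="application/opensearchdescription+xml" href="…">`, and Firefox offers to add the engine.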


Indeed, I think popularizing CVEs for a broader audience should start with the requirements.


Few here seem to address the actual issue: it does not work 100% of the time for you.
It might work for everybody else, but that doesn’t help you much. You have your setup, not theirs.
So… you need to investigate. When it works, great, nothing to learn there. When it fails though… can you find a pattern? Does it always fail after you have used something specific? Check https://lemmy.ml/post/46800646/25494455 which gives examples of potential failure points and journalctl logs. You can then check what failed, and if nothing turns up, you at least know when it happened and can backtrack through other logs, e.g. dmesg.
The key takeaway is that when things do not behave as expected, you need to put a detective hat on and investigate:
journalctl or dmesg, and typically the files in /var/log/ to grep, among other tools. You also have limited time because the logs will, just like on a real crime scene, get contaminated, rotated, or deleted. So… if you do encounter the problem, do not rush to the next task at hand, because you would be wasting an opportunity to learn and there is a vanishing window.
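A sketch of such a detective session, assuming a systemd distro; the "out of memory" pattern is just an example of something worth grepping for:

```shell
# Errors from the current boot only, from the systemd journal:
journalctl -b -p err --no-pager | tail -n 20
# Kernel ring buffer, e.g. hardware or driver trouble:
dmesg --level=err,warn 2>/dev/null | tail -n 20
# Classic text logs under /var/log/, grepping for a suspect pattern:
grep -ri "out of memory" /var/log/ 2>/dev/null | head -n 20
```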
TL;DR : grep logs


what about just using Debian? It’s a bit of a hassle
What hassle? Genuinely curious.


A lot of already great advice here, often clarifying that a computer that is not yours… is not yours.
What I would still add though is that you are NOT, and I’m very confident in saying this, the only one there, in your very school, to ask that question. In fact I would argue MOST users have the exact same concerns, but they might not even be aware that alternatives exist.
So… do not push back on, or even just avoid, all this alone. Find others who have similar problems and solve them together.
There might be a Linux User Group already; join them. If there isn’t one, consider starting one. It might be just you for a few weeks, even months, but at least you will dedicate time and space to improving YOUR situation. Chances are though that others, even if only curious at first, will check what you are up to, whether they can replicate it, etc.
Don’t stay isolated: move the needle for yourself first, in your corner, but be welcoming to others who are eager to contribute.
It’s a challenge, but it’s a fun one to tackle with others.


Try GNU Taler https://www.taler.net/


I’m aware (unfortunately) of the marketing claims, and even if they might be true, as you say it is only “for now”. So if the edge is only temporary in that arms race, especially when held by a company whose own code leaked just days ago, then I have a hard time understanding why ‘zero-days are numbered’, because this title claims the dynamic itself is gone. That’s not my understanding, especially if other models are only marginally worse (which is hard to prove with models, given the difficulty of finding proper metrics).
See the comment that shared https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims just a few hours ago, and that’s not even sophisticated.
Anthropic and OpenAI have used this arms-race rhetoric multiple times before, and it worked: their models are supposedly “too dangerous” to be released, and consequently they have to control access.
It might be true, but so far what we have witnessed is that roughly equivalent models get released by others mere weeks or maybe months later, sometimes open, and the “moat” never lasted long. So I’m questioning why it would be different this time.


That doesn’t make sense. Don’t the attackers have the same tools?
IMHO the answer depends on:
So… sure, Signal is not perfect, but if you can’t convince your family members to move to DeltaChat, it sure beats using WhatsApp, Telegram, etc.


I’ll simplify for the downvotes : “a De-Googled OS […] cannot compromise […] Google Maps”


"considering setting up a De-Googled OS as well, but there are a few things that I cannot compromise on:"
Sorry but … is this a joke?


No sabotage required. It’s typically a poor tool with no strategy behind the so-called deployment. AI snake-oil salesmen claimed that AI could boost productivity AND serve as a scapegoat to cull the workforce; shareholders demanded both, managers obliged, and now the shit show is everywhere with no gains in sight.


OK. I’ll just claim French privacy law is better than GDPR then. If you ask I’ll just point you to French law. If you tell me that doesn’t help I’ll call you a troll.
I mean honestly if that’s how you interact with people I’d rather just block you, I don’t need more noise in my life. Take care.


IMHO LLM usage isn’t coherent with independence. That being said, I wrote quite a bit on self-hosting LLMs. There are quite a few tools available, like ollama, itself relying on llama.cpp, that can both work locally and provide an API-compatible replacement for cloud services. As you suggested though, at home one typically doesn’t have the hardware, GPUs with 100+GB of VRAM, to run the state of the art. There is a middle ground though, between the full cloud with an API key and closed source on one side, and open source at home on low-end hardware on the other: running SOTA open models on a cloud. It can be done on any cloud, but it’s much easier to start with dedicated hardware and tooling; HuggingFace is great for that, but there are multiple options.
TL;DR: closed cloud -> open models on a cloud -> self-hosted is a path to growing independence, including training.
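As a sketch of the API-compatible part: ollama exposes an OpenAI-style endpoint on its default port, so most clients can be pointed at it by swapping the base URL. “llama3.2” below is just an example model name, and this assumes ollama is installed and running locally:

```shell
# Pull a model once (requires an ollama install):
#   ollama pull llama3.2
# Then any OpenAI-compatible client can target the local endpoint;
# the || fallback keeps the sketch harmless when nothing is listening.
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Hello"}]}' \
  || echo "no local ollama instance reachable"
```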


Paying for DRM-free quality content https://www.defectivebydesign.org/guide/ and pirating the rest. Also promoting the concept of Big Content from Chokepoint Capitalism https://www.penguinrandomhouse.com/books/710957/chokepoint-capitalism-by-rebecca-giblin-and-cory-doctorow/


Indeed, I try to have as few apps as possible… because I don’t trust them.
Now that I mostly rely on F-Droid it’s a bit different, but my default attitude when I have to use an app is “Oh no… you’re going to siphon all my data in exchange for a mediocre service I’ll still have to pay for”, whereas I trust my browser a lot more.