AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists: enormous physical infrastructure designed to convert capital into power, and back into capital. Those who control the infrastructure control the people subject to it.
While it strays from its initial thesis about how the use of LLMs could be detrimental to our very being and expression of identity - at least that’s how I interpret what they’re saying - it ends with a fantastic claim about how AI is a tool of the ruling class. Worth a read!
This doesn’t make sense when you look at it from the perspective of open source models. They exist and they’re fantastic. They also get better just as quickly as the big AI company services.
IMHO, open source models will ultimately be what pops the big AI bubble.
Agree, e.g. Apertus (FOSS) is a great choice, but even some indie ones are pretty private, e.g. Andisearch. AI can be great for improving our research, work and creativity, but it’s bad if we use it as a substitute for our research, work and creativity. But yes, avoid the AIs from big (US) corporations, which use them to spy on users and log their data.
Right, Betamax much? It doesn’t really matter if one technology is objectively “better” than another in every respect if the other’s strategy for winning popularity outpaces it.
To be clear, I wish you were right (even though I don’t find open source models to be free of problems), but I think that conclusion is a wish, not a logical inference.
How is that wishful thinking? Open models are advancing just as fast as proprietary ones, and they’re now getting much wider usage as well. There are also economic drivers that favor open models even within commercial enterprises. For example, the Airbnb CEO has said they prefer Qwen to OpenAI because it’s more customizable and cheaper.
I expect we’ll see the exact same thing happen that we saw with Linux-based infrastructure muscling out proprietary stuff like Windows servers and commercial Unix. Open models will become foundational building blocks that people build things on top of.
Working on (some) AI stuff professionally, open source models are the only ones that give you full control over the system prompt. Basically, that means only open source models are acceptable for a whole lot of business logic.
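For what it’s worth, here’s a minimal sketch of what that control looks like with a locally hosted open-weight model, assuming the Hugging Face transformers library; the model name, system prompt, and business scenario are just illustrative:

```python
# Minimal sketch: with an open-weight model the system prompt is entirely
# under your control -- no hidden vendor instructions layered on top.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # illustrative; any open-weight chat model works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    # The system prompt encodes your business logic, not the vendor's policy.
    {"role": "system", "content": "You are an order-validation assistant. Reply only with JSON."},
    {"role": "user", "content": "Customer wants to return item #1234 after 45 days."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```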
Another thing to consider: there are models designed purely for processing. It’s hard to explain, but something like Qwen3 Embedding is made for in/out usage in automation situations:
https://huggingface.co/Qwen/Qwen3-Embedding-8B
You can’t do that effectively with the big AI models (as much as Anthropic would argue otherwise); it’s too expensive and risky to send all your data to a cloud provider in most automation situations.
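As an illustration of that in/out pattern, here’s a minimal sketch, assuming the sentence-transformers library can load that checkpoint (the documents and queries are made up):

```python
# Minimal sketch of local "in/out" embedding use in an automation pipeline:
# text goes in, vectors come out, and nothing leaves your own hardware.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")

docs = [
    "Invoice #4821: 3x widgets, payment due net 30",
    "Support ticket: login fails after password reset",
]
queries = ["billing question"]

# Normalized vectors let cosine similarity reduce to a plain dot product.
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vecs = model.encode(queries, normalize_embeddings=True)

scores = query_vecs @ doc_vecs.T
print(scores)  # use these scores to route, dedupe, or classify automatically
```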
I’m actually building LoRAs for a project right now, and found that qwen3-8b-base is the most flexible model for that. The instruct version is already biased toward prompt-following and agreeing, but the base model is where it’s at.
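For anyone curious, here’s a rough sketch of what that looks like with the peft library; the checkpoint name, rank, and target modules are assumptions on my part, not a recipe:

```python
# Minimal sketch: attach a LoRA adapter to the base (non-instruct) model,
# so only the small adapter weights get trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B-Base", device_map="auto")

lora_cfg = LoraConfig(
    r=16,                 # adapter rank
    lora_alpha=32,        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the LoRA weights are trainable
# ...train with your usual Trainer / dataset from here.
```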
It’s always about AGENCY and power, not performance or preferences.
Reads like either a communist hiding their power level, or a liberal searching for a take on the enclosures actively happening this very moment that isn’t the fascist/libertarian one (“it’s different because it’s happening to me!”).




