GraphineOS
GrapheneOS
Honestly… I come from iOS, having used it for nearly a decade. Yes, that stuff is secure; yes, it is (or at least was) stable; yes, it is slick to the point of being a status symbol… but DAMN does it suck for interoperability!
Every success in getting the Apple ecosystem to interact with anything else is ridiculously hard won… and brings very little in the end.
Do yourself a favor: switch to (deGoogled) Android to enjoy KDE Connect, adb, scrcpy, etc. just working out of the box, and copy normal files the normal way, however you want. Try “just” Linux if you can’t, but on mobile that’s not for everyone.
Again, I celebrate this success and all the ways, e.g. iSH or Homebrew, that help to tinker with, manage, and work on Apple hardware, but honestly I suggest ignoring that ecosystem entirely. Just rely on software and hardware that actually provides the bare minimum of interoperability. Not this.
Or rather: use this, and iSH, Homebrew, libimobiledevice, and the rest, to transition AWAY from that locked ecosystem.


Hard to fall behind what? None of them is making anything interesting. The best they can do is produce text that sounds superficially plausible and is statistically likely, yet contains zero reasoning.
Nobody is “ahead” of anybody, except in managing to do the same thing with even more data while wasting even more resources.
Maybe more importantly, none of the participants in that race has demonstrated that continuing down this path will actually solve any of the problems that have been discovered along the way.
FWIW rsync also works on mobile phones and standalone VR HMDs, via e.g. Termux or iSH… so it really works fine on pretty much anything with a terminal and a connection.
Warmly recommended.
Also, if you need more than just the latest version, check out rdiff-backup.


All my services are fine. I self host. Yes I’m quite pedantic about it. :D


Group read on “Surveillance Capitalism” but in truth…
… so it was rather coherent with related yet orthogonal efforts.


A friend of mine is a researcher working on large-scale compute (>200 GPUs), perfectly aware of ROCm, and sadly last month he said “not yet”.
So I’m sure it’s not infeasible, but if it’s a real use case for you (not just testing a model here and there but running frequently) you might unfortunately have to consider alternatives, or be patient.


Treating Google and Meta as apolitical …
I didn’t.


Tracked by WHOM, and thus WHY, should be the question.
It’s different to be tracked for profit, e.g. by Google or Meta, versus for political purposes or corporate espionage.
The former is basically volunteering information through bad practices. Those companies do NOT care about “you” as an individual; in fact they arguably do not even know who you are. Avoiding their services is basically enough. It might be inconvenient, but it’s easy: just don’t use them.
The latter is a totally different beast. If the FSB goes after you because you criticized Putin, or NSO Group does for something similar, or because you engineered something strategic and a business competitor is a client of theirs, then you will be specifically targeted. This is an entirely different situation and IMHO radically more demanding: you don’t just have to care about good privacy practices, which is enough for the former, but rather know the state of the art of security.
So… assuming you “just” worry about surveillance capitalism, and hopefully live in a jurisdiction benefiting from the Brussels effect with e.g. GDPR-related laws, either way is fine.


I’m new to Linux from about 3 months ago, so it’s been a bit of a learning curve on top to learning VE haha. I didn’t realize CUDA had versions
Yeah… it’s not you. I’m a professional developer and have been using Linux for decades; it’s still hard for me to install specific environments. Sometimes it just works… but often I give up. Sometimes it’s my mistake, but sometimes it’s also because the packaging is not actually reproducible: it works on the setup the developer used, great for them, but a slight variation throws you right into dependency hell.
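If it helps untangle things: there are typically two “CUDA versions” on a machine, the one the driver supports and the one the installed toolkit ships, and a mismatch between them is a classic source of that dependency hell. A guarded sketch to see both (each command falls back to a message if the tool isn’t installed):

```shell
# Driver side: the maximum CUDA version the installed driver supports.
command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi | head -n 4 || echo "nvidia-smi not found"
# Toolkit side: the CUDA compiler actually installed (can differ from the above!).
command -v nvcc >/dev/null 2>&1 && nvcc --version || echo "nvcc not found"
```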


I’ll be checking over the subtitles anyway, generating just saves a bunch of time before a full pass over it. […] The editing for the subs generation looks to be as much work as just transcribing a handful of frames at a time.
Sorry I’m confused, which is it?
doing this as a favour […] Honestly I hate the style haha
I’m probably out of line for saying this but I recommend you reconsider.


Exactly, and it works quite well, thanks for teaching me something new :)


There’s no getting around using AI for some of this, like subtitle generation
Eh… yes there is: you can pay actual humans to do it. In fact, if you do “subtitle generation” (whatever that might mean) without any editing, you are taking a huge risk. Sure, it might get 99% of the words right, but if it fucks up on the main topic… well, good luck.
Anyway, if you do want to go that road still, you could try the *.srt, *.ass, *.vtt or *.sbv formats (.mkv? Depends on context obviously).
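For what it’s worth, SubRip (.srt) at least is simple enough to sanity-check by hand during that editing pass; a sketch of a single cue (contents made up): index, time range, text, blank-line separator.

```shell
tmp=$(mktemp -d)
# Write one hand-made .srt cue to a temp file.
cat > "$tmp/sample.srt" <<'EOF'
1
00:00:01,000 --> 00:00:03,500
Hello, world.

EOF
# Each cue has exactly one time-range line, so this counts cues.
grep -c ' --> ' "$tmp/sample.srt"   # prints: 1
```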

Oh…, that’s neat thanks!
So in my use case, where I made a template for prototype metadata, an added menu action could generate the file via the Exec= field instead of creating it from the template. That would prepopulate the metadata file with e.g. the list of selected files, thanks to %F.
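A hypothetical sketch of such an entry (the script name and label are made up), assuming a launcher that honors the Desktop Entry spec’s %F field code:

```ini
[Desktop Entry]
Type=Application
Name=Generate prototype metadata
# %F expands to the list of selected files, so the (hypothetical) script can
# prepopulate the metadata file instead of starting from a blank template.
Exec=generate-metadata.sh %F
```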



Sad but unsurprising.
I did read quite a lot on the topic, including “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass.” (2019) and saw numerous documentaries e.g. “Invisibles - Les travailleurs du clic” (2020).
What I find interesting here is that the tasks seem to go beyond dataset annotation. In a way it is still annotation (as in: you take data in, e.g. a photo, and you circle part of it to attach a label, e.g. “cat”), but here it seems to be second-order, i.e. identifying the blind spots in how the dataset is handled. It still doesn’t mean anything produced is more valuable, or that the expected outcome is feasible with merely larger datasets and more compute, yet maybe it does show a change in the quality of the tasks to be done.
Enforcing GDPR fines would be a great start, only adding more if need be.
I feel like we could add more laws, but if they are not enforced it’s pointless, maybe even worse: it gives the illusion of privacy while in reality nothing changes.
None of your requirements are distribution-specific. I do all of that (Steam, non-Steam, Kdenlive, Blender/OpenSCAD, vim/Podman, LibreOffice, Transmission) running Debian with an NVIDIA GPU, so I can personally recommend it.


FWIW, MSN is from Microsoft, so IMHO it might be better to link to the original source and, if need be, remind people they can either pay for journalistic content or use services like archive.is, which bypasses paywalls.
No worries!