• 0 Posts
  • 254 Comments
Joined 2 years ago
Cake day: July 16th, 2023

  • The main thing I want is small file size for the quality. Netflix, YouTube, and I agree on that.

    Most of my stuff is AV1 today even though the two TVs I typically watch it on do not support it. Most of the time, what I am watching is high-bitrate H.264 that was transcoded from the low-bitrate AV1.

    I will probably move to AV2 shortly after it is available. At least, I will be an early adopter. The smaller the files the better. And, in the future when quality has gone up everywhere, my originals will play native and look great.
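    For anyone curious, that kind of AV1-to-H.264 transcode is a one-liner with ffmpeg. A sketch only: the file names are placeholders, and the CRF/preset values are just common starting points, not anything specific to my setup.

```shell
# Hypothetical transcode of an AV1 original to high-quality H.264 for TVs
# that lack AV1 decode. File names and quality settings are placeholders.
src="movie.av1.mkv"
out="movie.h264.mkv"

# CRF 18 with a slow preset is a common "visually transparent" x264 choice;
# the audio stream is copied through untouched.
cmd="ffmpeg -i $src -c:v libx264 -preset slow -crf 18 -c:a copy $out"

echo "$cmd"   # printed as a dry run; run the command itself to transcode
```

    Lower CRF means higher quality and bigger files, which is fine here since the H.264 copy is disposable and the AV1 file stays the original.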





  • I moved my mother to Mint a few months ago. I have not had a single tech support call. She uses it daily. About a week in I asked her how it was going. She liked that printing worked more reliably and wished the scroll bars in Facebook were a bit thicker. Her printer used to show as offline sometimes in Windows but that issue has gone away under Mint. I was going to look for a theme with thicker scroll bars but she told me not to bother.

    Granted she was a Firefox and Thunderbird user already so that helped with the transition.






  • This is not the correct take.

    For example, the suggestion to put /home on its own partition, so you can switch distros without losing data, is an example of flexibility Windows does not have.

    Most of the configuration that makes your desktop unique is held in your home directory, unlike Windows, which spreads things across the system (such as in the registry).
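    Concretely, "/home on its own partition" is just one extra line in /etc/fstab. A sketch; the UUID below is a placeholder (the real one comes from `blkid`):

```
# /etc/fstab -- the UUID is a placeholder; get the real one from `blkid`
UUID=1234abcd-0000-4000-8000-000000000000  /home  ext4  defaults  0  2
```

    When you install a different distro, you point its installer at the same partition as /home (without formatting it), and your files and per-user config carry over.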

    That said, if you do not know Linux, it is difficult to explain your options in a comment.

    I am not sure what Windows automation you are referring to. If you mean upgrades between versions, Linux distros do that too. If you mean automatic migration from other operating systems, I am not aware of any Windows functionality for that.



  • Your mate is running a Jellyfin client on Unraid? Or the server? Unraid is a NAS that can run VMs and containers. It is not a desktop system.

    If you were only running server stuff on that machine, I would recommend Proxmox.

    As others have said though, basically any Linux distro can do what you are looking for.

    If you are going to run it as a desktop, pick a distro that has a desktop environment (GUI) that you like and go from there.

    Fun fact: Unraid is really just Slackware Linux running the Unraid application on top.


  • I was intrigued by XFCE on Wayland so I looked into it.

    XFCE is not really available on Wayland yet. XFWM is X11 only and there is no XFCE compositor.

    What Leap is doing is running the XFCE panel and apps on Labwc. When I have tried this, “it works” but it is certainly not as polished as XFCE on Xorg.
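    For reference, a Labwc-based XFCE session amounts to starting the compositor and letting its autostart file launch the XFCE pieces. A sketch of the idea; the exact component list here is my guess, not necessarily what Leap ships:

```
# ~/.config/labwc/autostart -- run XFCE components on top of the Labwc compositor
xfce4-panel &
xfdesktop &
xfsettingsd &
```

    The panel and apps then run as Wayland (or XWayland) clients, but window management is Labwc's, not XFWM's, which is where the unpolished feel comes from.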

    I am a Wayland fan, so overall I support openSUSE moving to Wayland. This seems like a bit of a disservice to XFCE fans though, as I am not sure the DE is ready yet. And the takeaway is going to be that it is Wayland that is not working.


  • Of course there is lots to criticize. And it does not get worse than Electron. But it is pretty easy to run a fairly lean desktop in 2025. And bloated applications are not a new invention.

    I guess we can talk about the “rise” of interpreted languages. As long as we ignore that the lingua franca of the 8-bit era was BASIC, I guess. Or Logo! We also have to ignore hugely popular languages in their era like Perl 5, Lisp, Tcl, Scheme, and PHP. How about all those Bash scripts? And JavaScript is less interpreted than it used to be, as you say. I assume you mean Python, but it is over 30 years old and PyPy is a thing. Most newer languages are JIT or fully compiled. Rust, Go, Swift, Carbon, and Zig are all compiled languages. Kotlin, Gleam, and Elixir are JIT. What are all the new interpreted languages? If anything, I would say the trend is towards performance and efficiency.

    JavaScript works against his point in a big way. JavaScript was released 30 years ago, and yet JavaScript code runs dramatically faster (on the same hardware) in a modern web browser than it did back then. JavaScript engines are VERY heavily optimized, and browser devs will move mountains for another percent or two. And WASM is even faster.

    You can build Rust applications on Windows 95 and they are faster than C++ was back then. Not everyone has given up on performance.

    Modern code can be much more parallel and asynchronous (faster). And there is a strong recent focus on memory safety and efficiency.

    Networking and file systems are both much faster and more efficient than they used to be.

    And of course modern processors are not just faster but have many more performance-focused instructions (SIMD, AVX, vector extensions, etc.). We also have hardware acceleration for media codecs and, of course, virtualization, which speeds up applications dramatically. And technologies like hypervisor clusters and containers can lead to significantly better resource utilization in practice.

    Anyway, his point is obvious and of course true to an extent. Not nearly to the extent he claims though.



  • It is a well-known risk but not something that was a real risk numerically. I mean, it still isn’t, given the number of packages in the AUR.

    This is a couple of malicious packages discovered in a short period though. Not a good sign. It would really impact the AUR if polluting it with malware became common.

    You should always inspect AUR packages before installing them but few people do. Many would not even know what they were looking at.
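    A minimal review workflow looks something like this (“somepkg” is a placeholder name). What you are looking for in the PKGBUILD is mainly the source= URLs and anything fetched or executed during build:

```shell
# Hypothetical pre-install review of an AUR package; "somepkg" is a placeholder.
pkg="somepkg"
repo="https://aur.archlinux.org/$pkg.git"

# Printed as a dry run -- run the commands themselves to actually do it.
echo "git clone $repo"        # fetches the build recipe, not any binaries
echo "less $pkg/PKGBUILD"     # check source= URLs and the build()/package() steps
echo "makepkg -si"            # build and install only after it looks sane
```

    AUR helpers like yay and paru will show you the PKGBUILD before building too, but only if you do not reflexively skip past it.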


  • Are you going to dedicate an entire machine to this?

    First, you can run Docker on any distro. Although Debian is great, the version of Docker in the repos is not. So, for Debian, you are going to want to download and install Docker from Docker. Docker is a company.

    There is also Podman. This is a competitor to Docker written by Red Hat. It has some technical advantages. I use Podman myself. The command line is basically the same. They host the same containers (OCI images).
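    To make the “same command line” point concrete: on Debian you would typically use Docker’s own install script (or their apt repository), while Podman comes straight from the distro repos, and the day-to-day commands are interchangeable. A sketch, printed as a dry run; the nginx image is just an example:

```shell
# Installing: Docker from docker.com vs. Podman from the Debian repos.
# Printed as a dry run; remove the echos to actually run them.
echo "curl -fsSL https://get.docker.com | sh"   # Docker's official install script
echo "sudo apt install podman"                  # Podman from the distro

# The CLIs are near drop-in replacements for each other:
docker_cmd="docker run -d -p 8080:80 docker.io/library/nginx"
podman_cmd="podman run -d -p 8080:80 docker.io/library/nginx"
echo "$docker_cmd"
echo "$podman_cmd"
```

    A notable technical difference: Podman is daemonless and can run containers rootless by default, which is part of its security appeal.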

    If you are going to run a lot of images on a single machine, management can get complicated. Many people like Portainer for that.

    https://thenewstack.io/an-introduction-to-portainer-a-gui-for-docker-management/

    However, if you are going to dedicate a machine, I recommend Proxmox.

    Proxmox takes over the hardware. It runs a hypervisor that lets you deploy virtual machines and containers easily. It gives you a great web-based UI to manage everything. Technically, it runs on Debian but you do not even need to know that. It deploys as an OS.

    Proxmox actually has nothing to do with Docker. It allows you to deploy virtual machines (e.g. full Linux distributions, or even Windows or other operating systems). It also allows you to create containers. However, the container technology is not Docker but LXC.

    https://linuxcontainers.org/

    When you deploy an LXC container in Proxmox, it is like launching a Linux VM. You get a full Linux distro that looks like a virtual machine and that shows up on your network like a full computer. But, it shares the kernel with Proxmox and so is incredibly light and resource efficient.
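    On the Proxmox host, creating one of those LXC “lightweight VMs” is a one-liner with pct. A sketch; the container ID, template file name, and resource sizes are all placeholders (templates come from `pveam` or the web UI):

```shell
# Hypothetical LXC creation on a Proxmox host; the ID, template name, and
# sizes are placeholders. Printed as a dry run.
cmd="pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname testbox --memory 512 --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:8"
echo "$cmd"            # then something like: pct start 101
```

    Most people just click through the same options in the web UI, but it is nice that the whole thing scripts cleanly.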

    You can connect to Proxmox via a web browser and see any of your virtual machine or container desktops right in the browser (even if it is just a command line).

    Proxmox itself is always online. But you can start and stop individual machines (vm or container) whenever you want.

    You really cannot appreciate how powerful all this is until you try it.

    So, how does this help you run Docker?

    Well, for many things, you may actually find it easier to just use a VM or LXC to install and run whatever it is you want. For many applications, I find it easier to manage a Linux distro than a Docker container.

    Or, you create a VM or an LXC and run Docker inside of it. You can even run Portainer. You can run many Docker containers in a single VM. Or, create a new VM or LXC if that makes things easier.

    But it is so much easier to manage in Proxmox.

    For example, I run a Debian LXC container to run PiHole as an ad blocker on my network. It is super lightweight and I launched it by running a script like they suggest on the PiHole website. And I created a VM (with its own virtual disk for storage) to run Immich (photo management). Even though I run Immich with Docker Compose, it is just nicer and easier to manage when it is the only thing running on the “machine” (a QEMU VM in Proxmox) with its own filesystem. I can pull up the Immich machine whenever I want and I am at the command line where the last command was the Docker Compose up that I ran months ago. Same story for Jellyfin.
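    The “one composed app per VM” workflow boils down to a handful of commands run inside that VM. A sketch, printed as a dry run; the directory path is a placeholder and the compose file is whatever the app ships (Immich publishes its own):

```shell
# Hypothetical day-to-day lifecycle for a composed app living alone in a VM.
# Printed as a dry run; run the commands in the app's directory to use them.
app_dir="/opt/immich-app"          # placeholder path holding docker-compose.yml
echo "cd $app_dir"
echo "docker compose up -d"                          # start it
echo "docker compose pull && docker compose up -d"   # upgrade in place
echo "docker compose down"                           # stop everything cleanly
```

    Because the VM holds nothing else, snapshotting or backing up the whole app (data included) is just a Proxmox backup of that one machine.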

    Do you also want a NAS? You can run one under Proxmox. But another thing to consider would be running TrueNAS as a NAS and using its built-in Docker support to run your containers.

    https://www.truenas.com/truenas-community-edition/