• 0 Posts
  • 68 Comments
Joined 2 years ago
Cake day: June 10th, 2023


  • Will my ability to play games be significantly affected compared to Windows?

    Depends on what you play. As a general rule I would say that unless you're into competitive multiplayer games you're probably going to be fine. That being said, the vast majority of games don't support Linux natively, so you need to use workarounds. Steam has one built in, so if most of your gaming is through Steam it should be an almost seamless transition (all you need to do is enable a checkbox in the settings). But like I said, it depends on what you play, so I recommend you check out https://www.protondb.com/ and look up the games you play to see how they run on Linux.

    Can I mod games as freely and as easily as I do on Windows?

    Same answer as before: if the game runs okay then modding it should also work okay, but if not, mods might worsen an already bad situation. Also, be careful here, because when you run Windows games through Steam they're somewhat sandboxed, i.e. they run isolated from the rest of the system, so installing mods is not as straightforward as it is on Windows, where binaries are installed globally. It's not a big deal, but just the other day someone was complaining that a launcher they had installed for a game wasn't being found by the game, and this was the reason.
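
    To give you an idea of where that "sandbox" lives, here's a rough Python sketch that prints the path where a Proton game's fake C: drive usually sits. The Steam library path and the App ID are assumptions, check your own setup (the App ID is in the game's store page URL).

        # Locate the Wine prefix Proton keeps for a given Steam App ID.
        # Default library path assumed; yours may differ (e.g. a second drive).
        from pathlib import Path

        def proton_prefix(app_id: int) -> Path:
            steam_library = Path.home() / ".local/share/Steam"
            return steam_library / "steamapps" / "compatdata" / str(app_id) / "pfx"

        # Mods that would go into C:\ on Windows end up under drive_c in here:
        print(proton_prefix(123456) / "drive_c")  # 123456 = your game's App ID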

    If a program has no Linux version, is it unusable, or are there workarounds?

    As a general rule there's a workaround: it's called WINE (a recursive acronym for WINE Is Not an Emulator), which is a compatibility layer that lets Windows programs run on Linux (not really an emulator, as the name points out). Then there are tools built on top of it, like Proton (which is what Steam has embedded), that bundle extra libraries and fixes to help. It's not perfect, but unless the program actively tries to detect it or uses very obscure Windows features it should work.
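
    If you're curious what that looks like in practice, here's a minimal sketch of launching a Windows binary through WINE from Python; the prefix and installer paths are made up for the example.

        # Run a Windows .exe through WINE with its own isolated prefix
        # (the prefix acts as a private fake "C: drive" for that program).
        import os
        import subprocess

        env = dict(os.environ)
        env["WINEPREFIX"] = os.path.expanduser("~/prefixes/myprogram")

        # WINE translates the Windows API calls the .exe makes into Linux ones.
        subprocess.run(["wine", os.path.expanduser("~/Downloads/installer.exe")], env=env)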

    Can Linux run programs that rely on frameworks like .NET or other Windows-specific libraries?

    Yes, you can use WINE as mentioned above to run Windows binaries that use .NET, and .NET Core itself is also available natively on Linux.

    How do OS updates work in Linux? Is there a “Linux Update” program like what Windows has?

    Oh boy, this is the big one, this is the major difference between Windows and Linux. Linux has a thing called a package manager, and ideally everything you install gets installed through it. This means everything gets updated together, and we're not talking OS-only stuff: a new version of the kernel (Linux itself)? New drivers? A new version of Firefox? Of Spotify? They all get updated together when you update your system.

    This is crucial to the way Linux works, since it allows the system to keep only one copy of each library. For example, if you have 5 different programs that use the same library, on Windows you'll have 5 copies of that library, because each program needs to bundle its own at a specific version; on Linux, since everything updates together, there can be just one copy of the library that gets updated along with the programs. This makes maintaining Linux a piece of cake in comparison: one command, or one click of a button, and everything you have installed is up to date.
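
    Just to make the "one command" point concrete, here's a tiny illustrative sketch; the underlying commands (apt, dnf, pacman) are the real ones for each family, the Python wrapper itself is just for show.

        # One package-manager command per distro family updates the kernel,
        # drivers, Firefox, Spotify and everything else installed through it.
        import subprocess

        UPDATE_COMMANDS = {
            "debian/ubuntu/mint": ["sudo", "apt", "upgrade"],   # after "sudo apt update"
            "fedora":             ["sudo", "dnf", "upgrade"],
            "arch":               ["sudo", "pacman", "-Syu"],
        }

        subprocess.run(UPDATE_COMMANDS["debian/ubuntu/mint"], check=True)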

    How does digital security work on Linux? Is it more vulnerable due to being open source? Is there integrated antivirus software, or will I have to source that myself?

    As a general rule open source programs are more secure than their closed source counterparts. Closed source security always reminds me of Mr. Burns (from The Simpsons) going through several layers of security only to have a gaping hole at the back: that sort of thing is impossible in open source, because everyone can see all of the security measures, so someone would notice the gaping hole, whereas in closed source the attackers might be the only ones who ever find it. Like cyber security experts say: security by obscurity is not security.

    As for antivirus, you don't need to worry. Linux is inherently more secure than Windows, and it also has a small enough user base (most of whom are security-conscious) that the number of viruses written for Linux is extremely small. Also, because you install things through the package manager, it's very difficult for someone to get you to download a malicious binary, since the package manager has a lot of safeguards to prevent exactly that. In fact, almost every antivirus program for Linux just checks your computer for Windows viruses, to avoid the machine being used to store or relay them to Windows computers, so it's completely pointless on a home machine (it's used, for example, on email servers).

    Are GPU drivers reliable on Linux?

    Yes… but actually no. It depends. If you have a relatively modern AMD GPU (as in, from the last 10 years) the answer is a resounding YES: AMD currently has wonderful Linux support and their cards work excellently, with drivers that are fully open source and integrated into the Linux kernel. For Nvidia the story is unfortunately not as nice. Essentially there are two drivers available: nouveau (an open source driver written by the community and purposefully hampered by Nvidia) and nvidia (the closed source driver written by Nvidia, which has glaring incompatibilities with the Linux ecosystem). Since you game, your only real option is nvidia; nouveau is great for several reasons, but it can't match the performance of the proprietary driver. For 99% of stuff the nvidia driver should work fine, but I haven't had good luck getting Wayland to run on it, which means you're probably stuck on X11 (I know this doesn't mean much to you yet, but in short it means you're somewhat limited in your choice of graphical interface and have to use stuff that people are trying to deprecate but can't, partly because of Nvidia).
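
    If you ever want to check which driver you actually ended up on, here's a small sketch that looks at the loaded kernel modules; the module names (amdgpu, nouveau, nvidia) are the usual ones, but adjust for your hardware.

        # Check which GPU kernel modules are currently loaded.
        gpu_modules = {"amdgpu", "nouveau", "nvidia"}

        with open("/proc/modules") as f:
            loaded = {line.split()[0] for line in f}

        print("GPU drivers in use:", gpu_modules & loaded)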

    Can Linux (in the case of a misconfiguration or serious failure) potentially damage hardware?

    Technically yes, but so can Windows for that matter. Realistically no: unless you're writing your own kernel drivers you won't be in any position to cause hardware damage.

    And also, what distro might be best for me?

    I would probably go with Mint: it's beginner friendly and I've been recommending it for decades. One thing to bear in mind is that at your knowledge level the distro you choose won't make that big of a difference, so pick something beginner friendly and you should be fine, no need to overthink this.

    PS: some extra notes you didn't ask for but that I think are good to know:

    • Any Linux distro can look like any other, it's just a matter of installing the right packages.
    • You should keep / and /home on separate partitions; this makes it possible to reinstall (or even change distros entirely) without losing your files and configuration. This works because of how Linux manages partitions: unlike Windows, where you have C: and D: drives, any folder can be a different partition or disk (see the sketch after this list).
    • You can dual boot, i.e. have 2 OS and choose which one to use every time you turn on your computer.
    • You should probably install Linux on a virtual machine first to check it out safely. And do a backup before installing it on your computer just in case you make a mistake.
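
    About the partitions note above, here's a tiny sketch that shows which partition each folder is actually mounted from; on a split setup you'll see / and /home coming from different partitions of the same disk.

        # List which block device backs each mount point.
        with open("/proc/mounts") as f:
            for line in f:
                device, mount_point = line.split()[:2]
                if device.startswith("/dev/"):
                    print(f"{mount_point} is mounted from {device}")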


  • First, to answer your main question: if I were you I would try NixOS, because it's declarative, so it's essentially impossible to break, i.e. if it breaks for whatever reason a fresh reinstall will get you back to exactly where you were.

    That being said, I know it's anecdotal, but I have been using Arch for (holy crap) 15 years and I've never experienced an update breaking my system. I find that most of the time, when people complain about Arch breaking with an update, they're either not actually using Arch (but Manjaro, EndeavourOS, etc.) or they rely heavily on the AUR, which one should specifically not do, much less on Arch derivatives. The AUR is great, but there's a reason those packages are not in the main repos: don't use anything system-critical from it and you should be golden. Also, try to figure out why stuff broke when it did; you'll learn a lot about what you're doing wrong in your setup, because most people will have just updated without any issues.

    Otherwise it really doesn't matter which distro you choose: mangling a distro with manual installations to the point where an upgrade breaks it can be done on most of them, and going for a fully immutable one will be very annoying if you're that interested in poking at the system.





  • You're focusing too much on the installation process. If installing Arch were the whole of the problem, things like EndeavourOS would be a good recommendation for newbies, but they're not. Arch has one giant flaw when it comes to being beginner friendly, and it's part of what makes it desirable for lots of us: the bleeding edge rolling release model. As a newcomer you probably want something that works and is stable. Arch is not, and will never be, that, because being bleeding edge rolling release is its core philosophy. If you're a newcomer who WANTS that and doesn't mind the learning curve, then go ahead, but Linux has enough of a learning curve already, so it's better to get people started with something they can rely on; afterwards they can move to other stuff with different advantages and disadvantages.

    We're talking about the general case here. I've recommended Arch to a newcomer in the past: he was very keen on learning and was happy reading wikis to get his stuff sorted, but realistically most people who are learning a whole new OS don't want to ask questions and be told RTFM, and RTFM is core to the Arch philosophy.


  • Nibodhika@lemmy.world to Linux@lemmy.ml · AMD vs Nvidia · 3 months ago

    I don’t want any proprietary drivers (so I am talking about Nouveau or any other FOSS Nvidia driver if it exists)

    In that case AMD, no doubt about it.

    If you were considering proprietary drivers it would still be AMD but there would be some discussion about it.


  • There are a lot of moving parts, so let's start from the ground up. Processors are glorified input-output machines: you put electricity into some pins, and they give you back electricity on other pins. Some of those pins select which operation you want and others carry the input, so for example sending 00000010 to the operation pins could mean addition, and the output pins will then show the result of adding your inputs. Each binary number can be read as a decimal or hexadecimal number, but people are bad at remembering numbers, so instead we keep a conversion table that says, for example, that ADD means 00000010; then you can write a program saying ADD 2 3, and that's called assembly language.
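
    If it helps, here's a toy version of that conversion table in Python; the opcodes are invented for the example, every real CPU family has its own table.

        # A toy "assembler": human-readable mnemonics mapped to the bit
        # patterns the CPU understands. Opcodes here are made up.
        OPCODES = {"ADD": 0b00000010, "SUB": 0b00000011}

        def assemble(line: str) -> list[int]:
            mnemonic, *operands = line.split()
            return [OPCODES[mnemonic], *map(int, operands)]

        print(assemble("ADD 2 3"))  # -> [2, 2, 3]: operation byte, then the inputs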

    Each processor has its own table of which operations it can do, so writing assembly is tedious since you need to know and account for all of that. Instead you can write in a higher level language, where a program called a compiler translates your code into assembly, taking care of all the considerations for different processors.

    So far, so good, but some things are recurrent and require special care. For example, a processor knows nothing about the disks or memory in the system, so you need a program running there to manage that. We call that program an operating system.

    Different operating systems do things differently: one might store things in any order on the disk to save on write speed, while another might align data where suitable to save on read speed. And they provide different high-level APIs for it, e.g. one OS might have open_file(char* full_path) while another could have open(char* folder, char* file). So if a program tries to call open on an OS that only has open_file, the call simply doesn't exist and the program won't work.

    Then, just like with the OS, programs sometimes rely on libraries that they expect to be installed on your system, such as DirectX on Windows. These libraries also have their own functions that the program tries to call.

    So now we get to a game that is trying to call a function from DirectX, which in turn is trying to call something native to Windows. Linux on its own has no idea what to do with any of that.

    So a few people realized that if they reimplemented the functions from Windows, but made them call the equivalent functions on Linux, they could get those programs to run. They also realized that DirectX can be reimplemented on top of OpenGL calls or, more recently, Vulkan. Put all of that together and almost every call a game is likely to make hits one of these reimplementations, which in turn calls the Linux kernel, which in turn sends the corresponding instructions to the CPU to do things the Linux way. The end result is that most things work; however, sometimes the game developer tries to be smart and makes assumptions about how the OS will do something, and then hits errors because Linux did something slightly different.
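
    Using the made-up open_file/open names from before, the trick looks roughly like this: you reimplement the function the Windows program expects, but inside it you just call what the native OS actually provides.

        # The program keeps calling open_file() (the "Windows" API it expects);
        # the compatibility layer translates it into the native call.
        def linux_open(folder: str, file: str):
            return open(f"{folder}/{file}")        # what the native OS provides

        def open_file(full_path: str):             # what the Windows program expects
            folder, _, file = full_path.rpartition("/")
            return linux_open(folder, file)        # translated into the native call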

    But the VAST majority of the time when a game doesn't work, it's because the game developer is actively trying to ensure you're not doing anything weird, such as running the game on a different OS.



  • Just don't let it go too stale. I recommend updating a few days to a week after a release is made, since sometimes important patches come out the day after a minor release. That being said, what I do is keep an RSS feed of their releases so I get a notification when a new release is made and can check the changelog for important information; most of the time it's just bumping the version in the .env file.
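
    For reference, this is roughly how that RSS/Atom setup can look with nothing but the Python standard library; the OWNER/REPO part is a placeholder for whatever project you're following (GitHub exposes releases at /releases.atom).

        # Fetch a project's releases feed and print the latest release title.
        import urllib.request
        import xml.etree.ElementTree as ET

        FEED = "https://github.com/OWNER/REPO/releases.atom"  # placeholder repo
        ATOM = "{http://www.w3.org/2005/Atom}"

        with urllib.request.urlopen(FEED) as resp:
            tree = ET.parse(resp)

        latest = tree.find(f"{ATOM}entry/{ATOM}title")
        print("Latest release:", latest.text if latest is not None else "none found")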


  • Realistically the best option here is to not have the data on the laptop at all, so they would remote into a machine you control to access it, or something of the sort. Regardless, the laptop should have full disk encryption so that if it gets stolen no data is accidentally leaked.

    Other than that, the best approach I can think of is giving the user a non-root account and having the laptop connect to Tailscale automatically so you can always SSH into it and control it if needed. But this is not ideal, because a malicious person could simply stay offline and completely block you from doing anything. That is true for almost any remote management tool you'll be able to find.


  • There’s nothing like it, nor will it ever be, for a couple of reasons.

    Programming is a long running task

    Distros like Kali are meant for quick tasks where you don't need data preservation (or where data preservation is actively a bad thing). Programming is the opposite of that: it's all about preserving data (the program). Programming something that will get erased on the next boot is pointless in the long run if you'll need to program it again, and if you won't, then what you're doing isn't really programming but something else that happens to require some programming.

    Programming is a wide term

    There are multiple languages/IDEs/workflows/etc., ranging from fully free and open source to paid and closed source. Which ones you'll use depends entirely on you, so having all of them pre-installed would be 99% garbage since you'll only care about one or two of them.

    Programming requires setup

    Even if you had whatever workflow you use pre-installed, to work on something you would need to set up git keys, install dependencies, compile the first version, etc., and that's all before you can start doing actual work. And you would have to do this again and again, since distros like Kali are not meant to be installed (if they were, they wouldn't need to come with all those packages pre-installed, because you could just install the ones you cared about).


  • That's one of the things I miss the most from Gentoo: having the packages of your system defined in text files, so a fresh install was just copying those files and running an update.

    I’ve tried similar things with other distros, but it’s never the same, the list of packages ends up getting out of date or ends up with too much garbage.

    Currently I have a home server, so I took the time to set up an Ansible playbook for running, maintaining, and possibly migrating the server if needed. Since some of that also runs on other machines I have (updating the system, updating some docker images I run in multiple places, etc.), I also defined the minimal set of packages I need on my main system. It's easy enough, but I wouldn't recommend using Ansible just for this (though if you also manage dotfiles it's a great tool for automating a lot of the initial setup).

    All of that being said, the reason I never bothered with this until I had a home server is that there are usually years between system installs, so even if what you had was exactly what you wanted the last time you installed your system, it's unlikely to be exactly what you want the next time. Since the last time I installed my main system I've switched from X to Wayland, from i3 to Hyprland and then Sway, etc., etc.



  • Nibodhika@lemmy.world to Linux@lemmy.ml · Linux middle ground? · 7 months ago

    Replace Arch with Ubuntu and the answer is yes. With Arch-based distros that's not a good idea.

    The reason is that in 6 months a lot can change, and Arch does not guarantee a stable base, so updates might assume you're already on certain versions, or things might break because there was an intermediate step you should have done during the upgrade that you didn't do, which is now buried in months of update news in the wiki.

    If you only want to update your system every six months, Arch is not ideal; it's likely to work, but it's not guaranteed.


  • I just got to work and plugged my surface pro into my external monitor. It didn’t switch inputs immediately, and I thought “Linux would have done that”. But would it?

    Nope. My laptop, for example, doesn't automatically use an output when it's plugged in, but that doesn't bother me because I know other DEs would do that, and it's my choice of a minimal window manager that causes it.

    And this goes into your next point: because I know this comes from decisions I made, I'm okay with it. I also know I could probably fix it somehow, even if just by running a script in the background that checks whether an output is plugged in and tries to use it.
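
    Something like this sketch is what I have in mind; it assumes Sway (swaymsg), an HDMI connector, and the usual /sys/class/drm layout, so the names will differ on other machines.

        # Poll the kernel's connector status and enable the external output
        # when something gets plugged in. Connector/output names are assumptions.
        import glob
        import subprocess
        import time

        def external_connected() -> bool:
            for status_file in glob.glob("/sys/class/drm/card*-HDMI-*/status"):
                with open(status_file) as f:
                    if f.read().strip() == "connected":
                        return True
            return False

        while True:
            if external_connected():
                subprocess.run(["swaymsg", "output", "HDMI-A-1", "enable"])
            time.sleep(5)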

    And for me that's the big difference. As a general rule, when things break or don't work, it's not the fault of Linux in general but of a specific piece of the stack, and more often than not it's because that piece was reverse engineered without any help from the manufacturer of the hardware it's meant to be controlling. So I can be very tolerant of these errors, since the bad guys here are the third parties refusing to make their things work on Linux. And even the things that don't work the way I want, I can make them do so, and that's a huge change in viewpoint.

    In other words, on Windows I used to think in terms of things you can do and things you can't. With time I noticed that on Linux this shifted, to the point that the only question I ever ask myself is: “HOW do I do this?”. That implies there are no impossible things on Linux, which is obviously false, but I would argue the correct way to think about it is “things that are impossible on Linux, for now”, and that's a huge difference, because Linux is always evolving and getting better; things you think are impossible now might be trivial in a few months or years, whenever someone with the knowledge to fix them gets bothered enough.


  • At the same time, I think most people don't realize how much prior knowledge you need just to be able to use Windows or Mac. For someone without ANY prior knowledge, all of them are equally foreign.

    Story time: my MiL is a zero when it comes to computer literacy, to the point that every week I had to solve something for her. Eventually I gave her a laptop with Linux on it to make support easier for me, and while she had lots of problems in the first months while setting things up and learning the ropes, to my surprise there were almost no problems afterwards.

    The thing is that people have a lot of Windows knowledge, so when they try Linux they expect it to be Windows and get frustrated when it’s not.


  • Realistically, whatever problems you see with Python will be there for any other language. Python is the most ubiquitously available thing after Bash for a reason.

    Also, you mentioned provisioning scripts, is that Ansible? If so, Python is already there. If you really mean just Bash scripts, I can tell you that doesn't scale well. And if you already have some scripts, what language are they in? Why not write the function there?

    Also, you're running Syncthing on these machines; I don't think Python is larger than that (but I might be wrong).