It’s a pain in the butt to swap CPUs one more time but that may pale in comparison to trying to convince the shop that a core is bad and having intermittent faults. 🤪
BIOS is up to date, CPU model explicitly listed as supported, memtest ran fine, not using XMP profiles.
Yep, it’s explicitly listed in the supported list and BIOS is up to date.
Motherboard is a Gigabyte B450 Aorus M. It’s fully updated and support for this particular CPU is explicitly listed in a past revision of the mobo firmware.
Manual doesn’t list any specific CPU settings but their website says stepping A0, and that’s what the defaults were set to. I also got “core speed: 400 MHz” and “multiplier: x 4.0 (14-36)”.
even some normal batch cpus might sometimes require a bit more (or less) juice or a system tweak
What does that involve? I wouldn’t know where to begin changing voltages or other parameters. I suspect I shouldn’t just faff about in the BIOS and hope for the best. :/
Linux printing is very complex. Before Foomatic came along you got to experience it in all its glory, and setting up a working printing chain was a pain. The Foomatic Wikipedia page has a diagram that will make your head spin.
If resizing /var ends up being the only solution, please post your partition layout first and ask; don’t rush into it. A screenshot from an app like Disk Manager or GParted should do it, and we’ll explain the steps and the risks.
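If you’re comfortable with a terminal, something like this works instead of a screenshot; it’s read-only and changes nothing (the FSUSE% column needs a reasonably recent util-linux, drop it if yours complains):

```
# read-only overview of disks, partitions, sizes and usage
lsblk -o NAME,FSTYPE,SIZE,FSUSE%,MOUNTPOINTS
```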
When you’re ready to resize, you MUST use a bootable stick, not resize from inside the running system. Make a stick with something like Ventoy, drop the GParted Live ISO on it, then boot from it and pick GParted Live from the menu. Write down the steps beforehand, be careful what you do, and hope there’s no power outage during the resize.
The safest method, if your /home has enough space, is to use it instead of /var for (some) Flatpak installs. You can force any Flatpak install to go to /home by adding `--user` to the command.
If you look at the output of `flatpak list` it will tell you which packages are installed in the user home dir and which in the system one (/var). You can also show the size of each package with `flatpak list --columns=name,application,version,size,installation`.
I don’t think you can move installed apps directly between system/user like Steam can (Flatpak is REALLY overdue for a good package manager), but you can uninstall apps from system, run `flatpak remove --unused`, then install them again with `--user`.
Please note that apps installed with `--user` are only seen by the user that installed them. Also you’ll have to clean up separately for system and user(s) in the future (`flatpak remove --unused` for system, then `flatpak remove --unused --user` for each user).
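For example, moving a single app from system to user would look roughly like this (the app ID is just a placeholder, pick a real one from your own `flatpak list` output):

```
# moving one app from the system install (/var) to the user install (~/.local/share/flatpak)
# org.example.App is a placeholder app ID; this assumes the app comes from the flathub remote
flatpak list --columns=name,application,size,installation   # see what lives where, and how big it is
flatpak uninstall org.example.App                           # remove the system copy
flatpak remove --unused                                     # drop runtimes nothing depends on anymore
flatpak install --user flathub org.example.App              # reinstall into your home dir
```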
The third-party package mechanism is fundamentally broken in Ubuntu (and in Debian).
Third-party repos should never be allowed to use package names from the core repos. But they are, so they pretend they’re core packages but use different version names, and at upgrade time the updater doesn’t know what to do with those versions or how to solve dependencies.
That eventually leaves you with a broken system where you can’t upgrade and can’t do anything except a clean reinstall.
After this happened several times while using Ubuntu I resorted to leaving more and more time between major upgrades, running old versions on extended support or even unsupported.
Eventually I figured that if I’m gonna reinstall from scratch I might as well install a different distro.
I should note I still run Debian on my server, because that’s a basic install with just core packages and everything else runs in Docker.
So if you delegate your package management to a completely different tool, like Flatpak, I guess you can continue to use Ubuntu. But it seems dumb to be required to resort to Flatpak to make Ubuntu usable.
You were merely lucky that they didn’t break.
Lucky… over 5 years and with a hundred AUR packages installed at any given time? I should play the lottery.
I’ve noticed you haven’t given me any example of AUR packages that can’t be installed on Manjaro right now, btw.
it wasn’t just a rise in popularity of Arch it was Manjaro’s PAMAC sending too many requests DDoSing the AUR.
You do realize that was never conclusively established, right? (1) Manjaro was already using search caching when that occurred, so it had no way to spam the AUR, (2) there’s more than one distro using pamac, and (3) anybody can use “pamac” as a user agent and there’s no way to tell if it’s coming from an actual Manjaro install.
My money is on someone actually DDoS’ing AUR and using pamac as a convenient scapegoat.
Last but not least you’re trying to use this to divert from the fact AUR packages work fine on Manjaro.
Manjaro has no purpose, it’s half-assed at being arch and it’s half-assed at being stable.
My experience with Manjaro and Fedora, OpenSUSE etc. contradicts yours. Manjaro has the best balance between stability and rolling out of the box I’ve seen.
“Out of the box” is key here. You can tweak any distro into doing anything you want, given enough time and effort. Manjaro achieves a good balance without the user having to do anything. I remind you that I’ve tested this with non-experienced users and they have no problem using it without any admin skills (or any admin access).
Debian testing is a rolling release.
It is not.
AUR isn’t a problem in Manjaro because of lack of support, it’s a problem because packages there are made with Arch and 99.999% of its derivatives in mind, aka latest packages not one week old still-broken packages.
And yet I’ve managed to install dozens of AUR packages just fine. How do you explain that?
Matter of fact, I’ve never run into an AUR package I couldn’t install on Manjaro. What package is giving you trouble?
Manjaro literally accidentally DDoSes the AUR every now and then because again they’re incompetent.
You’re confusing things.
AUR had very little bandwidth to begin with and could not cope with the rise in popularity of Arch-based distros. That’s a problem that needs to be solved by the AUR repo first and foremost. Manjaro did what they could when the problem became apparent and has added caching wherever it could. Both Manjaro and Arch devs have worked together to improve this.
We don’t know yet, the first frame has been rendering for the last two weeks.
There is no other Arch-based distro that strives to achieve a “rolling-stable” release.
Alternatives like Fedora have already been mentioned by other comments.
Debian testing is not a rolling release. Its package update strategy is focused on becoming the next stable so the frequency ebbs and flows around stable’s release cycle.
manjaro since it manages to be less stable than Arch specifically because of their update policy
This is false. Their delayed updates mitigate issues in the latest packages. Plasma 6, for example, arrived late but was a lot more usable.
I mean why even be on Arch if you can’t use the AUR and have the latest packages?
Anybody who wants Arch should use Arch. Manjaro is not Arch.
Some of us don’t want the latest packages the instant they release, we’re fine with having them a week or a month late if it means extra stability.
There’s nothing magical about what Manjaro is doing, it stands to reason that if you delay packages even a little some bugs will be fixed.
Also, you can use the AUR on Manjaro perfectly fine; I myself have over 100 AUR packages installed. But the AUR isn’t officially supported even by Arch, so it’s impossible to offer any guarantees for it.
There’s also Flatpak and some people may prefer that since it’s more reliable.
Manjaro has been specifically designed to have fresh packages (sourced from Arch) but to be user friendly, long term stable, and provide as many features as possible out of the box.
It requires some compromises in order to achieve this; in particular it wants you to stick to its curated package repo and an LTS kernel, use its helper apps (package/kernel/driver manager), and update periodically. It won’t remain stable if you tinker with it.
You’ll get packages slower than Arch (depending on complexity, Plasma 6 took about two months, typically it’s about two weeks) but faster than Debian stable.
I’ve been running it as my main driver for gaming and work for about 5 years now and it’s been exactly what I wanted: a balanced mix of rolling and stable.
I’ve also given it to family members who are not computer savvy and it’s been basically zero maintenance on my part.
If it has one downside, it’s that you really have to leave it alone to do its thing. In that regard it takes a special category of user to enjoy it: you have to either be an experienced user who knows to leave it alone or a very basic user who doesn’t know how to mess with it. The kind of enthusiastic Linux user who wants to tinker will make it fall apart and hate it; they’d be happier on Arch or some of the other distros mentioned here.
Try an addon like Basic Automatic Tabs Unloader; it will kill tabs completely a while after they’ve been closed. You can set the grace period as low as you want.
The Firefox native tab unloader is extremely permissive and only kills tabs when the whole system starts running low on RAM.
I get your point, but this feature is being pushed to users prominently, and it turns out it does nothing for search result links on either YouTube or Amazon, which are pretty much THE most likely sites anybody’s going to use it on. That seems like a pretty glaring omission to me.
There are lots of bug reports already open about it not working as intended on various large sites, including Facebook, Google Images, etc.
It’s pretty obvious to me that such sites are going to keep changing their parameters because they’re privacy predators. If Mozilla is not willing or able to keep the parameter definitions up to date then this feature can end up doing more harm than good.
`pp` was introduced 3 years ago and it’s a known tracking parameter. And it’s not some obscure website we’re talking about, it’s the largest website in the world…
If they’re not going to keep up with parameters after so many years I think it’s very misleading and potentially even harmful to keep offering this feature.
Yes, for example YouTube video links are copied with the `&pp=` tracking information. Search for something on YouTube, right-click a result title, and copying with or without tracking gives you the same thing (pp= included).
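For what it’s worth, stripping it isn’t hard; this is just a sketch with a made-up URL (a real implementation would also have to handle pp= appearing as the first parameter):

```
# strip the pp= tracking parameter from a copied link; the URL is made up
url='https://www.youtube.com/watch?v=VIDEO_ID&pp=ygUGc2VhcmNo'
printf '%s\n' "$url" | sed -E 's/&pp=[^&]*//'
# prints: https://www.youtube.com/watch?v=VIDEO_ID
```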
Looking through the packages available for OpenWRT I would suggest Tcl, Lua, Erlang or Scheme (the latter is available through the Chicken interpreter). Try them out, see what you like.
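Installing any of them should just be a matter of opkg; exact package names vary between OpenWRT releases, so treat these as a guess:

```
# package names are a guess; check `opkg list | grep -i lua` etc. for your release
opkg update
opkg install lua tcl
```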
Repology artificially reduces the number of packages instead of reporting the actual number, which I find highly dubious because most packages have a purpose. For repositories like the AUR in particular, artificially eliminating packages goes against everything it stands for. Yes, it’s supposed to have alternative versions of the same thing; that’s the whole point.
If it weren’t for this the ranking would be very different; Debian, for example, maintains over 200k packages in unstable.
Honestly I’ll just send it back at this point. I have kernel panics that point to at least two of the cores being bad, which would explain the sporadic nature of the errors, and also why memtest ran fine: it only uses the first core by default. Too bad I didn’t think of that while running memtest, because it lets you select cores explicitly.