What a terrible article. The margins for board partners are small, but Nvidia's margin is huge.
Yes, I would recommend creating a backup (perhaps on your phone or a different computer over the network) and then upgrading to 21 and then 22. IMHO Mint has steadily gotten better and there is typically no reason to stay on an older version.
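For the backup, something like rsync over SSH works well. A minimal sketch (the destination host and path are placeholders):

```bash
# Copy your home directory to another machine, preserving permissions,
# ACLs, and extended attributes
rsync -aAX --info=progress2 /home/ user@backuphost:/backups/mint-home/
```

If I remember correctly, Mint's upgrade tool also prompts you to create a Timeshift snapshot, which covers the system side.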
Currently, my favorite ways of running non-Steam games are the Heroic Games Launcher and Bottles. Heroic is especially nice if you have games from GOG or EGS. However, looking at ProtonDB, it seems that both DCS and Flight Sim 2024 don't work too well on Linux. Overall it sounds like it might be challenging for you to switch to Linux, but you can always give it a try and see how much works.
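If you want to try Heroic, it's available on Flathub; assuming you have Flatpak set up, installing it is a one-liner:

```bash
# Install the Heroic Games Launcher from Flathub
flatpak install flathub com.heroicgameslauncher.hgl
```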
Given that you installed Linux on a separate drive, it's likely that the Windows bootloader is perfectly fine but your BIOS chooses to prioritize the Linux disk. I would check if you can still select the Windows drive / installation in the BIOS / boot media selection.
Typically, Fedora should also add the Windows installation to its bootloader (https://docs.fedoraproject.org/en-US/quick-docs/grub2-bootloader/#_adding_other_operating_systems_to_the_grub2_menu). It uses `os-prober` to find other operating systems. Can you post the output of `sudo os-prober`?
Edit: The output of `lsblk -f` would also be useful (though you may want to anonymize it first).
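If `os-prober` does find Windows but it's missing from the boot menu, regenerating the GRUB config should pick it up. A sketch for recent Fedora versions (note that GRUB 2.06+ disables os-prober unless it's explicitly enabled):

```bash
# Allow GRUB to run os-prober (disabled by default since GRUB 2.06)
echo 'GRUB_DISABLE_OS_PROBER=false' | sudo tee -a /etc/default/grub

# Regenerate the GRUB configuration (this path works for both BIOS and
# UEFI installs on current Fedora)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```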
This seems to be a limitation of Intel host controllers. The USB 2.0 specification (including 12 Mbps Full Speed) allows for up to 127 devices. Each of those devices can have up to 16 IN and 16 OUT endpoints, cf. https://www.usbmadesimple.co.uk/ums_3.htm. With up to 32 endpoints each, that's a theoretical maximum of 127 × 32 ≈ 4k endpoints in total (or roughly 2k, if you count an IN/OUT pair as a single endpoint). I guess Intel thought it wasn't worthwhile to support that many endpoints.
Some quick searching turned up this post, which claims that USB3 controllers often support up to 254 endpoints in total: https://www.cambrionix.com/a-quick-guide-to-usb-endpoint-limitations/ Other posters have also said that AMD appears to have higher limits. You could also consider adding more USB root hubs to your system (with PCIe cards).
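If you're curious how many endpoints your current devices actually claim, the kernel exposes them in sysfs. A quick sketch for Linux (counts every endpoint, including the control endpoints):

```bash
# Each USB device and interface shows its endpoints as ep_XX entries
ls -d /sys/bus/usb/devices/*/ep_* | wc -l
```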
Yield is the percentage of chips that are functional. Roughly, you can think of it as the probability of a chip having 0 defects. The bigger the chip, or the higher the defect density, the lower this probability becomes. Chip designers will also include mitigation techniques (e.g. redundancy) to allow chips to work even with some defects.
Talking about the "yield" of a process doesn't make any sense. Yield is a metric for a specific chip fabricated on a given process, and it depends heavily on the size of the chip and on the mitigation techniques used.
The "correct" metric to compare processes is defect density (in defects per square cm). Intel claims that their defect density is below 0.4 defects/cm²: https://www.tomshardware.com/tech-industry/intel-says-defect-density-at-18a-is-healthy-potential-clients-are-lining-up. This would be relatively high, but not much worse than what TSMC has seen for their recent nodes: https://www.techpowerup.com/forums/threads/intel-18a-process-node-clocks-an-abysmal-10-yield-report.329513/page-2#post-5387835
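To make the relationship concrete: under the simplest (Poisson) model, yield is roughly Y = exp(−A·D) for die area A and defect density D. A quick sketch at the claimed 0.4 defects/cm²:

```bash
# First-order Poisson yield estimate: Y = exp(-A * D)
# A = die area in cm², D = defect density in defects/cm²
awk 'BEGIN { D = 0.4; for (A = 0.5; A <= 2.0; A += 0.5) printf "A = %.1f cm² -> yield ≈ %.0f%%\n", A, 100 * exp(-A * D) }'
```

So a 1 cm² die would yield around 67%, while a 2 cm² die drops to roughly 45% (before any redundancy or harvesting).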
Unfortunately, I can't help you with that. The machine is not running any VMs.
It's possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.
I'm seeing very similar speeds on my two-HDD RAID1. The computer has an AMD 8500G CPU, but the load from ZFS is minimal. Reading/writing a 50 GB file generated from /dev/urandom (larger than the cache) gives me:
- 169 MB/s write
- 254 MB/s read
What's your setup?
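If you want to run the same test, it was roughly along these lines (pool name `tank` is a placeholder; random data also defeats compression, which keeps the numbers honest):

```bash
# Write 50 GB of random, incompressible data (larger than the ARC)
dd if=/dev/urandom of=/tank/test.bin bs=1M count=51200 status=progress

# Export/import the pool to empty the ARC, then read the file back
sudo zpool export tank && sudo zpool import tank
dd if=/tank/test.bin of=/dev/null bs=1M status=progress
```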
With version 2.3 (currently in RC), ZFS will at least support RAIDZ expansion. That should already help a lot for a NAS use case.
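If I understand the feature correctly, you expand by attaching an extra disk to an existing raidz vdev, e.g. (pool, vdev, and device names are placeholders):

```bash
# Grow a raidz1 vdev by one disk (needs ZFS 2.3+ / RAIDZ expansion)
sudo zpool attach tank raidz1-0 /dev/sdX
```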
The article mentions that it works fine on Ubuntu 24.04 LTS, so it shouldn't be an issue unless you're running something older than that.