The Major Linux Problems on the Desktop... or sort of

It is not hard to come across an article written by a long-term user about the apparent failure of Linux on desktop PCs, so it is somewhat of a breath of fresh air to find someone going out of their way to list the shortcomings of both Windows AND Linux. Still, comparing both operating systems to each other is, at least from a technical standpoint, a rather difficult task, which did not prevent Artem S. Tashkinov from doing it anyway. In his text Major Linux Problems on the Desktop, Tashkinov provides a detailed list of the potential negative aspects of the kernel and the major projects associated with it. Potential, because some points can be interpreted as mere subjective preferences or appear to be plainly wrong conclusions, which can already be encountered in the preface.

The preface highlights a few benefits of using a Linux distribution, including “excellent package management within one distro, no widely circulating viruses or malware, complete system reinstallation (sic) is almost never required, besides, Linux is extremely customizable, easily scripted and it's free as in beer”. It should be mentioned that Linux is only free because major companies such as Intel, Amazon, Facebook and Microsoft contribute massively to the development of the Linux kernel, both technically and financially. And while large projects such as Debian can rely on a community that donates some money, smaller projects in particular seldom receive anything from end users. I would even go so far as to claim that most Linux users are leechers; it is up to you to decide whether this is good or bad. What is not entirely up to you to decide, however, is whether Android, the most popular operating system for smartphones, is part of the Linux family. While Tashkinov claims it is not, he ignores that Linux, strictly speaking, is nothing but the kernel; other projects, such as the GNOME Project providing the GNOME desktop environment, are free and open source projects that are also compatible with other kernels, such as the one provided by OpenBSD. Redirecting to another article claiming that Android cannot be Linux because it has little in common with the kernel provided for desktop usage is like saying that the BlackBerry Bold 9900 is not a smartphone because of its physical keyboard. It also does not matter that one (!) Google engineer and the GNU Project claim the same, given that Android does not violate the license – GNU GPLv2 – it has inherited from the main branch and is officially listed as being based on Linux.

Keep this section in mind because we will come back to it.

The Shortcomings

To not blow this text out of proportion, I will not address all points and will instead focus on some deemed important. Unfortunately, nearly all supposed shortcomings carry such a label, which is far from practical.

Rather unsurprisingly, Hardware support discusses the downsides of Linux for users with NVIDIA GPUs. It is far from a myth that Linus Torvalds himself is not a big fan of NVIDIA, and speaking from experience, I do not remember having a good time with their GPUs, either. The last time I relied on one of their graphics cards, their official drivers shut my GPU off five minutes after booting Windows Vista. On the other hand, AMD and Intel have provided me with smooth systems largely regardless of operating system – only Archcraft, an Arch-based distribution, suffers from screen tearing on AMD-powered machines. But if you are familiar with Arch Linux, it is not the most complicated thing to fix, as this is a driver issue quite specific to Archcraft (and if you use either vanilla Arch or one of its derivatives, you should be capable of tweaking it yourself). Based on my tests of several Linux distributions over the span of one year, I seldom encountered any driver issues; only finding the correct drivers for my printer proved to be more complicated on less popular distributions. Ubuntu, for example, instantly recognized my HP Officejet 2620, while Arch-based distributions required me to manually search for drivers. A lack of tools to tweak one's GPU is also a distro-specific issue, as distributions targeting advanced end users, such as Puppy Linux, offer GUI tools to easily adjust parameters. Surprisingly enough, I have had no issues with PulseAudio so far either, so again I assume that it is a distro-specific problem. This is not to say that there are no regressions during the development of any project, and I do agree that not every user should be forced to file a report on sites like GitHub; however, do not expect any decent support by venting to an official account of a Linux distribution on Twitter.
A variety of forums, subreddits and means of contact exist to get some help – and not all forums are dominated by hordes of pricks; see EndeavourOS and its reputation for providing a friendly community (and they will be friendly enough to properly report a bug for you if you open a thread and discuss the problem with others).
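For what it is worth, the AMD tearing fix mentioned above usually boils down to a few lines of X.Org configuration. A hedged sketch for the amdgpu driver – the file path follows the standard xorg.conf.d convention, and the Identifier string is an arbitrary label of my choosing:

```shell
# Sketch: enable the amdgpu driver's "TearFree" option system-wide.
# Requires root; the path follows the usual xorg.conf.d convention.
sudo tee /etc/X11/xorg.conf.d/20-amdgpu.conf > /dev/null <<'EOF'
Section "Device"
    Identifier "AMD Graphics"
    Driver     "amdgpu"
    Option     "TearFree" "true"
EndSection
EOF
```

After restarting the X session, `xrandr --props` should list the TearFree property on the affected output.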

One might think now that the majority of issues listed above are not hardware problems per se, and indeed, driver issues in particular actually belong to software support. And this is where we enter the “true hell” of Linux, as Tashkinov already failed at keeping Linux (the kernel) apart from the other FOSS projects providing the tools that make the kernel useful to non-kernel developers. This is not to say that he is the only one doing this; in fact, it is a rather defining characteristic of the GNU/Linux community.

He describes the display server X.Org as “largely outdated, unsuitable and even very much insecure for modern PCs and applications”, hyperlinking to two discussions in a forum (which debunk his first and second claim), an archive link leading to a session at linux.conf.au 2013 (a conference that, like all conferences, is not worth knowing about), an opinion piece, a theoretical framework for X12 (which will likely never happen and would only be a response to the diversity of devices), another opinion piece dealing with screen lockers, an X.Org announcement saying that Oracle discovered several vulnerabilities that will be fixed (!) by the team behind X.Org, a FAQ dealing with XScreenSaver (a screen locker), and another forum discussion. Taking a closer look at the following claims, it is very apparent that Tashkinov merely tries to justify his negative opinion rather than objectively summarizing all of the disadvantages of X.Org. A similar thing can be said about his opinion on Wayland, which is still a rather new project competing against X.Org and is only now seeing widespread adoption.

“General graphics APIs issues” continues to cover drawbacks of X.Org and Wayland, whilst also including shortcomings of the widget toolkits GTK and Qt. Again, not part of Linux but two Open Source projects providing tools for Linux, Windows and macOS. “Font rendering” highlights ClearType issues and the “ugliness” of some standard fonts.

Kernel shortcomings

The kernel – Linux – itself receives its own section divided into 17 points. Up until now, Tashkinov entirely focused on other projects that can be shipped with Linux (the kernel) but are not necessary to use it and build an independent system around it.

  • ! The kernel cannot recover from video, sound and network drivers' crashes (I'm very sorry for drawing a comparison with Windows Vista/7/8/10 where this feature has been implemented for ages and works beautifully in a lot of cases).

Again he only links to a discussion in a forum – ironically, the same one from earlier. If he desperately wants this feature to be implemented, he could contact the developers and request it. Or even compile his own kernel, where he can implement such things himself. That's the beauty of Open Source.

  • KMS exclusively grabs video output and disallows VESA graphics modes (thus it's impossible to switch different versions of graphics drivers on the fly).

Bugzilla states that this issue has been resolved.

  • For most intents and purposes KMS video drivers cannot be unloaded or reloaded as this involves killing all running graphics applications and using console.

Well, at least the console still works, which lets users restart the GUI.
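In practice, “using console” is less dire than it sounds. Assuming a systemd-based distribution, getting the graphical session back usually takes a single command from a text console (Ctrl+Alt+F2 or similar):

```shell
# From a virtual console, restart the graphical session.
# "display-manager.service" is the generic systemd alias pointing at
# whichever display manager is installed (gdm, sddm, lightdm, ...).
sudo systemctl restart display-manager
```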

  • !! KMS has no safe mode: sometimes KMS cannot properly initialize your display and you have a dead system you cannot access at all (a kernel option “nomodeset” can save you, but it prevents KMS drivers from working at all – so either you have 80x25 text console or you have a perfectly dead display).

A Google search leading to a variety of forum posts from years ago labeled “[SOLVED]”? The first result even leads to a discussion in which the original poster realized that his GRUB configuration and X11 caused his issues, not KMS.

  • Traditional Linux/Unix (ext4/reiser/xfs/jfs/btrfs/etc.) filesystems can be problematic when being used on mass media storage.

“Add an option to disable file/directory access controls for USB stick-type storage devices” was an idea proposed by Tashkinov himself that got rejected on Bugzilla. And I could not agree more with Alan, saying in 2013:

If you want a feature, then write it or pay someone to write it. It's 'Free Software' not 'Charity'.

You can keep re-opening the bug if it amuses you but nobody else actually cares so you are just wasting everyones time.

This is followed by Tashkinov venting about the apparent lack of popularity of Linux among desktop users. I doubt anyone outside Linux geeks would give a damn about something like this.

  • When a specific KMS driver cannot load for some reasons, the Linux kernel leaves you with a black screen.

“Status: Fix Released”. Also seems to only have affected i915 (Intel GPU driver).

  • File descriptors and network sockets cannot be forcibly closed – it's indeed unsafe to remove USB sticks without unmounting them first as it leads to stale mount points, and in certain cases to oopses and crashes. For the same reason you cannot modify your partitions table and resize/move the root partition on the fly.

By now, I am pretty much convinced that many Linux developers consider Tashkinov annoying for repeatedly requesting features on their Bugzilla site. I also seem to be one of the few people who do unmount their USB drives before removing them and would never dare to mess with their partition table while the installed OS is running (I keep a solid USB drive with a solid system rescue distribution including GParted just for such tasks, which you likely won't need much, unless you engage in excessive “distro-hopping” and REALLY do not know what to do with your computer).
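For reference, cleanly detaching a stick is a three-command affair; /dev/sdb below is only a placeholder, so check the real device name with lsblk first:

```shell
lsblk                              # identify the stick (say it is /dev/sdb1)
sync                               # flush pending writes to disk
udisksctl unmount -b /dev/sdb1     # unmount without root via udisks2
udisksctl power-off -b /dev/sdb    # detach the device for safe removal
```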

  • In most cases kernel crashes (= panics) are invisible if you are running an X11/X.org or Wayland session. Moreover KMS prevents the kernel from switching to plain 640x480 or 80x25 (text) VGA modes to print error messages. As of 2021 there's work underway to implement kernel error logging under KMS.

They are not invisible, as the whole system, including mouse and keyboard, freezes. If a panic occurs during boot, systemd, for example, will print an error message and abort the boot process (and freeze). Kernel panics are largely caused by a corrupted initramfs or improperly installed kernels; KMS has got very little to do with it.
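If a panic really does trace back to a corrupted initramfs, regenerating it (from the installed system or a live-USB chroot) is the usual cure. A sketch; the exact command depends on the distribution:

```shell
# Arch and derivatives: rebuild the initramfs for all installed presets.
sudo mkinitcpio -P
# Debian/Ubuntu equivalent:
# sudo update-initramfs -u
```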

  • !! Very incomplete hardware sensors support, for instance, HWiNFO64 detects and shows ten hardware sensor sources on my average desktop PC and over fifty sensors, whilst lm-sensors detect and present just four sources and twenty sensors. This situation is even worse on laptops – sometimes the only readings you get from lm-sensors are cpu cores' temperatures.

The only time I got to witness sensors not being recognized happened on my rather exotic MEDION Akoya tower that was up for sale in 2012. I have not had any issues with unrecognized sensors on either of my laptops (an Acer Aspire from 2012 and an Asus XF553MA from 2015) or my mother's HP Pavilion tower (2016). I do not know what hardware he uses, but maybe some of his sensors are VERY exotic, reaching their EOL, or he did not install his Linux system properly. Either way, it is not as if he could not tweak his system to make it find all sensors. And again, no one outside sysadmins will care about such things.

  • ! A number (sometimes up to dozens) of regressions in every kernel release due to the inability of kernel developers to test their changes on all possible software and hardware configurations. Even “stable” x.y.Z kernel updates sometimes have serious regressions.

Which regressions in particular? Bugs should be expected at such a size and are not much of a problem as long as they get fixed. So far, I have had no issues with any kernel release (LTS; their main stable one, which I am currently running; Zen). Even my crappy Medion tower has handled every kernel update without issues, and I have yet to witness things like declining performance. What the source code looks like quality-wise is a topic for another discussion.

  • ! The Linux kernel is extremely difficult and cumbersome to debug even for the people who develop it.

Yeah, it consists of approximately 30 million lines of code. It is monolithic, after all. And the hyperlink leads to an archive link of a thought piece about crashdumps from 2014.

  • !! Critical bug reports filed against the Linux kernel often get zero attention and may linger for years before being noticed and resolved. Posts to LKML oftentimes get lost if the respective developer is not attentive or is busy with his own life.

No other source is cited for this claim besides the very short Wikipedia article about the Linux kernel mailing list. One could argue that LKML is a poor place to submit bugs, but blaming developers for having a life outside of kernel development – with the kernel being free to use and tweak by anyone owning a computer – and for not spending 24/7 at their machines fixing bugs and security vulnerabilities? Using some common slang you will only encounter on sites like Tumblr and YouTube's gamer bubble: Go touch some grass, man.

  • The Linux kernel contains a whole lot of very low quality code and when coupled with unstable APIs it makes development for Linux a very difficult error prone process.

This again leads us back to the very same forum from earlier, yet now it shows a discussion about why C is a “bad” programming language. Tashkinov surely did not even read the sources he cites.

  • The Linux kernel forbids to write to CPU MSRs in secure UEFI mode which makes it impossible to fine-tune your CPU power profile. This is perfectly possible under Windows 10.

Then disable Secure Boot?
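Checking whether Secure Boot (and the kernel lockdown mode it triggers) is active takes two commands; with it disabled in the firmware setup, the msr-tools package can write MSRs again. The MSR address and value below are deliberate placeholders, not recommendations:

```shell
mokutil --sb-state                  # prints "SecureBoot enabled" or "disabled"
cat /sys/kernel/security/lockdown   # e.g. "none [integrity] confidentiality"
# With Secure Boot off, MSR writes work again (msr-tools package):
# sudo modprobe msr
# sudo wrmsr <address> <value>      # placeholders; consult your CPU's manual
```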

  • !! ACPI's Collaborative Processor Performance Control (CPPC) is not supported for Intel and AMD CPUs which means power management for the only x86 desktop CPUs under Linux is in a bad shape. AMD submitted patches in 2019 and they are yet to be mainlined. Microsoft Windows has supported this ACPI feature for half a decade already.

Citing the provided link:

CPPC (Collaborative Processor Performance Control) offers optional
registers which can be used to tune the system based on energy and/or
performance requirements.

Skimming the entire thread, it seems like the idea got dumped. Likely because it is not as relevant as fixing bugs and patching security holes.

Many of the issues listed above would not be issues if Tashkinov tweaked his own Linux kernel and released it. Other points, such as the one addressing kernel panics, are simply attributed to the wrong aspects of the kernel or wrongly linked to third-party software.

Other apparent shortcomings

Tashkinov goes on to consider the diversity of Linux distributions a downside of Linux itself. He desires an operating system similar to Windows and macOS, i.e. only one Linux distribution, or at least a few sharing all the same third-party software, including daemons and package management. I doubt that those religiously opposing systemd, like the developers of Devuan, will agree with this.

Of course, gaming on Linux is not the same as on Windows. Personally, I doubt there is much difference between Linux and macOS in this regard. Maybe it is just me, but I do not need to play the latest resource-hungry games (and as a retro gamer myself, I largely consider gamers who seemingly do nothing but play the latest video games and brag about being a gamer online a bunch of idiots with too much free time and rather poor spending habits). Tashkinov makes it sound as if dual-booting Windows and Linux were too much work, in contrast to having to buy a different console just because the game you desire has not been released for yours.

“Problems stemming from low Linux popularity and open source nature” fails to mention that Valve wants to release a console with an operating system based on Arch Linux. This does not sound like low popularity at all.

“General Linux problems” addresses many more points, yet for the sake of keeping this post as short as possible, I will only cite the ones marked with exclamation marks.

  • !! Linux lacks an alternative to Windows Task Manager which shows not only CPU/RAM load, but also Network/IO/GPU load and temperature for the latter. There's no way to ascertain the CPU/RAM/IO load of processes' groups, e.g. web browsers like Mozilla Firefox or Google Chrome.

As far as I know, GNOME's System Monitor and KDE's System Monitor do show network load. Both are nearly identical to Task Manager, besides some obvious graphical differences. Tools to show GPU stats also exist, such as gpustat for NVIDIA GPUs, intel-gpu-tools for Intel graphics cards, and radeontop for AMD GPUs relying on Mesa (the open source drivers).
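Summing load over a process group is also doable with stock tools. A rough sketch in the spirit of Task Manager's per-application totals; “firefox” is just an example process name:

```shell
# Add up CPU share and resident memory over every "firefox" process.
ps -C firefox -o %cpu=,rss= \
  | awk '{cpu+=$1; rss+=$2} END {printf "cpu=%.1f%% rss=%d KiB\n", cpu, rss}'
# On systemd distros, systemd-cgtop shows live CPU/memory/IO per cgroup
# (i.e. per service or per user session) without any scripting.
```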

  • !! There's no concept of drivers in Linux aside from proprietary drivers for NVIDIA/AMD GPUs which are separate packages: almost all drivers are already either in the kernel or various complementary packages (like foomatic/sane/etc). It's impossible for the user to understand whether their hardware is indeed supported by the running Linux distro and whether all the required drivers are indeed installed and working properly (e.g. all the firmware files are available and loaded or necessary printer filters are installed).

Mesa for AMD exists. And it was fairly easy for me to figure out that all distributions are able to run on the hardware I have access to when I started to mess with Linux a year ago. The only persistent issues you often hear about are solely related to NVIDIA, and they usually can be solved, despite Linus Torvalds' public dismissal of NVIDIA.

  • !! There's no guarantee whatsoever that your system will (re)boot successfully after GRUB (bootloader) or kernel updates – sometimes even minor kernel updates break the boot process (except for Windows 10 – but that's a new paradigm for Microsoft). For instance Microsoft and Apple regularly update ntoskrnl.exe and mach_kernel respectively for security fixes, but it's unheard of that these updates ever compromised the boot process. GRUB updates have broken the boot process on the PCs around me at least ten times. (Also see compatibility issues below).

Again, it is seldom caused by the mainline kernel but rather by a messed-up initramfs or kernel tweaks implemented by distribution maintainers. The few times an update requiring a new GRUB file broke my bootloader happened on systems that were not installed properly due to issues with my cheap USB drives. Only once did I witness Ubuntu overwriting Archcraft's bootloader, yet this is the result of Ubuntu always checking whether it is the first boot entry, meaning that whenever Ubuntu is booted on a dual-boot or multi-boot machine, it sets itself as the first entry. I have yet to test another distribution that disregards user settings to such a degree.

  • !! LTS distros are unusable on the desktop because they poorly support or don't support new hardware, specifically GPUs (as well as Wi-Fi adapters, NICs, sound cards, hardware sensors, etc.). Oftentimes you cannot use new software in LTS distros (normally without miscellaneous hacks like backports, PPAs, chroots, etc.), due to outdated libraries. A not so recent example is Google Chrome on RHEL 6/CentOS 6.

You can see that Tashkinov hardly bothered to update his list. The very point of LTS releases is a bigger focus on stability, rather than always being on the front line in terms of the latest software. This is why “rolling release” distributions exist. If stability is not your primary concern, you do not have to opt for, let's say, Debian Bullseye; Debian Testing and Debian Unstable offer a very similar or identical experience to full-fledged “bleeding edge” distributions such as Arch, Solus, Void and Gentoo. And speaking from personal experience, the differences between LTS and rolling release in terms of performance are pretty much non-existent on hardware that is older than a year. Calling PPAs and chrooting “hacks” demonstrates Tashkinov's lack of knowledge regarding the Linux ecosystem.

  • !! Linux developers have a tendency to a) suppress news of security holes b) not notify the public when the said holes have been fixed c) miscategorize arbitrary code execution bugs as “possible denial of service” (thanks to Gullible Jones for reminding me of this practice – I wanted to mention it aeons ago, but I kept forgetting about that).

Here's a full quote by Torvalds himself: “So I personally consider security bugs to be just “normal bugs”. I don't cover them up, but I also don't have any reason what-so-ever to think it's a good idea to track them and announce them as something special.”

The first archive link includes another statement by Torvalds, stating that his “responsibility is to do a good job. And not pander to the people who want to turn security into a media circus”. The second link leads to an article from 2012 about poor communication between Torvalds and distribution maintainers regarding security fixes. I have no idea how bad the situation was back then, but nowadays the most popular distributions appear to notify users fairly well about bugs and vulnerabilities.

Personally, I agree with Torvalds' view insofar as the biggest security threat still happens to sit in front of the machine, which, strictly speaking, is just a large and more complex calculator you often cannot carry in a pocket.

  • Year 2014 was the most damning in regard to Linux security: critical remotely-exploitable vulnerabilities were found in many basic Open Source projects, like bash (shellshock), OpenSSL (heartbleed), kernel and others. So much for “everyone can read the code thus it's invulnerable”. In the beginning of 2015 a new critical remotely exploitable vulnerability was found, called GHOST.

That was also the period in which everyone realised that the team behind OpenSSL, consisting of 17 developers and seven committee managers, desperately needed some solid funding. The patch to stop Heartbleed was released six days after the bug was discovered by a member of Google's security team. Back then, “OpenSSL only has two [fulltime] people to write, maintain, test, and review 500,000 lines of business critical code” – obviously, improvements have been made since then, and despite this, a patch was made available in a relatively short time. Those who clearly did not bother to download and install the patch were system administrators, either due to having a dumb boss or out of sheer laziness (who really knows, the reasons can differ vastly in practice):

As of 20 May 2014, 1.5% of the 800,000 most popular TLS-enabled websites were still vulnerable to Heartbleed.[9] As of 21 June 2014, 309,197 public web servers remained vulnerable.[10] As of 23 January 2017, according to a report[11] from Shodan, nearly 180,000 internet-connected devices were still vulnerable.[12][13] As of 6 July 2017, the number had dropped to 144,000, according to a search on shodan.io for “vuln:cve-2014-0160”.[14] As of 11 July 2019, Shodan reported[15] that 91,063 devices were vulnerable. The U.S. was first with 21,258 (23%), the top 10 countries had 56,537 (62%), and the remaining countries had 34,526 (38%).

Will Tashkinov blame sysadmins not doing their job properly on Linux, as well?!

He goes on to list the number of vulnerabilities discovered since 2016, calling Linux “one of the most vulnerable pieces of software in the entire world” and considering Windows to be more secure, disregarding the possibility that he simply cannot know what he does not know with regard to security holes in closed-source software. And a bunch of bugs – minor and severe – that get patched properly within a few days are, at least from my perspective, less of an issue than a severe vulnerability that remains unfixed even after several patch releases.

Many Linux developers are concerned with the state of security in Linux because it is simply lacking.

Maybe the very complex nature of Linux also makes it harder for criminals to develop and deploy exploits. And much less attractive, as long as Linux is not as popular as Windows. No matter how you twist it, the most secure data is the data you never store on a computer.

  • Linux servers might be a lot less secure than ... Windows servers, “The vast majority of webmasters and system administrators have to update their software manually and test that their infrastructure works correctly”. Seems like there are lots of uniquely gifted people out there thinking I'm an idiot to write about this. Let me clarify this issue: whereas in Windows security updates are mandatory and they are usually installed automatically, Linux is usually administered via SSH and there's no indication of any updates at all. In Windows most server applications can be updated seamlessly without breaking services configuration. In Linux in a lot of cases new software releases require manual reconfiguration (here are a few examples: ngnix, apache, exim, postfix). The above two causes lead to a situation when hundreds of thousands of Linux installations never receive any updates, because their respective administrators don't bother to update anything since they're afraid that something will break.

Well, Tashkinov demonstrated some rather severe lack of knowledge earlier and pretty much complains about Linux (distributions) not being identical to Windows. The highlighted “might be” further proves that he offers little to no decent sources to back up his claims and resorts to speculation to make certain points. Forced updates, at least for desktop users of Windows, often cause more problems than they attempt to solve – see the prominent 1511 release, which prevented machines from booting (I was unfortunate enough to be greeted by a trashed OS that required a fresh install), and the April 2021 updates (KB5001330 & KB5001337), which caused DNS and folder issues. You cannot really blame server administrators for focusing more on stability, especially within critical environments that require guaranteed stability rather than a guaranteed bleeding-edge experience.

  • Ubuntu, starting with version 16.04 LTS, applies security updates automatically except for the Linux kernel updates which require reboot (it can be eliminated as well but it's tricky). Hopefully other distros will follow.

Which can be annoying if you are running a test system or are used to handling updates manually, hence the existence of dozens of tutorials explaining how to turn automatic updates off. Personally, I am not a fan of automatic updates, especially since the 1511 debacle, and I certainly am not the only one who would rather check updates manually before installing them (and I use Arch, btw).
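Turning the automation off is, admittedly, straightforward. A hedged sketch for stock Ubuntu – the file below is the one the unattended-upgrades package ships:

```shell
# Interactive toggle:
sudo dpkg-reconfigure -plow unattended-upgrades
# Or set both periodic knobs to "0" in the stock config file:
printf 'APT::Periodic::Update-Package-Lists "0";\nAPT::Periodic::Unattended-Upgrade "0";\n' \
  | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
```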

  • ! Fixed applications versions during a distro life-cycle (except Firefox/Thundebird/Chromium). Say, you use DistroX v18.10 which comes with certain software. Before DistroX 20.10 gets released some applications get updated, get new exciting features but you cannot officially install, nor use them.

Why say “DistroX” when you clearly are talking about Ubuntu?! I doubt that he ever really needs most of those “new exciting features”; otherwise he would give rolling release distributions more attention.

  • ! Let's expand on the previous point. Most Linux distros are made such a way you cannot upgrade their individual core components (like kernel, glibc, Xorg, Xorg video drivers, Mesa drivers, etc.) without upgrading your whole system. Also if you have brand new hardware oftentimes you cannot install current Linux distros because almost all of them (aside from rare exceptions) don't incorporate the newest kernel release, so either you have to use alpha/development versions of your distro or you have to employ various hacks in order to install the said kernel.

Does brand new hardware really require the very latest kernel that was released, like, five minutes ago? Again, he acts like rolling release distributions, such as Debian Testing and Debian Unstable, do not even exist. Citing Rick Spencer, who commented on the linked thread:

This is not a bug, this is the part of the Ubuntu way of handling updates to applications. The default for users is that their experience does not change unless and until they upgrade to a new release. This is very intentional and this stability is part of the core value proposition for users.

By now, I am confused about what Tashkinov thinks Linux is and what he wants Linux to be. I am guessing that he really just expects a second Windows with a different name. So far, he has criticized the monolithic kernel for being monolithic and all the third-party projects contributing to the Linux ecosystem for... not being Adobe or a very popular video game studio releasing the next big cash cow.

  • ! Just (Gnome) not enough (KDE) manpower (X.org) – three major Open Source projects are seriously understaffed.

What?! So, he ignores that the staff of OpenSSL was severely understaffed and financially undervalued when Heartbleed occurred, yet considers KDE – one of the most popular desktop environments, one that might overtake GNOME in popularity someday, given that GNOME developers have gained a reputation of being a bunch of arrogant pricks – to not receive enough support?! Dude, how “out of the loop” are you?!

Tashkinov lists several more personal issues, some of which are not even directed at Linux but at Open Source at large, not knowing the meaning of a “steep learning curve” (which actually means something is easy to get into but hard to master – not the opposite, “hard to get into but easy to master”, which he is implying), and, at one point, even bringing FreeBSD into the mix.

To prevent readers from criticizing his list by claiming that they have never encountered any of the issues listed above, he goes on to list missing components of browsers (and accuses YouTube of draining batteries faster on Linux than on Windows), keyboard shortcuts, NVIDIA, X.Org (again), some software not being available as binaries (which the “Average Joe” will more often than not never need in the first place), the lack of “AAA games” (that's some next-level whining), Microsoft Office not being available for Linux systems (you have got some very serious issues if your life depends on MS Office), and ultimately Linux not being Windows.

This opinion piece continues with a dig at critics calling him out for attempting to convince people that his opinions are more valuable than those of other Linux users who do not consider the majority of the supposed drawbacks negative aspects of Linux and Open Source as a whole.

Some comments just astonish me: “This was terrible. I mean, it's full of half-truths and opinions. NVIDIA Optimus (Then don't use it, go with Intel or something else).” No shit, sir! I've bought my laptop to enjoy games in Wine/dualboot and you dare tell me I shouldn't have bought in the first place? I kindly suggest that you not impose your opinion on other people who can actually get pleasure from playing high quality games. Saying that SSHFS is a replacement for Windows File Sharing is the most ridiculous thing that I've heard in my entire life.

This guy has got to be trolling.

It's worth noting that the most vocal participants of the Open Source community are extremely bitchy and overly idealistic people peremptorily requiring everything to be open source and free or it has no right to exist at all in Linux. With an attitude like this, it's no surprise that a lot of companies completely disregard and shun the Linux desktop. Linus Torvalds once talked about this: There are “extremists” in the free software world, but that's one major reason why I don't call what I do “free software” any more. I don't want to be associated with the people for whom it's about exclusion and hatred.

Does this also include himself, considering he wants Linux to be exactly like Windows...? Over 200 currently maintained distributions exist to fill the needs of every user desiring a particular set of software. Those seeking bleeding edge can opt for rolling release distributions; people loathing systemd more than politicians and journalists can use Devuan, antiX or MX Linux; and those despising certain Desktop Environments can either install a different DE on any distribution or go straight for one that offers their desired DE (or just a Window Manager, if DEs ain't doing it for them).

Most importantly this list is not an opinion. Almost every listed point has links to appropriate articles, threads and discussions centered on it, proving that I haven't pulled it out of my < expletive >. And please always check your “facts”.

This list is an opinion piece, as it tries to imply that every single computer user has the same needs and desires as Tashkinov. And this could not be further from the truth – no additional sources needed, as I am the source of my experiences and of how I would like to use my computer. The laptop I am writing this on (the Acer Aspire) currently runs Archcraft's June release, which still receives the necessary updates (albeit the installation itself was a little faulty, so I often encounter a few glitches and miss out on a bunch of themes I would not have used anyway). Archcraft is an Arch-based distribution shipping with the window manager Openbox, some themes provided by its maintainer, AUR support out of the box, and some tools to get started. It is a minimalistic OS for those actively seeking minimalism but not wanting to set up Arch Linux from scratch. I also use EndeavourOS, another Arch-based OS, as a backup system for my Medion Akoya tower (safely installed on an external HDD that is unplugged most of the time). My setup fulfills my needs much better than Windows 10, which I boot only when I need Lightroom or want to play Deltarune, and which is otherwise permanently disconnected from the internet.

  • I'm not really sorry for citing slashdot comments as a proof of what I'm writing about here, since I have one very strong justification for doing that – the /. crowd is very large, it mostly consists of smart people, IT specialists, scientists, etc. – and if a comment over there gets promoted to +5 insightful it usually* means that many people share the same opinion or have the same experience. This article was discussed on Slashdot, Reddit, Hacker News and Lobste.rs in 2017.

So Stack Overflow, Stack Exchange, subreddits dedicated to specific aspects of the Linux ecosystem, and forums like the German ubuntuusers or distribution-specific ones do not harbor any smart people? Does Tashkinov really consider only those “sharing the same opinion” smart?! It surely does not help that this is followed by a section very briefly highlighting the perks and “improvements” of Linux.

Sometimes I have reasons to say that indeed Linux f*cking sucks shit and I do hate it. “I'm a developer – I know better how users want to use their software and systems”, says the average Linux developer. The end result is that most innovations draw universal anger and loathing – Gnome and KDE are the perfect examples of this tendency of screwing Linux users.

I do not know what this guy is smoking when making such posts.

Linux has a tendency to mess with your data. Over the past several years there have been found at least three critical errors which led to data loss. I'm sorry to say that but that's utterly unacceptable. Also ext4fs sees a scary number of changes in every kernel release.

Separate home partition? Backups, which should be done anyway, regardless of how “secure” one's file system is??
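
For what it's worth, here is a minimal sketch of the kind of snapshot backup that makes any filesystem bug survivable. All paths are placeholders, and the directory is created on the spot so the sketch can run anywhere; in practice the archive belongs on an external drive:

```shell
# Create a throwaway "home" to back up, so the sketch is self-contained.
SRC=/tmp/demo-home
mkdir -p "$SRC"
echo "important notes" > "$SRC/notes.txt"

# One compressed snapshot of the directory; date-stamp it in real use.
tar czf /tmp/demo-backup.tar.gz -C /tmp demo-home
```

Restoring is a single `tar xzf` away – no separate /home partition required, although having one certainly does not hurt.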

There are two different camps in regard to the intrinsic security of open and closed source applications. My stance is quite clear: Linux security leaves a lot to be desired. There are no code analyzers/antiviruses so you have no way to check if a certain application, which is published as a source code or binaries, is safe to use. Also time and again we've seen that open source projects are hardly reviewed/scrutinized at all which also means that an attacker can send a patch to Linus Torvalds and add a backdoor to the Linux kernel.

Why YouTuber SomeOrdinaryGamers does not trust antivirus software.

Also, it should be highly doubted that Torvalds himself would let a backdoor sneak into the kernel. Kernel development and maintenance rely on a hierarchical review structure, so good luck sneaking malicious code into the kernel – a research team was heavily criticized and banned from contributing for attempting exactly that.

But to finally end this nonsense, he wraps his rant up by offering a plan on how to turn Linux into the second Win-, I mean, “solve Linux”:

At first, you have to have very deep pockets: we are talking about at least a billion USD in cash for the first five years. When you have that kind of money you create a Linux company.

Then you hire at least 90% of open source developers. You'll have to poach quite a lot of them from RedHat/Intel/Ubuntu/etc., including Linus Torvalds.

Then you start developing a Linux platform while sticking to these principles (also outlined here and here):

  • Implement a stringent QA/QC process.
  • Closely work with IHVs/ISVs while listening to what they want.
  • Create an extensible base platform (IDE/libraries/kernel/etc.) with a strict set of APIs/ABIs which are adhered to for at least five to ten years.
  • Create a universal packaging format for bundling software which supports signatures, weak dependencies, isolation (aka sandboxing/virtualization), clean uninstallation and standard APIs to make it possible to integrate an application with your DE.
  • Create an open application store where applications and libraries could be published and sold. This store must be integrated with GitHub or any other development platform to make it possible to fetch application sources, file bug reports and request new features.
  • Certain Linux subsystems must be abandoned/reworked/created from the ground up:
    • Audio (ALSA/PulseAudio);
    • Security model;
    • The X.org server (IMO, Wayland is not the right solution);
    • Linux kernel must gain microkernel abilities (safe drivers reloading in case of their crash);
    • Font rendering;
    • Hardware accelerated video encoding/decoding;
    • Window manager (?);
    • Common extensible controls for file open/save as dialogs, window title bars, system tray, etc.;
    • A full set of rich APIs for creating games;
    • Simple encrypted local file sharing (akin to CIFS) and many others.
  • Other distros may actually exist but must contain all the defaults this Linux One distro sets by default: libraries (APIs), sound system, graphical server, desktop environment, etc.

Again, it is highly doubtful that Devuan users, for example, would support something like that. While it might make Linux more popular among casual users, it would also make it more attractive for criminals and dumbasses to develop malicious software, regardless of how much “security” developers would implement. Many long-term users would likely also not agree with Linux becoming yet another monopolistic company similar to Microsoft and Apple. In fact, since Linux – once again – is just the kernel, you would have to convince third-party projects to abandon their independence and tailor their projects entirely to the three biggest operating systems, at the expense of neglecting or even abandoning projects such as FreeBSD and Solaris.

(The next paragraph is largely about Steam, so I will skip it.)

Some people argue that flatpak, snap, appimage are exactly what Linus has been looking for. Only, why do we have ... the three of them? Why each of them looks, feels and behaves like a virtual machine which means Linux distros solved the compatibility issue ... by making you install an extra Linux distro? A lot of disk space is, of course, lost, these apps take a lot longer to launch and have a quite higher RAM consumption. Ultimately the long-standing issue hasn't been resolved to any capacity, instead it has been hidden, pushed aside and virtualized. Only you can have a Linux distro installed in Windows 10 as part of WSL. Linux on the desktop itself remains a huge incompatibility mess.

A lot of users and many distribution maintainers despise Snap, and Flatpak is far from a secure way to distribute and execute packages. AppImage might be the “least evil among the three”, though it has no direct update path, meaning that each new version of a program is released as an entirely new AppImage.
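
To illustrate the point, here is a minimal sketch of what “updating” an AppImage amounts to. All file names are placeholders, and the `touch` merely simulates the download so the sketch is safe to run anywhere:

```shell
cd /tmp

# In reality this file would be downloaded from the project's release page;
# here we just simulate its arrival.
touch MyApp-1.1.AppImage

# AppImages must be marked executable before they can run.
chmod +x MyApp-1.1.AppImage

# There is no delta update: the old version is simply replaced wholesale.
mv MyApp-1.1.AppImage MyApp.AppImage
```

Every update thus re-downloads the entire bundle, dependencies included.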

Back in July, I had a brief (rather one-sided) discussion with a clerk from my health insurance, who unironically started to show me how well she can use her phone after she learned that I am interested in computer technology. When I told her that my insurance's new app could not run on my iPhone, which was still running iOS 11 at the time due to performance concerns, I briefly had to explain that some apps depend on specific versions of the operating system. This alone confused her to such a degree that she switched topics.

If average end users cannot grasp the concept of software versions, how the hell are they supposed to figure out AppImages?!

If you watched the entire video you'd notice some guy mentioning he could perfectly run Linux applications from 1995 in his 2014 Debian system. That's a good point. Notice, however that all the applications he mentioned were console applications with most likely a bare minimum of dependencies, e.g. they didn't use anything outside Glibc. I dare him run a KDE1.0 application in his Debian 2021 system. It will fail spectacularly. Meanwhile most properly written Win32 applications (using only the official APIs and not using drivers) written for Windows 95 work just fine in Windows 10 26 years later. At least for X11 we had its own GUI APIs, including Xlib (LibX11) and XCB. Wayland on the other hand offers absolutely nothing aside from pushing bitmaps onto the screen, so native Wayland applications will have an even shorter life span.

If KDE can't be arsed to provide backwards compatibility, it's KDE's fault. The same applies to X.Org and Wayland. For something you can use without paying a buck, you should rather be glad that CLI programs are this reliable and pretty much immune to planned obsolescence.

But let's jump straight to the last section, since it is pointless to repeatedly point out his misconceptions:

At the same time if you're buying and deploying new workstations you might consider installing Linux. By doing so you'll be helping the open source community by increasing the userbase and possibly finding, reporting and even eliminating bugs in case you have software developers in your organization. Of course, you might want to run applications which have no equivalents under Linux. In this case you have two options: you may either run Windows as a virtual machine or you may try using Wine. Wine is very powerful software which allows you to run Windows applications under Linux at near native speed (sometimes even faster).

A bigger userbase ≠ a higher chance of bug discovery. Average users will likely only notice graphical glitches and, more often than not, shy away from forums and mailing lists. The majority of them probably do not even know what GitHub is, let alone how to submit an issue. And depending on the software developers and their working environment, I would not count on them contributing much to the Linux ecosystem beyond the “one-commit wonders” that are already common practice.

And I personally would advise against using Wine, as porting via Wine's Winelib requires people to voluntarily recompile Windows applications against it. There are also additional obstacles faced by Wine developers:

The project has proven time-consuming and difficult for the developers, mostly because of incomplete and incorrect documentation of the Windows API. While Microsoft extensively documents most Win32 functions, some areas such as file formats and protocols have no publicly available specification from Microsoft, and Windows also includes undocumented low-level functions, undocumented behavior and obscure bugs that Wine must duplicate precisely in order to allow some applications to work properly.

Another concern is the security of Wine:

Because of Wine's ability to run Windows binary code, concerns have been raised over native Windows viruses and malware affecting Unix-like operating systems as Wine can run limited malware made for Windows. A 2018 security analysis found that 5 out of 30 malware samples were able to successfully run through Wine, a relatively low rate that nevertheless posed a security risk. For this reason the developers of Wine recommend never running it as the superuser. Malware research software such as ZeroWine runs Wine on Linux in a virtual machine, to keep the malware completely isolated from the host system. An alternative to improve the security without the performance cost of using a virtual machine, is to run Wine in an LXC container, as Anbox software is doing by default with Android.

Another security concern is when the implemented specifications are ill-designed and allow for security compromise. Because Wine implements these specifications, it will likely also implement any security vulnerabilities they contain. One instance of this problem was the 2006 Windows Metafile vulnerability, which saw Wine implementing the vulnerable SETABORTPROC escape.

Given that Tashkinov considers the Linux kernel “one of the most vulnerable pieces of software in the entire world”, he seemingly does not care at all about security when it comes to package management and compatibility layers for Windows programs.
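
If you do use Wine anyway, the least you can do is follow the developers' advice quoted above: run it as an unprivileged user and keep each program in a disposable prefix. A minimal sketch, where `app.exe` and the prefix path are placeholders:

```shell
# Keep this application's fake "C: drive" separate from the default prefix.
export WINEPREFIX="$HOME/.wine-sandbox"
mkdir -p "$WINEPREFIX"

# Never run this as root; the guard keeps the sketch harmless on
# machines where wine is not installed at all.
wine app.exe || echo "wine failed or is not installed"

# Throwing the prefix away removes everything the program wrote.
rm -rf "$WINEPREFIX"
```

A fresh prefix per untrusted program limits what Windows malware can touch, though it is containment, not a sandbox in the strict sense.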

Surprisingly enough, this article is still not finished. And as you can tell, the further I got into it, the closer I came to losing my cool entirely.

Additions to and well-grounded critiques of this list are welcomed. Mind that irrational comments lacking substance or factual information might be removed. Anonymous comments are pre-moderated disabled. I'm tired of anonymous haters who have nothing to say. Besides, Disqus sports authentification via Google/Twitter/Facebook and if you don't have any of these accounts then I'm sorry for your seclusion. You might as well not exist at all.

This isn't a work in progress any longer (however I update this list from time to time). There is nothing serious left that I can think of.

Whoever uses the term “hater” unironically should not be trusted with critical tasks or anything serious in general. The last time I encountered a guy who called everyone disagreeing with him a “hater”, he first assumed I had been fat as a teenager, then laughed about some serious family drama I went through, and finally blocked me for playfully correcting one of his typos on Twitter.

Anyone unable to comment anonymously should be glad not to give this guy any direct attention. He believes he is right, and nothing will change that. Whether this piece will get him a permanent job in Australia is something I would rather not comment on (...okay, I admit that I would not hire him – and that would be a bold move from an amateur with only one year of experience like me).

And just as I wanted to skim the comments, the latest one came directly from Tashkinov, sharing a critical article about Flatpak. He still won't update his article to include that piece, which will stay in the comments, unseen by those using NoScript to block unnecessary JavaScript.


The main difference between Windows/macOS and Linux/BSD-based operating systems is that the latter allow you to mess around and customize your OS to your liking. Linux and BSD distributions do not ship with pre-installed freemium software like Candy Crush, do not try to nudge you into paying for software subscriptions, and also do not need telemetry data to work decently enough to get various tasks done.

So far, I have tested dozens of Linux distributions (Debian, Ubuntu, elementary, Mageia, MX, Arch, Manjaro, Kodachi, Kali, Mint, Solus, Fedora, openSUSE, and even Void, to name a few), and even though I ended up settling on Archcraft and EndeavourOS, this does not mean that I do not see their individual potentials and downsides. Some are great (Debian, MX), while others are just... not that great (elementary, which corrupted my main drive while editing my partition table). This also means that my current daily drivers are not flawless: my EndeavourOS often runs into the pacman lock file issue and my Archcraft suffers from mild screen tearing on both my AMD- and Intel-powered machines, yet I can fix both issues easily myself and do not have to wait for updates that fix one thing and break two others in the process. When I do encounter other issues, they are often the result of my USB drive partially failing during the install process.
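
The pacman lock, for the record, is the kind of “issue” that takes one line to resolve. A sketch of the usual fix – the real lock lives at /var/lib/pacman/db.lck; a placeholder path under /tmp is used here so the sketch can run anywhere without root:

```shell
# Simulate a stale lock left behind by an interrupted pacman run.
LOCK=/tmp/db.lck
touch "$LOCK"

# Only remove the lock once we are sure no pacman process is still running.
if ! pgrep -x pacman > /dev/null; then
    rm -f "$LOCK"   # on a real system: sudo rm /var/lib/pacman/db.lck
fi
```

Hardly the stuff of a desktop-breaking catastrophe.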

Still, what works best for me – a minimal setup with just a window manager (Openbox) – might not, and often will not, work for everyone, but that is the great thing about the Linux ecosystem. No one is forced to work with my setup, just as no one forces me to use something like elementary. Tashkinov perceives diversity as an entirely negative aspect when, in reality, it can be a blessing and a curse simultaneously. I can imagine that the situation was far worse in the 90s and has improved massively since then, especially with regard to Debian, which is now one of the most popular Linux distributions. Sure, the size of the kernel is very much a concern I share, which is why I want to give OpenBSD a fair test at some point, to gain a better understanding of both Linux and BSD and their possibilities.

BUT most of the stuff Tashkinov addresses are things people like the clerk from my health insurance will not comprehend without experiencing a mental BSOD. All they care about is an operating system that boots and gets their tasks done without causing regular headaches. They do not give a shit about what display manager is running in the background or what specific drivers power their machine's components. Hell, I am sure they would quickly adjust to a different OS as long as it does not differ significantly from what they are used to – the LiMux project demonstrated that end users are very much open to using something that is not Windows, and that the main problem often resides one or several positions above them. Tashkinov is not only out of touch with the communities surrounding Linux and Open Source but also with end users who only know how to start a computer and make a spreadsheet in Excel. This is why my personal advice would be to test things yourself according to your abilities, seek additional guidance if necessary, and not listen to people like Tashkinov.

(Ironically enough, some people on Hacker News noted that Tashkinov's original site was filled with malware back in 2015.)