I’d say actually a bit of the opposite. Generally speaking, we don’t need a new package manager or init system, and better hardware support is almost entirely a kernel concern. (One might make an argument that the loose bits of key management, tpm2 tools, and authentication agents could be better integrated for a “Windows Hello” type of function, I suppose, but I doubt that’s what the meme had in mind.)
Not really needing to reinvent the wheel on those, we’ve got a variety of wheels, sometimes serving different sensibilities, and sometimes any difference in capability went away long ago (rpm/dnf vs. deb/apt).
The best motivation I can think of at this point is to make a specialty distribution that is ‘canned’ toward a specific use case. Even then, it’s probably best to be an existing distribution under the covers. I think Proxmox is a good example: it’s just Debian, but with an installer made to just do Proxmox. Want automated installation? Just use Debian and then add Proxmox (the official recommendation), because they have no particular insight on automated deployment, so why not defer to an existing facility?
The biggest conceptual change in packaging has been “waste as much disk as you like duplicating dependencies to avoid conflicting dependencies”, maybe along with “use namespace and cgroup isolation to better control app interactions”, and snap, flatpak, appimage, and nix already cover the gamut for that concept.
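To make that second idea concrete, here’s a minimal Python sketch of the mount-namespace isolation that sandboxing packagers build on. It assumes Linux and root (or CAP_SYS_ADMIN); the constant comes from <sched.h>, and real tools like flatpak layer far more on top (bind mounts, seccomp, cgroup limits):

```python
# Minimal sketch of namespace isolation, the primitive snap/flatpak-style
# sandboxes are built on. Linux-only; requires root or CAP_SYS_ADMIN.
import ctypes
import os

CLONE_NEWNS = 0x00020000  # new mount namespace, from <sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def unshare(flags: int) -> None:
    """Detach this process from the given namespaces (see unshare(2))."""
    if libc.unshare(flags) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

if __name__ == "__main__":
    unshare(CLONE_NEWNS)  # mounts made after this are invisible to the host
    # A real sandbox would now mount its own /usr, bind in a runtime, etc.
    print(f"pid {os.getpid()} is in a private mount namespace")
```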
For init, we have the easy-to-modify sysv init, or the more capable but more inscrutable systemd. I don’t see a whole lot of opportunity left between those two sorts of options.
It’s usually easier to criticize something than to go through the effort of understanding it. Posts like the OP are an example of that.
… And ironically, your post is doing the same thing here with software packaging:
The biggest conceptual change in packaging has been “waste as much disk as you like duplicating dependencies to avoid conflicting dependencies”,
Nobody is perfect, so it’s important to keep an open mind about things, especially when one doesn’t understand them, and especially² when one thinks one understands them, as it’s always possible to be wrong (unless they don’t care about going through life as an ignorant asshole; plenty of people thrive like that).
I understand it fine, and it’s not just a packaging phenomenon; all sorts of software developers have stopped trying to reach consensus on a platform and instead ‘just ship the box’. 99% of the time a Python application will demand at least a virtualenv. Golang? Well, you’re just going to build statically (at least LTO means less unrelated stuff comes along for the ride). And of course docker-style packaging is bringing the whole distro. I’ll give credit to snap and flatpak for at least allowing packages to declare external dependency packages to mitigate it somewhat.
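For the virtualenv point, a small Python sketch of the pattern using only the stdlib; the /opt/someapp paths and requirements file are hypothetical, just to show the per-app duplication being described:

```python
# Sketch of the "ship your own deps" pattern: every app gets a private
# virtualenv instead of sharing system packages. Paths are hypothetical.
import subprocess
import venv
from pathlib import Path

env_dir = Path("/opt/someapp/venv")   # hypothetical install location
venv.create(env_dir, with_pip=True)   # duplicate interpreter scaffolding per app

# Install the app's pinned dependencies into its private environment,
# trading disk space for freedom from system-wide version conflicts.
subprocess.run(
    [env_dir / "bin" / "pip", "install", "-r", "/opt/someapp/requirements.txt"],
    check=True,
)
```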
I like novel implementations of these things; it’s the reason why Linux as it is today is so good. People were willing to try novel methods of package management, and the repo model worked great.