

TIL there is a whole ass mediawiki for explaining XKCD comics.
IT nerd




Kind of, TrueNAS has “CORE” which is FreeBSD and “SCALE” which is Linux.
If you’re on CORE 13.X you can actually side-grade over to SCALE.
CORE is maintenance-only and SCALE is the path forward. So you can still get some updates on CORE I think, but everyone should be switching over to SCALE, or starting on SCALE, from here on out.
Proxmox recommends not installing anything directly on the Proxmox host/bare metal.
Personally I would set this up as:
Proxmox installed on whatever single disk or raid 1 array.
Create a TrueNAS (or whatever OS you want) VM inside Proxmox. Mount the rest of the drives directly to the TrueNAS VM via Proxmox’s interface.
In the TrueNAS VM, take the drives that were mounted directly to it and set up your array and pool(s) to your preference.
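That “mount the drives directly” step can also be done from the Proxmox shell. A minimal sketch, assuming VMID 100 and made-up disk IDs (use your own from /dev/disk/by-id/):

```shell
# Hypothetical example: pass two whole disks through to the TrueNAS VM.
# VMID 100 and these disk IDs are illustrative, not from the original post.
# Using /dev/disk/by-id/ paths keeps the mapping stable across reboots.
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE1
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE2
```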
Now, I’d say you have two paths from this point:
Run the arr stack directly inside TrueNAS using its built-in apps/VMs,
OR
Share the pool from TrueNAS over the network (NFS/SMB) and run the arr stack in its own Proxmox VM/container, attached via network mounts and a vmbridge.
Either option has pros and cons. Doing everything inside TrueNAS is a bit simpler, but it complicates your TrueNAS setup and you’re at the mercy of how TrueNAS manages VMs (backups, restores, etc.). On the flip side with Proxmox, setting up the vmbridge and the network mounts is more work initially, but keeping the arr stack in a Proxmox VM/container lets you take direct snapshots and backups of the arr stack, and if you ever need to rebuild it, or change it to another arr-style set of tools, you can blow away the Proxmox VM, start fresh, and re-set up the network mounts.
Or don’t do any of the above and just install TrueNAS on the box directly as the baremetal OS and do everything inside TrueNAS.
0 bytes free is a broken environment. So that requires a fix during moratorium IMO.
Mint 21 still has support until 2027, so not exactly needed…but I get it when you only see certain family members during specific times of the year.
I’m just saying that doing a full migration from ESXi to Proxmox, having to back up all the VMs and import or recreate them, and doing all of this during the holidays… I’d rather just sit on the couch and enjoy family time than be stuck in my garage or glued to my laptop.
Upgrading a family member’s laptop while shooting the shit with everyone while drinking a beer or something is just fine. Don’t need 100% focus, you’re good there man.
At work we have a nearly 2 week moratorium that covers Christmas and New Years. We do zero changes unless something breaks on its own. So everyone can take time off without worrying too much.
So I do the same for my homelab. I’ll spin up new stuff for fun(new docker containers to try out new apps), but I don’t touch my stable stuff. No reboots, no updates, no image pulls, nothing.
Conflating “1% of users” with “1% of revenue” is pretty asinine lol
The regular average joe isn’t going to switch on their own. The average person sees Windows and MacBook as their two options for a laptop.
Also, Linux isn’t a replacement for Windows. It’s its own thing, with its own issues but also advantages.
And most young people don’t have their own computers growing up anymore, it’s all phones and tablets and in the USA apple has won that race.


I wonder where they got 100hr?
I wonder if there’s some metric they’re going off of where the majority of the subscriber base only plays less than 100hrs and the “abusers” or whales play over the 100hr mark.
100hr / 30 days is 3.3 hours a day. Which as a father of two… I’d be lucky to get that much in a day.
100hr / 20 days(5 days a week) is 5 hours a day.
100hr / 8 days (weekends only gaming) is 12.5 hours a day.
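The per-day math above, sketched as a quick shell check:

```shell
# hours/day implied by a 100-hour monthly cap under different schedules
for days in 30 20 8; do
  awk -v d="$days" 'BEGIN { printf "100 hr / %2d days = %4.1f hr/day\n", d, 100 / d }'
done
```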
None of these are outrageous and probably are the “average” user of the service.
Now if you’re doing 8 or 12 hours a day for 30 days, that’s 240-360 hours a month. Which is pretty much gaming full time.
I think 100 hours is a weird number to land on. I think 120 hours makes more sense (4 hours a day over 30 days).
I do expect Nvidia to lower the hours over time. Expect to see 80 hours or 50 hours soon IMO.


I’m already trying out LibreWolf on desktop and IronFox on mobile.
So far everything is working, probably another week of testing/using and then I’ll just uninstall Firefox.
Gnome is good if you want a Mac-lite interface and have zero plans to customize it. Install more than 2 or 3 extensions and your DE breaks.
Or just install any other DE and have a working distro again.
Kubuntu is fine. I’ve been running that without issues for months now.
Bazzite is good too. But do push for the KDE version.
I’m as much of a nerd as the next guy, but hanging out with my HOA president? Probably the second to last thing I would ever want to do on this planet, and the last being living in a HOA.
No one is going to be interested in…whatever you’re trying to setup. Sorry buddy
Just a heads up that you can browse and set filters on SteamDB to see new historical lows for games (obviously based on Steam pricing only).
Some notable historical lows are Silksong, KCD2, Arc Raiders, Hades II, Megabonk…
I do this as well. Though if I’m deploying a stack (grafana+prometheus+cadvisor) then it all goes under a single folder like /opt/stackname/
But if I’m running multiple services that are mostly separate or not in the same stack then they go in their own folders like /opt/nginx/ and /opt/grafana/
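A sketch of that layout; the folder names come from the examples above, and the base directory is a temp dir here so the snippet runs without root (on a real host it would be /opt):

```shell
# One folder per stack, one folder per standalone service.
base="$(mktemp -d)"                      # stand-in for /opt in this sketch
mkdir -p "$base/stackname"               # whole stack shares one compose file
mkdir -p "$base/nginx" "$base/grafana"   # separate services, separate folders
# each folder then holds its own docker-compose.yml, e.g.:
#   docker compose -f "$base/stackname/docker-compose.yml" up -d
ls "$base"
```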


Yes, essentially I have:
Proxmox Baremetal
↪ LXC1
  ↪ Docker Container 1
↪ LXC2
  ↪ Docker Container 2
↪ LXC3
  ↪ Docker Container 3
Or using real services:
Proxmox Baremetal
↪ Ubuntu LXC1 192.168.1.11
  ↪ Docker Stack ("Profana")
    ↪ cadvisor
    ↪ grafana
    ↪ node_exporter
    ↪ prometheus
↪ Ubuntu LXC2 192.168.1.12
  ↪ Docker Stack ("paperless-ngx")
    ↪ paperless-ngx-webserver-1
    ↪ apache/tika
    ↪ gotenberg
    ↪ postgresdb
    ↪ redis
↪ Ubuntu LXC3 192.168.1.13
  ↪ Docker Stack ("teamspeak")
    ↪ teamspeak
    ↪ mariadb
I do have an AMP game server; AMP itself is installed directly in its Ubuntu container, but AMP uses Docker to create the game servers.
Doing it this way (individual Ubuntu containers with Docker installed on each) allows me to stop and start individual services, take backups via Proxmox, restore from backups, and also manage things a bit more directly with IP assignment.
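The stop/start and backup part maps onto Proxmox’s CLI roughly like this (container ID 101 is a made-up example; pct and vzdump are the stock Proxmox tools):

```shell
# Stop and start one service's LXC without touching the others
pct stop 101
pct start 101
# Snapshot-mode backup of just that container, zstd-compressed
vzdump 101 --mode snapshot --compress zstd
```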
I also have pfSense installed as a full VM on my Proxmox and pfSense handles all of my firewall rules and SSL cert management/renewals. So none of my ubuntu/docker containers need to configure SSL services, pfSense just does SSL offloading and injects my SSL certs as requests come in.
I have an old Windows laptop. I need to figure out how to do dual boot with Linux
For this I would recommend:
1. Install Windows first.
2. From inside Windows, shrink the Windows partition and leave the freed space unallocated for Linux.
3. Install Linux into that unallocated space and let grub manage the boot menu.
Now why do it this way? Because Windows does NOT like the boot manager being replaced and does NOT like disk space going “missing” unless it allocated it itself. If you install Windows first, it’ll set up the boot manager for Windows, and then when you install Linux, grub will get installed and grub can manage Windows pretty well.
And if you let Windows partition off the blank space for Linux, then Windows knows that empty partition isn’t owned by Windows anymore, and it won’t freak out seeing the space go missing when Linux takes it over.
This article covers most: https://linuxblog.io/dual-boot-linux-windows-install-guide/
If you have two individual disk drives then I would do the same thing: install Windows on one of the drives, boot into Windows, and make sure the second drive shows up in Disk Management but isn’t formatted for use in Windows, just unallocated/blank. Then when you install Linux you just tell it to install onto the second drive.
and get my vpn sorted (again) so he can use VMs on my Proxmox box
I would 100% recommend Tailscale for this. You can install Tailscale on the Proxmox host and then have your nephew have his own Tailscale account where you can give him access to only the Proxmox box.
I do this with my Proxmox boxes so I can remotely manage them wherever I am. When you first install Tailscale on Proxmox it may require a reboot, so I would recommend being near the server so you can log in physically if needed, but after that it has been smooth sailing for me. Been using it like this for a year or two now.
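For reference, the install on the Proxmox host is just Tailscale’s standard quick-install script (their documented path; inspect the script first if piping to sh bothers you):

```shell
# On the Proxmox host (Debian-based): install Tailscale and join the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up    # prints a login URL to authenticate this node
```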
Of course just a suggestion.


We’re a Linux shop at my work. We do have Windows PCs due to corporate policies… but everything we do on our Windows PCs we could do from Linux.
Outlook? Website. Excel? Website. Jira? Website. Teams? Website. Nearly everything we do front end wise is all web based. Which, I know electron sucks, but from a “Linux as a main desktop environment”…I’m pretty damn happy with everything being web based nowadays. It’s all OS agnostic.


There are a lot of great commands in here, so here are my favorites that I haven’t seen yet:
Need to push a file out to a couple dozen workstations and then install it?
for i in $(cat /tmp/wks.txt); do echo "$i"; rsync -azvP /tmp/file "$i":/opt/dir/; ssh -qo ConnectTimeout=5 "$i" "touch /dev/pee/pee"; done
Or script it using if/else statements, where you pull info from the remote machines to see if an update is needed and then push the update if it’s out of date. And if it’s in a script file then you don’t have to search through days of old shell history to find that one function.
Or just throw that script into crontab and automate it entirely.
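A minimal sketch of that if/else version. The hostnames list and file paths match the one-liner above, and the needs_update/push_updates names are illustrative:

```shell
# needs_update FILE REMOTE_SUM: succeeds when the local file's md5sum
# differs from the checksum reported by the remote host.
needs_update() {
  [ "$(md5sum "$1" | awk '{print $1}')" != "$2" ]
}

# push_updates: walk the workstation list and rsync only the stale hosts.
push_updates() {
  while read -r host; do
    remote_sum=$(ssh -qo ConnectTimeout=5 "$host" 'md5sum /opt/dir/file 2>/dev/null' | awk '{print $1}')
    if needs_update /tmp/file "$remote_sum"; then
      echo "updating $host"
      rsync -azP /tmp/file "$host:/opt/dir/"
    fi
  done < /tmp/wks.txt
}
```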


You can do “ss -aepni” and that will dump literally everything ss can get its hands on.
Also, ss can’t find everything; it does have some limitations. I believe ss can only see what the kernel can see (host connections), but tcpdump can see the actual network flow on the network layer side. So incoming, outgoing, hex(?) data in transit, etc.
I usually try to use ss first for everything since I don’t think it requires sudo access for the majority of its functionality, and if it can’t find something then I bring out sudo tcpdump.
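For the tcpdump side, the flags that show the actual on-the-wire payloads look like this (the interface and port are examples):

```shell
# -n: skip DNS lookups, -i: capture interface, -X: dump payloads in hex+ASCII
sudo tcpdump -n -i eth0 -X port 443
```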
In my experience, people will move with their interests.
I’ve been using reddit for probably 10-15 years. I used to send links to my wife(then girlfriend), but she never used reddit.
In the last year she made a reddit account after moving off of tiktok.
Now I’m on Lemmy pretty much full time because I prefer smaller communities and more specific topics, also less normies.
Trying to browse reddit is like talking with boomers and AI now. No thanks.