

Huh? Yeah it is… It’s a self-signed intermediate CA, signed by a self-signed root CA.
In my case a mini-CA in my LAN.
Jellyfin doesn’t accept self-signed certs.
Huh?? My jellyfin.home.lab self-signed certificate would like a word… Just put everything behind a reverse proxy (in a self-hosting community you will sooner or later be confronted with one anyway…) and you get all your services behind self-signed certs. Doesn’t matter whether Jellyfin accepts them or not… it’s all encrypted through your reverse proxy !
I don’t get that…
I have a self-signed SSL certificate and intermediate CA installed on all my devices, and it works flawlessly with every application that accepts them (on Android the app’s manifest.xml has to allow user-added certificates, which is the case for most apps).
One exception on Android was MPV, which doesn’t allow that (and probably never will?). However, the web player video type in the official application works without issues…
I have Navidrome, Jellyfin, Ironfox, LibreTube, KOReader, Findroid… All work flawlessly with self-signed certs !
The issue here (as said in the second answer of the linked Jellyfin post) is that they need a reverse proxy that takes care of the SSL handshake, instead of Jellyfin doing it directly. So OP was missing a lot of good information in their first post…
I thought the recommended swap size was double the RAM, up to 16 GB? So at 32 GB of RAM, use 32 GB of swap?
Production is my testing lab, but only in my homelab ! I guess I don’t care about perfectly securing my services (really dumb and easy passwords, no 2FA, passwords sitting in plain sight…) because I’m not directly exposing them to the web and I access them externally via WireGuard ! That’s really bad practice though; I’ll probably clean up that mess some time soon, but right now I can’t, I have to cook some eggs…
There are 2 things though where I actually have a somewhat more complex workflow:
A rather complex incremental, automated backup script for my docker container volumes, databases, config files and compose files (rough sketch after this list).
Self-hosted mini-CA to access all my services via a nice .lab domain and get rid of that pesky warning on my devices.
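
A minimal sketch of what one such backup run looks like, assuming bind-mounted volumes and the docker CLI; the paths and container names are made up for illustration, and my real script adds the incremental handling on top:

```python
#!/usr/bin/env python3
"""Sketch of a per-container volume backup (assumed paths/names, not my exact script)."""
import subprocess
import tarfile
from datetime import date
from pathlib import Path

VOLUMES = Path("/srv/docker/volumes")    # assumed location of the bind-mounted volumes
BACKUPS = Path("/srv/backups")           # assumed backup target
CONTAINERS = ["navidrome", "jellyfin"]   # hypothetical container names

def backup(name: str) -> None:
    src = VOLUMES / name
    dest = BACKUPS / f"{name}-{date.today()}.tar.gz"
    # Stop the container so databases and config files are in a consistent state.
    subprocess.run(["docker", "stop", name], check=True)
    try:
        with tarfile.open(dest, "w:gz") as tar:
            tar.add(src, arcname=name)
    finally:
        # Always bring the service back up, even if archiving failed.
        subprocess.run(["docker", "start", name], check=True)

if __name__ == "__main__":
    BACKUPS.mkdir(parents=True, exist_ok=True)
    for container in CONTAINERS:
        backup(container)
```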
I always run some tests to check my backups are actually working, in a VM on my personal desktop computer, because no backups means all those years of tinkering were for nothing… That would bring on some nasty depression…
Edit: I have a rather small homelab, everything on an old laptop; still quite happy with the result and it works as expected.
If you change your mind someday, just send me a PM !
Just create a wildcard domain certificate !
I access all my services in my lan through https://servicename.home.lab/
I just had to add the root CA certificate (actually the intermediate certificate) to the trust store on every device. That’s what they actually do too, just in an automated way !
Never had an issue accessing my services with my self-signed certs, neither on Android, iOS, Windows nor Linux ! Everything is served from my server via my reverse proxy of choice (Traefik).
However, I do remember there was something important to get my Android device to accept the certificate (something in the certificate itself and its extensions).
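
If it helps, here is a rough sketch of issuing such a wildcard leaf certificate with Python’s cryptography package; the *.home.lab name and the CA file names are assumptions, and the subjectAltName extension is (as far as I remember) the part Android and modern browsers actually insist on:

```python
"""Sketch: issue a wildcard cert for *.home.lab signed by an existing mini-CA.
CA key/cert file names and the domain are assumptions for illustration."""
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Load the (intermediate) CA that all devices already trust.
ca_key = serialization.load_pem_private_key(open("ca.key", "rb").read(), password=None)
ca_cert = x509.load_pem_x509_certificate(open("ca.crt", "rb").read())

leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "*.home.lab")]))
    .issuer_name(ca_cert.subject)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=825))
    # The SAN extension is what modern clients check, not the common name.
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("*.home.lab"), x509.DNSName("home.lab")]),
        critical=False,
    )
    .sign(ca_key, hashes.SHA256())
)

open("wildcard.home.lab.crt", "wb").write(cert.public_bytes(serialization.Encoding.PEM))
open("wildcard.home.lab.key", "wb").write(
    leaf_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    )
)
```

Hand the .crt and .key to your reverse proxy and every servicename.home.lab behind it is covered by the one certificate.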
If you’re interested I can send you a snippet from a book on fully hosting your own CA :). It’s a great read and easy to follow !
Ohhh thanks for the clarification ! As you guessed I’m not into dev/programming so I wasn’t aware of this kind of detail !
Thank you :)
Edit: Now semver makes sense !
I mean, where else should they show that warning? It’s also posted in the forum. They also edited the documentation page.
Maybe you’re more into mailing lists or the like? I’m genuinely curious about what/how/where you expected to get this kind of information.
Really cool stuff !! Something I need to try out for sure !
Just too bad they didn’t add a multi-user setup example :( !
If you are doing any kind of multiuser mail node, you should have a separate SMTP system in front of this one that performs any necessary validation.
Maybe something worth a shot is a direct WireGuard server/client connection. While I don’t know how it behaves with double NAT (WireGuard client behind double NAT), making your home server act as a direct tunnel endpoint would solve all your issues.
IIRC, Tailscale uses WireGuard under the hood, and you’re already hosting things on your home server, so maybe this could be worth a try :) !
Yeah, individually your data is worth approximately $200/year (that’s a real estimate I read somewhere, not something I pulled out of my ass).
So yeah, not worth much, you’re right. But if you stop being selfish for a moment, think as a community, and multiply that amount by 1 billion people on earth, how much is that worth? Yeah, you guessed it… it’s a huge amount of money ^^ !
So stop thinking only about yourself and start thinking about how we are all involved in this shit and should fight back as much as possible…
Ohhhh ! Docker containers are awesome. If you have an old spare laptop lying around (or you know someone who has), give it a try, it’s fantastic ! It’s similar to a virtual machine, but different ! It solves the big issues virtual machines have: it’s fast, portable, lightweight and memory efficient… because it shares the underlying OS !
I have a 10-year-old laptop that is going strong with over 21 docker containers, which wouldn’t be possible with VMs ! You can host any imaginable service (if available as a docker image) in seconds, put it behind a reverse proxy and access it through your LAN (or externally over a WireGuard connection).
Let’s take a media workflow example: if you want to get rid of something like YouTube Music, Spotify, Deezer… and maintain your music library and own your music:
You can self-host MeTube (to grab the music) and Navidrome (to stream your library).
Install NewPipe (hope you’re on Android :s) and HTTP Shortcuts to glue everything together ! HTTP Shortcuts lets you talk to your self-hosted MeTube service via POST/GET requests, so you can share links straight from NewPipe to your MeTube instance. You can then have a background script on your server which removes/changes the pesky YouTube metadata and sends your files to your Navidrome service !
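
To give an idea, this is roughly the request such a shortcut fires at MeTube; the /add endpoint, the JSON fields and the hostnames are assumptions from my own setup, so double-check them against your MeTube version:

```python
"""Sketch of the POST an HTTP Shortcuts entry would send to MeTube.
Endpoint path, JSON fields and hostnames are assumptions - verify against your instance."""
import requests

METUBE = "https://metube.home.lab"  # hypothetical reverse-proxied MeTube instance
VIDEO_URL = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"

resp = requests.post(
    f"{METUBE}/add",
    json={"url": VIDEO_URL, "quality": "best", "format": "mp3"},
    # Point requests at the mini-CA so the self-signed chain is trusted (assumed path).
    verify="/usr/local/share/ca-certificates/home-lab-ca.crt",
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```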
This is a rather “complex” workflow, but just to say it’s possible. Sure, depending on your skills with your OS, it will take some time to get accustomed to docker containers and the like ! It took me approximately a year to really get used to this whole new workflow (and to get the hang of Linux), but now it’s only a matter of minutes !
Another use case for your phone: encrypted backups of your docker containers ! Nowadays phones come with a lot of spare space (over 120 GB). Encrypted, with scrambled file/directory names, and archived !
I wouldn’t back up any critical data this way though ! It’s more of an “in case” emergency backup for docker database and config volumes !
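
As a rough sketch of the archive → encrypt → scramble-the-name idea (paths and the Fernet-based encryption are assumptions for illustration, not my exact setup; keep the key somewhere other than the phone):

```python
"""Sketch: archive a docker config volume, encrypt it, and hide its name.
Paths are assumptions; key management is deliberately left out."""
import hashlib
import tarfile
from pathlib import Path
from cryptography.fernet import Fernet

SOURCE = Path("/srv/docker/volumes/navidrome")  # hypothetical config/database volume
STAGING = Path("/srv/phone-sync")               # folder that gets synced to the phone
KEY_FILE = Path("/root/backup.key")             # contains the output of Fernet.generate_key()

STAGING.mkdir(parents=True, exist_ok=True)
archive = STAGING / "plain.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(SOURCE, arcname=SOURCE.name)

# Fine for small config/database volumes: read, encrypt, write back out.
encrypted = Fernet(KEY_FILE.read_bytes()).encrypt(archive.read_bytes())
# Scrambled file name: nothing on the phone reveals which service this belongs to.
scrambled = STAGING / (hashlib.sha256(SOURCE.name.encode()).hexdigest()[:16] + ".bin")
scrambled.write_bytes(encrypted)
archive.unlink()
```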
Same thought here ! WireGuard being based on private/public key pairs, even if the port is open, every packet that isn’t signed with a valid key gets silently dropped !
From a bot’s perspective this means the port looks closed !
I’m not an expert in the field, but there’s also a way to allow only key-based authentication with SSH; I’m just not sure how good/secure it is compared to WireGuard.
As you said, I’m also too scared to leave an open SSH server running on my small homelab 😅 !
That’s some crazy stuff ! Being able to completely change/repair every part is something every smartphone should be capable of…
We are in a buy-and-throw-away generation, amidst a big climate change issue and rare-earth depletion… That’s depressing.
You’re right ! And because OP wants to archive Reddit pages, I propose an alternative to reduce that bloated site to a minimum :).
From my tests, it can go from 20 MB down to 700 bytes. IMO still big for a chat conversation, but the readability of the alternative front-end is a plus !
For Reddit, SingleFile HTML pages can be 20 MB per file ! Which is huge for a simple discussion…
To reduce that bloated but still relevant site, redirect to any still-working alternative like https://github.com/redlib-org/redlib or old Reddit, and your pages shrink to less than 1 MB per file.
Back in the day, that’s what I did A LOT on Windows. Especially because of piracy and my younger self having no idea what he was doing XD !
Still happens on Linux with EndeavourOS, but not for the same reasons ! There are a million more ways to break stuff on Linux, but I always learn something new in the process.
Story time:
Learned the other day that some config files are loaded in a specific order, depending on which display manager is installed. That was kind of eye-opening to understand, because my system didn’t load .profile when .bash_profile was present and I didn’t understand why ! Thanks ArchWiki !
Ohhhhh ! Sometimes I just need to shut up !
Thanks for the clarification.