Ruaidhrigh

       I AM THE LAW.
 Ruaidhrigh featherstonehaugh

Besides, Lemmy needs reactions

  • 2 Posts
  • 24 Comments
Joined 3 years ago
Cake day: August 26th, 2022


  • I think you’re probably looking a step more “enterprise” than I am. I’m doing nightly backups to B2; if one of my servers dies, my recovery is to spend a couple of minutes re-installing Arch and then a couple of hours restoring from backup. My services are predominantly in containers, so it really is just a matter of install, restore, reboot. There are things I inevitably miss, like turning on systemd’s persistent user services; my recovery time is on the order of hours, and since I’m not actively monitoring for downtime, it could take a few more hours for me to even notice something is down.

    Like I said: totally not enterprise level, but then, if I have a day’s outage, it’s an annoyance, not a catastrophe.


  • Lots of good ideas.

    I’m a fan of stow-like tools, but there are advantages to using something like Salt (or similar) if you’re dealing with VPSes that don’t share common configs, like firewalls. There’s a lot to learn with things like Salt/Chef/Puppet/Attune/Ansible, whereas something like yas-bdsm, which is what I’m currently using, is literally just:

    1. Keep your configs in a git repo, in a structure that mirrors your target
    2. Run a command and it creates symlinks for the destination files
    3. Commit your changes and push them somewhere. Or just restic-backup the repo.

    The config file formats are irrelevant; there’s no transformation logic to learn. Its greatest feature is its simplicity.
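
    A minimal sketch of that symlink step (yas-bdsm’s actual commands may differ; the paths here are throwaway temp dirs, not a real system):

```shell
# Sketch of the "repo mirrors target, then symlink" workflow.
set -eu
repo=$(mktemp -d)    # the git repo, laid out to mirror the target
target=$(mktemp -d)  # stands in for / or $HOME
mkdir -p "$repo/etc"
echo "bind-interfaces" > "$repo/etc/dnsmasq.conf"
# "Run a command and it creates symlinks for the destination files":
(cd "$repo" && find . -type f | while read -r f; do
  mkdir -p "$target/$(dirname "$f")"
  ln -sf "$repo/$f" "$target/$f"
done)
cat "$target/etc/dnsmasq.conf"   # reads through the symlink
```

    The target tree ends up as symlinks into the repo, so editing a “live” config file edits the repo copy, and git status shows what drifted.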


  • But the way you described it sounded more nefarious

    Oh. Yeah, I don’t think they’re being malicious; I just get frustrated with that sort of behavior. The primary DNS servers for usps.com, neakasa.com, and vitacost.com all block DNS queries from Mullvad’s DNS servers, and one of them blocks all traffic from at least some of Mullvad’s exit nodes. It means I have to waste time working around these blocks, because I’ll be damned if I’m going to take down the house VPN just to visit their stupid sites. So, I hard-code DNS entries for them, and route traffic to the one that blocks exit nodes through one of my VPSes. It’s annoying, a waste of my time, and I’m just generally offended by the whiff of surveillance state about it, even when that’s not the reason they’re doing it.
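
    Hard-coding a DNS entry like that can be a one-line dnsmasq drop-in; a sketch (the address is from the RFC 5737 documentation range, not any site’s real IP, and the file name is arbitrary):

```
# /etc/dnsmasq.d/pins.conf
address=/store.example.com/203.0.113.10
```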

    Really, it boils down to the fact that I’m offended by the presumption that their (not OP, but VPN-hostile companies in general) anti-spam or whatever they’re trying to accomplish takes priority over my right to privacy. So, yeah; I generally have a bone to pick with any site that’s hostile to VPNs.

    Maybe that’s just my perception.

    I have no doubt at all that you’re right. And, they have no obligation to accommodate me (though I think that’s not true for companies I’m trying to do business with).

    I’m just uppity about the topic, is all.

    I enjoy these discussions. I sometimes gain some new knowledge out of them.

    I’ll happily have a cordial disagreement with anyone arguing in good faith. It’s echo-ey enough, and these are good conversations.


  • Hold on a tick.

    Specifically blacklisting a group of users because of the technology they use is, by definition, “targeting”, right? I mean, if not, what qualifies as “targeting” for you?

    And, yeah. Posting a sign saying “No Nazi symbolism is allowed in this establishment” is - I would claim - targeting Nazis. Same as posting a sign, “no blacks allowed” - you’re saying that’s not targeting?

    I know we’re arguing definitions and have strayed from the original topic, but I think this is an important point to clarify, since you took specific objection to my use of it in that context; and because I’m being pedantic about it.


  • No.

    Use S/MIME or PGP and directly encrypt emails to your recipient. This is the only E2E encryption available to email.

    The best metaphor for email I’ve found is that you’re writing your message on a postcard and handing it to your neighbor closest to the destination, who hands it to her neighbor, and so on, until it gets there. There are usually fewer hops, but also your email is broken into packets which could go through god knows how many routers, each of which can read your email.

    E2E requires setting up a private key; RFC 821 provided no such mechanism. Your only option is out-of-band negotiation, like PGP.

    There is a good proposal out there that sets mail headers announcing that you accept encrypted emails, and includes information about your ID, which clients could parse and verify against public key servers; it hasn’t really gained a lot of traction, as it causes issues for data harvesters but also on the end-user side. Like, how are notmuch and mairix supposed to handle these? They’d need permanent access to your private key to decrypt and index the emails, and then your index is unencrypted.
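
    This sounds like the Autocrypt proposal (an assumption; the comment doesn’t name it). An Autocrypt header carries the sender’s address, an encryption preference, and base64 key data, roughly like this (keydata truncated here):

```
Autocrypt: addr=alice@example.com; prefer-encrypt=mutual;
 keydata=mQENBF...
```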

    There’s been a fair amount of debate about this, and it’s a lot of work that would need coordinating between teams of volunteers… it hasn’t made much progress because of the complexity, but it’s a nice solution.


  • I know that none of them use a VPN for general-purpose browsing.

    Interesting. The most common setup I encounter is when the VPN is implemented in the home router - that’s the way it is in my house. If you’re connected to my WiFi, you’re going through my VPN.

    I have a second VPN, which is how my private servers are connected; that’s a bespoke peer-to-peer subnet set up in each machine, but it handles almost no outbound traffic.
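
    A bespoke peer-to-peer subnet like that is often done with WireGuard (an assumption; the tool isn’t named above). A minimal per-machine config, with hypothetical keys and addresses:

```
# /etc/wireguard/wg0.conf — keys and addresses are placeholders
[Interface]
Address = 10.10.0.2/24
PrivateKey = <this machine's private key>

[Peer]
PublicKey = <peer's public key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.10.0.1/32
PersistentKeepalive = 25
```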

    My phone detects when it isn’t connected to my home WiFi and automatically turns on the VPN service for all phone data; that’s probably less common. I used to just leave it on all the time, but VPN over VPN seemed a little excessive.

    It sounds like you were a victim of a DOS attack - not distributed, though. It could have just been done directly; what about it being through a VPN made it worse?


  • If Jekyll isn’t your jam, then Hugo probably won’t be, either.

    I have a simple workflow based on a script on my desktop called “blog”. I call it with “blog Some blog title” and it looks in a directory for a file named some_blog_title.md, and if it finds it, opens it in my editor; if it doesn’t, it creates it using a template.md that has some front matter filled in by the script. When I exit the editor, the script tests the modtime, updates the changed front matter, and then rsyncs the whole blog directory to my server, where Hugo picks up and regenerates the site if anything changed.

    My script is 133 lines of bash, mostly file-name sanitization and front-matter rewriting; it’s just a big convenience function that could be replaced by three lines of typing, a little thought, and a little more editing of the template.
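
    The flow described above can be sketched like this (a hedged sketch: the function names, paths, and slug rules are guesses, not the real 133-line script):

```shell
set -eu

slugify() {  # "Some blog title" -> some_blog_title
  printf '%s\n' "$1" | tr '[:upper:] ' '[:lower:]_' | tr -cd 'a-z0-9_\n'
}

blog() {
  local dir="${BLOG_DIR:-$HOME/blog/content}"
  local post="$dir/$(slugify "$1").md"
  mkdir -p "$dir"
  if [ ! -f "$post" ]; then
    # New entry: template front matter, filled in by the script.
    printf -- '---\ntitle: "%s"\ndate: %s\n---\n' "$1" "$(date +%F)" > "$post"
  fi
  local before after
  before=$(stat -c %Y "$post")
  "${EDITOR:-true}" "$post"        # drop into the editor
  after=$(stat -c %Y "$post")
  if [ "$after" -ge "$before" ]; then
    # Real script: rsync the blog dir; Hugo regenerates on the server.
    echo "rsync would push $dir"
  fi
}
```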

    There’s no federation, though. I’m not sure what a “federated blog” would look like, anyway; probably something like Lemmy, where you create a community called “YourName”. What’s the value of a federated blog?

    Edit: Oh, I forgot until I just checked it: the script also does some markdown editing to create gem files for the Gemini mirror; that’s at least a third to a half of the script (yeah, it’d be around 60 LOC without the Gemini stuff), which you don’t need if you’re not trying to support a network that never caught on and that no-one uses.


  • What, exactly, is your end goal? To have a way to play movies that you’re bringing with you on the hotel TV?

    Edit: I only ask because this seems like a hell of a lot of work just to play movies while you’re traveling, when you could just play them with VLC directly.

    Or, if you really want to stream movies to your phone, put VLC on your phone, run minidlna on the computer, and plug it into a GL-iNet Slate Plus.

    But if you’re really, like, going to some big get-together and are responsible for media entertainment for a crowd of 20 in a rental, then yeah, taking Jellyfin makes sense. But the hardware doesn’t, unless you make damned sure there’s nothing that’ll need transcoding. One movie, most CPUs/GPUs can manage, but if several people are transcoding multiple movies at the same time, it’ll take a fairly beefy machine.


  • Ah, yes. Kobo does, indeed, support DRM. Calibre does not. You can still use non-DRM books with both.

    Also, it turns out that there is a piece of software that someone built that happens to work with the very excellent Calibre plug-in system which, if you bought the eBook and have the software proof of purchase, will strip out there DRM from books and allow you to read the books with Calibre. I’m not suggesting you do that, because the unethical and corrupt DMCA bought by from crooked politicians by the media industry in 1998 stripped owners of fair use rights which they’d enjoyed until then. But, it’s easy to find and trivial to use, and once you have it you tend to forget you installed it.