

No, it’d still be a problem; every diff between commits is expensive to render to web, even if “only one company” is scraping it, “only one time”. Many of these applications are designed for humans, not scrapers.
The main issue is UX imo. On Windows 11, it’s “5 clicks”, but you have to open the settings app and find the setting two submenus deep. On KDE, it’s right click > configure application launcher > toggle setting > apply.
I was very annoyed when I got this, but remembered that it’s KDE, and turning it off is 4 clicks. Proprietary software often doesn’t allow you to turn this off (easily). Windows has this “feature” too; where is that setting?
I don’t think it’s a productive “feature”, but considering it can be turned off so easily I don’t consider it a complete showstopper.
Scrapers can send these challenges off to dedicated GPU farms or even FPGAs, which are an order of magnitude faster and more efficient.
Let’s assume, for the sake of argument, that an AI scraper company actually attempted this. They don’t, but let’s assume it anyway.
The next Anubis release could then switch (for example) from SHA-1 to SHA-256. That would be a simple and basically transparent update for admins and end users. The AI company that invested in offloading the PoW to more efficient hardware now has to spend significantly more resources changing its implementation than the update cost the devs and users of Anubis.
Yes, it technically remains a game of “cat and mouse”, but one heavily stacked against the cat. One step for Anubis is 2000 steps for a company reimplementing its client in more efficient hardware, and most of the Anubis changes can be made without impacting end users at all. That’s a game AI companies aren’t willing to play, because they’ve basically already lost. It doesn’t really matter how “efficient” the implementation is if it can be rendered useless by a small Anubis update.
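To illustrate the asymmetry (this is a toy sketch, not Anubis’s actual code): in a software PoW solver, swapping the hash algorithm is a one-variable change, while a GPU-farm or FPGA solver has to rebuild its whole pipeline.

```shell
# Toy proof-of-work: find a nonce such that hash(challenge + nonce)
# starts with two hex zeros. The server-side "hash swap" is just
# changing this one variable (e.g. sha1sum -> sha256sum); a hardware
# implementation must redo its entire hashing pipeline.
HASH=sha256sum
challenge="example-challenge"   # hypothetical challenge string
nonce=0
while true; do
  digest=$(printf '%s%d' "$challenge" "$nonce" | "$HASH" | cut -d' ' -f1)
  case "$digest" in
    00*) break ;;
  esac
  nonce=$((nonce + 1))
done
echo "solved: nonce=$nonce digest=$digest"
```

The client (browser) pays this cost once per challenge; the server only needs one cheap hash to verify the submitted nonce.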
Someone making an argument like that clearly does not understand the situation. Just 4 years ago, a robots.txt was enough to keep most bots away, and hosting a personal git forge on the web required very few resources. With AI companies actively profiting off scraping everything, a robots.txt doesn’t mean anything anymore. Now, even a relatively small git web host takes an insane amount of resources. I’d know - I host a Forgejo instance. Caching doesn’t help, because diffs between two arbitrary commits are likely unique. Rate limiting doesn’t help either; they’ll just rotate IP ranges and user agents, and it would heavily impact actual users “because the site is busy”.
A proof-of-work solution like Anubis is the best we have currently. The least possible impact to end users, while keeping most (if not all) AI scrapers off the site.
“Yes”, for any bits the user sees. The frontend UI can sit behind Anubis without issues. The API, including both the user and federation endpoints, cannot. We expect “bots” to use an API, so you can’t put human verification in front of it. These “bots” also include applications that aren’t aware of Anubis, or are unable to pass it, like all third-party Lemmy apps.
That does stop almost all generic AI scraping, though it does not prevent targeted abuse.
You don’t get control of the incoming port that way. Before Let’s Encrypt issues a certificate (primarily intended for HTTPS), it checks that the HTTP server at that IP is controlled by the requesting party. That check has to happen on port 80, which you can’t forward on CGNAT.
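Concretely, with the HTTP-01 challenge the CA fetches a validation token over plain HTTP on port 80, roughly like this (hypothetical domain, token elided):

```shell
# What the CA's validation request looks like during HTTP-01.
# The well-known path is fixed by the ACME spec; the token is
# issued per-order. Behind CGNAT, this request never reaches you.
curl http://example.com/.well-known/acme-challenge/<token>
```

(The DNS-01 challenge avoids port 80 entirely, but requires API access to your DNS provider.)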
Probably because they’re incapable of maintaining a distribution: https://manjarno.pages.dev/
Manjaro is not really Arch-based. It uses pacman, but with its own repositories, which causes a ton of issues.
Reminder that the license was changed to a “custom” non-free license.
I own a OnePlus 6 with postmarketOS. My daily driver is a Pixel 7 with CalyxOS and microG turned off.
Despite having effectively only FOSS apps on my Android daily driver, I can’t daily drive postmarketOS. It’s making great progress, but it isn’t nearly stable enough to serve as a modern smartphone, and several other issues hold it back.
If you rely on non-FOSS Android apps, there is Waydroid, but it’s not a perfect solution and might have issues.
It’s not a “waste of money” if you want a device to experiment or tinker with, or if you want to follow progress of Linux mobile, but it is extremely unlikely to replace your daily driver.
Keyguard, which works on Bitwarden-compatible servers like Vaultwarden
Mint under the hood is still Linux, but for basic tasks like web browsing, it’s very similar to, or even easier than, Windows.
Easily set up, and easily attached to other things. Simple notifications about whatever is needed: service health, updates, new posts on public platforms, etc. A simple curl call is plenty to send and receive notifications, and it works on Android without requiring FCM (Google infrastructure).
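For example, with an ntfy-style self-hosted push server (hypothetical host and topic name), sending and receiving are each a single curl:

```shell
# Publish a notification to the "alerts" topic (hypothetical server):
curl -d "Backup finished" https://push.example.com/alerts

# Subscribe from another shell: long-poll the same topic as a
# stream of JSON messages.
curl -s https://push.example.com/alerts/json
```

The Android app subscribes to the same topic over a plain HTTP connection, so no Google push infrastructure is involved.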
Why the downvotes? Apple silicon ARM is not the same ISA as any existing ARM. There are extra undocumented instructions and features. Unless you want to reverse engineer all of that and build your own ARM CPU, you cannot run (all of) macOS on an off-the-shelf ARM chip, making it effectively “impossible”.