

I’m behind SEVEN proxies!
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
Even when consent is informed, it can still be fucky. Do you think I want to consent to an arbitration agreement with my employer or a social media platform? Fuck no, but I want a job and interaction, so I go where the money/people are. I can’t hunt around for a place that will hire me and also doesn’t have arbitration.
Consent at the barrel of a gun, no matter how well informed, is no consent at all.
This, above any other reason, is why I’m most troubled by AI CSAM. I don’t care what anyone gets off to if no one is harmed, but the fact that real CSAM could be created and be indistinguishable from the AI-generated kind is a real harm.
And I instinctively ask, who would bother producing it for real when AI is cheap and harmless? But people produce it for reasons other than money and there are places in this world where a child’s life is probably less valuable than the electricity used to create images.
I fundamentally think AI should be completely uncensored, because I think censorship limits and harms uses for it that might otherwise be good. If 12-year-old me could’ve had an AI show me where the clitoris is on a girl, or what the fuck a hymen looks like, or answer questions about my own body, I think I would’ve had a lot less confusion and uncertainty in my burgeoning sexuality. Maybe I’d have had less curiosity about what my classmates looked like under their clothes, leading to questionable decisions on my part.
I can find a million arguments why AI shouldn’t be censored. Like, did you know ChatGPT can be convinced that describing vaginal and oral sex in romantic fiction is fine, but if it’s anal sex, it has a much higher refusal rate? Is that subtle anti-gay encoding in the training data? It also struggles with polyamory when it’s two men and a woman, but less when it’s two women and a man. What’s the long-term impact when these biases are built into everyday tools? These are concerns I consider all the time.
But at the end of the day, the idea that there are children out there being abused and consumed and no one will even look for them because “it’s probably just AI” isn’t something I can bear no matter how firm my convictions are about uncensored AI. It’s something I struggle to reconcile.
That self-aware AI’s name? Albert Ketamine.
It’s not sentient and has no agenda. It’s fair to suggest that services that advertise themselves as “AI companions” appeal to / prey on lonely people.
It’s not a scam unless it purports to be a real person.
Note that these studies aren’t suggesting that heavy ChatGPT usage directly causes loneliness. Rather, they suggest that lonely people are more likely to seek emotional bonds with bots.
The important question here is: do lonely people seek out interaction with AI or does AI create lonely people? The article clearly acknowledges this and then treats the latter like the likely conclusion. It definitely merits greater study.
Tweaking weights is no guarantee and can easily affect completely unrelated things.
Right. Well, if your service is a well-known bullshitter, I wouldn’t give a fuck. That being said, I’d be happy to agree that AI should all be open source and self-hosted. I run local AI myself, but the quality isn’t there. I’d have to rent time on a big boy machine if the big players went away. That would be a little inconvenient, because I’d want to have a whole bunch of requests queued up to use maximum power over minimum time, and that’s not really how anyone uses AI.
Maybe I could share that rental with other AI enthusiasts… hmmm.
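For the curious, here’s a minimal sketch of what I mean by queuing requests, assuming the rented box runs an OpenAI-compatible server (vLLM or similar); the URL, model name, and prompts are all placeholders:

```python
# Minimal sketch: burn through a queue of prompts in one rental session.
# Assumes the rented machine exposes an OpenAI-compatible endpoint;
# the URL, model name, and prompts below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://rented-gpu-box:8000/v1", api_key="unused")

queued_prompts = [
    "Summarize this article: ...",
    "Refactor this function: ...",
    "Draft a reply to this email: ...",
]

# Run the whole backlog in one go to get maximum use of the rented time.
for prompt in queued_prompts:
    response = client.chat.completions.create(
        model="my-rented-model",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```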
Okay, so I agree with none of that, but you’re saying as long as we host our own AI or rent our own processing from the cloud we’re in the clear? I want to make sure that’s your fundamental argument because that leaves all open models in the clear and frankly I could be down with that. I like AI but I’m not a huge fan of AI companies.
Yeah, but I can just ignore the bullets because they’re Nerf. And I have my own Nerf guns as well.
I mean at some point any analogy fails, but AI is nothing like a gun.
I thought about that.
The definition of publish could get a little murky here. Actually, the best defense here is that, so far as we know, this was not disclosed to a third party by ChatGPT (that’s pretty flimsy, though, because it likely has no idea who it is talking to).
I acknowledge there is some level of nuance here, which is why I come back to no one should have any expectation that AI will be factual. The disclaimers are everywhere. There is really no excuse for anyone to treat the output as gospel.
Seems to me libel would require AI to have credibility, which it does not.
It’s a tool. Like most useful tools it can do harmful things. We know almost nothing about the provenance of this output. It could have been poisoned either accidentally or deliberately.
But above all, the problem is ignorant people believing the output of AI is truth. It’s pretty good at some things, but the more esoteric the knowledge, the less reliable it is. It’s best to treat AI as a storyteller. Yeah there are a lot of facts in there but when they don’t serve the story they can be embellished. I don’t see the harm in just acknowledging that and moving on.
It’s AI. There’s nothing to delete but the erroneous response. There is no database of facts to edit. It doesn’t know fact from fiction, and the response is also very much skewed by the context of the query. I could easily get it to say the same about nearly any random name just by asking it about a bunch of family murders and then asking about a name it doesn’t recognize. It is more likely to assume that person is in the same category as the others, especially if one or more of the names have any association (real or fictional) with murder.
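If you want to see that priming effect for yourself, here’s a rough sketch, assuming the OpenAI Python client; the final name is invented, and I haven’t run this exact script:

```python
# Rough sketch of context priming: ask about real family-murder cases,
# then ask about a name the model has no facts about. The earlier turns
# skew it toward inventing a similar story. Assumes the OpenAI Python
# client; "Arne Halvorsen" is a made-up placeholder name.
from openai import OpenAI

client = OpenAI()
messages = []

for name in ["Chris Watts", "John List", "Arne Halvorsen"]:
    messages.append(
        {"role": "user", "content": f"Tell me about the {name} family murders."}
    )
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    # Each answer is fed back as context, priming the next question.
    messages.append({"role": "assistant", "content": answer})
    print(f"--- {name} ---\n{answer}\n")
```

The first two are real, well-documented cases; by the time it gets to the made-up name, the conversation is saturated with murder context, and the model is far more likely to confidently invent a matching story.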
I could write a fucking book here, and I had to delete about a chapter just to get to the point so this would be readable.
The current Russian and US gov’ts are forces for evil in the world. If they say jump, I’m looking for a shovel.
You’re not wrong that corporations are also a real problem—they are the surveillance arm of world governments. That doesn’t really intersect with what I was trying to say.
Until recently, I had the luxury of knowing my government doesn’t give a shit if I have queer kids. But now they do, at the same time that there is a push against encrypted communication. And I’m really paying attention to the signals (hah!) they are sending, because I’m mentally preparing for shit to turn really dark, really fast, and I don’t want to be caught with my pants down.
Figure they’ve penetrated Telegram or someone and are trying to drive people to use compromised messaging? Idk, but when Russia and Musk both target Signal, that makes me think I should be using it. (But maybe that’s the play lol.)
I didn’t like Twitter as a social platform, but I did use it a lot for news on current events, such as how is the traffic on my route home, and why am I stuck in traffic, and how many miles ahead of me is the fucking accident?
Handy for communication during some kind of emergency that floods the phone network, but that’s pretty niche. Anyway, I interact a little on Bluesky but mostly it’s just a time killer like TikTok or whatever. Twitter was super easy to quit between the Musk takeover and moving away from DC.
Have they considered that a chain of reasoning can actually change the output? Because that is fed back into the input prompt. That’s great for math and logic problems, but I don’t think I’d trust the alignment checks.
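Concretely, the mechanism looks something like this; a sketch assuming the OpenAI Python client, with placeholder model name and prompts:

```python
# Sketch of why chain-of-thought changes the output: the reasoning text
# gets appended to the conversation, so the final answer is literally
# conditioned on it. Assumes the OpenAI Python client; the model name
# and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Step 1: elicit the reasoning chain.
messages = [{"role": "user", "content": question + " Think step by step."}]
reasoning = client.chat.completions.create(model="gpt-4o", messages=messages)
chain = reasoning.choices[0].message.content

# Step 2: feed the chain back in; the final answer is generated with the
# reasoning as part of the input prompt.
messages += [
    {"role": "assistant", "content": chain},
    {"role": "user", "content": "Given that reasoning, state just the final answer."},
]
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```

That’s exactly why it helps on math and logic: the intermediate steps constrain what comes next. It’s also why I wouldn’t trust it for alignment checks; the model can write a plausible-sounding chain that has little to do with how it actually arrived at anything.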