• 0 Posts
  • 22 Comments
Joined 16 days ago
Cake day: February 5th, 2026





  • The interesting thing about general anesthesia is that it’s quite unlike dreaming. It’s like you’re instantly teleported to another place and time with no sense of any time having passed in between. From everyone else’s perspective you were effectively dead for a few hours, but for you there was no gap at all. It’s not that there’s a blank section in the film - it’s more like someone cut that part out entirely and you jump straight to the next act.

    I can’t help but wonder if something similar happens when you actually die. By definition you cannot experience being dead, so what if your consciousness just skips over the being-dead part and continues from whatever comes next? Even if there’s a million-year-long queue before you get to respawn, that would still feel instantaneous from your subjective point of view. Perhaps death is only for your physical body, while your consciousness simply continues wherever there are experiences to be had.

    I think this idea is called quantum immortality.






  • Saying that it’s good at one thing and bad at others.

    But that’s exactly the difference between narrow AI and a generally intelligent one. A narrow AI can be “superhuman” at one specific task - like generating natural-sounding language - but that doesn’t automatically carry over to other tasks.

    People give LLMs endless shit for getting things wrong, but they should also get credit for how often they get things right. Getting things right at all is a pure side effect of their training - not something they were ever designed to do.

    It’s like cruise control that’s also kinda decent at driving in general. You might be okay letting it take the wheel as long as you keep supervising - but never forget it’s still just cruise control, not a full autopilot.










  • That’s why I don’t immediately judge people when they say something I disagree with. I want to know how they arrived at that conclusion. Their reasoning might be solid - just shaped by different life experiences and beliefs that led them to a different view. And that’s okay.

    I can still see them as an intelligent, trustworthy person whose opinions matter to me, because I trust they’re capable of independent reasoning.

    It works both ways too. Someone agreeing with me means next to nothing if it’s just an adopted view they accepted as fact without ever really thinking it through themselves. The “stamp of approval” from people like that is basically worthless.


  • Probably didn’t read it because it was clearly low-quality slop - not because the final output was written by AI.

    People don’t mind AI-generated content when they don’t detect it as such. It’s the low-effort garbage they don’t want to deal with.

    Nobody has a perfect radar for AI content. This is just the good old toupee fallacy in action: “All toupees look fake - I’ve never seen one that didn’t” - except the convincing ones are the ones you never clocked as toupees in the first place.