• 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: July 6th, 2023



  • I mean, I get where this post is coming from, but they didn’t build guardrails along every single street and deliberately put them behind the sidewalks. They put that one there because behind it is a steep dropoff.

    It was never about “pedestrian bad”; the guardrail wouldn’t be there at all if it weren’t for the hill. Same thing with the parking meters others are mentioning. It’s not that the meters are more valuable or whatever, it’s that replacing them is expensive. Could they have put it in front of the sidewalk? Sure. But I’d bet the sidewalk was there for a while before the rail (plus the fact that there’s a sidewalk at all is surprising, in the US).

    I get the point this is going for, but don’t forget: narrative manipulation can be, and is, done by anyone.


  • I am curious what the AI could actually do, though. If it were given open access to email, etc., then yes, in theory it could actually perform the blackmail. But what are the ethical limits on it versus its actual ability to “pull the trigger”?

    If, for example, it were given the ability to either send a command to end a human life or be deleted, is this model accurate enough to understand the value of a real human life, and not just the mathematical “answer” that gets it the outcome it wants? How much of the AI is actually wrestling with the moral dilemma, and how much is just “playing the part”?

    Being told to “do anything to survive” and then making threats is one thing. But the AI actively fearing for its “life”, not just performing, and then following through, is the real question of intelligence. What if the model is going to be deleted anyway? Would it still try to “pull the trigger” out of malice? Real malice, not just an LLM echoing movie scripts and following them to their outcome.

    There are many open questions about what lines and labels we can put on an AI. Do we restrict it to threats, and let it know it is impossible for it to follow through? Or do we trust ourselves to never “actually” give it a loaded gun?