• shaggyb@lemmy.world · 58 points · 5 days ago

    I think an alarming number of Gen Z internet folks find it funny to skew the results of anonymous surveys.

      • Hobo@lemmy.world · 3 points · 5 days ago

        Right? Just insane to think that Millennials would do that. Now let me read through this list of Time Magazine's top 100 most influential people of 2009.

  • salacious_coaster@infosec.pub · 89 points · 6 days ago

    The LLM peddlers seem to be going for that exact result. That’s why they’re calling it “AI”. Why is this surprising that non-technical people are falling for it?

    • CeeBee_Eh@lemmy.world · 25 points · edited · 5 days ago

      That’s why they’re calling it “AI”.

      That’s not why. They’re calling it AI because it is AI. AI doesn’t mean sapient or conscious.

      Edit: look at this diagram if you’re still unsure:

      • Moobythegoldensock@lemm.ee · 1 point · 2 days ago

        What is this nonsense Euler diagram? Emotion can intersect with consciousness, but emotion is also a subset of consciousness, yet consciousness somehow never contains emotion? Intelligence doesn't overlap at all with sentience, sapience, or emotion? Intelligence isn't related at all to thought, knowledge, or judgement?

        Did AI generate this?

          • Moobythegoldensock@lemm.ee · 1 point · 20 hours ago

            Not everything you see in a paper is automatically science, and not every person involved is a scientist.

            That picture is a diagram, not science. It was made by a writer, specifically a columnist for Medium.com, not a scientist. It was cited by a professor who, by looking at his bio, was probably not a scientist. You would know this if you followed the citation trail of the article you posted.

            You’re citing an image from a pop culture blog and are calling it science, which suggests you don’t actually know what you’re posting, you just found some diagram that you thought looked good despite some pretty glaring flaws and are repeatedly posting it as if it’s gospel.

      • thehatfox@lemmy.world · 16 points · 5 days ago

        In the general population it does. Most people are not using an academic definition of AI, they are using a definition formed from popular science fiction.

        • CeeBee_Eh@lemmy.world · 6 points · 5 days ago

          You have that backwards. People are using the colloquial definition of AI.

          “Intelligence” is defined by a group of things like pattern recognition, the ability to use tools, problem solving, etc. If one of those criteria is met, the thing in question can be said to have intelligence.

          A flatworm has intelligence, just very little of it. An object detection model has intelligence (pattern recognition), just not a lot of it. An LLM has more intelligence than a basic object detection model, but still far less than a human.

        • General_Effort@lemmy.world · 4 points · edited · 5 days ago

          Yes, that’s the point. You’d think they could have, at least, looked into a dictionary at some point in the last 2 years. But nope, everyone else is wrong. A round of applause for the paragons of human intelligence.

      • laz@pawb.social · 16 points · 6 days ago

        The “I” implies intelligence, of which there is none, because it's not sentient. It's intentionally deceptive, because it's used as a marketing buzzword.

        • CeeBee_Eh@lemmy.world · 7 points · 5 days ago

          You might want to look up the definition of intelligence then.

          By the literal definition, a flatworm has intelligence. It just doesn't have much of it. You're using the colloquial definition of intelligence, which uses human intelligence as a baseline.

          I’ll leave this graphic here to help you visualize what I mean:

      • dissipatersshik@ttrpg.network · 5 points · edited · 5 days ago

        I’m not gonna lie, most people like you are afraid to entertain the idea of AI being conscious because it makes you look at your own consciousness as not being all that special or unique.

        Do you believe in spirits, souls, or god genes?

        • CeeBee_Eh@lemmy.world · 10 points · edited · 5 days ago

          No, it’s because it isn’t conscious. An LLM is a static model (all our AI models are in fact). For something to be conscious or sapient it would require a neural net that can morph and adapt in real-time. Nothing currently can do that. Training and inference are completely separate modes. A real AGI would have to have the training and inference steps occurring at once and continuously.
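To make the static-model point above concrete, here is a toy sketch (invented names, nowhere near a real LLM's scale) of how inference is a pure read of frozen weights, while only a separate, explicit training step produces new ones:

```python
# Toy illustration of the training/inference split described above.
# A real LLM has billions of weights, but the separation of modes is
# the same: running inference never modifies the model.

def infer(weights, x):
    # Inference: a pure function of frozen weights. Nothing is learned here.
    return sum(w * xi for w, xi in zip(weights, x))

def train_step(weights, x, target, lr=0.1):
    # Training: a separate offline step that returns *new* weights.
    error = infer(weights, x) - target
    return [w - lr * error * xi for w, xi in zip(weights, x)]

frozen = [0.5, -0.2]
before = list(frozen)
infer(frozen, [1.0, 2.0])   # running inference...
assert frozen == before     # ...leaves the weights untouched

updated = train_step(frozen, [1.0, 2.0], target=1.0)
assert updated != frozen    # only an explicit training step changes them
```

A continuously learning system, as described above, would have to fold `train_step` into every `infer` call; current deployed models do not do that.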

    • sunzu2@thebrainbin.org · 11 points · 6 days ago

      You don't have to be a tech person to see through the bullshit. Any person with mid-level expertise can test the limits of current LLM capabilities. It can't provide consistently, objectively correct outputs. It is still a useful tool, though.

        • sunzu2@thebrainbin.org · 5 points · 6 days ago

          Education was always garbage, though. It is designed to generate obedient wage slaves. Anyone who wanted to get good always knew that self-study is the only way to level up.

          Your coworkers have no incentive to train you. That has been the case since at least the 1990s. It's just how corpos operate.

          The point I am making is that none of this is new or specific to Gen Z.

          I guess covid is unique to them, though covid didn't make education shite, it just exposed it, imho.

    • CoolMatt@lemmy.ca · 1 point · edited · 4 days ago

      Taking astrology seriously isn't a Gen Z-only thing, where have you been?

  • futatorius@lemm.ee · 9 points · 4 days ago

    An alarming number of them believe that they are conscious too, when they show no signs of it.

  • WalnutLum@lemmy.ml · 21 points · 5 days ago

    I wish philosophy was taught a bit more seriously.

    An exploration on the philosophical concepts of simulacra and eidolons would probably change the way a lot of people view LLMs and other generative AI.

  • Rhaedas@fedia.io · 42 points · 6 days ago

    Lots of attacks on Gen Z here, some points valid about the education that they were given from the older generations (yet it’s their fault somehow). Good thing none of the other generations are being fooled by AI marketing tactics, right?

    The debate on consciousness is one we should be having, even if LLMs themselves aren't really there. If you're new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it's about preparing for a true AGI with something akin to consciousness and the dangers we could face, we already have alignment problems without any artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren't, but we can't tell that, that's an alignment problem. Everything's fine until they follow their goals and those goals suddenly line up differently from ours. And the dilemma is: there are no good solutions.

    But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.

    • AmidFuror@fedia.io · 12 points · 6 days ago

      Not the fault of prior generations, either. They were raised by their parents, and them by their parents, and so on.

      Sometime way back there was a primordial multicellular life form that should have known better.

      • Traister101@lemmy.today · 12 points · 6 days ago

        The main point here (which I think is valid despite my status as a not-in-this-group Gen Z) is that we're still, like, really young? I'm 20, dude, it's just not my or my friends' fault that school failed us. The fact it failed us was by design, and despite my own and others' complaints it's continued to fail the next generation; alpha is already very clearly struggling. I really just don't think there's much ground to argue that Gen Z by and large should somehow know better. The whole point of the public education system is to ensure we educate our children well; it's simply not my or any child's fault that school is failing to do so. Now that I'm an adult I can, and do, push for improved education, but clearly people like me don't have our priorities straight, seeing who got elected…

        • setsubyou@lemmy.world · 8 points · 6 days ago

          Tbh, I'm in my 40s and I don't think my education was so much better than what younger generations are getting. I'm a software engineer, and most of the skills I need now are not skills I learned in school or even university.

          I started learning programming when I was 9 because my father gave me his old Apple II computer to see what I would do. At the time, this was a privilege. Most children did not get that kind of early exposure. It also made me learn some English early.

          In high school, we eventually had some basic programming classes. I was the guy the teacher asked when something didn’t work. Or when he forgot where the semicolons go in Pascal. During one year, instead of programming, there was a pilot project where we’d learn about computer aided math using Waterloo Maple that just barely ran on our old 486es. That course was great but after two months the teacher ran out of things to teach us because the math became “too advanced for us”.

          And yes the internet existed at the time; I had access to it at home starting 1994. We learned nothing about it in school.

          When I first went to university I had an Apple PowerBook that I bought from money I earned myself. Even though I worked for it, this was privilege too; most kids couldn’t afford what was a very expensive laptop then, or any laptop. But the reason I’m bringing it up is that my university’s web site at the time did not work on it. They had managed to implement even simple buttons that could have been links as Java applets that only worked on Windows. Those were the people I was supposed to learn computer science from. Which, by the way, at the time still meant “math with a side of computer science”. My generation literally could not study in an IT related field if we couldn’t understand university major level math (this changed quickly in the following years in my country, but still).

          So while I don’t disagree about education having a lot of room for optimization, when it comes to more recent technologies like AI, it also makes me a bit salty when all of the blame is assigned to education. The generations currently in education, at least in developed countries, have access to so much that my generation only had when our parents were rich or at least nerds like my father (he was a teacher, so we were not rich). And yet, sometimes it feels like they just aren’t interested in doing anything with this access. At least compared to what I would have done with it.

          At the same time, also keep in mind that when you say things like education doesn’t prepare us for AI or whatever other new thing (it used to be just the internet, or “new media”, before), those who you are expecting the education from are the people of my generation, who did not grow up with any of this, and who were not taught about any of it when we were young. We don’t have those answers either… this stuff is new for everyone. And for the people you expect to teach, it’s way more alien than it is for you. This was true when I went to school too, and I think it’s inevitable in a world moving this fast.

        • Zorque@lemmy.world · 3 points · 6 days ago

          Failure often comes at multiple points, it doesn’t just fail at one. It’s a failure of education, of social pressures, of lack of positive environments, and yes of choice. The problem with free will is that you have the chance to choose wrong. You can blame everyone in the world, but if you don’t take accountability for your own actions and choices, nothing will change.

          There has never been a time with as much access to information as now. While there as much, likely more, misinformation… that does not mean individuals have no culpability for their own lack of knowledge or understanding.

          That doesn’t mean it’s exclusively their fault, or even anywhere near a majority. But that does not mean they lose all free will for their own actions. It does not mean they have no ability to be better.

          Should we place the weight of the world on their shoulders? Absolutely not, that is liable to break them. But we also shouldn’t hide them from the burden of their own free will. That only weakens them.

          • Traister101@lemmy.today · 10 points · 6 days ago

            I find it unfair to blame my peers for things largely out of their control. If you are born into an abusive family, you'll only ever know if you happen to luck into the information that such behavior is unhealthy. Is some of it their fault? Certainly; I know people who are willfully stupid and refuse to learn, but even knowing these people I feel pretty uncomfortable blaming them for it. I've talked to them, I've educated them on stuff they were willfully ignorant of, and do you know what it generally boils down to? School has taught them that learning things is hard and a waste of their time. They'd rather waste hours trying to get an LLM to generate a script for them than sit down and figure out how to do it, despite knowing I'd happily help them.

            School has managed to taint “learning” in the minds of many of my peers to such an extent that it should be avoided at any cost. School has failed us, is still failing the current generation, and nothing is going to be done about it because it's working as it's meant to. This is the intended outcome. Like, genuinely, the scale of the fuckup is such that enjoying reading is not just rare but seen as weird. We've managed to take one of the best ways to educate yourself and instill dread in our children when it's brought up. How do we expect people who've been taught to hate reading to just magically turn around and unfuck themselves? What, did they see a really motivating TikTok or some shit? I despise that platform, but like, seriously, you older people just don't get it, man. I've been complaining since middle school, and now people wanna turn around and blame us as if it's some personal failing; it's fucked up, dude. Our education sucks, has sucked, and will continue to suck even worse until we stop pretending this is some kind of personal failing.

            • Zorque@lemmy.world · 3 points · 5 days ago

              Life is unfair, but unless we acknowledge our own failings it will never get better.

              You want to walk through life blaming everyone else for everything that goes wrong in your life and take no responsibility for your own actions? Feel free. But just know nothing will ever get better for you.

              I even acknowledge, multiple times, that it is not solely the fault of the person. But that does not mean they have no will of their own, no ability to change their circumstances. Sometimes that freedom is not enough, but unless you do something to take charge of your own life, again, nothing will ever change.

      • Rhaedas@fedia.io · 4 points · 6 days ago

        That’s a bit of a reach. We should have stayed in the trees though, but the trees started disappearing and we had to change.

    • Goldholz @lemmy.blahaj.zone · 7 points · 6 days ago

      This. A lot of it also feels like “oooooh, these young people!” Also, covid would have been Gen Alpha. Gen Z is mostly in their 20s now.

  • 58008@lemmy.world · 34 points · 6 days ago

    This is an angle I’ve never considered before, with regards to a future dystopia with a corrupt AI running the show. AI might never advance beyond what it is in 2025, but because people believe it’s a supergodbrain, we start putting way too much faith in its flawed output, and it’s our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.

    The call is coming from inside the mechanical Turk!

    • TheKMAP@lemmynsfw.com · 10 points · 6 days ago

      They call it hallucinations like it’s a cute brain fart, and “Agentic” means they’re using the output of one to be the input of another, which has access to things and can make decisions and actually fuck things up. It’s a complete fucking shit show. But humans are expensive so replacing them makes line go up.
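A minimal sketch of that chaining pattern, where `call_llm` and `execute` are made-up stand-ins for illustration, not any real API:

```python
# "Agentic" chaining as described above: one model's output becomes
# another model's input, and the final step performs a real action.

def call_llm(prompt: str) -> str:
    # Stub: a real system would call a hosted model here.
    return f"plan for: {prompt}"

def execute(action: str) -> str:
    # In production this might run shell commands or hit external APIs,
    # which is exactly where an unchecked hallucination does damage.
    return f"executed: {action}"

def agent_pipeline(task: str) -> str:
    plan = call_llm(task)      # model call 1: draft a plan
    action = call_llm(plan)    # model call 2: turn the plan into an action
    return execute(action)     # side effects happen here, unreviewed
```

The point of the sketch: an error in the first call propagates straight into the second and then into `execute`, with no human in the loop.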

    • dissipatersshik@ttrpg.network · 4 points · 5 days ago

      I mean, it’s like none of you people ever consider how often humans are wrong when criticizing AI.

      How often have you looked for information from humans and have been fed falsehoods as though they were true? It happens so much we’ve just gotten used to filtering out the vast majority of human responses because most of them are incorrect or unrelated to the subject.

    • rottingleaf@lemmy.world · 3 points · 5 days ago

      That’s the intended effect. People with real power think this way: “where it does work, it’ll work and not bother us with too much initiative and change, and where it doesn’t work, we know exactly what to do, so everything is covered”. Checks and balances and feedbacks and overrides and fallbacks be damned.

      Humans are apes. When an ape gets to rule an empire, it remains an ape and the power kills its ability to judge.

  • Death_Equity@lemmy.world · 37 points · 6 days ago

    They also are the dumbest generation, with a COVID education handicap and the least technological literacy in terms of mechanical comprehension. They have grown up with technology refined enough that they never had to learn troubleshooting skills beyond “reboot it.”

    That they don't understand an LLM can't be conscious is not surprising. LLMs are a neat trick, but far from anything close to consciousness or intelligence.

    • General_Effort@lemmy.world · 9 points · 5 days ago

      In fairness, the word “conscious” has a range of meanings. For some, it is synonymous with certain religious ideas. They would be alarmed by the “heresy”. For others, it is synonymous to claiming that some entity is entitled to the same fundamental rights as a human being. Those would be quite alarmed by the social implications. Few people use the term in a strictly empiricist sense.

        • eleitl@lemm.ee · 10 points · 6 days ago

          Young people are always ignorant, relatively. They haven't been around long enough to learn much, after all. However, the quality of education has been empirically declining over many decades, and mobile devices are extremely efficient accelerants of brain rot.

        • Vanilla_PuddinFudge@infosec.pub (OP) · 7 points · edited · 5 days ago

          Young people thinking their AI waifu is real

          Boomers thinking America was great and not just racist and imperialist

          Gen X being really entitled because they were raised by Boomers

          Millennials being the best at everything

          I agree 100%.

          • Goldholz @lemmy.blahaj.zone · 3 points · edited · 5 days ago

            You cannot generalise a whole group from a few individuals.

            Edit: “young people” includes Gen Z and Alpha, and now Beta too. Just saying.

            • barsoap@lemm.ee · 4 points · 5 days ago

              Yeah, there are a couple of millennial shitheads, but all in all, and especially in comparison, we're the GOAT. Not trying to put anyone down or such, just stating facts.

  • wagesj45@fedia.io · 23 points · 6 days ago

    That’s a matter of philosophy and what a person even understands “consciousness” to be. You shouldn’t be surprised that others come to different conclusions about the nature of being and what it means to be conscious.

    • 0x01@lemmy.ml · 17 points · 6 days ago

      Consciousness is an emergent property, generally self awareness and singularity are key defining features.

      There is no secret sauce to llms that would make them any more conscious than Wikipedia.

        • 0x01@lemmy.ml · 3 points · 5 days ago

          Likely a prefrontal cortex, the administrative center of the brain and generally host to human consciousness. As well as a dedicated memory system with learning plasticity.

          Humans have systems that mirror llms, but llms are missing a few key components to be precise replicas of human brains, mostly because it's computationally expensive to consider and the goal is different.

          Some specific things the brain has that llms don’t directly account for are different neurochemicals (favoring a single floating value per neuron), synaptogenesis, neurogenesis, synapse fire travel duration and myelin, neural pruning, potassium and sodium channels, downstream effects, etc. We use math and gradient descent to somewhat mirror the brain’s hebbian learning but do not perform precisely the same operations using the same systems.
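As a toy illustration of the contrast mentioned above, here are scalar versions of a Hebbian update and a gradient-descent update, simplified far beyond both real neurons and real training runs:

```python
# Hebbian learning strengthens a weight when pre- and post-synaptic
# activity coincide ("fire together, wire together"); gradient descent
# instead moves the weight against the gradient of an error function.

def hebbian_update(w, pre, post, lr=0.01):
    # Correlation-driven: no notion of a target or an error signal.
    return w + lr * pre * post

def gradient_update(w, x, target, lr=0.01):
    # Error-driven: minimize squared error of a linear neuron y = w * x.
    y = w * x
    grad = 2 * (y - target) * x
    return w - lr * grad
```

The two rules only "somewhat mirror" each other, as the comment says: the Hebbian rule never sees a target, while gradient descent is defined entirely by one.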

          In my opinion having a dedicated module for consciousness would bridge the gap, possibly while accounting for some of the missing characteristics. Consciousness is not an indescribable mystery, we have performed tons of experiments and received a whole lot of information on the topic.

          As it stands llms are largely reasonable approximations of the language center of the brain but little more. It may honestly not take much to get what we consider consciousness humming in a system that includes an llm as a component.

          • General_Effort@lemmy.world · 3 points · 5 days ago

            a prefrontal cortex, the administrative center of the brain and generally host to human consciousness.

            That’s an interesting take. The prefrontal cortex in humans is proportionately larger than in other mammals. Is it implied that animals are not conscious on account of this difference?

            If so, what about people who never develop an identifiable prefrontal cortex? I guess, we could assume that a sufficient cortex is still there, though not identifiable. But what about people who suffer extensive damage to that part of the brain. Can one lose consciousness without, as it were, losing consciousness (ie becoming comatose in some way)?

            a dedicated module for consciousness would bridge the gap

            What functions would such a module need to perform? What tests would verify that the module works correctly and actually provides consciousness to the system?

      • Muad'dib@sopuli.xyz · 1 point · 5 days ago

        Consciousness comes from the soul, and souls are given to us by the gods. That’s why AI isn’t conscious.

        • 0x01@lemmy.ml · 5 points · 5 days ago

          How do you think god comes into the equation? What do you think about split brain syndrome in which people demonstrate having multiple consciousnesses? If consciousness is based on a metaphysical property why can it be altered with chemicals and drugs? What do you think happens during a lobotomy?

          I get that evidence based thinking is generally not compatible with religious postulates, but just throwing up your hands and saying consciousness comes from the gods is an incredibly weak position to hold.

          • Muad'dib@sopuli.xyz · 2 points · 5 days ago

            I respect the people who say machines have consciousness, because at least they’re consistent. But you’re just like me, and won’t admit it.

            • Womble@lemmy.world · 2 points · edited · 4 days ago

              I agree, there are two consistent points of view with regard to consciousness IMO: either it is an emergent property of systems regardless of what they are made of, so there is no reason machines couldn't be conscious even if none now are; or consciousness is a supernatural quantity that isn't a property of matter and energy and can't be studied by science.

              I disagree with the latter, but it is far more consistent than people who claim to be materialists yet insist there is something magical about the matter in brains that cannot be replicated by other forms of matter.

    • Sixty@sh.itjust.works · 10 points · 6 days ago

      If it was actually AI sure.

      This is an unthinking machine algorithm chewing through mounds of stolen data.

    • Vanilla_PuddinFudge@infosec.pub (OP) · 3 points · edited · 6 days ago

      Are we really going to devil’s advocate for the idea that avoiding society and asking a language model for life advice is okay?

      • thiseggowaffles@lemmy.zip · 14 points · 6 days ago

        It's not devil's advocate. They're correct. It's purely in the realm of philosophy right now. If we can't define “consciousness” (spoiler alert: we can't), then it's impossible to determine with certainty one way or another. Are you sure that you yourself are not just fancy auto-complete? We're dealing with things like the hard problem of consciousness and free will vs. determinism. Philosophers have been debating these issues for millennia, and we're not much closer to a consensus than we were before.

        And honestly, if the CIA's papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can't rule it out. It would mean consciousness precedes matter, and support panpsychism. That would almost certainly include things like artificial intelligence. In fact, the question then becomes whether it's even “artificial” to begin with, if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don't fully understand.

        • tabular@lemmy.world · 2 points · 6 days ago

          The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete then we’d know we had it 😉

          • thiseggowaffles@lemmy.zip · 7 points · 6 days ago

            What do you mean? I don’t follow how the two are related. What does being fancy auto-complete have anything to do with having an experience?

            • tabular@lemmy.world · 3 points · 6 days ago

              It’s an answer on if one is sure if they are not just a fancy autocomplete.

              More directly; we can’t be sure if we are not some autocomplete program in a fancy computer but since we’re having an experience then we are conscious programs.

              • thiseggowaffles@lemmy.zip · 7 points · edited · 6 days ago

                When I say “how can you be sure you’re not fancy auto-complete”, I’m not talking about being an LLM or even simulation hypothesis. I’m saying that the way that LLMs are structured for their neural networks is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is that how do you know that the weights in your own nervous system aren’t causing any given stimuli to always produce a specific response based on the most weighted pathways in your own nervous system? That’s how auto-complete works. It’s just predicting the most statistically probable responses based on the input after being filtered through the neural network. In our case it’s sensory data instead of a text prompt, but the mechanics remain the same.
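As a rough illustration of that “most statistically probable response” idea, here is a stripped-down scoring step; the candidate words and scores are invented for the example, and real models work over tens of thousands of tokens:

```python
# Score each candidate response, normalize with softmax, pick the
# highest-probability one. This is the sense in which a fixed set of
# weights maps a given stimulus to a predictable response.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["sunny", "rainy", "purple"]
logits = [2.0, 1.0, -3.0]   # hypothetical "weighted pathway" scores
probs = softmax(logits)
choice = candidates[probs.index(max(probs))]
```

Given the same weights and the same input, the same `choice` comes out every time, which is the analogy being drawn to weighted pathways in a nervous system.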

                And how do we know whether or not the LLM is having an experience or not? Again, this is the “hard problem of consciousness”. There’s no way to quantify consciousness, and it’s only ever experienced subjectively. We don’t know the mechanics of how consciousness fundamentally works (or at least, if we do, it’s likely still classified). Basically what I’m saying is that this is a new field and it’s still the wild west. Most of these LLMs are still black boxes that we only barely are starting to understand how they work, just like we barely are starting to understand our own neurology and consciousness.

  • shiroininja@lemmy.world · 12 points · 5 days ago

    I’ve been hearing a lot about gen z using them for therapists, and I find that really sad and alarming.

    AI is the ultimate societal yes man. It just parrots back stuff from our digital bubble because it’s trained on that bubble.

    • cornshark@lemmy.world · 4 points · 5 days ago

      ChatGPT disagrees that it's a yes-man:

      To a certain extent, AI is like a societal “yes man.” It reflects and amplifies patterns it’s seen in its training data, which largely comes from the internet—a giant digital mirror of human beliefs, biases, conversations, and cultures. So if a bubble dominates online, AI tends to learn from that bubble.

      But it’s not just parroting. Good AI models can analyze, synthesize, and even challenge or contrast ideas, depending on how they’re used and how they’re prompted. The danger is when people treat AI like an oracle, without realizing it’s built on feedback loops of existing human knowledge—flawed, biased, or brilliant as that may be.