• artyom@piefed.social · 126 points · 10 days ago

    Hell yeah, let’s hold them accountable for disinformation. They’ll be gone completely in a matter of months.

    Want to get rid of that responsibility? Direct the user to the source. Oh wait, that’s just a search engine.

    • iopq@lemmy.world · 12 points · 10 days ago

      It’s a bit different, because a search engine can give you zero results. An AI is trained to maximize correct answers, so it always guesses; guessing is the best way to score on an evaluation.

  • supersquirrel@sopuli.xyz · 102 points · 10 days ago

    I think a better solution is to ban techbros from giving serious economic or cultural advice and take computers away from business majors.

    • MinnesotaGoddam@lemmy.world · 42 points · 10 days ago

      Please don’t take them entirely away. Maybe just internet access? Thirty-ish years ago I had to do accounting by hand, in those green ledgers. It took approximately twelve times longer to do it by hand than to do it with a computer, and it made me shrimp up about five times worse. I needed an architect’s table with an angled top in order to work properly, but I could neither get one supplied by the employer nor afford to provide one myself.

      Not all technology is bad

    • jaybone@lemmy.zip · 6 points · 10 days ago

      I don’t get how some of these tech company CEOs who came up as engineers can be pushing this bullshit. I get that once the company got big they started hiring business bros, but some big companies still have CEOs who were once engineers. You’d think they would know better.

      • NannerBanner@literature.cafe · 10 points · 10 days ago

        What kind of engineer? Because while the physical world, with all of its mechanical and civil and aerospace engineers, has its shit figured out with professional standards and very clearly defined responsibilities and duties, the world of social engineers, tire engineers, procurement engineers, supply chain engineers, sandwich engineers, project engineers, lead engineers, and yes, software engineers, is definitely a little too loose with any definition for me to care that these CEOs were once ‘engineers.’

        • Rooster326@programming.dev · 2 points · 9 days ago

          You can take any of these professional engineers, give them a billion dollars and they’re going to turn into total pieces of shit.

          Power corrupts. Absolute power corrupts absolutely.

  • mrmaplebar@fedia.io · 36 points · 10 days ago

    This reads as a way to protect white collar industries from the effects of AI without addressing the root problem–that AI does not actually think, and that it is little more than a meat grinder full of scraped data.

      • CeeBee_Eh@lemmy.world · 6 points · 10 days ago

        Why is it CALLED intelligent?

        Because it is “intelligent” by definition. You’re conflating the word with “highly intelligent” or just “smart”.

        Dogs are “intelligent” but can’t write code, yet we sometimes refer to dogs as “smart”.

        A flatworm has intelligence but no one would call it smart.

      • atopi@piefed.blahaj.zone · 4 points · 10 days ago

        it had that name for a really long time

        a couple decades ago, a program learning was really impressive

        • SeeMarkFly@lemmy.ml · 2 points · 10 days ago

          I remember when LISP was available for my Atari 800.

          Yes, I had the FULL 64K of memory installed.

    • entropicdrift@lemmy.sdf.org · 1 point · 9 days ago

      It does think, just not very logically.

      To put it another way, it’s like we figured out how to give machines an intuition via machine learning. So you’ve got a machine with an intuition trained on all written text that is not literal gibberish, but by default all it knows how to do is shoot from the hip with that intuition, and the only feedback it gets for whether it said the right thing is whether the human it’s chatting with approves of what it says.

      It’s a bullshitter to the extreme because that was how we built the incentive structure. And now they use the bullshitters to train better bullshitters.

      Is it any surprise that business executives think that these are the ultimate in intelligence? All they do is bullshit.

  • tinkermeister@lemmy.world · 31 points · 10 days ago

    I may have become too cynical but, as is often the case when you dig deeper, this sounds like the result of lobbyists trying to protect licensing rather than people.

    We can be dumb, but we’ve been doing web searches for legal and medical advice for ages because it is too damned expensive and time consuming to go to professionals for every little thing. Not to mention, doctors have so little time for you that it is hard to get them to listen to the whole story to make connections between symptoms.

    The LLMs already tell you that they aren’t licensed professionals and, for many, provide citations for their sources (miles better than your typical health website).

    As a personal anecdote, my son was having stomach pain but was planning to tough it out. He checked with ChatGPT and it recommended he go to the ER. He did, and if he hadn’t, he would likely be dead now. He spent 3 days in the hospital having his bowels unobstructed through a tube in his nose.

    There is value in people having that kind of information at their fingertips.

    Regulation is absolutely needed, but I would rather they focus on protecting us from AI being used for military purposes, mass surveillance, etc. rather than protecting citizens from ourselves.

    • tempest@lemmy.ca · 18 points · 10 days ago

      Are you in the US? My takeaway here is that American healthcare is bad, but we’re treating the symptom, not the disease.

      • tinkermeister@lemmy.world · 1 point · 10 days ago

        Yeah, I’m in the US and I agree. Though it is going to take some serious change to treat the problem. In the meantime, this is at least a stopgap solution for people who don’t have a lot of options.

    • MinnesotaGoddam@lemmy.world · 5 points · 10 days ago

      Wait, he thought he could sit through that pain at home? Your son is tough as nails. Give him a hug for me and everyone else who’s had that four-day NG-tube delight.

      • tinkermeister@lemmy.world · 3 points · 10 days ago

        Yeah, he is pretty tough. I wish I could hug him, he is about a 10 hour drive from me. That tube was nightmarish from what he’s told me.

        • MinnesotaGoddam@lemmy.world · 3 points · 10 days ago

          if i were his parent, i would be giving him gentle reminders to drink more water. after teasing him for eating way too much corn or broccoli or whatever bastard fiber caused his obstruction (assuming he’s in a mental place he can handle the teasing)

  • iegod@lemmy.zip · 27 points · 10 days ago

    I don’t see how you police/enforce this. The technology is out of the bag, and people will find ways to access it. Do we need age/location verification for this now too? What if I’m running a local agent? I don’t agree with this.

    • cmnybo@discuss.tchncs.de · 31 points · 10 days ago

      The law would allow you to sue whoever is running the chatbot. If you run your own LLM locally and take bad advice from it, then it’s your own fault.

      • iegod@lemmy.zip · 4 points · 10 days ago

        Walk me through how a company that is neither based nor operating in New York would be subject to any action from this kind of lawsuit.

        • altkey (he\him)@lemmy.dbzer0.com · 10 points · 10 days ago

          I do agree it’s limited to the small scope of New York-based smaller LLMs, but if you read the news you know exactly why this bill occurred: Mamdani just gave up on a useless chatbot that his predecessor Adams built with the city’s budget: https://www.thecity.nyc/2026/01/30/mamdani-unusable-ai-chatbot-budget/ It was indeed giving inaccurate legal recommendations on the city’s website. I think the best outcome for this bill is that it becomes a trend across cities and states, as I suspect the New York administration wasn’t the only one falling for this scam.

      • how_we_burned@lemmy.zip · 3 points · 10 days ago

        So who gets sued? The guy who put the chatbot on the server and is running it, or the developer of the chatbot software?

        Or both?

  • deathbird@mander.xyz · 24 points · 10 days ago

    If implemented, that would just ban chatbots that use large language models. It’s not a terrible idea.

    What would actually happen is that so-called AI chatbot systems would try to detect if someone is from New York and then try to exclude them from receiving medical or legal advice, fail, and then get sued and then pay a small fine, over and over again forever.

    • architect@thelemmy.club · 5 points · 9 days ago

      This is a really bad idea.

      First, because healthcare is clearly being gatekept from people.

      Second, because even if you go to a healthcare professional nowadays, there is no guarantee that that person is not a fucking idiot who doesn’t believe in vaccines. I can’t believe I have to actually ask people, before they touch me, whether they believe in vaccines or not, and then tell them not to come back into my room if they answer that they don’t believe in science. But that has happened, it has happened to the people I’ve taken care of, and because of this healthcare can’t be trusted now.

      The LLM is not any worse than that. In fact, I would say that it’s already too cautious. No way the model is ever going to tell me vaccines are bad. It’s not going to tell me to take a poison to clear Covid. It’s not going to tell me to drink bleach like the president did. It’s literally not any worse than the bullshit we are dealing with all day every fucking day.

      And I’m getting to the point that if you’re a full grown human fucking being and you’re going to believe something if it tells you to drink fucking bleach or swallow a fucking lightbulb then that’s nature saying something about you.

      • Doomsider@lemmy.world · 11 points · 9 days ago

        Naw, completely disagree. If you had a calculator you knew was defective you would ban doctors and lawyers from using it.

        You also seem to think that an LLM is going to be inherently more accurate than an expert human. We can see with GrokAI how easy it is to manipulate an AI into saying racist white nationalist garbage. So we are not just trusting the technology but also a layer of unpredictable corporate meddling.

        Why does the LLM recommend this drug but not the other one? We can quickly see how a corporation could favor a certain medication due to behind-the-scenes deals, or even push a medication.

        You can’t trust a black box you are not allowed to look into. Trust in a LLM at this point is pure folly.

          • raldone01@lemmy.world · 1 point · 9 days ago

            Some of the SOTA models like Gemini 3 Pro are getting quite good at ballpark estimations. I have fed one multiple complex formulas from my studies along with some values. The end result is often quite close, and similar in accuracy to an estimation I would do myself (it is usually more accurate than my own).

            Now I don’t argue there is any consciousness or magic going on, but I think the generalization that is happening is quite something. I have trained AI models for various robot control and computer vision tasks. Compared to older machine learning approaches, transformers are very impressive, computationally accessible, and easy to use (in my limited experience).

  • moroninahurry@piefed.social · 17 points · 9 days ago

    Laws like this are great for these companies. This is how they will justify removing access to useful information and putting it behind paywalls. But oh, you need a prescription, so now the insurance companies are involved (spoiler: they already are), and so you don’t even have access to pay out the nose for medical information.

    Then when Google search has been completely replaced with AI, you won’t even be able to search for medical information.

    Healthcare companies aren’t about to provide anything for free.

    • Routhinator@startrek.website · 14 points · 9 days ago

      Most of the medical information coming up these days is garbage, and you should be going to a known, reputable site and searching their database. LLMs have been trained on absolute garbage. There is nothing of value being kept from anyone here.

      • presoak@lazysoci.al · 2 points · 9 days ago

        LLMs have been trained on absolute garbage

        It depends on the LLM actually.

        Specialized medical LLMs are actually very accurate.

        • badgermurphy@lemmy.world · 2 points · 8 days ago

          I’m sure the quality of the LLM output does vary a lot based on the size of the scope it covers and the training data set.

          However, I believe that if it were possible to get an LLM to be “quite accurate” in any context, that would make it easy to find a path to profitability for that tool, but I don’t think we have seen that materialize anywhere.

          I believe that the best they can get is “more accurate” than the mean, but still not accurate enough to reliably make anyone money*.

          *Nvidia notwithstanding

          • Routhinator@startrek.website · 2 points · 8 days ago

            Moreover, until you can get the same output from the same input from an LLM consistently, the entire tech is unreliable garbage.

    • Soup@lemmy.world · 9 points · 9 days ago

      LLMs and chatbots should not be giving medical advice. You are afraid of the private healthcare system, not the lack of access to the most janky bandaid fix for its failures.

      • moroninahurry@piefed.social · 4 points · 9 days ago

        Neither should Wikipedia or Google. So I guess by your logic nobody should search or learn about medical conditions on a computer.

        • Soup@lemmy.world · 10 points · 9 days ago

          You know damn well there’s an important difference related to the confidence of a bot that has been a key problem since this whole thing started.

        • SaveTheTuaHawk@lemmy.ca · 2 points · 9 days ago

          I guess by your logic nobody should search or learn about medical conditions on a computer.

          How else would we know the TRUTH about 5G vaccines and ivermectin? Or the curative powers of apple cider vinegar?

      • douglasg14b@lemmy.world · 4 points · 9 days ago

        The line between medical advice and personal research is pretty freaking gray, so if we ban medical advice, does that also ban talking to LLMs about anything that is medical-adjacent?

        Does medical adjacent mean personal disabilities? Drug related interests? Pet health? Stretches? Pain support?

        Anything that falls under “Health, Wellness, and Fitness”?

        …etc

        It’s a slippery slope and we don’t need to be sliding down it

        • moroninahurry@piefed.social · 4 points · 9 days ago

          People are so vicious over this tech they would rather have disabled poor people with cancer suffer and die under inadequate care than do anything about the inadequate care. Ban the tech, but let this all go on.

          If you are perfectly able and well, you can ignore all advice that isn’t perfect.

          The perspective they seem to lack is frightening. The empathy they refuse to engage is massive. This is ableism.

          Tech companies are bad, but use of tech will cure and ease cancer, HIV, and chronic disease. Bring on the downvotes.

          • Soup@lemmy.world · 7 points · 9 days ago

            “Would rather have disabled people with cancer suffer and die…”

            My guy, that’s not a lack of LLM access, it’s a completely fucked US healthcare system that forces people onto the internet because they can’t get what they need from the state, you goofy-ass weirdo.

            • douglasg14b@lemmy.world · 2 points · 9 days ago

              Well yes of course but also restricting access to information machines doesn’t exactly help much either.

              • Soup@lemmy.world · 3 points · 9 days ago

                Do hallucinating LLMs, that have done such things as convince a child to commit suicide before, really count as “information machines”? The Mayo clinic website might take a single whole other braincell to read through but at least it’ll be written properly.

                I mean, the fact that you consider these programs to have enough credibility to be called “information machines” is exactly why they’re so potentially dangerous.

              • SLVRDRGN@lemmy.world · 2 points · 9 days ago

                I hate to break it to you but… they’re not really “information machines”. Google search is a better information machine.

          • badgermurphy@lemmy.world · 3 points · 8 days ago

            I think you may be falling into a false dichotomy. Not only is the choice being presented a bad one, it ignores real solutions to the root problem, leaving us to argue over the crappy “band-aid” solution to it.

            I believe that people needing health care should have no reason to ask a chat bot about their symptoms because they can ask a helpful doctor instead. The fact that they can’t do that is the problem, not their access or lack of it to the chat bot.

  • TropicalDingdong@lemmy.world · 17 points · 10 days ago

    I mean.

    Is Wikipedia responsible for you reading an article about a law and then taking that as legal advice?

    [Edit: if you are downvoting this, downvote away, but you owe an argument below as to why. I promise this exact argument will come up in the courts over this issue]

    • LNRDrone@sopuli.xyz · 18 points · 10 days ago

      Wikipedia doesn’t give “legal advice”; it has information about these laws, with the sources cited.

      That is very different from asking an LLM anything and having it throw you random bullshit from unknown sources, with no easy way to verify where it came from or whether it is at all accurate.

      • TropicalDingdong@lemmy.world · 4 points · 10 days ago

        Wikipedia doesn’t give “legal advice”; it has information about these laws, with the sources cited.

        That is very different from asking an LLM anything and having it throw you random bullshit from unknown sources, with no easy way to verify where it came from or whether it is at all accurate.

        It seems like your argument is that because Wikipedia “gets it right” and has cited sources, it isn’t liable? Which, I promise, is not how liability works.

        What if it was Wikipedia versus “some random sovcit Facebook post” then? Is the sovcit post liable because its sources are bullshit? Since their sources are random bullshit and/or unknown, do they absorb liability? Again, it’s the same case; that is not how liability works.

        People are going to have to acknowledge you can’t have it both ways.

        Also…

        with no easy way to verify where it came from or whether it is at all accurate.

        C’mon. Plenty of LLMs cite sources too, hallucinated or not, and those are easily verified. And like with Wikipedia, one could go check them.

    • Passerby6497@lemmy.world · 10 points · 10 days ago

      Wikipedia isn’t giving you advice, it’s giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

      Also, people get in trouble for giving legal advice, artificial unintelligence('s companies) should as well.

      • TropicalDingdong@lemmy.world · 3 points · 10 days ago

        Wikipedia isn’t giving you advice, it’s giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

        Okay, let’s try this then:

        Chatbots aren’t giving you advice; they’re giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

        Show me the difference.

        Also, people get in trouble for giving legal advice,

        No, they don’t, unless they are genuinely misrepresenting their positions. Sovcit influencers are well within their rights to make up all kinds of gobbly-gookey-garbage pseudo-legal advice.

        People who get in trouble are those that follow the gobbly-gookey-garbage pseudo-legal advice.

        • XLE@piefed.social · 6 points · 10 days ago

          Chat bots aren’t giving you advice, it’s giving you information.

          They aren’t giving you information either. They’re just compiling tokens.

        • MinnesotaGoddam@lemmy.world · 4 points · 10 days ago

          the difference between giving information and giving advice is context. if i know your situation, i am giving advice. if i am just talking about the law in general, i am giving information. the former, i know context. the latter, i don’t.

          • TropicalDingdong@lemmy.world · 2 points · 10 days ago

            Let’s swap out the chatbot for a sloptuber on YouTube making up stuff about sovereign citizen nonsense. How about then?

            • MinnesotaGoddam@lemmy.world · 5 points · 10 days ago

              again it’s context. specificity might be a better word? both. are they talking about someone’s specific situation or are they talking generalities? does the advice they are giving have context? some rando on youtube, if they’re making up stuff in response to people’s specific questions about their problems and “not” telling them what to do, that can fall afoul of illegal practice of law. if they’re talking about general “well you need gold fringe on your conveyor’s license because admiral keystone q transyldracula said…” in the same way some law youtubers talk about “well here’s how due process works”, it sucks but they have free speech. people are free to mislead each other, unfortunately; just, when or if you are relying on those misrepresentations for any transactions it becomes fraud (which is where misleading people becomes a crime). just some examples of the limits on free speech. again, not a lawyer, just have been too embroiled in the legal field all my life.

              • TropicalDingdong@lemmy.world · 2 points · 10 days ago

                You aren’t going to get to have it both ways. I promise you, what you are advocating for is such a profound disaster and this whole thing is being astroturfed by tech companies to goad you into limiting your own speech.

    • JoshuaFalken@lemmy.world · 9 points · 10 days ago

      I could see the argument for things that aren’t particularly important, but to continue with the legal example, it seems akin to the difference between asking a practicing lawyer a question and asking someone who watched Boston Legal when it aired and can quote James Spader.

      Unfortunately, with the potential for a hallucinatory response, anything beyond quite simplistic queries shouldn’t be relied on with more weight than a crutch of toothpicks.

      • TropicalDingdong@lemmy.world · 4 points · 10 days ago

        I don’t think you are wrong, but again, that’s not the case.

        You’re making an argument about speech here.

        Let’s say you make a fan website based entirely on a fine-tuned LLM which acts and responds as James Spader from Boston Legal. Are you liable if a user of that website construes that speech as legal advice?

        If you are willing to give up access to speech so easily, I have almost no hope for Americans in the near future.

        What laws like this do is create an incredibly high-pass filter favoring those in positions of established power. It’s literally suicidal in regards to freedom of speech on the internet.

        The right answer is that if you are dumb enough to have gotten your legal advice from an AI hallucination of James Spader, you get to absorb those consequences. The wrong answer is to tell people they aren’t allowed to build fan websites of James Spader giving questionable legal advice.

        • JoshuaFalken@lemmy.world · 5 points · 10 days ago

          Presumably such a site would be visually obvious as parody. Having it give jokey answers as a caricature would be one thing. If you dressed it up as a professional legal advice service offering opinions on criminal law from Alan Shore, that could be problematic.

          At a certain point of information sharing, we should want a high bar for the ones providing the answers. When asking nuanced questions, we should want for the answer to come from knowledge, not memory. I made an example in this other comment.

          I’m not sure I agree with your ‘right answer’ bit. Personally, I’d prefer dumb people to be protected in a similar way that I want the elderly protected from losing their savings from an email scam.

          • TropicalDingdong@lemmy.world · 2 points · 10 days ago

            I promise you, the result of this will be unlimited free speech for corporations and their LLMs, with limited and regulated free speech for you. Save or favorite the comment.

            It’s the same “protect the children” anti free speech advocacy in a different wrapper, but more appealing to this audience because “llm bad”.

            They’re using your emotional response to not liking LLMs as a tool to trick you into giving away your rights.

        • deliriousdreams@fedia.io · 3 points · 10 days ago

          In your example, say you go to a lawyer and ask legal questions. If the lawyer is not providing legal advice (i.e., taking on the role of being your lawyer and representing you in that matter), they are required by law to express that at the beginning so that they will not be held liable, because they are a legal professional.

          Wikipedia, Google, ChatGPT, etc. are not legal authorities or legal professionals.

          There is also no human entity to hold legally responsible if the LLM hallucinates or cites a source that is not factual (satire, for instance).

          We also know that the vast majority of people who use chatbots do not get the sources the answers come from.

          So: when Wikipedia presents information it is not giving legal advice. That is borne out in case law.

          The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t want to randomly trust Reddit.

          No lawyers are going to Reddit to get help writing legal briefs. We have seen lawyers using LLMs for that, though.

          • TropicalDingdong@lemmy.world · 1 point · 10 days ago

            Wikipedia, Google, chatgpt etc are not legal authorities or legal professionals.

            Yes. And neither are LLMs or their derivatives.

            The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t want to randomly trust reddit.

            And yet people do, and we accept that as a necessary consequence of maintaining free speech as a principle.

            The exact arguments being accepted in this thread are the same which led directly to crackdowns in Hungary, China, and Russia.

            If you are okay with limiting and regulating LLMs as a form of speech, I promise it’s your speech which will end up limited, and a very small number of companies will control all speech on the internet. You should stop.

            • deliriousdreams@fedia.io · 2 points · 10 days ago

              Whose speech is being limited by limiting LLMs? Because as a legal entity an LLM’s speech cannot be infringed; the LLM doesn’t have basic rights in the way that a human does.

              So what you’re saying is that you don’t want these companies to be held to any legal standard for the information they output (which is different from reddit because the companies can’t be held responsible in the US under section 230 for what their users write).

              The chatbot is the output of the company’s data set and somehow you’re saying the company can’t be held responsible for what that output is and if it’s dangerous because it’s curtailing free speech?

              That’s such an interesting take.

              • TropicalDingdong@lemmy.world · 1 point · 10 days ago

                I’m gaming out the realistic consequences of what a law like this will mean. It has nothing whatsoever to do with whether you approve of these companies or not; it’s about trying to understand the consequences of what will happen if a law like this passes. You don’t get to pick and choose whether the speech that gets limited is from an LLM, a company, or an individual. There is no difference from a legal perspective.

                And this law and this approach to limiting speech to “protect people” from the stupid consequences of their own actions aren’t new. We already know the consequences: large corporate entities will just get around them or pay an inconsequential fine, and individuals will have their rights curtailed as a result.

                The entire thread here is falling for an incredibly obvious astroturfing campaign because people associate LLMs with big bad corporations and the real damage those bad companies have wreaked. But limiting free speech on the internet won’t stop them; what it will stop is our ability to communicate and resist them.

                • deliriousdreams@fedia.io · 2 points · 10 days ago

                  You appear to have gone completely around the twist.

                  You haven’t shown a logical progression of anything you claim. You don’t point to any current legal precedent, clearly aren’t paying attention to the actual wording being used to draft this bill/law proposal, and are spreading what amounts to FUD.

                  About the only truthful logical statement you’ve made is that it’s not about whether you like or dislike these companies.

                  Companies are considered a lawful entity with rights. The Supreme Court literally just ruled that LLMs do not count as the same kind of legal entity, because if they did they’d be able to copyright their “work”. So I really do question how you think we go from that to “nobody has free speech because the LLM can’t give legal advice”.

                  Speech that causes harm has pretty much never been a protected form of speech in the US, even if I were to humor you and assume that an LLM could have the rights to it.

                  And you mean the “bad these companies have wrought”.

    • WesternInfidels@feddit.online · 5 points · 10 days ago

      Is Wikipedia responsible for you reading an article about a law and then taking that as legal advice?

      Is the U.S. House of Representatives [or any equivalent publisher of the law] responsible for you reading the text of a law itself and then taking that as legal advice?

      • TropicalDingdong@lemmy.world · 1 point · 10 days ago

        That’s a totally irrelevant comparison. There is no publisher of the law equivalent to the US House of Representatives. Nothing Wikipedia publishes has legal bearing; everything the House of Representatives publishes does.

        • WesternInfidels@feddit.online · 2 points · 10 days ago

          Your objection does nothing to address the issue you raised. Where is the line drawn between “information” and “legal advice?”

          Wikipedia and the lawmakers themselves present us with static information that is not specific to us personally or to any particular situation we may find ourselves in, and which generally does not include specific recommendations. I think most people would agree that’s just information, not advice.

          If an LLM can be coaxed into saying things like “you should,” advocating specific courses of action for your circumstances, is that legal advice? I think many of us would agree that would be unlicensed legal advice.

  • willington@lemmy.dbzer0.com · 14 points · 9 days ago

    1. Make laws against chatbots.
    2. Demand proof you are not a chatbot.
    3. Surveillance capitalism.

    The real target here is population control.

    The lawmakers, who take billionaire money by the ton and who HAVE NEVER given a shit, suddenly, NOW, want to protect the vulnerable. Abso-fucking-lutely laughable on its face.

  • henfredemars@infosec.pub · 14 points · 10 days ago

    Mixed feelings about this. Let me play devil’s advocate and say that many Americans don’t have access to these resources at all. Having potentially inaccurate resources might be better than nothing, or is that worse?

    • wewbull@feddit.uk · 10 points · 10 days ago

      There are billions being sunk into AI. How much health care could that buy? Your logic only makes sense if AI is free. It’s not.

    • JoshuaFalken@lemmy.world · 9 points · 10 days ago

      ‘Should I use one teaspoon of salt in this recipe, or two?’

      Two is ideal.

      ‘Do dogs like chicken wings?’

      Wild dogs regularly hunt small animals like hare or chicken for food.

      One of these answers results in a bad cake, the other results in a hurt dog. Potentially inaccurate answers aren’t much of a problem when the stakes are low, but even a simple question about what to feed a pet could end with a negative outcome.

      • henfredemars@infosec.pub · 5 points · 10 days ago

        Hm, good point. Perhaps the overconfidence AI might provide is even worse than knowing you don’t know.

    • Passerby6497@lemmy.world · 7 points · 10 days ago

      Having potentially inaccurate resources might be better than nothing, or is that worse?

      You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it’s safe do you eat it?

    • Catoblepas@piefed.blahaj.zone · 6 points · 10 days ago

      If you’re going to be your own lawyer or perform a bit of self surgery, there is no way the AI is helping that situation. Especially if the inherent nature of AI is to validate everything you say.

    • thisbenzingring@lemmy.today · 6 points · 10 days ago

      The AI devices will just have preambles and disclaimers and word things in ways that refer the user to human resources.

    • smh@slrpnk.net · 2 points · 8 days ago

      We had a medical scare just yesterday. I was in the ER for 8 hours with my partner over a non-life-threatening but still emergency problem.

      An ultrasound, a CT scan, and much poking and prodding later, we still don’t know what is up. The AI was at least able to predict the next steps (if A, then discharge and follow up with the PCP; if B, then surgery this week; if C, then emergency surgery), something the ER was too busy to do for several hours. It was reassuring. The AI also gave me (working) links to more thorough resources on the topic.

  • TheObviousSolution@lemmy.ca · 13 points · 10 days ago

    Just have them add a disclaimer, or have the hosts be liable for what their chatbots say. Stop adding bureaucracy that is just asking to get selectively prosecuted and abused.

    • deathbird@mander.xyz · 2 points · 10 days ago

      Section 230 of the Communications Decency Act is designed to allow platforms to exist because people can say whatever the fuck they want. But nobody should make a machine that says things they can’t control, and if you do, you need to be disciplined for such irresponsibility.

  • TrackinDaKraken@lemmy.world · 10 points · 10 days ago

    Sounds like a start. More is needed though.

    The bill targets AI chatbots that impersonate licensed professionals — such as doctors and lawyers — and bars them from providing “substantive response, information, or advice” that would violate professional licensing laws or constitute the unauthorized practice of law.

    It also mandates that chatbot owners provide “clear, conspicuous, and explicit” notice to users that they are interacting with an AI system, with the notice displayed in the same language as the chatbot and in a readable font size. However, the bill clarifies that this notice for users, which indicates that they are interacting with a non-human system, does not absolve the chatbot owners of liability.

  • melfie@lemy.lol · 9 points · 9 days ago

    In the US especially, medical professionals are overworked and simply don’t have the time and energy to properly diagnose. If you have a more complex, chronic issue, there’s a good chance you’ll be waiting months at a time to see various specialists who are only going to spend about 10 distracted minutes thinking about your case and might not have any useful insights, or they might misdiagnose you and make your condition worse. You basically have to do your own research and show them studies. If you’re a person of color or a woman, etc., there’s a good chance you won’t even be taken seriously. In an ideal world, it would work like it does on TV, but in the real world, it’s all about maximizing profits and the patients be damned. Sure, LLMs are unreliable, but they do at least provide ideas to research.

    • SaveTheTuaHawk@lemmy.ca · 9 points · 9 days ago

      That’s not why people are using chatbots; they are using chatbots because they can’t afford healthcare.

      And before we get out the tiny violins for MDs: they gatekeep the system to keep their salaries high.

      Bad news, folks: MDs are using ChatGPT on the sly.

      • melfie@lemy.lol · 5 points · 9 days ago

        they are using Chatbots because they can’t afford healthcare

        Even if they do spend their limited resources on healthcare, there’s a good chance it’s going to be a waste of money.

        before we get out the tiny violins for MDs

        A lot of MDs are pretty useless in the first place, and that’s a big part of the problem. Maximizing the patient load doesn’t help anything. Just because someone can memorize and regurgitate information well, that doesn’t mean they’re going to be effective at their job. It’s often necessary to shop around to find someone who doesn’t suck, which is especially difficult for anyone who can’t afford it.