You can take “justifiable” to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.

    • AA5B@lemmy.world · 1 day ago

      To do what? I’m fairly optimistic about narrower LLMs embedded into tools. They don’t need to be as compressive, so they’re more easily self-hosted. For more complex tools, they can tie together search, database queries, and reporting, and make it easier to find a setting when you don’t know the right terminology for it.

      I’ve had some luck self-hosting a small AI to interpret natural language voice commands for home automation.
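
      For anyone curious, here’s a minimal sketch of what that kind of setup can look like, assuming a local Ollama server on its default port (11434) and using its `/api/generate` endpoint. The model name `llama3.2` and the JSON intent schema are illustrative choices, not anything specific to my setup:

      ```python
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

      PROMPT_TEMPLATE = (
          "You are a home-automation intent parser. Reply with ONLY a JSON object: "
          '{"action": "turn_on" or "turn_off", "device": "<device name>"}.\n'
          "Command: "
      )

      def build_prompt(command: str) -> str:
          """Wrap the spoken command in the parsing instructions."""
          return PROMPT_TEMPLATE + command

      def parse_intent(model_reply: str) -> dict:
          """Pull the JSON intent out of the model's reply, tolerating extra chatter."""
          start = model_reply.find("{")
          end = model_reply.rfind("}")
          if start == -1 or end == -1:
              raise ValueError("no JSON object in model reply")
          return json.loads(model_reply[start : end + 1])

      def ask_ollama(command: str, model: str = "llama3.2") -> dict:
          """Send the command to the local Ollama server and return the parsed intent."""
          body = json.dumps(
              {"model": model, "prompt": build_prompt(command), "stream": False}
          ).encode()
          req = urllib.request.Request(
              OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
          )
          with urllib.request.urlopen(req) as resp:
              reply = json.load(resp)["response"]
          return parse_intent(reply)

      if __name__ == "__main__":
          print(ask_ollama("hey, switch off the kitchen lights"))
      ```

      In practice you’d map the returned intent onto your automation platform’s entities; small local models handle this fine because the output space is so constrained.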

      • epicshepich@programming.dev · 1 day ago

        Yeah, all of your use cases are ones I see as positive for LLMs. I’ve got an Ollama instance hooked up to Home Assistant, but it doesn’t work very well, haha. I haven’t had the time to troubleshoot it.

      • epicshepich@programming.dev · 2 days ago

        Can the rubber-ducky use case really be considered plagiarism? I think it’s unequivocal that the models were trained on copyrighted data in a way that, if not illegal, is at the very least unethical. But letting AI write things for you seems a lot more problematic than using it to bounce ideas off of or talk things through.

        • goat@sh.itjust.works · 2 days ago

          Plagiarism if it uses art, yeah.

          For LLMs, not so much, since you can’t really own Reddit comments.