

Yeah, considering it is not impossible to geoblock per instance, they could.
Not a single part of your answer is about how the brain works.
Concepts are not things in your brain.
Consciousness is a concept. It doesn’t exist in your brain.
Thinking is how a human uses their brain.
I’m asking about how the brain itself functions to interpret natural language.
That doesn’t answer the question you quoted.
Of course the “understanding” of an LLM is limited. The entire technology is new, and it’s nowhere close to being able to understand at the level of a human.
But I disagree with your understanding of how an LLM works. At its lowest level, it’s a bunch of connected artificial neurons, not that different from a human brain. Now please don’t read this as me saying it’s as good as a human brain. It’s definitely not, but its inner workings are not that far off. As a matter of fact, there is active effort to make artificial neurons behave as closely as possible to human neurons.
If it were just statistics, it wouldn’t be so difficult to look at the trained model and identify what does what. But just like with the human brain, it is incredibly difficult to understand; we just have a general idea.
So it does understand, to a limited extent. Just like a human, it won’t understand what it hasn’t been exposed to. And unlike a human, it is exposed to a very limited set of data.
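To make the “bunch of connected artificial neurons” point concrete, here’s a minimal sketch of a single artificial neuron: a weighted sum of inputs pushed through a nonlinearity. Everything in it (the numbers, the sigmoid choice) is illustrative, not taken from any actual model; the point is that an LLM is, at the bottom, billions of these wired together, and whatever it “knows” lives in trained weight values rather than in rules you can read.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, squashed by a sigmoid.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Toy call with made-up weights; in a real network the weights come from
# training, not hand-written logic, which is why inspecting a trained model
# to see "what does what" is so hard.
print(neuron([0.2, 0.7, 0.1], [1.5, -2.0, 0.3], 0.05))
```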
You’re locating the difference between a human’s “understanding” and an LLM’s “understanding” in the meaning of the word “understanding” itself, which is just a shortcut to say they can’t be compared. The actual difference is in the scope of the understanding.
A lot of the effort in the AI field revolves around imitating the human brain. Which makes sense, as it is the only thing we know of that is capable of doing what we want an AI to do. LLMs are no different, but their scope is limited.
They are only going into technical detail on one side of the comparison, which makes the entire discussion pointless. If you’re going to compare the understanding of a neural network with the understanding of a human brain, you have to go into depth on both sides.
Mysticism? Lmao. Where? Do you know what the word means?
You’re entering a debate that is more philosophical than technical, because for this point to make any sense, you’d have to define what “understanding” language means for a human at a level as low as what you’re describing for an LLM.
Can you really claim that what a human brain does to understand language is so different from what an LLM does?
I’m not saying an LLM is smart, but saying that it doesn’t understand, when having computers “understand” natural language is the core of NLP, is meh.
That is actually incorrect. It is also a language understanding tool. You don’t have an LLM without NLP. NLP includes processing and understanding natural language.
Alright, I read your article. All it says is that there is no study determining a threshold. That’s your source?
Meanwhile, here is the ECHA page for ethanol, the alcohol most present in alcoholic beverages and the only one “safe” for consumption. There you will find various toxicity thresholds established by studies, although none on humans. But unless you are willing to argue that humans don’t have thresholds for alcohol while mice, rats and monkeys do, that doesn’t change the point.
No need to form a religion, it’s just documented science.
Rather than hailing me, you could learn a bit about toxicology. Because the fact that everything has a threshold is pretty basic.
Everything has a threshold from a toxicology point of view.
Absolutely. Every. Single. Substance.
I haven’t read the article you linked, but it does not matter, as a drop is not an indivisible unit of alcohol. It could already be above the threshold.
If your body accidentally absorbs a single molecule of ethanol, you’ll be just fine.
Obesity is a chronic, complex disease. It is not a choice. Keep your ignorance to yourself instead of using it to fuel liberty-killing ideas.
There aren’t really “safe” levels for a toxin
There are, actually. Everything is toxic if you take enough of it. The only difference between what we call “toxic” and what we don’t is that the former has a very low threshold before it harms us.
Now I’m not here to defend alcohol, but that statement is simply wrong.
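To spell out what “threshold” means here, a tiny sketch (the numbers are placeholders I made up, not real toxicology data): the same dose-versus-threshold comparison applies to every substance, and the only thing that changes between a “toxic” and a “non-toxic” one is how low that threshold sits.

```python
# Placeholder thresholds, NOT real toxicology data; purely to show that
# "toxic" is a question of dose versus threshold for any substance.
def exceeds_threshold(dose_mg_per_kg, threshold_mg_per_kg):
    return dose_mg_per_kg > threshold_mg_per_kg

# A substance we label "toxic" just has a much lower threshold than one we don't.
print(exceeds_threshold(dose_mg_per_kg=10, threshold_mg_per_kg=0.5))   # True
print(exceeds_threshold(dose_mg_per_kg=10, threshold_mg_per_kg=5000))  # False
```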
I’ve heard it before, so I’d say yes.
It can definitely break when mishandled. With kids you probably don’t want glass furniture.
Let’s also not forget that the only reason he is even trying to negotiate an end to the war is to get the Nobel Peace Prize.
Piece of shit. The US really outdid themselves reelecting that asshole.
Good luck having that shitty tech win over Europe, where fiber is proliferating particularly quickly. We all know satellite internet cannot come close to the speed and reliability of fiber.
Plus we hate Musk.
It’s good for remote areas and at sea, it’s shit everywhere else
It does, but a significantly lower amount than most alternatives (besides burial and similar full body disposals of course).
But indeed burial at sea is even less impactful!
Aquamation is, as far as I know, the most sustainable way to have your remains disposed of, and even reused. You may want to look into that!
For a small to medium company it’s not necessarily worth it.
“No-code” software is a very specific category of software that aims to enable users to build something that is usually built by a dev, without needing one.
And while “no-code” can be a weird name, it makes sense when you read the definition I just gave. Just like “serverless” does not mean there is no server involved (obviously), but simply means you don’t even need to think about the server part.
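For the “serverless” comparison, here’s roughly what that looks like in practice: an AWS Lambda-style Python handler (the names and event shape here are illustrative). There is obviously still a server somewhere, but you only write and ship the function; the provider worries about the machines it runs on.

```python
# AWS Lambda-style handler sketch (illustrative): you never provision or
# manage a server; the platform invokes this function on demand.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```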