

The carrots and greens are made out of plastic.
Which sounds like an excellent reason to go through the pain once and replace Excel with something better standardized.
But that is the entire point of responsibility diffusion. Everyone knows murder is wrong, but what about 100 people each making 1% of the decisions that lead to a loss of life? What about 10,000 each making 0.01%? At which point does an individual feel sufficiently detached from the consequences of their actions to make choices that would go beyond their own morals if responsibility were concentrated on them alone?
Our age will eventually (assuming our species survives) be known as the age of responsibility diffusion. Companies, bureaucracies, AI: they are all just different mechanisms to achieve the same thing, detaching people from any sense of responsibility for the outcomes of their horrible choices.
If anything, what we need right now is a more skeptical look at both the past and the future, as people are trying to sell us utopian visions of either as the solution to all our problems.
Most indie games work fine. It is just the AAA crap that somehow can’t be bothered to make their stuff work on anything but Windows.
Don’t anthropomorphize companies. They don’t have principles. Companies are essentially nothing but incentive structures designed to maximize profits. You wouldn’t expect an algorithm or a machine to have principles, so why would you expect that of a company?
That is before Apple levels of profit are added on, right? Otherwise that seems a bit low.
I am not scared. Well, except scared that I will have to listen to AI scam BS for the next decade the same way I had to listen to blockchain/cryptocurrency scam BS for the last decade.
It is not that I haven’t tried the tools either. They just produce extremely horrible results every single time.
“Why are we paying a human being a six figure salary when an AI is 90% as good and we pay once for the entire company?”
And if it actually were 90% as good, that would be a valid question. In reality, however, it is more like 9% as good, with occasional downward spikes toward 0.9%.
If you spend 75% of your time writing code, you are in a highly unusual coding position. Most programmers spend a very high percentage of their time understanding the problem domain, figuring out requirements, and translating them into something resembling a semi-formal understanding of what the program actually needs to do. The low-level, detailed code writing is very rarely the bottleneck.
The error rate for human employees, for the kind of errors AI makes, is much, much lower. Humans make mistakes that stay close to the intended task and very rarely produce something completely different. AI produces something completely different all the time.
Can you prove that he makes any important decisions?
Cooking meals seems like a good first step towards teaching AI programming. After all the recipe analogy is ubiquitous in programming intro courses. /s
AI is pretty good at spouting bullshit but it doesn’t have the same giant ego that human CEOs have so resources previously spent on coddling the CEO can be spent on something more productive. Not to mention it is a lot less effort to ignore everything an AI CEO says.
Honestly, AI coding assistants (as in the ones working like auto-complete in the code editor) are very close to useless, unless maybe you work in one of those languages, like Java, that are extremely verbose and lack expressiveness. I tried using a few of them for a while, but it got to the point where I forgot to turn them on a few times (they take up too much VRAM to keep running when not in use) and I didn’t even notice any productivity loss from not having them available.
That said, conversational AI can sometimes be quite useful to figure out which library to look at for a given task or how to approach a problem.
Keep worrying about entirely hypothetical scenarios of an AGI fucking over humanity; it will keep you busy so humanity can fuck itself over ten times in the meantime.
“prevents it from retaliatory actions against human rights violations”
They can’t retaliate if someone violates human rights?
Labor is a human putting in work. Fully automated production is already a thing for some goods and services today, and many others have a much, much larger automation component than they had historically.
Don’t confuse the wealth distribution mechanism (getting paid for labor) with the actual work itself.
The end result is often not important though. What is important is that someone understands the customer’s business use case well enough to judge whether the end result is actually fit for purpose, and to adjust it to accommodate later changes in the requirements. AI is particularly bad at both of those.