OpenAI’s Sam Altman Expects To Spend ‘Trillions’ On Infrastructure

Just a trillion more, bro!
The most upsetting thing about all of this is that it lays bare the truth that the reason we don't fight climate change is that there's not enough profit in it.
Capitalism needs to die, or it's going to kill us all.
Cool, no one’s running out of imaginary money yet? Too bad we’re running out of:
It's a fucking chatbot that used modern ML training methods on enormous datasets so it's slightly fancier than the ones that already existed.
They just fed it so much data that it almost appears like it knows anything, when all it does is respond to the words you give it.
They just fed it so much data that it almost appears like it knows anything,
The strawberry test shows this. Ask it directly how many r's are in 'strawberry' and it will give the correct count. Ask in a more indirect way (one unlikely to appear in the training set) and it falls over like before.
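For anyone who wants to try that probe themselves, here is a rough sketch using the OpenAI Python SDK (v1-style client). The model name is just a placeholder, the indirect wording is one example of many, and it assumes an OPENAI_API_KEY in the environment.

    # Sketch of the direct-vs-indirect probe described above.
    # Assumptions: OpenAI Python SDK (v1 client), OPENAI_API_KEY set,
    # and "gpt-4o-mini" as a stand-in for whichever model you test.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        # Direct phrasing -- close to things that appear in training data.
        "How many times does the letter 'r' appear in the word 'strawberry'?",
        # Indirect phrasing -- same question, worded in a way that is less
        # likely to match anything memorised.
        "If I delete every letter in 'strawberry' that is not an 'r', "
        "how many characters are left?",
    ]

    for prompt in prompts:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt)
        print("->", reply.choices[0].message.content)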
People need to realise this point. The difference between previous models and new ones depends so much on the amount of data they've eaten.
and the seed lottery. You can see this if you try training a simple network with two inputs to learn XOR. It can converge in multiple ways, and sometimes it converges to a really bad approximation. Sometimes it doesn't converge at all (or it converges so slowly that it might as well be considered not to converge). And even then it might still converge to an approximation that's more accurate on one side of the input space than the other. Tons of ways to get an undesirable result, even for a simple 2-input network.
Imagine how unlikely it is for these models to actually converge to the optimal thing. And how often the training is for nothing.
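You can watch the seed lottery happen with a few lines of numpy: train the same tiny 2-2-1 sigmoid network on XOR from different random seeds and compare the final error. Everything here (hidden size, learning rate, step count) is an arbitrary choice for illustration, and which seeds get stuck depends on those choices.

    # Seed lottery on XOR: identical network, identical data, identical
    # training -- only the random initialisation differs per seed.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_xor(seed, hidden=2, lr=1.0, steps=5000):
        rng = np.random.default_rng(seed)
        W1 = rng.normal(size=(2, hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(size=(hidden, 1))
        b2 = np.zeros(1)
        for _ in range(steps):
            # forward pass
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # backprop for mean-squared error, done by hand
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out / len(X)
            b2 -= lr * d_out.mean(axis=0)
            W1 -= lr * X.T @ d_h / len(X)
            b1 -= lr * d_h.mean(axis=0)
        # evaluate with the final weights
        out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
        return float(np.mean((out - y) ** 2))

    for seed in range(8):
        print(f"seed {seed}: final MSE = {train_xor(seed):.4f}")
    # Typically some seeds reach near-zero error while others stall on a
    # much worse plateau or fit one corner of the input space better than
    # another -- the exact split depends on the hyperparameters above.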
Can't wait to see angel investors asking for their money back. OpenAI has no exit strategy.
sure hope the homeless are hungry. because they're going to eat so much AI bullshit they'll never ask for anything again.
The worst part about tech innovation is that it increases the divide between rich and poor. Poor people don’t have access to AI the way the rich do, but they are affected more by its side effects.
if anything I would say they're negatively impacted far more than the wealthy benefit.
if an entrepreneur increases their wealth by a factor of 2, the disenfranchised actually lose by a factor of 4-6, not 2. they lose the ability to compete in a job market that was already against them. they lose the ability to gain financial freedom, because they can no longer "slip through the paperwork" when a person believes they are capable. they live in an environment that is continuously degraded by the toll such technology takes on the world's resources. finally, they are negatively impacted because all of the previously mentioned negatives are compounded across their support network, and they are no longer able to find stability or help in moving forward.
at this point anyone who supports AI as a company is my enemy and is working towards destroying everything and everyone that doesn't own a piece of AI.
innovation [...] AI
[citation needed]
You know what?
If AI is in any way unbiased and is intelligent, it will focus on public transport over cars.
It would tell people to build trains and metro systems, build frequent bus services, and even high-speed rail.
No person with the money to do it would ever follow its advice.
But it would be funny.
If AI is in any way unbiased and is intelligent,
It is not. It just vomits out a slightly randomized average response based on what is fed into it, which will mostly be pro-car stuff because that is what exists.
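To make "slightly randomized average response" concrete, here is a toy sketch: the model samples from a probability distribution shaped by whatever text it was fed, so if the corpus skews pro-car, so does the output. The vocabulary and scores below are invented purely for the example.

    # Toy next-token sampling: the "opinion" is just a draw from a softmax
    # over scores learned from the training corpus (here, made-up numbers
    # skewed towards car-related text, mirroring the web).
    import numpy as np

    rng = np.random.default_rng(0)

    vocab = ["car", "highway", "parking", "train", "bus"]
    scores = np.array([4.0, 3.5, 3.2, 1.0, 0.8])  # hypothetical learned scores

    def sample_next(temperature=0.8):
        probs = np.exp(scores / temperature)
        probs /= probs.sum()
        return rng.choice(vocab, p=probs)

    print([sample_next() for _ in range(10)])
    # Mostly car/highway/parking, with the odd train or bus -- a randomized
    # average of the corpus, not a reasoned position on transport policy.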
If AI is in any way unbiased and is intelligent,
It can be as unbiased as the data it uses, which ranges from human rights promotion to support for ethnic cleansing. Another major problem is that the bias can be changed by the people training the model. For example, Grok is not allowed to call the genocide in Gaza a genocide.
Which is why I own nvidia stock.
Whoops. They misspelled the word 'waste'.