Going through work email I saw a link to an article about Quantum-AI. It was behind a paywall, and I am not paying to read about how woo+woo=woo^2. What do you do when your bubble isn't inflating anymore? Couple it with another stale bubble!
Tried to read that on a train. Resulted in a nap. Probably more productive use of time anyway.
Not surprised. Making Hype and Criti-hype the two poles of the public debate has been effective in corralling people who get that there is something wrong with "AI" into Criti-hype. And politicians need to be generalists, so the trap is easy to spring.
Still, always a pity when people who should know better fall into it.
Does moral cowardice matter in someone teaching ethics? Yes, just as much as physical cowardice matters for a lifeguard. (The other way around is fine.)
Does he express his ideas and teachings as something it would be good if people did, while he himself totally wouldn't if it caused him a smidgen of inconvenience? If so, we now know that he was lying. Which matters if your moral framework cares about truth.
If you have to read his works for some reason, do it with open eyes and try to figure out who and what he is lying in service of.
My argument is that if he hasn't spoken out on Gaza, if he hasn't urged people to do what he thinks would be the best way to stop the genocide, then he is either a fool who can't see what is in front of him or a moral coward who can't act on his convictions.
Either way it makes him a poor ethics philosopher. We can be pretty sure that unless he himself is an experienced lifeguard, he would in fact not dive into the river to save the child.
There is a genocide going on right now in Gaza. Has Singer, the great utilitarian, said anything about how the common man should act to stop it?
Is it more effective to protest or block ports or destroy weaponry? Do we have a moral obligation to overthrow governments supporting genocide, in particular if that government is in our country? If we come across one of the perpetrators of the genocide do we have a moral obligation to do something?
Or are these all too uncomfortable questions, while the donation habits of the middle class are comfortable ones?
That was entertaining!
Get well soon! Drink lots of fluids and watch some good movies (the non-AI kind).
Get 2 and the plane will be 120% as good!
In fact, if children with AI are a mere 1% as good, a school with 150 children can build a plane 150% as good!
I am sure this is how project management works, and if it is not maybe Elon can get Grok to claim that it is. (When not busy praising Hitler.)
Leave it as it is then, I think it works.
Doing another round of thinking, the insistence that "AI is here to stay" is itself a sign of how this is a bubble that needs continuous hype. Clocks are also here to stay, but nobody needs to argue that they are. How was it Tywin Lannister put it - if you have to tell people you are the king, you are not a real king?
Some of the worst people you know are going to pivot to "See, AI is useful for cancer doctors, that is what I've been saying the whole time. Sentient chatbots? I haven't written those specific words, you must be very bad at reading. Now, let's move on to Quantum!"
Prices ranging from 18 to 168 USD (why not 19 to 199? Number magic?). But then you get an integrated approach of both Western and Chinese physiognomy. Two for one!
Thanks, I hate it!
The ideas are in general good.
I think the long-term cost argument could be strengthened by saying something about DeepSeek's claims to run much cheaper. If there is anything to say about that, I have not kept track.
The ML/LLM split argument might benefit from being beefed up. I saw a funny post on Tumblr (so good luck finding that again) about pigeons being taught to identify cancer cells (a real thing, according to the post; I haven't verified it) and how, while that is a thing, you wouldn't leap to putting a pigeon in charge of checking CVs and recommending hires. The post was funnier, but it got to the critical point of what statistical relationships can reasonably be used for and what they can't, which becomes obvious when it is a pigeon instead of a machine. Ah well, you can beef it up in a later post, or maybe you intended to link an already existing one. There is value in being concise instead of rambling like I am doing here.
Here's the WSJ article on Archive: https://archive.ph/kS9Dx
Useful as a mainstream source for people in general hating AI.
How appropriate with the German YouTube extract, considering that German dialogue with a laugh track is as good as tense dialogue in English. At least according to Veo!
From experience in an IT department, I would say mainly a combination of management pressure and the need to keep security problems manageable by choosing which AI tools to push on users before too many of them start using third-party tools.
Yes, they will create security problems anyway, but maybe, just maybe, users won't copy-paste sensitive business documents into third-party web pages?
Can't they just re-release Kris i befolkningsfrågan? Tried and tested solutions like full employment policies, cheap housing, and more support and money for parents.
Or are kids not all that important if it means having to improve conditions for ordinary people?
Clever. Writing up my pitch to OpenAI...
I started thinking about what kind of story you could tell with these impressive but incoherent bits. It wouldn't be a typical movie, but there's got to be a ton of money willing to back any movie that can claim to be "made with AI".
One would have to start from the technical limitations. The characters are inconsistent, so in order to tell any story one would need something the technology can deliver at least a high percentage of the time to identify protagonist and antagonist. Perhaps hats in different colours? Or film the protagonist and antagonists against a green screen and put them in the clips? (That is cheating, but of course they would cheat.)
So what kind of story can you tell? A movie that perhaps has a lot of dream sequences? Or a drug trip? It would be very niche, but again the point would just be to be able to claim "made with AI".
I think in most EU countries - after lobbying from US copyright corporations - it is explicitly banned to make copies from an illegal original. This was done in order to criminalise downloads from torrents whether you seed or not. And the potential punishment typically involves jail sentences, in order to give the police access to the surveillance necessary to prove the crime. Plus copyright violation being the only crime that in all EU countries also yields punitive damages.
Now, I know this because I was against every single one of these disproportionate laws, but some copyright organisations over here should know it too. Just saying it would be fun if Meta got to pay out punitive damages. And even funnier if Zuckerberg got some jail time.
My suspicion, my awful awful newfound theory, is that there are people with a sincere and even kind of innocent belief that we are all just picking winners, in everything: that ideology, advocacy, analysis, criticism, affinity, even taste and style and association are essentially predictions. That what a person tries to do, the essential task of a person, is to identify who and what is going to come out on top, and align with it. The rest—what you say, what you do—is just enacting your pick and working in service to it.
Maybe. But I would counter that it comes down to their cynicism. Deep down they know their lies aren't true; they just consider lying in service of power a natural thing.
As an example, witness one Matthew Miller (the Biden press conference guy) who, after smirking his way through lies about how Israel is totally going to investigate itself after the latest atrocity, has now appeared in an interview saying he was just representing the administration, that it wasn't his own view. He knew he was lying in service of at least atrocities (he isn't ready to admit it is a genocide); he just considers that natural.
It appears he has stopped smirking; I guess that was his tell that he was lying.

Customer service sucks, chatbots must be the solution
Capgemini has polled executives, customer service workers and consumers (but mostly executives) and found out that customer service sucks, and working in customer service sucks even more. Customers apparently want prompt solutions to their problems. Customer service personnel feel they are put in a position of having to upsell customers. For some reason this makes both sides unhappy.
Solution? Chatbots!
There is some nice rhetorical footwork going on in the report, so it was presumably written by a human. By conflating chatbots and live chat (you know, with someone actually alive) and never once asking whether the chatbots can actually solve the problems with customer service, they come to the conclusion that chatbots must be the answer. After all, lots of the surveyed executives think they will be the answer. And when have executives ever been wrong?

The role of the consumer in late stage capitalism
This isn't a sneer, more of a meta take. Written because I'm sitting in a waiting room and am a bit bored, so I'm writing from memory; no exact quotes will be had.
A recent thread mentioning "No Logo" in combination with a comment in one of the mega-threads that pleaded for us to be more positive about AI got me thinking. I think that in our late stage capitalism it's the consumer's duty to be relentlessly negative, until proven otherwise.
"No Logo" contained a history of capitalism and how we got from a goods-based industrial capitalism to a brand-based one. I would argue that "No Logo" was written at the end of a longer period that contained both of these, the period of profit-driven capital allocation. Profit, as everyone remembers from basic Marxism, is the surplus value the capitalist acquires through paying less for labour and resources than the goods (or services, but Marx focused on goods) are sold for. Profits build capital, allowing the capitalist to accrue more and more capital a