
MO
Posts: 2 · Comments: 274 · Joined: 2 yr. ago

  • I have so far seen two working AI applications that actually make sense, both in a hospital setting:

    1. Assisting oncologists in reading cancer images. It's still the oncologists who make the call, but it seems to be of use to them.
    2. Creating a first draft when transcribing dictated notes. Listening and correcting is apparently faster for most people than listening and writing from scratch.

    These two are nifty, but they don't make a multi-billion-dollar industry.

    In other words, the bubble is bursting and the value/waste ratio looks extremely low.

    Say what you want about the Tulip bubble, but at least tulips are pretty.

  • Here it sounds like he is criticising the parliamentary system, where the legislature elects the executive, instead of direct election of the executive. Of course, both parliamentary and presidential (and combined) systems use a number of voting systems. The US famously does not use FPTP for presidential elections, but instead uses an electoral college.

    So, to be very charitable, he means a parliamentary system where it's hard to depose the executive. I don't think any parliamentary system requires 60% (presumably of votes or seats in parliament) to depose a cabinet leader, mostly because once you have 50% aligned against the cabinet leader, you presumably have an opposition leader with a potential majority. So 60% is stupid.

    If you want a combined system where parliament appoints but can't depose, Suriname is the place to be. Though of course they appoint their president for a term, not indefinitely. Because that's stupid.

    To sum up: stupid ideas, expressed unclearly. Maybe he should have gone to high school.

  • If you mean swapped for a worker in a low-wage country cosplaying as AI for minimum wage for a billion-dollar company, then you have a point. Though using Bostrom's positive reinforcement bullshit is the opposite of treating someone fairly.

    But I see elsewhere that you didn't mean that.

  • So on the one hand the CEOs want their minions back in the office, and on the other they want to replace them with AIs?

    Sounds like a conundrum. Or a business opportunity!

    Presenting Srvile! The brand-new Servility-as-a-Service company, with AI-powered robots that will laugh at all the boss's jokes at the water cooler and say things like "That is such a great idea, boss! Since I am an AI I can't realise that you are just regurgitating what you read on Xshitter!" and "We certainly need more AI to solve any problem!"

    Call now to order!

    (AI may at times be enhanced by remote human control for "quality control". Actual level of servility may vary and is not guaranteed.)

  • They are both stupid men who repeat stuff they hear to make themselves look good. So the question is: who are the "very smart people" this time telling numbnuts like these two that nuclear war is survivable, and by extension winnable? Because if that is the US defense establishment, then yeah, we might be cooked.

  • I happened to come across an article mentioning the Robinson–Patman Act (from 1936) in relation to wage fixing by algorithm.

    From Wikipedia: "a United States federal law that prohibits anticompetitive practices by producers, specifically price discrimination"

    It might be relevant here. Obviously I am not a US lawyer specialised in monopoly law.

  • We could have a whole discussion about geopolitics, but let's not. This is, after all, a thread about the AI bubble and what comes next.

    The 2% target is an economic expenditure target, not a military readiness target. I think it is kind of obvious that the West is supply-constrained in arms, so what happens if every state tries to increase expenditure is that arms become more expensive. Profits go up, stock prices go up, and presto, you have a possible foundation for a new bubble.

  • On 1, I hope you are right that the AI bubble will burst soon.

    I am less certain where the next bubble will be, but pretty certain there will be one. We have seen bubble after bubble during the neoliberal era, where hot money inflates valuations in a sector, sells it as a success and cashes out, leaving the bag with banks, governments, pension funds or households. Then it crashes, causing more or less widespread devastation. But those that started the process are now richer and have more money to push into the next bubble, preferably something that is already growing.

    So, apart from AI, what is growing now? Weapons manufacturers seem to be doing very well, and weapons and AI are also connected. So my prediction is that the next bubble will be weapons-related, probably focused around AI-powered drones. As the US is pressuring NATO governments to increase weapons spending, money will pour in directly from governments to the corporations. As long as the threat of an outbreak of peace can be averted, money will keep rolling in.

  • LLMs just train on which words follow which, right?

    So if the training version of a text changes every other word, it should mess with them. And if you change every other word to "communism", the model should learn that the word "communism" follows logically after most words.

    Just spitballing here (something like the sketch below), but I would find it rather funny to turn the robots they intend to replace workers with into communist agitators.
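
    For concreteness, here is a minimal sketch of that "every other word" swap in hypothetical Python; the function name and the sample sentence are mine for illustration, not from any actual data-poisoning tool:

    ```python
    # Hypothetical sketch of the "every other word" poisoning idea above.
    def poison(text: str, token: str = "communism") -> str:
        """Replace every other word in `text` with `token`."""
        words = text.split()
        return " ".join(
            token if i % 2 else word  # swap only the odd-indexed words
            for i, word in enumerate(words)
        )

    print(poison("Workers of the world unite you have nothing to lose"))
    # -> Workers communism the communism unite communism have communism to communism
    ```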

  • Good piece.

    I would add scammers to the list, and I don't mean fake AI, as the public at large isn't aware of just how much "AI" is just off-shoring to someone in a country with lower wages. I mean scam emails, posts, DMs, what have you. Creating the bullshit text to lure the victim has never been easier.

    I also think that, unfortunately, this is one sector where AI might very well be profitable even when it has to carry its real costs.

  • To me, the most sneerable thing in that article is where they assume a mechanical brain will evolve from ChatGPT and then assume a sufficiently large quantum computer to run it on. And then they start figuring out how to port the future mechanical brain to the quantum computer. All to be able to run an old thought experiment that I, at least, understood as highlighting the absurdity of focusing on the human brain's part in the collapse of a wave function.

    Once we build two trains that can run near the speed of light, we will be able to test some of Einstein's thought experiments. Better get cracking on how we can get enough coal onboard to run the trains long enough to get the experiments done.