• Get good.
  • Because there's a ton of research indicating we adapted to do it for good reasons:

    Infants between 6 and 8 months of age displayed a robust and distinct preference for speech with resonances specifying a vocal tract that is similar in size and length to their own. This finding, together with data indicating that this preference is not present in younger infants and appears to increase with age, suggests that nascent knowledge of the motor schema of the vocal tract may play a role in shaping this perceptual bias, lending support to current models of speech development.

    Stanford psychologist Michael Frank and collaborators conducted the largest-ever experimental study of baby talk and found that infants respond better to baby talk than to normal adult chatter.

    TL;DR: Parents who are snobs about baby talk are actually harming their kids' developmental process.

    4
  • PlayStation Will Use AI and Machine Learning to Speed up Game Development
  • That's definitely one of the ways it's going to be applied.

    The bigger challenge is union negotiations around voice synthesis for those lines, but that will eventually get sorted out.

    It won't be dynamic unless the game is live service, but you'll have significantly more fleshed-out NPCs by the next generation of open world games (around 5-6 years from now).

    Games arriving earlier than that will be somewhat enhanced, but not built from the ground up with it in mind the way the next generation will be.

    1
  • lately it's been feeling like that
  • Wait until it starts feeling like revelation deja vu.

    Among them are Hymenaeus and Philetus, who have swerved from the truth, saying resurrection has already occurred. They are upsetting the faith of some.

    - 2 Tim 2:17-18
    6
  • Why are people seemingly against AI chatbots aiding in writing code?
  • I'm a seasoned dev and I was at a launch event when an edge case failure reared its head.

    In less than half an hour after pulling out my laptop to fix it myself, I'd used Cursor + Claude 3.5 Sonnet to:

    1. Automatically add logging statements to help identify where the issue was occurring
    2. Apply a fix once I'd told it the identified issue
    3. Remove the logging statements, after which I pushed the update

    I never typed a single line of code and never left the chat box.

    My job is increasingly becoming Henry Ford drawing the 'X' and not sitting on the assembly line, and I'm all for it.

    And this has only become possible in just the last few months.

    We're already well past the scaffolding stage. That's old news.

    Developing has never been easier or more plain old fun, and it's getting better literally by the week.

    Edit: I agree about junior devs not blindly trusting them though. They don't yet know where to draw the X.

    3
  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • Actually, they are hiding the full CoT sequence outside of the demos.

    What you are seeing there is a summary, but because the actual process is hidden, it's not possible to see what actually transpired.

    People are not at all happy about this aspect of the situation.

    It also means that model context (which research has shown to be much more influential than previously thought) is now in part hidden, with exclusive access and control by OAI.

    There are a lot of things to focus on in that image, and "hur dur, the stochastic model can't count letters in this cherry-picked example" is the least among them.

    0
  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • Yep:

    https://openai.com/index/learning-to-reason-with-llms/

    First interactive section. Make sure to click "show chain of thought."

    The cipher one is particularly interesting, as it's intentionally difficult for the model.

    The tokenizer famously gets in the way of letter-level counting, which is why previous models can't count the number of r's in "strawberry."
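
    To make that concrete, here's a minimal sketch using OpenAI's tiktoken library (assuming the cl100k_base encoding used by GPT-4-era models; the printed split is illustrative): the model works with opaque multi-character token IDs, never individual letters.

    ```python
    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era byte-pair encoding
    tokens = enc.encode("strawberry")
    pieces = [enc.decode_single_token_bytes(t) for t in tokens]
    print(pieces)  # e.g. [b'str', b'aw', b'berry']: letters are fused into
                   # chunks, so counting r's means reasoning about units the
                   # model never sees spelled out letter by letter
    ```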

    So the cipher depends on two-letter pairs, and you can see how it screws up the tokenization around the "xx" at the end of the last word and gradually corrects course.

    It will help clarify how, behind the scenes, the model goes about solving something like the example I posted earlier.

    7
  • OpenAI releases o1, its first model with ‘reasoning’ abilities
  • I'd recommend that everyone saying "it can't understand anything and can't think" take a look at this example:

    https://x.com/flowersslop/status/1834349905692824017

    Try to solve it after seeing only the first image, before you open the second and see o1's response.

    Let me know if you got it before seeing the actual answer.

    -3
  • Jet Fuel
  • I fondly remember reading a comment in /r/conspiracy on a post claiming a geologic seismic weapon brought down the towers.

    It just tore into the claims, citing all the reasons this was preposterous bordering on batshit crazy.

    And then it said "and your theory doesn't address the thermite residue," going on to reiterate the commenter's own wild theory.

    It was very much a "don't name your gods" moment that summed up the sub: a lot of people in agreement that the truth was out there, but bitterly divided as to what it might actually be.

    As long as they only focused on generic memes of "do your own research" and "you aren't being told the truth" they were all on the same page. But as soon as they started naming their own truths, it was every theorist for themselves.

    70
  • The $700 PS5 Pro doesn’t come with a disc drive
  • They got off to a great start with the PS5, but as their lead grew over their only real direct competitor, they became a good example of the problems with monopolies all over again.

    This is straight-up the PS3 launch all over again, as if they learned nothing.

    Right on the tail end of a horribly mismanaged PSVR 2 launch.

    We still barely have any current-gen-only games, and a $700 price point is insane when the library that actually makes use of the hardware is so small.

    3
  • I suppose it's not *mandatory*
  • Ever notice how, right before they get referred to as the Sea Peoples, a bunch of the Anatolian tribes get captured in 12 groups at the end of the Battle of Kadesh and brought into Egyptian captivity?

    And that in their first mention of them as Sea Peoples, Egypt remarks that they have no foreskins (as opposed to the partial/dorsal circumcision popular in Egypt at the time)?

    Where did Ramses III allegedly forcibly relocate them? Southern Levant?

    Isn't that where there's a later cultural history whose earliest dated sections include a song about how one of their tribes "stayed on their ships"? That's even the same tribe that is later referred to as trading with Tyre in goods native to the Adana area of Anatolia, alongside the Greeks right next to them.

    It's also the same group where, in the early Iron Age layer of the city named after them, Tel Dan, there's Aegean-style pottery made with local clay.

    This local cultural tradition makes frequent reference to a "land of milk and honey," even though only one apiary has ever been found in the region - one that was importing Anatolian bees for its hives and regularly requeening them. That site is also one of the earliest places a four-horned altar (a feature of later Israelite shrines) is found. I wonder if they knew the queen was a female, and if that had anything to do with why the alleged author of the aforementioned song, their leader and prophet, was a woman named 'bee.'

    Of course, the apiary gets destroyed around the time that cultural history claims a guy deposed his grandmother, the Queen Mother, and took power, instituting the first of a series of later patriarchal reforms.

    Gee, I wonder if maybe there was something to all that, and if it maybe left a mark in other ways.

    12
  • AI worse than humans in every way at summarising information, government trial finds
  • Meanwhile, here's an excerpt of a response from Claude Opus when I tasked it with evaluating intertextuality between the Gospel of Matthew and the Gospel of Thomas from the perspective of entropy reduction through redactional effort (given human difficulty with randomness) - an approach that doesn't exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical, and even that lacked the specific details - on page 300 of a chat about completely different topics:

    Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It's also worth noting that Claude 3 Opus doesn't have the full text of the Gospel of Thomas accessible to it, so it has to reason through entropic differences primarily from the intertextual overlaps that have been widely discussed in consensus literature and are thus accessible.)
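
    For anyone curious what that framing means concretely, here's a minimal sketch of the underlying measurement (the parallel "sayings" are hypothetical placeholders, not real gospel text): the hypothesis is that redaction reduces entropy because humans are bad at producing randomness, so comparing the entropy of parallel passages could hint at the direction of dependence.

    ```python
    from collections import Counter
    from math import log2

    def shannon_entropy(text: str) -> float:
        """Character-level Shannon entropy in bits per character."""
        counts = Counter(text)
        total = len(text)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    # Hypothetical parallel sayings (placeholders, not actual gospel text):
    version_a = "no prophet is welcome in his own hometown"
    version_b = "no prophet is welcome in his hometown and no physician heals those who know him"

    for label, text in (("A", version_a), ("B", version_b)):
        print(label, round(shannon_entropy(text), 3))
    ```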

    9
  • AI worse than humans in every way at summarising information, government trial finds
  • This describes pretty much every study right now, given how fast things are accelerating. Even just six months can mean a dramatic difference in capabilities.

    For example, Meta's Llama 3 405B shows some of the strongest situational awareness among current models, a capability that isn't present to anywhere near the same degree in Llama 2 70B or even Llama 3 70B.

    3
  • Deep thoughts.
  • Lucretius, in De Rerum Natura (c. 50 BCE), had a few that were just a bit ahead of everyone else's, owed to the Greek philosopher Epicurus.

    Survival of the fittest (book 5):

    "In the beginning, there were many freaks. Earth undertook Experiments - bizarrely put together, weird of look Hermaphrodites, partaking of both sexes, but neither; some Bereft of feet, or orphaned of their hands, and others dumb, Being devoid of mouth; and others yet, with no eyes, blind. Some had their limbs stuck to the body, tightly in a bind, And couldn't do anything, or move, and so could not evade Harm, or forage for bare necessities. And the Earth made Other kinds of monsters too, but in vain, since with each, Nature frowned upon their growth; they were not able to reach The flowering of adulthood, nor find food on which to feed, Nor be joined in the act of Venus.

    For all creatures need
    Many different things, we realize, to multiply
    And to forge out the links of generations: a supply
    Of food, first, and a means for the engendering seed to flow
    Throughout the body and out of the lax limbs; and also so
    The female and the male can mate, a means they can employ
    In order to impart and to receive their mutual joy.

    Then, many kinds of creatures must have vanished with no trace
    Because they could not reproduce or hammer out their race.
    For any beast you look upon that drinks life-giving air,
    Has either wits, or bravery, or fleetness of foot to spare,
    Ensuring its survival from its genesis to now."

    Trait inheritance from both parents that could skip generations (book 4):

    "Sometimes children take after their grandparents instead, Or great-grandparents, bringing back the features of the dead. This is since parents carry elemental seeds inside – Many and various, mingled many ways – their bodies hide Seeds that are handed, parent to child, all down the family tree. Venus draws features from these out of her shifting lottery – Bringing back an ancestor’s look or voice or hair. Indeed These characteristics are just as much the result of certain seed As are our faces, limbs and bodies. Females can arise From the paternal seed, just as the male offspring, likewise, Can be created from the mother’s flesh. For to comprise A child requires a doubled seed – from father and from mother. And if the child resembles one more closely than the other, That parent gave the greater share – which you can plainly see Whichever gender – male or female – that the child may be."

    Objects of different weights will fall at the same rate in a vacuum (book 2):

    “Whatever falls through water or thin air, the rate
    Of speed at which it falls must be related to its weight,
    Because the substance of water and the nature of thin air
    Do not resist all objects equally, but give way faster
    To heavier objects, overcome, while on the other hand
    Empty void cannot at any part or time withstand
    Any object, but it must continually heed
    Its nature and give way, so all things fall at equal speed,
    Even though of differing weights, through the still void.”

    Often I see people dismiss the things the Epicureans got right with an appeal to their lack of the scientific method, which has always seemed a bit backwards to me. In hindsight, they nailed so many huge topics that didn't emerge again for millennia that it was surely not mere chance, and the fact that they hit so many nails on the head without the hammer we use today indicates (at least to me) that there's value in looking closer at their methodology.

    9
  • www.anthropic.com Mapping the Mind of a Large Language Model

    We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model.

    I often see people with an outdated understanding of modern LLMs.

    This is probably the best interpretability research to date, by the leading interpretability research team.

    It's worth a read if you want a peek behind the curtain on modern models.

    21
    openai.com Sora: First Impressions

    We have gained valuable feedback from the creative community, helping us to improve our model.

    6
    www.quantamagazine.org New Theory Suggests Chatbots Can Understand Text | Quanta Magazine

    Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.

    I've been saying this for about a year, since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

    Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

    > New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

    > This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

    > “[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

    93
    www.quantamagazine.org New Theory Suggests Chatbots Can Understand Text | Quanta Magazine

    Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.

    I've been saying this for about a year, since seeing the Othello GPT research, but it's great to see more minds changing on the subject.

    1
    www.cnbc.com The first minds to be controlled by generative AI will live inside video games

    Non-playable characters in video games play key roles but stick to stiff scripts. Gen AI should open up their minds and your gaming world experience.

    It's worth pointing out that video games increasingly render worlds from continuous seed functions that get converted into discrete units in order to track state changes from free agents - like the seed generation in Minecraft, or No Man's Sky converting mountains into voxel building blocks that can be modified and tracked.
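
    As a minimal sketch of that pattern (with a hypothetical toy terrain_height standing in for real Minecraft/No Man's Sky noise generation): the terrain exists implicitly as a continuous seeded function, and only the discrete state changes made by free agents ever need to be stored.

    ```python
    import math

    def terrain_height(x: float, z: float, seed: int) -> float:
        """Continuous, deterministic stand-in for real noise generation:
        the same seed and coordinates always yield the same terrain,
        so the unmodified world never has to be stored."""
        return (math.sin(seed + x * 0.05) + math.cos(seed + z * 0.05)) * 8.0

    class VoxelWorld:
        def __init__(self, seed: int):
            self.seed = seed
            self.edits = {}  # sparse overrides: only free-agent changes are tracked

        def block_at(self, x: int, y: int, z: int) -> str:
            # Player/NPC modifications take priority over the procedural baseline.
            if (x, y, z) in self.edits:
                return self.edits[(x, y, z)]
            # Discretize the continuous function into voxel units on demand.
            return "rock" if y <= terrain_height(x, z, self.seed) else "air"

        def set_block(self, x: int, y: int, z: int, kind: str) -> None:
            self.edits[(x, y, z)] = kind  # a state change from a free agent

    world = VoxelWorld(seed=42)
    world.set_block(0, 3, 0, "air")  # an agent digs out a block
    print(world.block_at(0, 3, 0), world.block_at(1, 3, 1))
    ```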

    In theory, a world populated by NPCs whose decision making is powered by separate generative AI would need to do the same, as the NPC behavior couldn't be tracked as an inherent part of the procedural world generation.

    Which is a good context within which to remember that our own universe, at the lowest level, is made up of parts that behave as if determined by a continuous function until we interact with them, at which point they convert to behaving like discrete units.

    And even weirder is that we know it isn't a side effect of the interaction itself: if we erase the persistent information about an interaction with yet another, reversing interaction, the behavior switches back from discrete to continuous (like we might expect if there were a memory optimization at work).

    0
    insidetheperimeter.ca A mirror universe might tell a simpler story: Neil Turok - Inside The Perimeter

    Dark matter and other key properties of the cosmos could be explained by a new theory describing the big bang as a mirror at the beginning of spacetime, says Perimeter’s Director Emeritus

    I've been a big fan of Turok's theory since his first paper on a CPT-symmetric universe. The fact that this slight change to the standard model has since explained a number of the big problems in cosmology with such an elegant and straightforward solution (one with testable predictions) is really neat. I even suspect that, if he's around long enough, there will end up being a Nobel in his future for the effort.

    The reason it's being posted here is that the model also happens to call to mind the topic of this community, particularly when thinking about the combination of quantum mechanical interpretations with this cosmological picture.

    There's only one mirror universe on a cosmological scale in Turok's theory.

    But in a number of QM interpretations, such as Everett's many worlds, transactional interpretation, and two state vector formalism, there may be more than one parallel "branch" of a quantized, formal reality in the fine details.

    This kind of fits with what we might expect to see if the 'mirror' universe in Turok's model is in fact an original universe being backpropagated into multiple alternative and parallel copies of the original.

    Each copy universe would only have one mirror (the original), but would have multiple parallel versions, varying based on fundamental probabilistic outcomes (resolving the wave function to multiple discrete results).

    The original would instead have a massive number of approximate copies mirroring it, similar to the very large number of machine learning iterations used to predict an existing data series.

    We might also expect, if this is the case, that the math will eventually work out better if our 'mirror' in Turok's model is either not quantized at all or is quantized at a higher fidelity (i.e. we're the blockier Minecraft world by comparison). The quantum picture is one of the holdout aspects of Turok's model, so I'll personally be watching it carefully for any addition of something akin to throwing out quantization for the mirror.

    In any case, even simulation implications aside, it should be an interesting read for anyone curious about cosmology.

    0
    www.forbes.com Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues

    Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium+ subscription tier, where those who are the most devoted to the site, and in turn, usual...

    A few months ago I'd been predicting to friends and old colleagues that this would happen (you can have a smart AI or a conservative AI, but not both), and it's so much funnier than I thought it would be now that it's finally arrived.

    8
    phys.org New theory claims to unite Einstein's gravity with quantum mechanics

    A radical theory that consistently unifies gravity and quantum mechanics while preserving Einstein's classical concept of spacetime has been announced in two papers published simultaneously by UCL (University College London) physicists.

    While I'm doubtful that the testable prediction will be validated, it's promising that physicists are looking at spacetime and gravity as separated from quantum mechanics.

    Hopefully at some point they'll entertain the idea that, much like how we currently convert continuous geometry into quantized units in order to track interactions with free agents in virtual worlds, the quantum effects we measure in our own world are secondary side effects of emulating continuous spacetime and matter, not inherent properties of that foundation.

    0
    www.reuters.com Israel raids Gaza's Al Shifa Hospital, urges Hamas to surrender

    The Israeli military said it was carrying out a raid on Wednesday against Palestinian Hamas militants in Al Shifa Hospital, the Gaza Strip's biggest hospital, and urged them all to surrender.

    93
    phys.org Could a new law of physics support the idea we're living in a computer simulation?

    A University of Portsmouth physicist has explored whether a new law of physics could support the much-debated theory that we are simply characters in an advanced virtual world.

    I'm not a big fan of Vopson or the whole "let's reinvent laws of physics" approach, but the current direction of his work is certainly on point for this sub.

    0
    wccftech.com NVIDIA Predicts DLSS 10 Will Offer Full Neural Rendering Interfaced with Game Engines for Much Better Visuals

    NVIDIA's Bryan Catanzaro reckons a future version of DLSS may offer full neural rendering directly interfaced with game engines.

    At a certain point, we're really going to have to take a serious look at the direction things are evolving year by year, and reevaluate the nature of our own existence...

    0
    www.quantamagazine.org New ‘Physics-Inspired’ Generative AI Exceeds Expectations | Quanta Magazine

    Some modern image generators rely on the principles of diffusion to create images. Alternatives based on the process behind the distribution of charged particles may yield even better results.

    Pretty cool thinking and promising early results.

    0
    www.nature.com Could the Universe be a giant quantum computer?

    Computational rules might describe the evolution of the cosmos better than the dynamical equations of physics — but only if they are given a quantum twist.

    An interesting bit of history on thinking related to simulation theory, even if it tries to define itself separately (ironically, a distinction relating to why and not how, which physicists typically avoid).

    It's a shame there's such reluctance toward the idea of intention as opposed to happenstance. In particular, the struggles mentioned in the article to pair gravitational effects with quantum effects might be aided a great deal by entertaining the notion that the former is a secondary side effect necessary to replicate a happenstance universe operating with the latter.

    Perhaps we need more people like Fredkin thinking outside the box.

    0
    news.mit.edu Machine-learning system based on light could yield more powerful, efficient large language models

    An MIT machine-learning system demonstrates greater than 100-fold improvement in energy efficiency and a 25-fold improvement in compute density compared with current systems.

    I've suspected for a few years now that optoelectronics is where this is all headed. It's exciting to watch as important foundations are laid on that path, and this is one of them.

    3
    news.mit.edu Machine-learning system based on light could yield more powerful, efficient large language models

    An MIT machine-learning system demonstrates greater than 100-fold improvement in energy efficiency and a 25-fold improvement in compute density compared with current systems.

    I've had my eyes on optoelectronics as the future hardware foundation for ML compute (and not just interconnect) for a few years now, and it's exciting to watch the leaps and bounds occurring at such a rapid pace.

    0
    www.livescience.com Elite Bronze Age tombs laden with gold and precious stones are 'among the richest ever found in the Mediterranean'

    The obvious wealth of the tombs was based on the local production of copper, which was in great demand at the time to make bronze.

    The Minoan-style headbands from Egypt during the 18th dynasty are particularly interesting.

    0
    www.nature.com Large language models encode clinical knowledge - Nature

    Med-PaLM, a state-of-the-art large language model for medicine, is introduced and evaluated across several medical question answering tasks, demonstrating the promise of these models in this domain.

    An update on Google's efforts with LLMs in the medical field.

    0

    I find this variation of Wigner's friend really thought-provoking, as it's almost like a real-world experimental example of a sync conflict in multiplayer netcode.

    Two 'observers', disconnected from each other, who each occasionally record incompatible measurements of the same thing almost makes it seem like the universal error correction in resolving quanta isn't being applied more than one layer deep (something like Bell's paradox occurring in a single 'layer' doesn't end up with incompatible measurements, even though the observers are disconnected from each other).
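
    For anyone who hasn't hit netcode sync conflicts before, here's a deliberately toy sketch (all names hypothetical, and obviously not a physics model): two clients lazily resolve 'the same' shared value from local state with no reconciliation pass between them, so their cached observations can end up incompatible.

    ```python
    import random

    class LazyValue:
        """Shared state that only takes a definite value when someone reads it."""
        def __init__(self, stream_seed: int):
            self.rng = random.Random(stream_seed)

        def resolve(self) -> int:
            return self.rng.choice([0, 1])  # definite outcome produced on demand

    class Client:
        """A disconnected observer that caches whatever it resolves locally."""
        def __init__(self, world_seed: int, client_salt: int):
            # No sync channel: each client resolves from its own local stream.
            self.local_view = LazyValue(world_seed ^ client_salt)
            self.cache = None

        def observe(self) -> int:
            if self.cache is None:
                self.cache = self.local_view.resolve()  # resolved once, then kept
            return self.cache

    # Two observers measure "the same" quantity with no reconciliation pass:
    alice = Client(world_seed=42, client_salt=1)
    bob = Client(world_seed=42, client_salt=2)
    print(alice.observe(), bob.observe())  # can disagree: a sync conflict
    ```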

    What I'm currently curious about is how disagreement would grow as more layers of observation steps are added. In theory, it should multiply and compound across additional layers, but if we really are in a simulated world, I could also see what would effectively be backpropagation of disconnected quanta observations never actually being resolved, such that we might unexpectedly find disagreement growing linearly with the total final number of observers at the nth layer.

    In any case, even if it ultimately grows multiplicatively, disagreeing observations of dynamically resolved, low-fidelity details by independent free agents are exactly the sort of thing one might expect to find in a simulated world.

    0