Apple is creating its own AI-powered chatbot that some engineers are calling “Apple GPT,” according to a report from Bloomberg. The company reportedly doesn’t have any solid plans to release the technology to the public yet.
As noted by Bloomberg, the chatbot uses its own large language model (LLM) framework called “Ajax,” running on Google Cloud and built with Google JAX, a framework created to accelerate machine learning research. Sources close to the situation tell the outlet that Apple has multiple teams working on the project, which includes addressing potential privacy implications.
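For readers unfamiliar with the JAX framework mentioned above: its core idea is composable function transformations, e.g. `jax.grad` differentiates a plain numerical function and `jax.jit` compiles it via XLA. This is only a generic illustration of JAX itself, not anything from the report about how Apple uses it:

```python
import jax
import jax.numpy as jnp

# A toy mean-squared-error loss over a linear model.
def loss(w, x, y):
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

# Compose transformations: differentiate, then JIT-compile the gradient.
grad_loss = jax.jit(jax.grad(loss))

w = jnp.zeros(3)          # weights start at zero
x = jnp.ones((4, 3))      # 4 samples, 3 features
y = jnp.ones(4)           # targets
g = grad_loss(w, x, y)    # gradient w.r.t. w, shape (3,)
print(g)
```

With zero weights every prediction is 0 and each residual is -1, so each gradient component works out to -2; the point is just that `grad` and `jit` nest cleanly, which is what "accelerate machine learning research" cashes out to in practice.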
Even the most limited GPT, one that gave a lot of "I can't help you with that" responses to be on the safe side, would be light years ahead of Siri at this point
I think there are probably some ways to cross over a bit, but really, LLMs aren't necessarily aimed at the kind of things we want a virtual assistant to do today. Siri falls down mostly on its ability to do things correctly, quickly, and reliably. Generating 5,000 words of convincingly human-sounding explanation isn't what I want from a thing I quickly trigger on my phone. What I want is a very short reply, or none at all, accompanying the action I asked for. Call this person. Start navigation to an address. Turn on the lights. Play the version of a song I like from this specific live album. Some of those are things Siri really sucks at today, and none of them are likely to get much better with an LLM in place. Maybe playing music benefits from a more robust understanding of the language of my query, but for the rest, the suckage is more that Siri takes 8 seconds waiting for the server to respond, or just inexplicably decides that today it doesn't know how to turn on a light.
At this point it feels like a great LLM would let Siri fail to respond to a much more varied set of ways for me to ask my question in English, but that's not really the target we're shooting for here.
I agree with you to an extent, in that I would not want Siri producing a thesis every time I ask a simple question. But one thing that would help is if she remembered the last few things you requested and built some sort of context around them. That's what impressed me most about ChatGPT: if it doesn't quite give me what I'm looking for, I can clarify and we eventually get there. Siri is like a person with severe short-term memory loss, and much of my frustration comes from that.
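The contrast this commenter is drawing, a stateless assistant versus a conversational one, comes down to whether each request is processed alone or together with prior turns. A toy sketch, where `fake_model` is a made-up stand-in for any real model or API:

```python
# A pretend "model" that can only resolve a follow-up question if the
# answer appears somewhere in the accumulated conversation history.
def fake_model(history):
    all_text = " ".join(turn["content"] for turn in history)
    if "where" in history[-1]["content"] and "Lisbon" in all_text:
        return "You mentioned Lisbon earlier."
    return "Sorry, I don't know what you mean."

history = []
for user_msg in ["I'm planning a trip to Lisbon.", "where was that again?"]:
    history.append({"role": "user", "content": user_msg})
    reply = fake_model(history)          # the whole history goes in each time
    history.append({"role": "assistant", "content": reply})

print(history[-1]["content"])  # "You mentioned Lisbon earlier."
```

A Siri-style stateless design would call the model with only the latest message, so the follow-up "where was that again?" would always hit the "Sorry" branch; carrying the history forward is what makes clarification loops possible.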
They indeed need to make it more conversational. I think this is a big thing Jobs would harp on if still alive. It should feel like always having a friend/assistant in the room who knows everything.
It sucks for privacy, but if you trust Apple enough it'd be nice to have an always-on microphone for Siri, so you could say, "Hey Siri, that tour we talked about at breakfast, can you bring up directions to that?" Stuff like that
Let's ignore for the moment all the mega-corporation and cloud data-security implications of that (and there are MANY), and pretend it does all processing and storage locally and never transmits any of those conversations offsite.
That STILL sounds like an absolute nightmare. I could spy on the people who live with me in an extraordinarily efficient way. "Hey Siri, what did my wife talk about in the phone call over breakfast?" "Hey Siri, is my daughter gay?" "Hey Siri, summarize all the conversations you heard at this dinner party."
@nicetriangle I'm sure that's their goal, but releasing an AI can send things pretty far out of control if that AI starts reasoning against human logic. It could be a PR nightmare if Apple's AI goes rogue. So they'll have to bridle it a lot.
Stop trying to make assistants. That includes Siri. The ML integrations like making text selectable in images is fucking amazing, invest only there please.
I strongly believe that GPT is a (really impressive) gimmick. I'm not convinced it has the potential for growth every outlet is pushing. No matter the brand or model (Bard, Bing, OpenAI, Apple if it happens), I don't think this will improve exponentially over time like other tech has. It relies on growth in computational power, bandwidth/cloud connectivity, and access to quality content for learning. The first two are already mature technologies that will improve only marginally over time, and access to new content will get increasingly difficult, especially if the web gets flooded with texts written by GPT.
So yeah, it's cool that they're creating their own model and adding it to their services, but it's not a big deal.