Wait, 20 *milliseconds*? Either their kernel scheduler config is completely out of whack, or ARM/Qualcomm really screwed this one up.
Apple cores can boost to max in around 50 *micro*seconds. 20 milliseconds is just broken. That's more than a full frame at 60 Hz (~16.7 ms), and that's how you get jank...
Android doesn’t use Java at the bytecode level and never has, as far as I can tell. The source code was written in Java because mobile developers were so used to it, but Android never ran the JVM; they do their own thing with Java source.
You can dislike Java syntax, but the software stack on Android wasn’t Java’s.
Wait, that's very different from what I read back in the day. I know there was a point, I dunno, around Android 5, where they started doing something different with Java, but my impression was that Android always ran a JVM of sorts. And frankly, given how it performs even on the highest-end devices, that was really easy to believe.
Dalvik/ART is essentially the same idea. It uses Dalvik bytecode, much in the same way the JVM uses Java bytecode.
There’s some complexity (it’s designed for different constraints, and the Oracle lawsuits added some wrinkles), but it’s not as different as you imply.
They compile Java bytecode to Dalvik bytecode (DEX) and run that on the Android Runtime (ART), which mixes interpretation, a tiered JIT, and profile-guided AOT compilation.
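If it helps make that concrete, here's a rough Java sketch (the path and class name are hypothetical) of loading extra code at runtime on Android. ART's class loaders consume DEX containers (an .apk/.jar with a classes.dex inside), not plain JVM .class files, which is why the build pipeline runs javac's output through d8 (formerly dx) first:

```java
import android.content.Context;
import dalvik.system.DexClassLoader;

// Sketch only: "plugin.jar" and the class name are made up for illustration.
public final class DexLoadSketch {
    public static Class<?> loadPluginClass(Context context) throws ClassNotFoundException {
        // A jar/apk that contains a classes.dex entry (DEX, not .class files).
        String dexPath = context.getFilesDir() + "/plugin.jar";
        DexClassLoader loader = new DexClassLoader(
                dexPath,
                context.getCodeCacheDir().getAbsolutePath(), // optimizedDirectory, ignored since API 26
                null,                                        // no extra native library path
                context.getClassLoader());
        return loader.loadClass("com.example.plugin.Entry"); // resolved from the DEX, then interpreted/JIT'd/AOT'd by ART
    }
}
```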
It still inherits the issues of Java, such as the GC, no stack-allocated value types, poor cache locality, etc. Although tbf the GC on Android is pretty fucking good these days and doesn't stop the world anymore.
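To make the value-type/cache-locality gripe concrete, here's a small plain-Java sketch (nothing Android-specific, all names mine): an array of objects is an array of references to separate heap allocations, so hot loops end up chasing pointers, and the usual workaround is a structure-of-arrays of primitives.

```java
// Minimal sketch of the "no value types, poor cache locality" point.
public class LocalitySketch {
    static final class Point {           // would be a stack/inline value type in C#, Swift, etc.
        final float x, y;
        Point(float x, float y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        int n = 1_000_000;

        // Array-of-objects: points[i] is a reference to a separately allocated
        // Point (object header + fields), so iterating chases pointers.
        Point[] points = new Point[n];
        for (int i = 0; i < n; i++) points[i] = new Point(i, 2f * i);

        // The common workaround: structure-of-arrays with primitive arrays,
        // which are stored contiguously and are cache-friendly.
        float[] xs = new float[n];
        float[] ys = new float[n];
        for (int i = 0; i < n; i++) { xs[i] = i; ys[i] = 2f * i; }

        float sum = 0;
        for (int i = 0; i < n; i++) sum += points[i].x;        // pointer-chasing traversal
        for (int i = 0; i < n; i++) sum += xs[i] + ys[i];      // linear scans
        System.out.println(sum);  // keep the loops from being optimized away
    }
}
```

That second layout is basically what a lot of performance-sensitive game and graphics code on Android ends up doing by hand (or via NIO buffers).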
I have no experience with iOS, but I do develop for the Meta Quest, which runs on Android, and wow is it jank. One solution for apps up to 2GB, another for apps up to 4GB, and yet another, much more complicated solution for apps over 4GB. Is iOS like this too?