My personal recap of this keynote
The weird stuff:
- Apple reinvents WinterBoard theming from iPhone OS 2
- Apple reinvents Android quick settings panel
- Apple reinvents PiP on Safari
- Apple reinvents tiling windows
- Apple Intelligence - I’m a bit sceptical about how capable it will turn out to be. Also, for everyone not living in the US of A it’s once again envy and waiting. Finally, I can’t get my head around how they won’t be losing tons of money. Personal cloud AI sounds expensive as hell.
- Even more iMessage bloat enhancements
The fun stuff:
- Skydiving - I’m definitely not too old for this stuff.
- Craig, Craig, Craig, Craig
- Parkour
- Calculator on iPadOS - Err, are we living in 3024 now? World hunger is no more, everyone.
- watchOS workout evaluation
The good stuff:
- Math and calculations in iPad Calculator and Notes - This would instantly make me buy an iPad for studies if I were going to school/university right now
- Handwriting x word processor
- Your iPhone on your Mac - The seamless drag-and-drop gesture felt a bit like magic.
- On device AI - Pretty dope concept. I hope this turns out well.
Sure. I just think this might be the first time that the current iPhone would be missing a feature on the next iOS update.
I’d guess most iPhone 15 owners would have assumed their phone was new enough for the feature.
I wouldn’t be surprised if they continued to gate it to the “Pro” models and the “regular” iPhones never see these features, no matter how new. That’s why the 15 Pro has an “A17 Pro” chip: the iPhone 16 will get a non-pro chip that can’t do the AI stuff.
Starting with the iPhone 14, they’ve put the last generation’s processor in the non-Pro and the current generation’s processor in the Pro.
The weird thing here is that the 15 non-Pro (which got the previous generation’s chip, the A16) has a faster NPU than the M1, a processor that does support the AI feature.
The only plausible technical reason is the anemic amount of RAM they put in their phones. Otherwise it’s entirely an artificial limitation.
Running top of the line models does require a lot of RAM, so it’s not an entirely ridiculous theory.
The one I run on my desktop needs at least 12 gigs of VRAM.
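For a rough sense of why model size translates into that much memory, here’s a back-of-the-envelope sketch. The parameter counts, precisions, and the ~20% overhead factor for KV cache and activations are my own assumptions for illustration, not figures from Apple or any specific model:

```python
def model_memory_gb(params_billion: float, bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    """Rough memory estimate for running an LLM: weight storage
    (parameter count times bytes per parameter) plus an assumed
    ~20% overhead for KV cache and activations."""
    return params_billion * bytes_per_param * overhead

# Hypothetical 7B-parameter model at different precisions:
fp16 = model_memory_gb(7, 2.0)   # 16-bit floats: 2 bytes per parameter
q4 = model_memory_gb(7, 0.5)     # 4-bit quantized: 0.5 bytes per parameter
print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")
```

Even aggressively quantized, a mid-size model wants several gigabytes to itself, which is hard to square with phones that ship 6 GB of RAM shared with the OS and every running app.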