The iPhone’s portrait mode uses actual depth information captured by a dedicated depth sensor. The new feature is that it captures that depth information for every picture you take, so you can come back later and blur parts of the image at different depths. Google’s version of portrait mode just uses image recognition to detect what’s in the background. It does a good job, but not as good as if it had actual depth information.
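To make that concrete, here’s a minimal sketch of what “blur parts of the image at different depths” can look like once a depth map is stored alongside the photo. This is not Apple’s or Google’s actual pipeline; the file names, depth scale, and focal-depth value below are assumptions. It just blends a sharp copy and a blurred copy of the image based on how far each pixel’s depth is from a chosen focal plane.

```python
# Hypothetical after-the-fact refocus using a stored depth map
# (not any vendor's real implementation).
import cv2
import numpy as np

image = cv2.imread("photo.jpg").astype(np.float32)           # H x W x 3 color image
depth = cv2.imread("photo_depth.png", cv2.IMREAD_GRAYSCALE)  # H x W depth map, assumed 0-255 scale

focal_depth = 120.0   # depth value to keep sharp (assumed units)
falloff = 40.0        # how quickly sharpness drops off away from the focal plane

# Per-pixel blur weight: 0 near the focal depth, approaching 1 far from it.
blur_weight = np.clip(np.abs(depth.astype(np.float32) - focal_depth) / falloff, 0.0, 1.0)
blur_weight = blur_weight[..., None]  # broadcast over the color channels

blurred = cv2.GaussianBlur(image, (31, 31), 0)

# Blend sharp and blurred copies according to distance from the focal plane.
output = (1.0 - blur_weight) * image + blur_weight * blurred
cv2.imwrite("photo_refocused.jpg", output.astype(np.uint8))
```

Changing focal_depth after the fact is what lets you move the point of focus around, which is exactly why keeping the depth map for every shot is useful.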
All of the dictionary definitions it replied with are made up to sound correct; they’re not what those dictionaries actually say. The real definitions are:
Merriam-Webster’s definition is “not popular : viewed or received unfavorably by the public.”
Oxford’s definition is “not liked or enjoyed by a person, a group or people in general.”
Cambridge’s definition is “not liked by many people.”
This is why you don’t ask an LLM for factual information. It comes up with whatever it thinks sounds right; it doesn’t actually go look the facts up for you.