- cross-posted to:
- llm@lemmy.world
- technology@lemmy.ml
DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.
How does this compare with Stable Diffusion XL?
I’ve seen people say that even if DALL-E’s images aren’t as good, they adhere to your prompt much better than SD(XL)’s do.