rkd@sh.itjust.works to Economy@lemmy.world • The Trouble With Trump’s Deal With Nvidia And AMD: It’s An Export Tax • 11 points • 6 days ago
chat is this socialism
rkd@sh.itjust.works to Ukraine@sopuli.xyz • European leaders including Starmer to join Zelenskyy in Washington for meeting with Trump • 6 points • 6 days ago
no more fokin ambushes
rkd@sh.itjust.works to Television@piefed.social • Conan O’Brien Says Late Night TV is Dying, but Stephen Colbert Is ‘Too Talented and Too Essential to Go Away’ • 526 points • 6 days ago
Let it die. A show this frequent can only end up being boring.
rkd@sh.itjust.works to LocalLLaMA@sh.itjust.works • HP Z2 Mini G1a Review: Running GPT-OSS 120B Without a Discrete GPU (English) • 1 point • 6 days ago
For some weird reason, in my country it’s easier to order a Beelink or a Framework than an HP. They will sell everything else except what you want to buy.
rkd@sh.itjust.works to LocalLLaMA@sh.itjust.works • GPT-OSS 20B and 120B Models on AMD Ryzen AI Processors (English) • 1 point • 8 days ago
Remind me what the downsides are of possibly getting a Framework desktop for Christmas.
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English) • 1 point • 13 days ago
That’s a good point, but it seems there are several ways to make models fit in smaller-memory hardware. There aren’t many options, though, to compensate for not having the ML data types that allow NVIDIA to be something like 8x faster in some cases.
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English) • 1 point • 13 days ago
For image generation, you don’t need that much memory. That’s the trade-off, I believe: get an NVIDIA card with 16GB of VRAM to run Flux and pair it with something like 96GB of RAM for GPT-OSS 120B. Or you give up on fast image generation and go with an AMD Max+ 395 like you said, or Apple Silicon.
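To make the memory trade-off above concrete, here is a minimal back-of-the-envelope sketch (assumed parameter counts and a rough overhead factor, not figures from the thread) of why a Flux-sized diffusion model can live in 16GB of VRAM while a 4-bit-quantized 120B model needs something on the order of 96GB of system RAM.

```python
# Rough memory-footprint sketch (assumptions, not measurements): weights-only
# size of a model at a given bit width, plus a fudge factor for runtime
# overhead such as activations and KV cache.

def footprint_gb(params_billion: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Approximate memory in GB needed to hold the model at inference time."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8-bit ~= 1 GB
    return weight_gb * overhead

if __name__ == "__main__":
    # ~12B-parameter diffusion model (Flux-sized) at 8-bit: roughly 14 GB,
    # which is why a 16GB VRAM card is in the right ballpark.
    print(f"12B  @ 8-bit  ≈ {footprint_gb(12, 8):.0f} GB")
    # A 120B model at ~4 bits per weight: roughly 72 GB, which fits in 96GB
    # of system/unified RAM but not in any consumer GPU's VRAM.
    print(f"120B @ 4-bit  ≈ {footprint_gb(120, 4):.0f} GB")
    # The same 120B model at 16-bit would need ~288 GB, hence quantization.
    print(f"120B @ 16-bit ≈ {footprint_gb(120, 16):.0f} GB")
```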
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English) • 3 points • 13 days ago
I’m aware of it; it seems cool. But I don’t think AMD fully supports the ML data types that can be used in diffusion, so it ends up slower than NVIDIA.
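One way to see what the data-type support claim refers to is a quick PyTorch probe; this is a hypothetical sketch (assumed usage, not something from the thread) that reports which low-precision types the installed build exposes on the visible GPU.

```python
# Hypothetical probe (assumed usage, not from the thread): ask PyTorch which
# low-precision data types it exposes on the visible GPU. ROCm builds reuse
# the torch.cuda API, so the same script runs on both NVIDIA and AMD cards.
import torch

if torch.cuda.is_available():
    idx = torch.cuda.current_device()
    print("Device:", torch.cuda.get_device_name(idx))
    print("bf16 supported:", torch.cuda.is_bf16_supported())
    # FP8 dtypes exist in recent PyTorch releases; whether fast kernels back
    # them on a given GPU is a separate, hardware-dependent question.
    print("fp8 dtype available:", hasattr(torch, "float8_e4m3fn"))
else:
    print("No CUDA/ROCm device visible to this PyTorch build.")
```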
it’s most likely math
rkd@sh.itjust.works to Games@sh.itjust.works • Nintendo-owned titles excluded from Japan’s biggest speedrunning event after organizers were told they had to apply for permission for each game (English) • 13 points • 16 days ago
Congratulations, Nintendo, you played yourself.
I believe right now it’s also valid to ditch NVIDIA, given a certain budget. Let’s see what can be done with large unified memory; maybe things will be different by the end of the year.