• Even_Adder@lemmy.dbzer0.comOP
      3 months ago

      Those might just be LoRA-merged models, not full fine-tunes. From what I’ve heard, fine-tuning doesn’t work because the models are distilled; you’d have to find a way to un-distill them before you could train them.
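
      To show the distinction, here’s a minimal sketch of what a LoRA merge does. All shapes and names are made up for illustration: the low-rank update `B @ A` is folded into the frozen base weight, which touches far fewer effective parameters than a full fine-tune would.

      ```python
      import numpy as np

      # Hypothetical dimensions: base weight plus a rank-r LoRA pair (A, B).
      d_out, d_in, r = 8, 8, 2
      rng = np.random.default_rng(0)
      W = rng.standard_normal((d_out, d_in))   # frozen base weight
      A = rng.standard_normal((r, d_in))       # LoRA down-projection
      B = rng.standard_normal((d_out, r))      # LoRA up-projection
      alpha = 1.0                              # LoRA scaling factor

      # "Merging" bakes the adapter into the checkpoint, so inference
      # needs no extra adapter code afterwards.
      W_merged = W + alpha * (B @ A)

      # The update has rank at most r -- a full fine-tune could change
      # the weight in all d_out * d_in directions instead.
      assert np.linalg.matrix_rank(B @ A) <= r
      ```

      That rank constraint is why a merged LoRA isn’t equivalent to full fine-tuning, whatever the distillation situation is.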