Zetaphor@zemmy.cc to LocalLLaMA@sh.itjust.works · English · 1 year ago
Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes (blog.research.google)
Zetaphor@zemmy.cc (OP) · 1 year ago
The code is available here: https://github.com/google-research/distilling-step-by-step
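For anyone skimming before diving into the repo: the core idea of distilling step-by-step is a multi-task objective — the small student model is trained to predict both the task label and the LLM-generated rationale, and the two losses are combined as a weighted sum. A minimal sketch of that combination (function and variable names are mine, not from the repo):

```python
def distill_step_by_step_loss(label_loss: float,
                              rationale_loss: float,
                              rationale_weight: float = 1.0) -> float:
    """Combine the two training losses used in distilling step-by-step.

    The student is trained with two task prefixes: one asks for the
    answer (label_loss), the other asks it to reproduce the teacher
    LLM's rationale (rationale_loss). The overall objective is
    L = L_label + lambda * L_rationale; `rationale_weight` is lambda.
    This is a toy scalar version, not the repo's actual training code.
    """
    return label_loss + rationale_weight * rationale_loss


# Toy example with made-up loss values: 0.8 + 0.5 * 0.5
total = distill_step_by_step_loss(0.8, 0.5, rationale_weight=0.5)
```

In the real setup both losses come from the same seq2seq model (the paper uses T5 variants); the point is that rationales act as extra supervision, which is why the method needs less labeled data than standard fine-tuning.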
noneabove1182@sh.itjust.works · 1 year ago
Somehow this is even more confusing, because that code hasn't been touched in three months. Maybe it just took them that long to validate? I'll have to read through it, thanks!