I think it's reasonably likely. There was a research paper about doing basically that a couple of years ago. If you need a basic LLM trained on a specialized form of input and output, paying the expensive existing LLMs to generate that training text for you is pretty efficient and inexpensive, so it's a reasonable way to bootstrap a baseline model. Then you can add techniques like chain-of-thought reasoning and mixture-of-experts to bring the performance back up to where you need it. It's not going to push the state of the art forward, but it's certainly a cheap way to catch up to the models that have already done that pushing.
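
For anyone curious what that data-generation step looks like in practice, here's a rough sketch, assuming an OpenAI-compatible API as the expensive "teacher" model. The model name, seed prompts, and file name are placeholders I made up for illustration, not anything from the paper:

```python
# Minimal sketch of distillation-style data generation: ask an expensive
# teacher model to answer prompts, then save the (prompt, answer) pairs
# as training data for a cheaper student model.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical seed prompts; in practice you'd use many thousands,
# drawn from the specialized domain you care about.
SEED_PROMPTS = [
    "Explain the difference between a stack and a queue.",
    "Write a Python function that reverses a linked list.",
]

def generate_pairs(prompts, teacher_model="gpt-4o-mini"):
    """Query the teacher model once per prompt; each (prompt, completion)
    pair becomes one supervised training example for the student."""
    pairs = []
    for prompt in prompts:
        response = client.chat.completions.create(
            model=teacher_model,
            messages=[{"role": "user", "content": prompt}],
        )
        pairs.append({
            "prompt": prompt,
            "completion": response.choices[0].message.content,
        })
    return pairs

if __name__ == "__main__":
    # JSONL is the usual format for supervised fine-tuning pipelines.
    with open("distilled_train.jsonl", "w") as f:
        for pair in generate_pairs(SEED_PROMPTS):
            f.write(json.dumps(pair) + "\n")
```

The resulting JSONL file is what you'd feed into a standard fine-tuning run for the student model; the expensive part is just the API calls to the teacher, which is why this is so much cheaper than training from scratch.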