Large Language Models (LLMs) are incredibly powerful generalists, but transforming them into specialized experts is a major challenge. The process of training a model on new, specific knowledge like internal company documents or a complex reasoning task is notoriously expensive, time-consuming, and fraught with pitfalls. We want smaller, more efficient models that can master a domain without the compute budget of a tech giant.
The core idea behind making smaller models smarter is a concept called "distillation." In this process, a smaller "student" model learns from a larger, more capable "teacher" model. The student doesn't just learn from a static textbook of examples; it learns to mimic the teacher's thought process. This is a powerful shortcut for transferring expertise.
Until now, however, engineers have faced a frustrating trade-off. One approach, on-policy reinforcement learning (RL), forces the student to learn from its own mistakes, which is relevant but painfully slow. The alternative, off-policy distillation, is much faster but dangerously flawed: the student learns from the teacher's ideal examples, which often occur in contexts the student would never reach on its own, so its errors compound. This trade-off has been the bottleneck for creating specialized AI.
A technique called "on-policy distillation" combines the best of both worlds. By having a teacher model provide dense, token-by-token feedback on the student model's own attempts, we can achieve breakthroughs in training efficiency and capability. Here are the four most surprising and impactful takeaways from this approach.
The fundamental difference between reinforcement learning (RL) and distillation lies in the density of the feedback. To understand this, imagine learning to play chess.
With RL, you only learn at the end of each game whether you won or lost: a single, sparse signal for hundreds of moves, leaving you to guess which ones actually mattered. With distillation, an expert coach grades every move as you make it, telling you immediately how each choice compares to what they would have played.
This smarter feedback loop has a massive impact on efficiency. In a head-to-head comparison where a student model learned from a teacher trained via RL, on-policy distillation allowed the student to reach the teacher's performance level 7-10 times faster in terms of gradient steps. This translates to a 50-100x improvement in cumulative compute efficiency.
The reason for this dramatic speedup is that on-policy distillation provides more useful information (more "bits per episode") for the model to learn from. Because this dense, token-level feedback reduces gradient noise, it allows training with shorter contexts and smaller batch sizes, further slashing the overall computational cost.
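To make the "dense feedback" idea concrete, here is a minimal PyTorch sketch of the kind of per-token loss on-policy distillation relies on. The function names and tensor shapes are illustrative assumptions, not the researchers' actual implementation: the student samples a rollout, both models score every token of that rollout, and the student is penalized by the reverse KL to the teacher at each position.

```python
import torch
import torch.nn.functional as F

def per_token_reverse_kl(student_logits: torch.Tensor,
                         teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL(student || teacher) at every token position.

    Both tensors have shape (batch, seq_len, vocab) and are computed on the
    rollout the student itself just generated, which is what makes the
    feedback on-policy.
    """
    s = F.log_softmax(student_logits, dim=-1)
    t = F.log_softmax(teacher_logits, dim=-1)
    # Sum over the vocab: p_student * (log p_student - log p_teacher)
    return (s.exp() * (s - t)).sum(dim=-1)          # shape (batch, seq_len)

def distillation_loss(student_logits, teacher_logits, gen_mask):
    """Average the dense per-token signal over the generated (non-prompt) tokens."""
    kl = per_token_reverse_kl(student_logits, teacher_logits)
    return (kl * gen_mask).sum() / gen_mask.sum()
```

Every generated token contributes its own learning signal here, whereas an RL reward arrives once per episode; that is the "more bits per episode" argument in compressed form.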
A common and frustrating problem in AI is "catastrophic forgetting." When you take a pre-trained model and fine-tune it on new, specialized information (like your company's internal knowledge base), it often degrades or completely forgets its original, general-purpose skills, such as the ability to follow instructions.
Consider an experiment to create an "internal assistant." Researchers started with the Qwen3-8B model, which had a strong instruction-following score of 85%. After fine-tuning it on a 70-30 mix of internal company documents and general chat data:
- The model's knowledge of the internal documents improved substantially.
- Its instruction-following ability degraded, falling well below the original 85% score.
The solution was a brief phase of on-policy distillation after the initial fine-tuning. By using the original version of the model as the teacher, researchers could restore the lost behavior. The results were powerful:
- Instruction-following performance recovered to nearly its original level.
- The knowledge gained from the internal documents was retained.
This finding is a game-changer for "continual learning": the ability to update models with new information over time without expensive, full-scale retraining from scratch. It provides a reliable way to teach an AI new facts without it forgetting its core skills.
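Here is a rough sketch of that recipe, assuming Hugging Face transformers and a hypothetical local path for the phase-1 fine-tuned model. It illustrates the idea, the original checkpoint acting as a frozen teacher for its own fine-tuned descendant, rather than reproducing the researchers' code.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen3-8B"                   # original checkpoint, reused as the frozen teacher
FINETUNED = "./qwen3-8b-internal-docs"   # hypothetical output of the fine-tuning phase

tok = AutoTokenizer.from_pretrained(BASE)
teacher = AutoModelForCausalLM.from_pretrained(BASE).eval()
student = AutoModelForCausalLM.from_pretrained(FINETUNED)
opt = torch.optim.AdamW(student.parameters(), lr=1e-5)

def distill_on_prompt(prompt: str) -> torch.Tensor:
    """One on-policy rollout of the student, graded token-by-token by the teacher."""
    ids = tok(prompt, return_tensors="pt").input_ids
    rollout = student.generate(ids, max_new_tokens=256, do_sample=True)
    gen = rollout.shape[1] - ids.shape[1]            # number of generated tokens
    s_logits = student(rollout).logits
    with torch.no_grad():
        t_logits = teacher(rollout).logits
    # Reverse KL over the generated positions only (the logit at i predicts token i+1).
    s = F.log_softmax(s_logits[:, -gen - 1:-1], dim=-1)
    t = F.log_softmax(t_logits[:, -gen - 1:-1], dim=-1)
    return (s.exp() * (s - t)).sum(-1).mean()

# A brief "restore" phase: distill on general, chat-style prompts so the
# fine-tuned student moves back toward the original model's behavior there,
# while keeping the knowledge it absorbed from the internal documents.
for prompt in ["Summarize this email in two sentences: ...",
               "List three risks in the plan below: ..."]:
    loss = distill_on_prompt(prompt)
    loss.backward()
    opt.step()
    opt.zero_grad()
```

The notable design choice is that no larger model is required: the teacher is simply the pre-fine-tuning checkpoint, which already knows the behavior you want to restore.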
The next finding is highly counterintuitive. In most AI training methods, repeatedly training a model on the exact same prompt is a recipe for failure; the model simply memorizes the answer instead of learning the underlying skill.
However, an experiment with on-policy distillation turned this assumption on its head. Researchers trained a student model on a math reasoning task using only a single, randomly chosen prompt. They trained on this one prompt for 20 consecutive steps, each with a batch of 256 rollouts, generating 5,120 total learning sequences.
The outcome was remarkable: the student model approximately matched the performance of the expert teacher model on the AIME'24 math benchmark, despite only ever having seen that one problem.
This works because on-policy distillation teaches the model to approximate the teacher's entire thought process: its full probability distribution over the next token at every step, rather than just a memorized final answer. This means that for certain skills, the bottleneck isn't finding thousands of examples, but creating a single, perfectly guided learning experience.
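The single-prompt experiment is then just an aggressive schedule over that kind of loop. The sketch below reuses the hypothetical distill_on_prompt helper and optimizer from the previous example (again an assumption, not the authors' code) to show the structure: one fixed prompt, 20 optimizer steps, and 256 fresh on-policy rollouts per step.

```python
SINGLE_PROMPT = "If x + y = 10 and xy = 21, what is x**2 + y**2?"  # stand-in problem
NUM_STEPS, ROLLOUTS_PER_STEP = 20, 256                             # 5,120 sequences total

for step in range(NUM_STEPS):
    for _ in range(ROLLOUTS_PER_STEP):
        # Fresh rollouts every step: the student is always graded on what it
        # does *now*, so nothing pushes it to memorize one fixed answer string.
        loss = distill_on_prompt(SINGLE_PROMPT) / ROLLOUTS_PER_STEP
        loss.backward()          # accumulate gradients across the batch
    opt.step()
    opt.zero_grad()
```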
It seems logical that if a model produces a high-quality output, you could feed that output back into its training data to reinforce good behavior. This method, known as supervised fine-tuning (SFT) on on-policy data, is like having the model "practice" on its own best work.
But researchers found the opposite to be true. When they trained a model using a dataset composed of its own samples, its performance on an instruction-following evaluation actually degraded.
The technical reason for this failure is subtle but critical. While the dataset of the model's own outputs might be perfectly on-policy on average, every finite batch of data exhibits a slightly different distribution. Training on these batches causes the model's internal policy to drift away from its original state. This process turns training on its own samples into a form of off-policy training over time, leading to the same compounding error and divergence seen in other flawed methods.
In contrast, on-policy distillation is completely stable in this self-distillation scenario. Because the teacher model remains a fixed, consistent target, the student can robustly converge on the desired behavior without degrading. This further cements on-policy distillation as a superior and more reliable tool for behavior refinement and continual learning.
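The distinction is easy to see when the two losses sit side by side. In the sketch below (shapes and names are illustrative assumptions), the SFT variant maximizes the likelihood of tokens that an earlier snapshot of the model sampled, so every weight update moves the live policy further from the policy that produced the targets. The distillation variant always scores fresh rollouts of the current student against a teacher that never changes.

```python
import torch
import torch.nn.functional as F

def sft_on_own_samples(student_logits: torch.Tensor,
                       sampled_ids: torch.Tensor) -> torch.Tensor:
    """Next-token cross-entropy on tokens sampled from an earlier snapshot of
    the model itself. The targets are frozen data, so continued training
    quietly drifts off-policy as the current weights move."""
    vocab = student_logits.size(-1)
    return F.cross_entropy(student_logits[:, :-1].reshape(-1, vocab),
                           sampled_ids[:, 1:].reshape(-1))

def distill_to_fixed_teacher(student_logits: torch.Tensor,
                             teacher_logits: torch.Tensor) -> torch.Tensor:
    """Reverse KL to a frozen teacher, evaluated on rollouts the current
    student just produced. The target distribution never moves, which is why
    self-distillation stays stable."""
    s = F.log_softmax(student_logits, dim=-1)
    t = F.log_softmax(teacher_logits, dim=-1)
    return (s.exp() * (s - t)).sum(-1).mean()
```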
On-policy distillation is more than just another training technique; it's a foundational shift in how we create specialized, expert AI. By combining the direct relevance of learning from one's own actions with the incredible efficiency of dense, token-by-token feedback, it solves some of the biggest challenges in applied AI.
The benefits are clear: massive compute savings, a cure for catastrophic forgetting, and remarkable data efficiency. This lowers the barrier to entry, letting more teams build and maintain custom models with deep domain knowledge without sacrificing core capabilities. This democratization of expert AI will fuel new business models and create competitive advantages previously reserved for frontier labs.


