Researchers have developed a new way to train AI models. The technique combines the best of both worlds: dense, token-by-token feedback on the student model's own attempts. This smarter feedback loop has a massive impact on efficiency.

Beyond Brute Force: 4 Secrets to Smaller, Smarter, and Dramatically Cheaper AI

2025/11/01 23:00

Large Language Models (LLMs) are incredibly powerful generalists, but transforming them into specialized experts is a major challenge. The process of training a model on new, specific knowledge like internal company documents or a complex reasoning task is notoriously expensive, time-consuming, and fraught with pitfalls. We want smaller, more efficient models that can master a domain without the compute budget of a tech giant.

The core idea behind making smaller models smarter is a concept called "distillation." In this process, a smaller "student" model learns from a larger, more capable "teacher" model. The student doesn't just learn from a static textbook of examples; it learns to mimic the teacher's thought process. This is a powerful shortcut for transferring expertise.

Until now, however, engineers have faced a frustrating trade-off. One approach, on-policy reinforcement learning (RL), forces the student to learn from its own mistakes, which is relevant but painfully slow. The alternative, off-policy distillation, is much faster but dangerously flawed: the student learns from the teacher's ideal examples, which often occur in contexts the student will never encounter on its own, so errors compound. This trade-off has been the bottleneck for creating specialized AI.

A powerful technique called "on-policy distillation" combines the best of both worlds. By having a teacher model provide dense, token-by-token feedback on the student model's own attempts, we can achieve breakthroughs in training efficiency and capability. Here are the four most surprising and impactful takeaways from this approach.

A Smarter Feedback Loop Makes AI Training Up to 100x Cheaper

The fundamental difference between Reinforcement Learning (RL) and Distillation lies in the density of the feedback. To understand this, imagine learning to play chess.


  • On-policy RL is like learning chess by only being told if you won or lost at the very end of a match. The feedback is directly related to your actions, but it's sparse. You know you lost, but you don't know if it was because of your opening, a mid-game blunder, or a weak endgame.
  • Off-policy distillation is like watching a grandmaster play. You observe brilliant moves, but they are made in complex board positions that you, as a novice, will rarely find yourself in. The feedback is dense, but the context is often irrelevant to your own learning path.
  • On-policy distillation provides the best of both worlds. It's like having an expert coach who grades every single one of your moves in your own games, telling you if a move was a "blunder," "inaccuracy," or "brilliant." The feedback is both dense and perfectly relevant to your current skill level.

This smarter feedback loop has a massive impact on efficiency. In a direct comparison where a student model learned from a teacher trained via RL, on-policy distillation allowed the student to reach the teacher's performance level 7-10 times faster in terms of gradient steps. This translates to a staggering 50-100x improvement in cumulative compute efficiency.

The reason for this dramatic speedup is that on-policy distillation provides more useful information (more "bits per episode") for the model to learn from. Because this dense, token-level feedback reduces gradient noise, it allows for training with shorter contexts and smaller, more efficient batch sizes, further slashing the overall computational cost.
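To make the "dense, token-by-token feedback" concrete, here is a minimal sketch of the kind of per-token loss on-policy distillation can use. It assumes a reverse-KL objective evaluated on tokens the student itself sampled, with both models reduced to their output logits; the function name and toy shapes are illustrative, not taken from any particular library.

```python
import torch
import torch.nn.functional as F

def per_token_distill_loss(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor) -> torch.Tensor:
    """Reverse KL, KL(student || teacher), averaged over every sampled token.

    Both tensors have shape (batch, seq_len, vocab) and are evaluated on the
    student's own rollouts, so every position receives a graded signal.
    """
    student_logp = F.log_softmax(student_logits, dim=-1)
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    # KL(p_student || p_teacher) = sum_v p_student(v) * (log p_student(v) - log p_teacher(v))
    kl_per_token = (student_logp.exp() * (student_logp - teacher_logp)).sum(dim=-1)
    return kl_per_token.mean()

# Toy usage: random logits stand in for the two models' outputs on one rollout.
torch.manual_seed(0)
student_logits = torch.randn(2, 16, 100, requires_grad=True)   # (batch, tokens, vocab)
teacher_logits = torch.randn(2, 16, 100)
loss = per_token_distill_loss(student_logits, teacher_logits)
loss.backward()   # every token position contributes to the gradient
```

Compare this with an RL reward, which collapses an entire rollout into a single scalar at the end: here every one of the sixteen toy positions contributes its own term, which is exactly the "more bits per episode" described above.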

You Can Cure “AI Amnesia” When Teaching New Knowledge

A common and frustrating problem in AI is "catastrophic forgetting." When you take a pre-trained model and fine-tune it on new, specialized information (like your company's internal knowledge base), it often degrades or completely forgets its original, general-purpose skills, such as the ability to follow instructions.

Consider an experiment to create an "internal assistant." Researchers started with the Qwen3-8B model, which had a strong instruction-following score of 85%. After fine-tuning it on a 70-30 mix of internal company documents and general chat data:


  • Its knowledge about the documents improved significantly (from 18% to 36% on a QA evaluation).
  • However, its instruction-following skill degraded noticeably, dropping from 85% to 79%.

The solution was a brief phase of on-policy distillation after the initial fine-tuning. By using the original version of the model as the teacher, researchers could restore the lost behavior. The results were powerful:


  • Instruction-following performance was almost fully recovered, jumping back up to 83%.
  • Crucially, this happened without losing the newly acquired knowledge. In fact, the knowledge score even improved slightly to 41%.

This finding is a game-changer for "continual learning": the ability to update models with new information over time without performing expensive, full-scale retraining from scratch. It provides a reliable way to teach an AI new facts without it forgetting its core skills.
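A rough sketch of this two-phase recipe is below. It uses tiny stand-in models so the code runs end to end; in the experiment described above the student is Qwen3-8B and the teacher is a frozen copy of its original weights, and every name here is illustrative rather than a reference to any specific codebase.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, SEQ = 100, 32, 16
torch.manual_seed(0)

# Tiny stand-in "language model"; real use would load the actual checkpoint.
student = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))

# Freeze a copy of the ORIGINAL model before fine-tuning: it becomes the teacher
# whose instruction-following behavior we later want to restore.
teacher = copy.deepcopy(student).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def sample_rollout(model, prompt_tok, length=SEQ):
    """Autoregressively sample a continuation (toy model: next token depends only on the previous one)."""
    toks = [prompt_tok]
    for _ in range(length):
        logits = model(toks[-1])
        toks.append(torch.distributions.Categorical(logits=logits).sample())
    return torch.stack(toks, dim=1)                          # (batch, length + 1)

# ---- Phase 1: supervised fine-tuning on the new domain (the "company documents"). ----
domain_tokens = torch.randint(0, VOCAB, (8, SEQ))            # stand-in for document text
for _ in range(50):
    logits = student(domain_tokens[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), domain_tokens[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# ---- Phase 2: brief on-policy distillation against the frozen original model. ----
for _ in range(20):
    prompts = torch.randint(0, VOCAB, (8,))                  # stand-in for chat prompts
    with torch.no_grad():
        rollout = sample_rollout(student, prompts)           # the student's OWN attempts
    student_logp = F.log_softmax(student(rollout[:, :-1]), dim=-1)
    with torch.no_grad():
        teacher_logp = F.log_softmax(teacher(rollout[:, :-1]), dim=-1)
    # Per-token reverse KL grades the student's own rollouts against the fixed
    # original model, pulling the lost behavior back without retraining from scratch.
    kl = (student_logp.exp() * (student_logp - teacher_logp)).sum(-1).mean()
    opt.zero_grad(); kl.backward(); opt.step()
```

The key design choice is freezing the teacher before phase 1: phase 2 then grades the fine-tuned student's own rollouts against that fixed reference, which is what restores instruction following without discarding the newly learned domain knowledge.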

An AI Can Master a Reasoning Skill From Just One Example

This finding is highly counterintuitive. In most AI training methods, repeatedly training a model on the exact same prompt is a recipe for failure; the model simply memorizes the answer instead of learning the underlying skill.

However, an experiment with on-policy distillation turned this assumption on its head. Researchers trained a student model on a math reasoning task using only a single, randomly chosen prompt. They trained on this one prompt for 20 consecutive steps, each with a batch of 256 rollouts, generating 5,120 total learning sequences.

The remarkable outcome: the student model approximately matched the performance of the expert teacher model on the AIME'24 math benchmark, despite only ever having seen that one problem.

This works because on-policy distillation teaches the model to approximate the teacher's entire thought process: its full probability distribution over what the next best token should be at every step, rather than just a memorized final answer. This means that for certain skills, the bottleneck isn't finding thousands of examples, but creating a single, perfectly guided learning experience.
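One way to write down the objective behind this experiment (my formalization, assuming the dense feedback is a per-token reverse KL as described above; here p is the single prompt, x is a rollout sampled from the student, and the KL is taken between the student's and the teacher's next-token distributions):

```latex
% Rollouts x are sampled from the student itself on the single prompt p;
% the teacher grades every token position t of every rollout.
\mathcal{L}(\theta) \;=\;
\mathbb{E}_{x \sim \pi_\theta(\cdot \mid p)}
\Big[ \sum_{t} \mathrm{KL}\big( \pi_\theta(\cdot \mid p, x_{<t}) \,\big\|\, \pi_{\text{teacher}}(\cdot \mid p, x_{<t}) \big) \Big]
```

Under this framing, each of the 20 × 256 = 5,120 rollouts contributes a loss term at every token position, so a single prompt still yields an enormous amount of graded signal about the teacher's distribution rather than one memorizable answer.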

Why "Practicing" on Its Own Samples Can Make an AI Dumber

It seems logical that if a model produces a high-quality output, you could feed that output back into its training data to reinforce good behavior. This method, known as supervised fine-tuning (SFT) on on-policy data, is like having the model "practice" on its own best work.

But researchers found the opposite to be true. When they trained a model using a dataset composed of its own samples, its performance on an instruction-following evaluation actually degraded.

The technical reason for this failure is subtle but critical. While the dataset of the model's own outputs might be perfectly on-policy on average, every finite batch of data exhibits a slightly different distribution. Training on these batches causes the model's internal policy to drift away from its original state. This process turns training on its own samples into a form of off-policy training over time, leading to the same compounding error and divergence seen in other flawed methods.

In contrast, on-policy distillation remains stable in this self-distillation scenario. Because the teacher model is a fixed, consistent target, the student can robustly converge on the desired behavior without degrading. This further cements on-policy distillation as a more reliable tool for behavior refinement and continual learning.
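For a concrete picture of the difference, here is a minimal, illustrative comparison of the two losses computed on the same student rollout (toy tensors only, not any library's API). The self-SFT loss targets the specific tokens the student happened to sample, so each finite batch nudges the policy toward its own sampling noise; the distillation loss targets the frozen teacher's full distribution, which never moves.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, T, V = 4, 12, 50                         # batch, tokens, vocab (toy sizes)
student_logits = torch.randn(B, T, V, requires_grad=True)
teacher_logits = torch.randn(B, T, V)       # frozen teacher: never updated

# The student's own rollout: tokens drawn from its current distribution.
sampled = torch.distributions.Categorical(logits=student_logits.detach()).sample()

# (a) SFT on the model's own samples: cross-entropy toward whatever was drawn.
#     The target is a noisy sample of the student itself, so each batch pulls
#     the policy slightly off-distribution and the drift compounds over updates.
sft_loss = F.cross_entropy(student_logits.reshape(-1, V), sampled.reshape(-1))

# (b) On-policy distillation: reverse KL toward the fixed teacher's distribution.
#     The target is stationary, so repeated updates converge instead of drifting.
s_logp = F.log_softmax(student_logits, dim=-1)
t_logp = F.log_softmax(teacher_logits, dim=-1)
distill_loss = (s_logp.exp() * (s_logp - t_logp)).sum(-1).mean()
```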

The Future of AI is Smaller, Faster, and More Personal

On-policy distillation is more than just another training technique; it's a foundational shift in how we create specialized, expert AI. By combining the direct relevance of learning from one's own actions with the incredible efficiency of dense, token-by-token feedback, it solves some of the biggest challenges in applied AI.

The benefits are clear: massive compute savings, a cure for catastrophic forgetting, and remarkable data efficiency. This is a key enabling technology that lowers the barrier to entry, allowing more teams to build and maintain custom models with deep domain knowledge without sacrificing core capabilities. This democratization of expert AI will fuel new business models and create competitive advantages previously reserved for frontier labs.


