August 5, 2024
8 mins

MUSCLE: A Model Update Strategy for Compatible LLM Evolution

MUSCLE, a paper from Apple Research, proposes training strategies for LLM updates that avoid negative changes in model behavior.

Key Takeaways

  • Updating large language models (LLMs) can lead to unexpected changes in how the model behaves, even causing previously correct answers to become incorrect.
  • This paper introduces new ways to measure these changes, going beyond simply tracking how often the updated model gets things wrong.
  • The researchers propose a new training method called MUSCLE to make updated LLMs more compatible with older versions, reducing inconsistent behavior and improving user experience.
  • This has significant implications for businesses relying on LLMs, as it can help maintain user trust and avoid disruptions when models are updated.

Introduction

Large Language Models (LLMs) are becoming increasingly popular for a wide range of applications, from chatbots and code assistants to summarization tools. These models are typically trained on massive amounts of data and require significant resources to develop. As new data becomes available or improvements are made to the underlying architecture, these LLMs are frequently updated to boost their performance.

However, these updates can be a double-edged sword. While they often improve overall accuracy, they can also introduce unexpected inconsistencies in the model's behavior. Imagine a chatbot that suddenly starts giving different answers to the same questions after an update, or a summarization tool that now misses key information it previously captured. These inconsistencies, often referred to as "flips," can be frustrating for users who have come to rely on a specific model's capabilities.

Companies typically address this by fine-tuning pre-trained LLMs for specific tasks. This involves training smaller "adapters" on top of the base LLM to tailor it for a specific application, such as summarization or question-answering. But when the base LLM is updated, these task-specific adapters often need to be retrained, potentially leading to the inconsistencies described above.

Background of the Paper

This paper builds on previous research that has primarily focused on "negative flips" in classification tasks. These are instances where a model correctly classifies an input before an update but gets it wrong afterward. While minimizing negative flips is important, this paper argues that it's not the whole story.

What if both the old and new models are incorrect, but give different answers? The paper argues that consistency in such cases is also valuable. Users might have developed strategies to work around a model's limitations, and sudden changes in how the model makes mistakes can disrupt these strategies and lead to user dissatisfaction.

This work draws from concepts like model ensembles, where multiple models are combined to reduce errors, and knowledge distillation, where a smaller "student" model is trained to mimic a larger "teacher" model. However, it goes beyond these existing techniques by:

Extending the concept of compatibility to generative tasks

Previous research primarily focused on classification tasks. This paper considers generative tasks like summarization, where the output is not a simple category but a sequence of text.

Proposing new metrics

The paper introduces new ways to measure compatibility that go beyond negative flip rate, taking into account inconsistencies even when both model versions are incorrect.

Developing a new training strategy

It presents a novel method called MUSCLE that uses knowledge distillation to align updated models with previous versions, reducing inconsistencies and improving user experience.

Problem Formulation

Setup

The paper considers a scenario where a pre-trained base LLM is fine-tuned for various downstream tasks using parameter-efficient adapters (specifically, Low-Rank Adapters or LoRA). This is a common practice in industry as it avoids the need to retrain the entire LLM for each task. When the base LLM is updated, the task-specific adapters are also retrained.
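As a concrete illustration, here is a minimal sketch of this setup using Hugging Face's peft library; the checkpoint name and LoRA hyperparameters below are illustrative assumptions, not the paper's exact configuration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base LLM (illustrative checkpoint, not necessarily the paper's).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach a low-rank adapter: only the small LoRA matrices are trained,
# while the expensive base weights stay frozen.
lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
task_model = get_peft_model(base, lora_cfg)
task_model.print_trainable_parameters()   # typically well under 1% of weights
```

When the base LLM is updated, a fresh adapter is trained with the same recipe on the new base, and that retraining is where flips can appear.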

Backward Compatibility Metrics

To measure compatibility between model versions, the paper examines existing metrics like Negative Flip Rate (NFR): the fraction of instances where the old model was correct but the new model is incorrect. For example, if the old model answers 90 of 100 test questions correctly and the updated model gets 5 of those 90 wrong, the NFR is 5%.
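A minimal sketch of the computation, assuming single-label classification with toy predictions:

```python
def negative_flip_rate(labels, old_preds, new_preds):
    """Fraction of all instances the old model got right but the new model gets wrong."""
    flips = sum(
        1 for y, o, n in zip(labels, old_preds, new_preds) if o == y and n != y
    )
    return flips / len(labels)

# Toy example: the update fixes one error but breaks one previously correct answer.
labels    = ["A", "B", "C", "D"]
old_preds = ["A", "B", "C", "A"]   # 3/4 correct
new_preds = ["A", "B", "D", "D"]   # still 3/4 correct, but item 3 flipped negatively
print(negative_flip_rate(labels, old_preds, new_preds))  # 0.25
```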

However, the paper argues that NFR alone is insufficient, as it doesn't capture inconsistencies when both models make mistakes.

Extended Evaluation Metrics

To address this, the paper proposes new metrics that account for "unobserved inconsistencies." For instance, in a multiple-choice question, if the old model chooses option A, the new model chooses option B, and the correct answer is C, this represents an inconsistency not captured by NFR.

For generative tasks, where the output is a sequence of text (e.g., in summarization), the paper introduces metrics like "Smooth Gain and Regression." These metrics use a similarity measure (e.g., ROUGE score) to quantify the difference between the outputs of the old and new models compared to the ground truth. This helps capture subtle changes in the model's behavior beyond simply whether the output is correct or not.
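The paper's exact definitions may differ in detail, but one plausible reading of a per-example gain/regression split, using the rouge-score package, looks like this:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def smooth_gain_and_regression(reference, old_summary, new_summary):
    """Change in similarity to the ground truth after the update,
    split into improvement (gain) and degradation (regression)."""
    old_sim = scorer.score(reference, old_summary)["rouge1"].fmeasure
    new_sim = scorer.score(reference, new_summary)["rouge1"].fmeasure
    gain = max(0.0, new_sim - old_sim)
    regression = max(0.0, old_sim - new_sim)
    return gain, regression

ref = "The meeting was moved to Friday at 10am."
print(smooth_gain_and_regression(
    ref,
    "Meeting moved to Friday.",            # old model's summary
    "The meeting is now Friday at 10am.",  # new model's summary
))
```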

Knowledge Transfer

The paper leverages knowledge distillation to improve compatibility. They train a "compatibility adapter" on top of the updated base LLM, using knowledge from both the old and new task-specific models. This is illustrated in Figure 3 of the paper, where they use a "masked approach" to selectively align the new adapter with either the old or new task model.

Model Update Strategy (MUSCLE)

The proposed MUSCLE strategy involves the following steps:

  1. Initialize the compatibility adapter with the new task-specific adapter.
  2. Fine-tune this adapter using the task training data.
  3. During fine-tuning, use a masking strategy to decide whether to align the adapter with the old or new task model. The paper proposes a simple heuristic: If the adapter currently predicts the correct output, align with the new model; otherwise, align with the old model. This helps preserve the new model's performance gains while reducing inconsistencies.

This approach allows the compatibility adapter to learn from both the improved accuracy of the updated model and the consistency of the older model, striking a balance between performance and compatibility.
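As a sketch of the idea (not the paper's exact Comp loss), the masked distillation objective might look like this in PyTorch, where the teacher distribution is chosen token by token:

```python
import torch
import torch.nn.functional as F

def masked_distillation_loss(adapter_logits, old_logits, new_logits, labels):
    """Sketch of a MUSCLE-style compatibility loss. All logits are
    (batch, seq, vocab); labels are (batch, seq). Details are assumptions."""
    # Where does the compatibility adapter already predict the right token?
    correct = (adapter_logits.argmax(dim=-1) == labels).unsqueeze(-1)

    # Token-wise teacher choice: new task model where correct, old model elsewhere.
    teacher_logits = torch.where(correct, new_logits, old_logits)

    log_p = F.log_softmax(adapter_logits, dim=-1)   # student log-probs
    q = F.softmax(teacher_logits, dim=-1)           # teacher probs
    # KL(teacher || student), averaged over the batch.
    return F.kl_div(log_p, q, reduction="batchmean")
```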

Experimental Setup

Model Update Assumptions

To evaluate their approach, the researchers consider a variety of model update scenarios, including:

  • Updates within a model family: Comparing different versions of the same model, such as Llama 1 to Llama 2 or different versions of the Phi model.
  • Updates due to different training data: Evaluating models trained on different datasets or with different data selection strategies.
  • Updates with increased parameters or context size: Analyzing how changes in model architecture affect compatibility.

Task Adapter Training

For each task, the researchers fine-tune both the old and new base LLMs using LoRA adapters. They use a standard training/validation split and select the best model based on cross-entropy validation loss.

Compatibility Adapter Training

The compatibility adapter is trained with the same hyperparameters as the task adapter, but using the proposed Comp loss function. This loss function incorporates the masking strategy described earlier to align the adapter with either the old or new task model.

Datasets

The researchers evaluate their method on a diverse set of tasks, including:

  • HellaSwag: A multiple-choice test of commonsense reasoning.
  • PIQA: A binary-choice task requiring understanding of physical commonsense.
  • GSM8k: A dataset of grade-school math problems requiring numerical reasoning.
  • SAMSum: A dialogue summarization task where the model generates summaries of conversations.
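For reference, all four benchmarks are commonly available on the Hugging Face Hub; the dataset IDs below are assumptions about the usual Hub versions, not necessarily the paper's exact data sources:

```python
from datasets import load_dataset

# Dataset IDs assumed; check the Hub for current availability.
hellaswag = load_dataset("hellaswag", split="validation")
piqa = load_dataset("piqa", split="validation")
gsm8k = load_dataset("gsm8k", "main", split="test")
samsum = load_dataset("samsum", split="test")

print(samsum[0]["dialogue"][:80], "->", samsum[0]["summary"])
```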

Results

Negative Flips Occur in Model Updates

The experiments confirm that model updates can indeed introduce a significant number of negative flips across various tasks, with some updates exhibiting flip rates of over 60%. Interestingly, the researchers find that the magnitude of negative flips is often inversely correlated with the performance gain of the update: even updates with modest accuracy gains can introduce substantial inconsistency.

Reduced Negative Flips in Classification

For classification-based tasks (HellaSwag and PIQA), MUSCLE successfully reduces negative flips, achieving up to a 40% reduction in some cases. This demonstrates the effectiveness of the proposed compatibility adapter in mitigating inconsistencies in updated models.

Increased Consistent Behavior

Beyond simply reducing negative flips, MUSCLE also improves consistency when both the old and new models are incorrect. This is particularly noticeable in cases where the performance gap between the models is smaller.

Reduced Regression in Generative Tasks

For the SAMSum summarization task, MUSCLE demonstrates a reduction in regression as measured by the ROUGE-1 score. The compatibility adapter helps preserve the gains of the updated model while mitigating the negative impact on previously well-summarized instances.

The Effect of Different Masking Strategies

The researchers analyze different masking strategies for their knowledge distillation approach. They find that their proposed heuristic, where the adapter aligns with the new model when correct and the old model when incorrect, provides the best balance between reducing negative flips and maintaining performance gains.

Business Implications

This research has significant implications for businesses that rely on LLMs. Here are a few key takeaways:

Maintain User Trust

By reducing inconsistencies in model updates, businesses can ensure a more stable and predictable user experience. This is crucial for building trust and preventing user frustration, especially in applications like chatbots and customer service agents where consistency is paramount.

Reduce Re-training Costs

While retraining task-specific adapters after a base LLM update is often necessary, MUSCLE can help minimize the extent of retraining required. This can translate to significant cost savings for businesses that frequently update their LLMs.

Faster Deployment of Updates

Businesses can deploy LLM updates with greater confidence, knowing that MUSCLE can help mitigate potentially disruptive changes in model behavior. This allows for faster iteration and improvement of LLM-powered applications.

Improved Explainability

By reducing unexpected changes, MUSCLE can make it easier to understand and explain the behavior of updated LLMs. This is important for debugging and ensuring that the model aligns with business requirements and ethical considerations.

Competitive Advantage

Businesses that prioritize compatibility and user experience when updating their LLMs can gain a competitive edge. This can lead to increased user satisfaction, reduced churn, and improved brand reputation.

Conclusion

This paper presents a valuable contribution to the field of LLM development by addressing the challenge of model update compatibility. By introducing new metrics, a novel training strategy (MUSCLE), and comprehensive experimental results, this work provides practical insights and tools for building more reliable and consistent LLM-powered applications. As LLMs continue to evolve and become more prevalent, ensuring compatibility between updates will be crucial for fostering user trust and maximizing the potential of these powerful technologies.
