July 13, 2024
9 mins

Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?

This paper from DeepMind introduces the LOFT benchmark to test long-context models against specialized techniques such as retrieval, RAG, SQL, and in-context learning.

Key takeaways:

  • Long-context language models (LCLMs) can now natively ingest and process vast amounts of information, potentially revolutionizing how we approach tasks previously reliant on specialized tools like retrieval systems or databases.
  • The LOFT benchmark, evaluating LCLMs on tasks with context sizes up to 1 million tokens, reveals their surprising ability to rival state-of-the-art retrieval and RAG systems.
  • However, LCLMs still face challenges in areas like compositional reasoning, as seen in their performance on SQL-like tasks.
  • The study emphasizes the significant influence of prompting strategies on LCLMs' performance, highlighting the need for continued research as context lengths grow.
  • LOFT provides a rigorous testing ground for LCLMs, showcasing their potential to supplant existing paradigms and tackle novel tasks as model capabilities scale.

Introduction

This paper investigates the potential of Long-Context Language Models (LCLMs) to tackle complex tasks that traditionally require specialized tools and complex pipelines.

LCLMs are a new generation of language models capable of processing vast amounts of information, exceeding the limitations of previous models. Imagine a single model capable of not only understanding a given question but also retrieving relevant information from a massive database of text, images, audio, or even entire codebases – that's what LCLMs promise.

To assess the capabilities of LCLMs, the authors introduce LOFT (Long-Context Frontiers), a benchmark that evaluates these models across six key areas:

Text Retrieval

This tests LCLMs’ ability to directly find relevant documents from a given corpus.

Visual Retrieval

This evaluates LCLMs’ performance in retrieving relevant images based on a textual query.

Audio Retrieval

Here, LCLMs must identify the audio clip that best matches a given transcription.

Retrieval-Augmented Generation (RAG)

This assesses LCLMs’ capacity to reason over a corpus and generate an answer based on retrieved information.

SQL

LCLMs are tasked with processing entire databases as text, enabling natural language database querying, thus potentially bypassing the need for formal query languages like SQL.

Many-Shot In-Context Learning (ICL)

This tests LCLMs' ability to learn from hundreds or thousands of examples provided in context, exceeding the limitations of traditional few-shot learning.

Background of the paper

This research builds on several key concepts:

Retrieval-Augmented Generation (RAG)

This is a popular approach for handling information-intensive tasks. RAG pipelines involve two main stages: retrieval (finding relevant information from a corpus) and generation (using that information to create an answer).
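
As a rough illustration, a minimal RAG pipeline might look like the sketch below. The word-overlap retriever and the call_llm stub are stand-ins chosen for brevity, not the paper's actual pipeline (which pairs a trained retriever such as Gecko with an LLM).

```python
# Minimal two-stage RAG sketch: retrieve relevant passages, then generate an answer.
# The word-overlap retriever and call_llm stub are illustrative stand-ins only.

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your actual LLM API call here.
    raise NotImplementedError("plug in an LLM client")

def retrieve(query: str, corpus: list[str], top_k: int = 5) -> list[str]:
    # Stage 1: score every passage against the query and keep the top-k.
    def overlap(passage: str) -> int:
        return len(set(query.lower().split()) & set(passage.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:top_k]

def generate(query: str, passages: list[str]) -> str:
    # Stage 2: pack the retrieved passages into a prompt and ask the model to answer.
    context = "\n\n".join(passages)
    prompt = (
        f"Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt)

def rag(query: str, corpus: list[str]) -> str:
    return generate(query, retrieve(query, corpus))
```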

In-Context Learning (ICL)

LCLMs can learn from examples provided in context. ICL has become a promising technique for adapting LLMs to new tasks without requiring fine-tuning.
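
To make the idea concrete, here is a minimal sketch of an in-context-learning prompt where the task is conveyed purely through labeled demonstrations; the task, examples, and formatting are illustrative, not drawn from the paper.

```python
# Sketch of a many-shot in-context-learning prompt: the task is defined entirely by
# labeled demonstrations packed into the context, with no fine-tuning.

def build_icl_prompt(examples: list[tuple[str, str]], query: str) -> str:
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nLabel:"

examples = [
    ("The movie was wonderful", "positive"),
    ("Terrible service and cold food", "negative"),
]
# With a long-context model, this list can grow to hundreds or thousands of examples.
prompt = build_icl_prompt(examples, "I would happily go back")
```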

LOFT: A 1 Million+ Token Long-Context Benchmark

LOFT is a comprehensive benchmark designed to test LCLMs' ability to tackle complex real-world tasks involving large context sizes.

LOFT's Design:

Diverse Tasks

LOFT covers a broad range of tasks, including retrieval, RAG, SQL-like reasoning, and many-shot in-context learning.

Multi-Modal

LOFT includes datasets from various modalities, encompassing text, visual, and audio, reflecting the real-world scenarios where LCLMs might be applied.

Scalability

The benchmark allows for dynamic scaling of context lengths, currently supporting up to 1 million tokens and easily expandable to tens of millions or even billions in the future.

Real-World Datasets

LOFT utilizes existing real-world datasets, rather than relying on synthetic tasks. This makes the benchmark more relevant to practical applications.

Key features of each LOFT task:

Retrieval

LOFT includes text, visual, and audio retrieval datasets. The focus here is to assess if LCLMs can directly retrieve relevant information without specialized retrieval models.

RAG

This task evaluates LCLMs on datasets that require them to reason over retrieved information and generate answers.

SQL

LOFT includes datasets like Spider and SParC, which test LCLMs’ ability to process databases as text, effectively querying them using natural language.
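
As a rough sketch of what "processing a database as text" might look like, the snippet below serializes a toy table into the prompt and asks a question in natural language; the table, question, and formatting are invented for illustration and may differ from the paper's serialization.

```python
# Sketch: serialize a small database as text so an LCLM can answer a natural-language
# question over it directly, instead of generating a SQL query.

employees = [
    {"name": "Ada",  "department": "Research", "salary": 120},
    {"name": "Bob",  "department": "Sales",    "salary": 90},
    {"name": "Cleo", "department": "Research", "salary": 110},
]

def table_to_text(name: str, rows: list[dict]) -> str:
    header = " | ".join(rows[0].keys())
    body = "\n".join(" | ".join(str(v) for v in row.values()) for row in rows)
    return f"Table: {name}\n{header}\n{body}"

prompt = (
    "You are given a database as text. Answer the question from the tables alone.\n\n"
    + table_to_text("employees", employees)
    + "\n\nQuestion: What is the average salary in the Research department?\nAnswer:"
)
# The model must do the filtering and averaging itself -- exactly the kind of
# compositional reasoning the paper finds hardest.
```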

Many-Shot ICL

This task explores LCLMs' ability to learn from a large number of examples provided in context. Datasets like BIG-Bench Hard (BBH) and LongICLBench (LIB) are used for this purpose.

Corpus-in-Context Prompting

The paper introduces a novel prompting technique for LCLMs called Corpus-in-Context (CiC) Prompting. This technique leverages the unique abilities of LCLMs to directly ingest and process large corpora of information provided within their context window.

Prompt Design

Instructions

Task-specific instructions are provided to guide the LCLMs' behaviors. For example, in a retrieval task, the model might be instructed to "read the corpus carefully and find relevant documents to answer the question."

Corpus Formatting

The entire corpus is inserted into the prompt, with each candidate (e.g., document, image, audio) assigned a unique identifier. This structure allows the model to reference specific candidates and ultimately output the correct identifiers as answers. The paper emphasizes that careful formatting of the corpus is crucial for optimal performance.

Few-Shot Examples

A small set of examples is provided in the prompt to demonstrate the desired response format and improve accuracy. These examples are grounded in the same corpus, encouraging the model to learn about the specific corpus it needs to use. The paper also suggests using Chain-of-Thought reasoning within these examples to further improve performance, particularly on tasks that require complex multi-hop reasoning.

Query Formatting

The query to be evaluated is formatted similarly to the few-shot examples, allowing the LCLM to generate tokens that are then parsed into the final answer.
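
Putting the four pieces together, a CiC prompt for text retrieval might be assembled along the lines of the sketch below; the exact wording, ID scheme, and delimiters are assumptions, not the paper's verbatim template.

```python
# Minimal sketch of a Corpus-in-Context (CiC) prompt for text retrieval, following the
# structure described above. Wording, ID scheme, and formatting are assumptions.

def format_corpus(docs: list[str]) -> str:
    # Each candidate gets a unique numeric ID; the ID is echoed after the content
    # so the model can cite it in its answer.
    return "\n".join(f"ID: {i} | {doc} | END ID: {i}" for i, doc in enumerate(docs))

def cic_prompt(instruction: str, docs: list[str],
               few_shot: list[tuple[str, str, str]], query: str) -> str:
    # few_shot holds (question, chain-of-thought, gold document ID) triples,
    # all grounded in the same corpus the test query will be answered against.
    shots = "\n\n".join(
        f"Question: {q}\nReasoning: {cot}\nAnswer: {doc_id}"
        for q, cot, doc_id in few_shot
    )
    return (
        f"{instruction}\n\n"
        f"=== Corpus ===\n{format_corpus(docs)}\n\n"
        f"=== Examples ===\n{shots}\n\n"
        f"Question: {query}\nReasoning:"
    )

prompt = cic_prompt(
    "Read the corpus carefully and find the documents relevant to the question.",
    ["The Eiffel Tower is in Paris.", "The Colosseum is in Rome."],
    [("Where is the Colosseum?", "Document 1 mentions the Colosseum and Rome.", "1")],
    "Which city is the Eiffel Tower in?",
)
```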

Design Consideration

The paper acknowledges that different CiC prompting techniques can lead to significant variations in context lengths due to differences in instructions, formatting, and tokenizers. They suggest allocating sufficient space for prompt customization and ensuring that the model utilizes only the corpus and examples present in the specific context length being evaluated.
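
One way to implement this budgeting is sketched below: trim the corpus to a target token budget while reserving headroom for instructions, few-shot examples, and the query. The whitespace tokenizer and the 10% reserve are illustrative assumptions, not the paper's settings.

```python
# Sketch: build a corpus for a target context size (e.g. 128k tokens) while reserving
# room for instructions, few-shot examples, and the query.

def build_corpus(docs: list[str], context_budget: int, reserve_frac: float = 0.1) -> list[str]:
    def count_tokens(text: str) -> int:
        return len(text.split())  # stand-in; use the model's real tokenizer in practice
    budget = int(context_budget * (1 - reserve_frac))
    kept, used = [], 0
    for doc in docs:
        cost = count_tokens(doc)
        if used + cost > budget:
            break
        kept.append(doc)
        used += cost
    return kept
```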

Discussion on Efficacy

Encoding large contexts can be slow and computationally expensive. However, the authors point out that CiC prompting is compatible with prefix-caching, a technique that allows for faster encoding of the corpus by encoding it only once, similar to indexing in traditional information retrieval. This significantly improves efficiency.
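
The sketch below illustrates the idea conceptually: because the corpus is a fixed prefix shared by every query, its expensive encoding can be computed once and reused, much like building an index. The functions here are toy stand-ins; real prefix caching happens inside the model-serving stack.

```python
# Conceptual sketch of why CiC prompting pairs well with prefix caching: the corpus is a
# fixed prefix shared across queries, so its one-off encoding can be cached and reused.

from functools import lru_cache

@lru_cache(maxsize=8)
def encode_prefix(corpus_prompt: str) -> tuple:
    # One-off, expensive pass over the corpus -- analogous to building an index in a
    # traditional retrieval system. Here a token tuple stands in for the model's
    # cached key/value states.
    return tuple(corpus_prompt.split())

def answer(corpus_prompt: str, query_suffix: str) -> str:
    cached_state = encode_prefix(corpus_prompt)  # cache hit after the first query
    # A real system would continue decoding from the cached states; here we only
    # show that just the short per-query suffix changes between calls.
    return f"<decode over {len(cached_state)} cached prefix tokens + suffix {query_suffix!r}>"
```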

LOFT Tasks and Primary Results

The paper evaluates three state-of-the-art LCLMs – Gemini 1.5 Pro, GPT-4o, and Claude 3 Opus – on the LOFT benchmark. The models were evaluated using their official APIs, and their prompts were chosen based on their performance on the development queries over the 128k token context.

In each LOFT task, these LCLMs are compared against specialized models that have been carefully hand-optimized for the specific task, showcasing the potential of LCLMs to tackle these tasks without relying on complex task-specific fine-tuning or pipelining.

Text Retrieval

Results

Gemini 1.5 Pro demonstrates performance comparable to Gecko, a leading dual-encoder model, at the 128k token context level. This is significant because LCLMs have not been specifically trained for retrieval. The authors observe that while LCLMs’ performance does degrade when scaling the corpus to millions of tokens, their performance at 128k highlights the potential of using LCLMs for retrieval tasks.

Positional Analysis

The study investigates the impact of the position of the gold document (the document containing the answer) within the corpus. Results indicate that performance drops when gold documents are placed towards the end of the corpus. However, placing the gold documents of few-shot queries at the end seems to mitigate this issue. Co-locating gold documents of few-shot and test queries consistently boosts performance. This suggests that LCLMs do indeed pay special attention to the locations where the gold documents for the few-shot examples are placed, offering a promising approach to overcome performance degradation in large corpora.
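
A minimal sketch of that mitigation, under the assumption that documents carry explicit IDs, might look like this:

```python
# Sketch of the mitigation described above: place the gold documents of the few-shot
# queries at the end of the corpus so the model learns to attend to that region.
# The ID scheme and corpus are illustrative.

def move_few_shot_gold_to_end(corpus: dict[str, str],
                              few_shot_gold_ids: set[str]) -> list[tuple[str, str]]:
    # corpus maps document ID -> text; IDs stay attached so reordering keeps references valid.
    items = list(corpus.items())
    rest = [(i, d) for i, d in items if i not in few_shot_gold_ids]
    gold = [(i, d) for i, d in items if i in few_shot_gold_ids]
    return rest + gold  # few-shot gold documents now sit at the end of the corpus
```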

Visual Retrieval

Results

Gemini 1.5 Pro outperforms GPT-4o across all four visual benchmarks, maintaining a performance advantage over CLIP (a widely used text-to-image retrieval model) across all context lengths.

Audio Retrieval

Results

Gemini 1.5 Pro demonstrates performance comparable to PaLM 2 DE, a dual-encoder trained for audio retrieval, across all five languages evaluated. Gemini notably surpasses PaLM 2 DE in Hindi, highlighting the potential benefits of its diverse pre-training data.

RAG

Results

Gemini 1.5 Pro, with the entire corpus in context, outperforms the RAG pipeline (which relies on separate retrieval and generation stages) on multi-hop datasets like HotpotQA and MuSiQue. This is likely because LCLMs can reason over multiple passages in the context window using Chain-of-Thought, a capability that RAG pipelines typically lack. However, specialized retrievers like Gecko excel at ranking relevant passages, making them more effective for multi-target datasets (e.g., QUEST and QAMPARI).

Closed-Book Ablations

The authors conduct a closed-book ablation study in which they remove the corpus from the context, assessing the LCLM's performance on parametric knowledge alone. Gemini 1.5 Pro performs significantly worse in this setting, underscoring how much the model relies on the corpus provided in context rather than on what it memorized during training.

SQL-Like Compositional Reasoning

Results

LCLMs show reasonable performance on SQL-like tasks but lag significantly behind the specialized pipeline, indicating room for improvement in compositional reasoning capabilities.

Reasoning Analysis

The study categorizes queries based on the operators in their SQL queries (e.g., averaging, counting, equality, inequality). Results reveal that averaging is the most difficult operation, while counting is relatively easy. Reasoning over equality is also considerably easier than reasoning over inequality.

Many-Shot ICL

Results

Gemini 1.5 Pro outperforms GPT-4o on most ICL benchmarks, demonstrating strong in-context learning abilities. However, its performance on BBH-tracking7 is unexpectedly low. Claude 3 Opus achieves the best performance among the evaluated LCLMs.

Scaling Many-Shot ICL

The study explores the impact of increasing the number of examples provided in the prompt. Results show that while knowledge-intensive tasks (e.g., BBH-date, BBH-salient) see monotonic improvements, reasoning-intensive tasks (e.g., BBH-tracking7, BBH-web) do not benefit as much, suggesting a limit to how much models can learn from scaling the number of in-context examples.

CiC Prompt Ablations

The paper conducts ablations over different facets of the CiC Prompt to analyze their impact on performance.

Removing instructions (Generic Instruction)

This leads to worse performance, highlighting the importance of providing clear and task-specific instructions.

Removing Chain-of-Thought reasoning (Without CoT)

Performance decreases, highlighting the value of this technique, particularly for complex tasks.

Using a separate corpus for each few-shot example (Corpus in Each Few-Shot)

This also leads to a performance drop, suggesting that sharing a single corpus for both few-shot examples and the evaluation task is beneficial.

Placing the query at the beginning of the prompt (Query at Beginning)

This results in a significant performance decrease. Keeping the query at the end also keeps the corpus as a fixed prefix that can be cached and reused across queries, as discussed in the prefix-caching section above.

Using alphanumeric IDs instead of sequential numeric IDs (Alphanumeric IDs)

This negatively impacts performance, possibly due to the way numbers are tokenized.

Removing ID echoes at the end of documents (Without ID Echo)

This leads to a performance drop, suggesting that echoing each document's ID after its content helps the model keep track of, and correctly output, the right identifiers over long contexts.

Removing document content and using only titles (Title Only)

This significantly degrades performance, indicating that the model relies on the content provided within the context.

Varying the number of few-shot examples

Performance consistently improves with an increasing number of examples.

Business implications

This paper has major implications for businesses working with LLMs. It suggests that LCLMs have the potential to significantly streamline and optimize various business processes by:

Eliminating the need for specialized tools

A single LCLM could stand in for separate retrieval systems, databases, or other tools, simplifying development and maintenance.

Enhancing user-friendliness

LCLMs could allow for more natural language interactions, making it easier for users to access information and complete tasks.

Minimizing cascading errors

Consolidating complex pipelines into a single LCLM can reduce the risk of cascading errors, improving system reliability.

Unlocking new possibilities

LCLMs might enable entirely new applications and use cases previously impossible due to context limitations.

Conclusion

The paper concludes that while LCLMs have made significant progress and can rival specialized models in tasks like retrieval and RAG, they still face challenges in areas like compositional reasoning. Further research is crucial to improve their performance on more complex tasks and to optimize their efficiency and instructability. LOFT provides a valuable benchmark for measuring the progress of LCLMs and for driving future research in this area.

Does long context really subsume RAG, SQL, etc.?

The answer is nuanced. While LCLMs show promising results, they don't entirely subsume RAG, SQL, or other specialized approaches just yet. Here's a breakdown:

Retrieval

LCLMs demonstrate strong performance in retrieval tasks at moderate context lengths, making them a viable alternative to specialized retrievers.

RAG

LCLMs can surpass RAG pipelines in scenarios requiring multi-hop reasoning but may still fall short in tasks requiring exhaustive retrieval.

SQL

LCLMs have shown the potential to handle structured data using natural language, but their performance in complex compositional reasoning tasks still lags behind specialized pipelines.

In conclusion, LCLMs represent a significant advancement, offering a more unified and potentially simpler approach for tackling complex tasks. However, they are still in their early stages of development and may not be ready to completely replace existing specialized tools and systems. As these models continue to improve and scale, we can expect them to play an increasingly important role in various business domains.

