BayJarvis: Blogs on optimization

paper LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models - 2024-03-26

Large Language Models (LLMs) like ChatGPT have transformed numerous fields by leveraging their extensive reasoning and generalization capabilities. However, as the complexity of prompts increases, with techniques like chain-of-thought (CoT) and in-context learning (ICL) becoming more prevalent, the computational demands skyrocket. This paper introduces LLMLingua, a sophisticated prompt compression method designed to mitigate these challenges. By compressing prompts into a more compact form without significant loss of semantic integrity, LLMLingua enables faster inference and reduced computational costs, promising up to 20x compression rates with minimal performance degradation. …
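To make the idea concrete, here is a rough sketch of the intuition behind perplexity-based prompt compression, not the authors' implementation: a small language model scores how surprising each token is, and low-information tokens are dropped first. The choice of GPT-2 as the scorer and the simple top-k keep rule are assumptions for illustration only.

```python
# A minimal sketch of perplexity-based prompt compression in the spirit of
# LLMLingua (not the authors' implementation). A small causal LM scores token
# informativeness; low-surprisal tokens are dropped first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def compress_prompt(prompt: str, keep_ratio: float = 0.5) -> str:
    """Keep the `keep_ratio` most surprising tokens, preserving their order."""
    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits
    # Negative log-likelihood of each token given its prefix (the first token has no prefix).
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    nll = -log_probs.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)[0]
    k = max(1, int(keep_ratio * nll.numel()))
    keep = torch.topk(nll, k).indices.sort().values + 1  # +1 shifts back to token positions
    return tokenizer.decode(input_ids[0, keep], skip_special_tokens=True)

print(compress_prompt(
    "Step-by-step reasoning: first compute the total cost, then divide by the number of items.",
    keep_ratio=0.4,
))
```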

paper Evolutionary Optimization of Model Merging Recipes - 2024-03-24

The field of large language models (LLMs) has witnessed a paradigm shift with the advent of model merging, a novel approach that combines multiple LLMs into a unified architecture without additional training, offering a cost-effective strategy for new model development. This technique has sparked a surge in experimentation due to its potential to democratize the development of foundational models. However, the reliance on human intuition and domain knowledge in model merging has been a limiting factor, calling for a more systematic method to explore new model combinations. …
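As an illustration of the kind of search the paper automates, here is a minimal sketch, under assumed simplifications, of an evolutionary loop over parameter-space merging: each candidate is a vector of per-layer mixing weights between two fine-tuned models, and a simple (1+lambda)-style strategy keeps whichever recipe scores best. `evaluate_merged` is a hypothetical stand-in for running the merged model on a validation task; it is not the paper's recipe.

```python
# A minimal sketch (not the paper's method) of evolutionary search over
# parameter-space merging: candidates are per-layer mixing weights between
# two fine-tuned models, scored by a task-specific fitness function.
import copy
import numpy as np

def merge(state_a, state_b, alphas, layer_names):
    """Per-layer linear interpolation: w = alpha * a + (1 - alpha) * b."""
    merged = copy.deepcopy(state_a)
    for alpha, name in zip(alphas, layer_names):
        merged[name] = alpha * state_a[name] + (1 - alpha) * state_b[name]
    return merged

def evolve(state_a, state_b, layer_names, evaluate_merged,
           pop_size=16, generations=20, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    best = np.full(len(layer_names), 0.5)  # start from a uniform 50/50 merge
    best_fit = evaluate_merged(merge(state_a, state_b, best, layer_names))
    for _ in range(generations):
        # Mutate the current best recipe and keep the fittest offspring.
        pop = np.clip(best + sigma * rng.standard_normal((pop_size, len(layer_names))), 0, 1)
        fits = [evaluate_merged(merge(state_a, state_b, a, layer_names)) for a in pop]
        if max(fits) > best_fit:
            best_fit, best = max(fits), pop[int(np.argmax(fits))]
    return best, best_fit
```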

paper Training Language Model Agents without Modifying Language Models - 2024-03-19

Reframing Large Language Models (LLMs) as agents has ushered in a new paradigm of automation. Researchers and practitioners have increasingly been using these models as agents to automate complex tasks using specialized functions. However, integrating useful functions into LLM agents often requires manual effort and extensive iterations, which is time-consuming and inefficient. Inspired by the analogy of humans continuously forging tools to adapt to tasks, this paper introduces a novel approach to train LLM agents by forging their functions, treating them as learnable 'agent parameters', without modifying the LLM weights. This paradigm, termed 'Agent Training', involves updating the agent's functions to maximize task-solving ability, offering a promising avenue for developing specialized LLM agents efficiently. …
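The sketch below illustrates the flavor of this training loop under assumed simplifications: the LLM's weights stay frozen while an "optimizer" step edits the agent's function set and keeps the edit only if task performance improves. `propose_function_edits`, `run_agent`, and `score` are hypothetical stand-ins, not the paper's API.

```python
# A minimal sketch of treating an agent's functions as learnable parameters:
# no LLM weights are updated; instead, an optimizer step proposes edits to the
# tool set based on recent execution traces, and edits are kept only when the
# average task score improves.
def train_agent_functions(tasks, functions, propose_function_edits, run_agent, score,
                          epochs=3):
    best_score = sum(score(run_agent(t, functions)) for t in tasks) / len(tasks)
    for _ in range(epochs):
        # Collect execution traces with the current function set.
        history = [run_agent(t, functions) for t in tasks]
        # "Gradient step": an LLM call that adds, revises, or removes functions.
        candidate = propose_function_edits(functions, history)
        cand_score = sum(score(run_agent(t, candidate)) for t in tasks) / len(tasks)
        # Keep the edit only if task-solving ability improves.
        if cand_score > best_score:
            functions, best_score = candidate, cand_score
    return functions, best_score
```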

paper In-Context Learning for Extreme Multi-Label Classification - 2024-03-13

Multi-label classification problems with thousands of possible classes are extremely challenging, especially when using in-context learning with large language models (LLMs). Demonstrating every possible class in the prompt is infeasible, and LLMs may lack the knowledge to precisely assign the correct labels. …
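One common workaround, sketched below with hypothetical `llm_complete` and `embed` helpers, is to never show the LLM the full label space: let it propose free-text label guesses, then map those guesses onto the real labels by embedding similarity.

```python
# A minimal infer-then-retrieve sketch for extreme multi-label classification
# (illustrative, not the paper's pipeline). `llm_complete` and `embed` are
# hypothetical stand-ins for an LLM call and a sentence-embedding model.
import numpy as np

def classify(document, label_names, label_embs, llm_complete, embed, top_k=5):
    # Step 1 (infer): ask the LLM for candidate labels in free text.
    guesses = llm_complete(
        f"List up to {top_k} short topic labels for this document:\n{document}"
    ).splitlines()
    # Step 2 (retrieve): map each guess to the nearest real labels by cosine similarity.
    scores = np.zeros(len(label_names))
    for guess in guesses:
        q = embed(guess)
        sims = label_embs @ q / (np.linalg.norm(label_embs, axis=1) * np.linalg.norm(q) + 1e-9)
        scores = np.maximum(scores, sims)
    # Step 3 (rank): return the best-matching labels from the true label space.
    top = np.argsort(-scores)[:top_k]
    return [label_names[i] for i in top]
```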

paper BitNet: Scaling 1-bit Transformers for Large Language Models - 2024-03-09

The exponential growth of large language models poses significant challenges in terms of deployment costs and environmental impact due to high energy consumption. In response to these challenges, this paper introduces BitNet, a scalable and stable 1-bit Transformer architecture designed for large language models. By introducing BitLinear as a replacement for the traditional nn.Linear layer, BitNet aims to train with 1-bit weights from scratch, significantly reducing the memory footprint and energy consumption while maintaining competitive performance. …
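Here is a simplified sketch of what a BitLinear-style layer could look like; the paper's full recipe also quantizes activations and normalizes inputs before the matmul, which this illustration omits.

```python
# A simplified BitLinear-style layer (not the paper's complete recipe).
# Weights are binarized to {-1, +1} in the forward pass, with a straight-through
# estimator so the full-precision latent weights still receive gradients.
import torch
import torch.nn as nn

class BitLinear(nn.Linear):
    def forward(self, x):
        w = self.weight
        # Center, then binarize; scale by the mean absolute value to preserve magnitude.
        w_centered = w - w.mean()
        scale = w_centered.abs().mean()
        w_bin = torch.sign(w_centered) * scale
        # Straight-through estimator: forward uses w_bin, backward flows through w.
        w_ste = w + (w_bin - w).detach()
        return nn.functional.linear(x, w_ste, self.bias)

layer = BitLinear(64, 32)              # drop-in replacement for nn.Linear
out = layer(torch.randn(8, 64))
```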

paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model - 2023-11-05

In today's post, we delve into a recent paper that investigates the intricacies of Reinforcement Learning in the context of Large Language Models (LLMs). This study shines a light on the challenges and nuances of training such models to align better with human preferences. …
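For reference, the heart of DPO is a single classification-style loss on preference pairs; the compact sketch below computes it from summed token log-probabilities of the chosen and rejected responses under the policy and a frozen reference model.

```python
# A compact sketch of the DPO objective: widen the margin by which the policy
# prefers the chosen response over the rejected one, relative to a frozen
# reference model, with no explicit reward model and no RL loop.
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: beta-scaled log-ratios between policy and reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the log-sigmoid of the reward margin.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```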

paper Branching Beyond PPO: How MCTS Sprouts Superior Text Generation - 2023-11-05

We've all been there - diligently using Proximal Policy Optimization (PPO) for text generation, only to wonder if there's more to be extracted from our models. If you've been in this boat, you're in for a treat! A recent paper under review for ICLR 2024 offers some intriguing insights. …
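As a taste of the idea, here is a deliberately simplified, one-step sketch of value-guided search at decoding time (closer to a bandit over the next token than a full MCTS tree). `next_token_logprobs` and `value_fn` are hypothetical stand-ins for the policy's next-token distribution and a PPO-style value head; `prefix` is assumed to be a list of tokens.

```python
# A one-step, bandit-style simplification of value-guided MCTS decoding
# (illustrative, not the paper's algorithm): expand the top-k next tokens,
# score them with a value network, and pick the most-visited child.
import math

def mcts_pick_token(prefix, next_token_logprobs, value_fn,
                    simulations=32, top_k=8, c_puct=1.0):
    priors = sorted(next_token_logprobs(prefix).items(), key=lambda kv: -kv[1])[:top_k]
    stats = {tok: {"N": 0, "W": 0.0, "P": math.exp(lp)} for tok, lp in priors}
    total = 0
    for _ in range(simulations):
        # Selection: PUCT balances the running value estimate against the policy prior.
        tok = max(stats, key=lambda t: (stats[t]["W"] / (stats[t]["N"] or 1))
                  + c_puct * stats[t]["P"] * math.sqrt(total + 1) / (1 + stats[t]["N"]))
        # Evaluation: score the one-step continuation with the value network.
        v = value_fn(prefix + [tok])
        stats[tok]["N"] += 1
        stats[tok]["W"] += v
        total += 1
    # Act: choose the most-visited token, as in standard MCTS-style decoding.
    return max(stats, key=lambda t: stats[t]["N"])
```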

paper Prompting the Future: From Hard-Coded to Hard-Core Compiler Magic in DSPy - 2023-10-31

The machine learning community stands at the cusp of another significant transformation. While language model pipelines have garnered attention, the introduction of DSPy promises to reshape the landscape. Let's dive into this groundbreaking paper and its implications. …
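A minimal sketch of the programming model is below; exact APIs vary across DSPy versions, so treat it as illustrative rather than copy-paste ready. You declare a signature for what a module should do, and a teleprompter "compiles" the pipeline by bootstrapping demonstrations against a metric instead of hand-writing prompts.

```python
# A minimal DSPy-style sketch (API details vary by version; illustrative only).
import dspy
from dspy.teleprompt import BootstrapFewShot

dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

class AnswerQuestion(dspy.Signature):
    """Answer the question concisely."""
    question = dspy.InputField()
    answer = dspy.OutputField()

qa = dspy.ChainOfThought(AnswerQuestion)   # no hand-written prompt template

def exact_match(example, pred, trace=None):
    return example.answer.lower() == pred.answer.lower()

trainset = [dspy.Example(question="2 + 2?", answer="4").with_inputs("question")]
# "Compile": the teleprompter bootstraps few-shot demos that satisfy the metric.
compiled_qa = BootstrapFewShot(metric=exact_match).compile(qa, trainset=trainset)
print(compiled_qa(question="What is the capital of France?").answer)
```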

paper Cost-Effective Hyperparameter Tuning for LLMs on a Budget - 2023-10-18

Large language models (LLMs) like GPT-3 offer impressive text generation capabilities. But with API pricing tied to compute usage, high costs limit wider adoption of LLMs. How can we maximize the value extracted from these models under budget constraints? …
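One way to picture the problem, sketched below with an assumed search space and a hypothetical `evaluate` function, is budget-aware search over inference hyperparameters: keep sampling configurations and tracking spend until the optimization budget runs out. This is an illustration of the setup, not the paper's tuner.

```python
# A minimal sketch of budget-aware hyperparameter search for LLM inference.
# `evaluate` is a hypothetical function returning (validation_score, dollar_cost)
# for a given configuration; the search space below is an assumption.
import random

SEARCH_SPACE = {
    "model": ["gpt-3.5-turbo", "gpt-4"],
    "temperature": [0.0, 0.5, 1.0],
    "max_tokens": [128, 256, 512],
    "n": [1, 3, 5],   # samples per query (best-of-n)
}

def tune(evaluate, optimization_budget=10.0, seed=0):
    rng = random.Random(seed)
    spent, best, best_score = 0.0, None, float("-inf")
    while spent < optimization_budget:
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score, cost = evaluate(config)
        spent += cost
        if score > best_score:
            best, best_score = config, score
    return best, best_score, spent
```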