BayJarvis: Blogs on continual-learning

paper Scaling Laws for Forgetting When Fine-Tuning Large Language Models - 2024-03-16

When fine-tuning Large Language Models (LLMs) like GPT-3 or BERT for specific tasks, a common challenge is "forgetting": the model loses some of its pre-trained capabilities. This phenomenon is particularly noticeable with Parameter-Efficient Fine-Tuning (PEFT) methods such as Low-Rank Adapters (LoRA). …
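
For readers unfamiliar with LoRA, here is a minimal sketch of the idea: the pre-trained weight is frozen, and only a small trainable low-rank update is learned on top of it. The class name `LoRALinear` and the hyperparameter values are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pre-trained weights
            p.requires_grad = False
        # Only these rank * (in_features + out_features) parameters train.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank          # conventional LoRA scaling factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))           # only A and B receive gradients
```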

paper Simple and Scalable Strategies to Continually Pre-train Large Language Models - 2024-03-15

Large language models (LLMs) are cornerstone technologies in AI, driving advances across many fields. However, re-training an LLM from scratch on every new dataset is costly and computationally inefficient. This paper presents simple, scalable strategies for continual pre-training, which update an LLM incrementally on new data without full re-training, saving substantial compute. …
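
Strategies in this line of work typically combine a re-warmed learning-rate schedule with replay of a small fraction of earlier data to limit forgetting. The sketch below illustrates such a loop under those assumptions; the `model(batch)`-returns-loss interface and the `replay_fraction` value are hypothetical, not the paper's code.

```python
import random

def continual_update(model, optimizer, scheduler, new_batches, replay_pool,
                     replay_fraction=0.05):
    """Illustrative continual pre-training loop: train on new-domain batches
    while occasionally replaying batches from earlier data distributions."""
    for batch in new_batches:
        if replay_pool and random.random() < replay_fraction:
            batch = random.choice(replay_pool)  # mix in old-distribution data
        loss = model(batch)        # assumed: model returns its LM loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()           # e.g. a learning rate re-warmed for new data
    return model
```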

paper A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA - 2024-03-14

Low-Rank Adapters (LoRA) have emerged as a popular parameter-efficient fine-tuning method for large language models. By adding trainable low-rank "adapters" to selected layers, LoRA enables effective fine-tuning while dramatically reducing the number of trainable parameters. However, conventional LoRA scales the adapter update by a factor that divides by the rank. A new paper by researcher Damjan Kalajdzievski shows that this rank-dependent scaling actually slows learning and limits the gains from higher-rank adapters. …
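
Concretely, conventional LoRA scales the adapter update by α/r, so the update's magnitude shrinks as the rank r grows; the paper's rank-stabilized variant (rsLoRA) uses α/√r instead, keeping the update's scale stable across ranks. The snippet below compares the two factors (α = 16 is chosen purely for illustration).

```python
import math

def lora_scale(alpha: float, rank: int) -> float:
    return alpha / rank             # conventional: shrinks fast at high rank

def rslora_scale(alpha: float, rank: int) -> float:
    return alpha / math.sqrt(rank)  # rank-stabilized scaling (rsLoRA)

for r in (8, 64, 512):
    print(f"rank={r:4d}  lora={lora_scale(16, r):.4f}  "
          f"rslora={rslora_scale(16, r):.4f}")
```

At rank 512, the conventional factor has collapsed to 0.03 while the rank-stabilized factor is still 0.71, which is the intuition behind why higher-rank adapters learn so slowly under the original scaling.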