BayJarvis: Blogs on mixture-of-experts

paper Scaling Laws for Fine-Grained Mixture of Experts - 2024-04-06

Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models (LLMs). In "Scaling Laws for Fine-Grained Mixture of Experts", Jakub Krajewski, Jan Ludziejewski, and their colleagues from the University of Warsaw and IDEAS NCBR analyze the scaling properties of MoE models, incorporating an expanded range of variables. …

paper OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models - 2024-03-20

A team of researchers has released OpenMoE, a series of open-source Mixture-of-Experts (MoE)-based large language models ranging from 650M to 34B parameters. Their work provides valuable insights into training MoE models and analyzing their behavior. Here are some key takeaways: …

paper Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models - 2023-12-19

This study advocates integrating the Sparse Mixture-of-Experts (MoE) architecture with instruction tuning, demonstrating that the combination outperforms traditional dense models. …

paper Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer - 2023-12-18

The landmark paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" challenges conventional approaches to scaling neural networks. The study, led by Noam Shazeer and his team, introduces a strategy for expanding the capacity of neural networks dramatically without a proportional increase in computational cost. At the core of this innovation is the Sparsely-Gated Mixture-of-Experts (MoE) layer: an assembly of numerous feed-forward sub-networks, the 'experts', in which a trainable gating network selects a sparse combination of experts to process each input. …
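The experts-plus-gating mechanism is easy to see in code. Below is a minimal, illustrative PyTorch sketch of a sparsely-gated MoE layer with top-k routing; the class name, layer sizes, and expert count are assumptions for the example, and it omits the noisy gating and load-balancing losses the paper uses in practice.

```python
# Minimal sketch of a sparsely-gated MoE layer (illustrative, not the paper's code):
# a trainable gate routes each token to its top-k experts and mixes their outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is an ordinary feed-forward sub-network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])
        # The gating network produces one logit per expert.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x):                      # x: (num_tokens, d_model)
        logits = self.gate(x)                  # (num_tokens, num_experts)
        top_vals, top_idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(top_vals, dim=-1)  # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            idx = top_idx[:, slot]             # expert chosen for this slot
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():                 # run each expert only on its tokens
                    out[mask] += w[mask] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(SparseMoE()(tokens).shape)               # torch.Size([16, 512])
```

Because only top_k of the experts run per token, total capacity grows with the number of experts while per-token compute stays roughly constant, which is the central idea of the paper.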

paper Learning Factored Representations in a Deep Mixture of Experts - 2023-12-15

In the field of machine learning, the Deep Mixture of Experts (DMoE) model, as discussed in "Learning Factored Representations in a Deep Mixture of Experts," offers a novel perspective by stacking multiple layers of experts, each with its own gating network. To fully appreciate its impact, we must first explore its predecessors: the standard Mixture of Experts (MoE), the Product of Experts (PoE), and the Hierarchical Mixture of Experts. …
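For quick orientation, the rough forms of these models can be written as follows (simplified notation, not copied verbatim from the papers): an MoE mixes expert predictions additively under a gating network, a PoE multiplies and renormalizes expert distributions, and a DMoE stacks gated expert layers so the per-layer expert choices jointly factor the representation.

```latex
% Mixture of Experts: a gating network g mixes expert predictions additively.
p_{\mathrm{MoE}}(y \mid x) = \sum_{i} g_i(x)\, p_i(y \mid x), \qquad \sum_i g_i(x) = 1
% Product of Experts: expert distributions are multiplied and renormalized.
p_{\mathrm{PoE}}(y \mid x) \propto \prod_{i} p_i(y \mid x)
% Deep Mixture of Experts: two stacked MoE layers, each with its own gating network.
z^{1} = \sum_{i} g^{1}_i(x)\, f^{1}_i(x), \qquad
\hat{y} = \sum_{j} g^{2}_j(z^{1})\, f^{2}_j(z^{1})
```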