Beyond the Model Zoo: Optimizing Foundation Models for Your Application
May 14, 2024
31 min
Free
data-synthesis
ai-optimization
foundation-models
large-language-models
llm-evaluation
parameter-efficient-fine-tuning
peft
lora
mlops
machine-learning
artificial-intelligence
model-deployment
Description
Salma Mayorquin, Co-Founder of Remyx AI, discusses how to optimize foundation models for specific applications. The talk covers parameter-efficient fine-tuning (PEFT) methods such as LoRA, the role of data synthesis and augmentation in building high-quality datasets, and advanced LLM evaluation techniques for assessing model performance. Mayorquin also touches on deployment strategies using tools like the NVIDIA Triton Inference Server and emphasizes that state-of-the-art model performance is now achievable with fewer resources than in the past. She highlights the benefits of defining custom evaluation criteria and leveraging generative AI for both data design and model assessment.
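To make the LoRA idea mentioned in the description concrete, here is a minimal NumPy sketch (not from the talk): rather than updating a full weight matrix W, LoRA trains two small low-rank factors A and B and adds their scaled product to the frozen W. All dimensions and names below are illustrative.

```python
import numpy as np

# LoRA sketch: W is frozen; only A (r x d_in) and B (d_out x r) are trained.
d_in, d_out, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-init so the update starts as a no-op
alpha = 16.0                            # scaling hyperparameter

def lora_forward(x):
    # Adapted forward pass: y = W x + (alpha / r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B still zero, the adapted model matches the base model exactly.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size                    # 589,824 parameters
lora_params = A.size + B.size           # 12,288 parameters (~2% of full)
print(f"trainable: {lora_params} vs full: {full_params}")
```

This parameter ratio is why PEFT methods let practitioners adapt large models on modest hardware: only the small factors need gradients and optimizer state, while the pretrained weights stay untouched.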