Meta-learning in the Data Scientist interview
Карьерник — a Duolingo for analysts: 10 minutes a day of SQL, Python, A/B testing, statistics, metrics, and 3 more interview topics. 1500+ questions in a Telegram bot. Free.
Idea
"Learn to learn": train a model so that it adapts fast to new tasks from only a few examples.
Standard learning: many examples → one task.
Meta-learning: many tasks → fast adaptation to a new task from a few examples.
MAML
Model-Agnostic Meta-Learning (Finn et al., 2017).
For each task in the meta-batch:
Adapt the model on the support set (1-5 gradient steps).
Compute the loss on the query set with the adapted model.
Sum the task losses → update the meta-parameters.
MAML optimizes the initial weights so that a few gradient steps on a new task already give good performance.
Pros: model-agnostic. Cons: computationally expensive; requires second-order gradients.
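The loop above can be sketched in a toy first-order form (FOMAML, which drops the second-order term MAML normally needs). The task family here — 1-D linear regressions y = a·x with a random slope a, and a one-weight model — is a made-up setup for illustration, not something from the papers:

```python
import random

def grad(w, xs, ys):
    """dL/dw for the squared-error loss L = mean((w*x - y)^2)."""
    n = len(xs)
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / n

def sample_task(rng):
    """A task = a random slope; returns support and query sets."""
    a = rng.uniform(0.5, 2.5)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(10)]
    ys = [a * x for x in xs]
    return xs[:5], ys[:5], xs[5:], ys[5:]

def fomaml(meta_steps=500, tasks_per_batch=4, inner_lr=0.1, meta_lr=0.05, seed=0):
    rng = random.Random(seed)
    w = 0.0  # meta-parameter: the shared initial weight
    for _ in range(meta_steps):
        meta_grad = 0.0
        for _ in range(tasks_per_batch):
            xs_s, ys_s, xs_q, ys_q = sample_task(rng)
            # inner step: adapt on the support set
            w_adapted = w - inner_lr * grad(w, xs_s, ys_s)
            # first-order approximation: query-set gradient at the adapted weight
            meta_grad += grad(w_adapted, xs_q, ys_q)
        w -= meta_lr * meta_grad / tasks_per_batch  # meta-update
    return w
```

After meta-training, one inner step from the learned initialization fits a new task far better than one step from scratch, which is exactly the objective MAML optimizes.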
Prototypical networks
A simpler alternative (Snell et al., 2017).
For each class in the support set:
prototype_class = mean(embeddings of support samples).
For a query:
predict the class of the closest prototype.
Training is episodic. Works well on image few-shot benchmarks.
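The prototype rule can be sketched in a few lines. This toy uses raw feature vectors in place of the learned embedding network (an assumption for the demo; the real method embeds inputs with a trained encoder first):

```python
def mean_vec(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def prototypes(support):
    """support: {class_label: [embedding, ...]} -> {class_label: prototype}."""
    return {label: mean_vec(vs) for label, vs in support.items()}

def predict(query, protos):
    """Assign the query to the class whose prototype is closest."""
    return min(protos, key=lambda label: sq_dist(query, protos[label]))
```

Usage: with support = {"cat": [[0, 0], [0, 1]], "dog": [[5, 5], [6, 5]]}, a query near the origin is classified as "cat" because the cat prototype [0, 0.5] is closest.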
Reptile
OpenAI's simplification of MAML (Nichol et al., 2018).
Sample a task.
Adapt the model on that task.
Move the meta-parameters toward the adapted parameters.
No second-order gradients. Much cheaper. Comparable accuracy.
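A minimal, self-contained sketch of the Reptile loop. The task family (1-D linear regressions y = a·x with a random slope, fit by a one-weight model) is a hypothetical setup chosen for the demo, not from the paper:

```python
import random

def grad(w, xs, ys):
    """dL/dw for the squared-error loss L = mean((w*x - y)^2)."""
    n = len(xs)
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / n

def reptile(meta_steps=300, inner_steps=5, inner_lr=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    w = 0.0  # meta-parameter: the shared initial weight
    for _ in range(meta_steps):
        # sample a task (its slope) and its data
        a = rng.uniform(0.5, 2.5)
        xs = [rng.uniform(-1.0, 1.0) for _ in range(10)]
        ys = [a * x for x in xs]
        # adapt on the task with a few plain SGD steps (no second-order terms)
        w_task = w
        for _ in range(inner_steps):
            w_task -= inner_lr * grad(w_task, xs, ys)
        # move the meta-parameters toward the adapted parameters
        w += eps * (w_task - w)
    return w
```

Note the contrast with MAML: the outer update is a simple interpolation toward the adapted weights, so no gradients flow through the inner loop at all.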
Applications
Few-shot learning. Adapt to a new class from 1-5 examples.
Personalization. Per-user adaptation in recsys.
Robotics. Fast adaptation to a new task or environment.
Hyperparameter optimization. Learn good HP defaults.
Drug discovery. Adapt to a new class of drugs.
In practice, in-context learning in LLMs has largely replaced meta-learning, but the research direction is still alive.
Related topics
- Few-shot learning for DS
- Self-supervised learning for DS
- Curriculum learning for DS
- Continual learning for DS
- Preparing for the Data Scientist interview
FAQ
Is this official information?
No. The article is based on Finn 2017 (MAML), Snell 2017 (Prototypical Networks), and Nichol 2018 (Reptile).