Questions about Fine-tuning (deep learning)

Short answers, drawn from the article.

What is fine-tuning in deep learning?

Fine-tuning takes a neural network already trained on one task, the upstream task, and adapts it with new data to perform a different downstream task. The process reuses knowledge gained from the initial training objective to solve more specific problems.

How does low-rank adaptation (LoRA) work with large language models?

LoRA trains a small low-rank matrix whose product is added to a frozen weight matrix of the base model; the sum of the two acts as the fine-tuned model. A language model containing billions of parameters can therefore be LoRA fine-tuned with only a few million trainable parameters. Hugging Face has integrated support for LoRA into its diffusers library.
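The idea can be sketched in a few lines of plain Python. This is an illustrative toy, not a training loop: the matrix sizes, values, and helper names are hypothetical, but the arithmetic is the core of LoRA, a frozen base weight W plus a trainable low-rank update B @ A.

```python
# Minimal LoRA sketch (illustrative; all sizes and names are hypothetical).
# The base weight W stays frozen; only the small factors A and B are trained.

def matmul(B, A):
    """Multiply a (d x r) matrix B by an (r x k) matrix A, as nested lists."""
    r, k = len(A), len(A[0])
    return [[sum(B[i][t] * A[t][j] for t in range(r)) for j in range(k)]
            for i in range(len(B))]

def lora_adapt(W, A, B):
    """Return W + B @ A without modifying the frozen base weight W."""
    delta = matmul(B, A)
    return [[W[i][j] + delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy sizes: a 4x4 base weight (16 parameters), rank-1 update (4 + 4 = 8).
W = [[1.0] * 4 for _ in range(4)]   # frozen pretrained weight
A = [[0.5, 0.5, 0.5, 0.5]]          # 1x4 trainable factor
B = [[0.2], [0.2], [0.2], [0.2]]    # 4x1 trainable factor

W_adapted = lora_adapt(W, A, B)     # every entry is 1.0 + 0.2 * 0.5 = 1.1
```

At toy scale the saving is modest (8 trainable values versus 16), but for a real weight matrix such as 4096 x 4096 with rank r = 8, the update needs only 2 x 8 x 4096 parameters instead of roughly 16.8 million, which is how billion-parameter models become tunable with a few million parameters.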

Why do engineers freeze layers during convolutional neural network fine-tuning?

Convolutional neural networks often keep earlier layers frozen because they capture lower-level features. These early layers sit closest to the input layer and handle basic pattern recognition while later layers discern high-level features that relate more directly to the specific task at hand.
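In practice, freezing a layer means disabling gradient updates for its parameters. A minimal sketch in PyTorch, assuming a toy two-convolution network (the layer sizes here are hypothetical), looks like this:

```python
# Sketch of layer freezing in PyTorch (hypothetical architecture).
# The early convolution keeps its pretrained low-level features;
# only the later, more task-specific layer is updated during fine-tuning.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3),   # early layer: low-level features (edges, textures)
    nn.ReLU(),
    nn.Conv2d(16, 32, 3),  # later layer: higher-level, task-specific features
)

# Freeze the first convolution by turning off its gradients.
for param in model[0].parameters():
    param.requires_grad = False

# Only the later layer's parameters remain trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
```

An optimizer built from `trainable` would then update only the later layer, leaving the frozen early features intact.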

Can fine-tuning degrade model robustness to distribution shifts?

Fine-tuning can degrade a model's robustness to distribution shifts in real-world scenarios. One mitigation strategy involves linearly interpolating a fine-tuned model's weights with the original model's weights. This process greatly increases out-of-distribution performance while largely retaining the in-distribution performance of the fine-tuned model.
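The interpolation itself is simple arithmetic over matching parameters. A minimal sketch in plain Python, assuming both models' weights are stored as dictionaries of flat lists (a hypothetical layout), is:

```python
# Sketch of linear weight interpolation between an original and a
# fine-tuned model (parameter layout is a simplifying assumption).
# alpha = 0 recovers the original model; alpha = 1 the fine-tuned one.

def interpolate_weights(original, finetuned, alpha):
    """Blend per-parameter: (1 - alpha) * original + alpha * finetuned."""
    return {name: [(1 - alpha) * o + alpha * f
                   for o, f in zip(original[name], finetuned[name])]
            for name in original}

original = {"layer1": [0.0, 1.0], "layer2": [2.0, 2.0]}
finetuned = {"layer1": [1.0, 1.0], "layer2": [0.0, 4.0]}

blended = interpolate_weights(original, finetuned, alpha=0.5)
# blended["layer1"] == [0.5, 1.0]; blended["layer2"] == [1.0, 3.0]
```

Intermediate values of alpha trade off between the fine-tuned model's in-distribution accuracy and the original model's robustness to distribution shift.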

Which companies offer fine-tuning APIs as of June 19, 2023?

As of June 19, 2023, OpenAI offers a language-model fine-tuning API for a subset of its models. Microsoft's Azure OpenAI Service provides similar capabilities for selected models, and Google Cloud Platform supports fine-tuning for some of its PaLM models.