Quiz


Which of the following best describes how fine tuning differs from prompt engineering and retrieval-augmented generation?

Fine tuning involves modifying the model itself by training on new data, whereas prompt engineering and retrieval-augmented generation influence outputs without changing the model's internal weights.


Which statement best describes the difference between pre-training and fine tuning for large language models?

Pre-training uses massive, unstructured data to teach the model language and knowledge, while fine tuning uses smaller, structured data to specialize the model for particular uses.


What is parameter efficient fine tuning and why is it advantageous for training large language models?

Parameter efficient fine tuning updates only a small subset of the model's weights (or small added modules such as low-rank adapters) while keeping the rest frozen, which greatly reduces the memory and computation required for training.
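The savings can be made concrete with a back-of-the-envelope sketch of a LoRA-style adapter, one common parameter efficient method. The layer size and rank below are illustrative assumptions, not values from any specific model or library.

```python
# Illustrative parameter count: full fine tuning vs. a LoRA-style adapter.
# The frozen weight W stays fixed; only two low-rank factors A (d_in x r)
# and B (r x d_out) are trained, and the effective weight is W + A @ B.

def full_finetune_params(d_in: int, d_out: int) -> int:
    """Full fine tuning trains every entry of the dense weight matrix."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """A LoRA adapter trains only the two low-rank factors."""
    return d_in * rank + rank * d_out

d = 4096  # hypothetical hidden size of one transformer layer
r = 8     # hypothetical adapter rank

full = full_finetune_params(d, d)
lora = lora_params(d, d, r)
print(f"full fine tuning: {full:,} trainable weights")
print(f"LoRA (rank {r}): {lora:,} trainable weights")
print(f"reduction: {full // lora}x fewer trainable weights")
# For d=4096, r=8 this is 16,777,216 vs. 65,536 weights: a 256x reduction.
```

Because the factors are tiny relative to the frozen base weights, both optimizer state and gradient memory shrink proportionally, which is the practical advantage the answer describes.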


Why is fine tuning important when adapting large language models (LLMs) for specific domains or use cases?

Fine tuning allows a general-purpose model to specialize, improving its performance, reliability, and relevance within a chosen domain or application.


What does fine tuning mean in the context of large language models?

Fine tuning adapts a general-purpose model for a particular use case by training it on targeted data, making its outputs more relevant and consistent for that domain.


Which of the following lists common methods and metrics used to evaluate the performance of a fine tuned model?

Human evaluation, standardized benchmarks, error analysis, and automated metrics such as exact match or embedding similarity are the key approaches discussed in the course for evaluating fine tuned models.
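Exact match is the simplest of these metrics to implement. Below is a minimal sketch; the normalization steps (lowercasing, stripping punctuation and extra whitespace) are a common convention, but the exact rules vary between benchmarks.

```python
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace before comparing."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match_accuracy(predictions, references) -> float:
    """Fraction of predictions that exactly match their reference
    after normalization."""
    matches = sum(normalize(p) == normalize(r)
                  for p, r in zip(predictions, references))
    return matches / len(references)

preds = ["Paris.", "the Nile river", "1969"]
refs = ["Paris", "The Nile", "1969"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 match -> 0.666...
```

Exact match is strict: a correct but differently worded answer scores zero, which is why it is typically paired with softer measures such as embedding similarity or human evaluation.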


What is instruction fine tuning, and how does it help language models follow instructions more effectively?

Instruction fine tuning uses instruction–response data to teach the model to interpret and respond accurately to user instructions.
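In practice, instruction fine tuning starts by rendering each instruction–response pair into a single training text. The template and field names below are illustrative assumptions, not any particular library's required format.

```python
# Sketch: turning instruction-response pairs into training text for
# instruction fine tuning. The model is trained to generate the text
# after "### Response:" given the text before it.

def format_example(instruction: str, response: str) -> str:
    """Render one instruction-response pair with a simple prompt template."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

dataset = [
    {"instruction": "Summarize: The cat sat on the mat.",
     "response": "A cat sat on a mat."},
    {"instruction": "Translate to French: Good morning.",
     "response": "Bonjour."},
]

training_texts = [format_example(ex["instruction"], ex["response"])
                  for ex in dataset]
print(training_texts[0])
```

Training on many such pairs, across varied tasks and phrasings, is what teaches the model to treat arbitrary user instructions as tasks to carry out rather than text to continue.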