Imitation Models and the Open-Source LLM Revolution

Are proprietary LLMs like ChatGPT and GPT-4 actually easy to replicate?

Cameron R. Wolfe, Ph.D.
Towards Data Science
(Photo by Tanbir Mahmud on Unsplash)

The proposal of the LLaMA suite [2] of large language models (LLMs) led to a surge in publications on the topic of open-source LLMs. In many cases, the goal of these works was to cheaply produce smaller, open-source LLMs (for research purposes) that have performance comparable to proprietary models like ChatGPT and GPT-4. These models adopt an imitation approach, which fine-tunes a base LLM over synthetic dialogue data generated by a more powerful LLM. Despite being cheap to train, these models seemed to perform comparably to proprietary LLMs like ChatGPT. As a result, the deep learning research community quickly adopted the view that open-source LLMs would rule the future, as reproducing open-source variants of proprietary models was both easy and cost-effective!
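To make the imitation recipe concrete, here is a minimal sketch of the data-collection step, assuming the openai Python client. The model name, seed prompts, and output path are illustrative assumptions, not the exact setup used in these papers.

```python
# Hypothetical sketch: collecting synthetic "imitation" dialogue data by
# querying a stronger proprietary model via its API. Model name, prompts,
# and file path are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

seed_prompts = [
    "Explain the difference between supervised and unsupervised learning.",
    "Write a short poem about open-source software.",
]

records = []
for prompt in seed_prompts:
    response = client.chat.completions.create(
        model="gpt-4",  # the "teacher" model being imitated
        messages=[{"role": "user", "content": prompt}],
    )
    records.append({
        "instruction": prompt,
        "response": response.choices[0].message.content,
    })

# Store the (instruction, response) pairs for later fine-tuning.
with open("imitation_data.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

In practice, imitation datasets are built from many thousands of such prompts, but the basic loop of querying the teacher and saving instruction-response pairs is the same.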

“Will the most powerful LLMs be closed-source or will they be freely distributed for anyone to use, modify, and extend?” — from [1]

Unfortunately, preliminary evaluations performed on these models, which relied upon ratings provided by other LLMs (e.g., GPT-4) or human crowd workers, were somewhat cursory. Does the performance of imitation models actually match that of models like ChatGPT? To answer this question more rigorously, we will study recent research that analyzes whether imitation models truly remove the “moat” around proprietary LLMs. Interestingly, we will see that these cheap reproductions of powerful LLMs perform well in human evaluations due to their ability to learn the style of a powerful LLM. However, they lack factual accuracy and perform poorly when subjected to broader and more targeted evaluations. In reality, imitation models do not perform nearly as well as proprietary models like ChatGPT.

(from [1])
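For context, an LLM-based evaluation of the kind mentioned above typically asks a stronger model to pick the better of two responses. The sketch below is a minimal, hypothetical version of such a judge, again assuming the openai Python client; the prompt wording and model name are assumptions, not the exact protocol used in [1].

```python
# Hypothetical sketch of an LLM-as-judge evaluation: ask a strong model
# to choose the better of two candidate answers.
from openai import OpenAI

client = OpenAI()

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask a strong LLM which of two answers is better; returns 'A' or 'B'."""
    prompt = (
        f"Question: {question}\n\n"
        f"Answer A: {answer_a}\n\n"
        f"Answer B: {answer_b}\n\n"
        "Which answer is better? Reply with exactly 'A' or 'B'."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
```

Judges like this tend to reward responses that match the teacher model's style, which is exactly why imitation models can look deceptively strong under such evaluations.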

“The premise of model imitation is that once a proprietary LM is made available via API, one can collect a dataset of API outputs and use it to fine-tune an open-source LM.” — from [1]
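Given a file of collected (instruction, response) pairs, the fine-tuning step itself is ordinary supervised training over next-token prediction. Below is a minimal sketch using Hugging Face transformers and datasets; the base model, prompt template, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical sketch: fine-tuning an open-source base LLM on collected
# teacher outputs. Base model choice and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "openlm-research/open_llama_3b"  # any open-source base LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Each record holds an instruction and the teacher model's response.
dataset = load_dataset("json", data_files="imitation_data.jsonl", split="train")

def format_and_tokenize(example):
    # Simple instruction-following template; the exact format is a choice.
    text = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['response']}"
    )
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="imitation-model", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because the student only ever sees the teacher's outputs, this procedure teaches it to reproduce the teacher's style far more reliably than the teacher's underlying knowledge, which is the crux of the critique in [1].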
