Release: 2019-09-19

Fine-tuning GPT-2 from human preferences

Source: OpenAI

We’ve fine-tuned the 774M-parameter GPT-2 language model using human feedback on various tasks, successfully matching the preferences of external human labelers, though those preferences did not always match our own. Specifically, on summarization tasks the labelers preferred sentences copied...
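Fine-tuning from human feedback of this kind is typically done by first training a reward model on pairwise human comparisons (the labeler picks the better of two samples), then optimizing the language model against that reward. A minimal sketch of the standard pairwise preference loss, assuming scalar reward scores for each sample (function and variable names here are illustrative, not from the announcement):

```python
import math

def preference_loss(r_preferred: float, r_rejected: float) -> float:
    """Negative log-probability that the reward model ranks the
    human-preferred sample above the rejected one, under a
    Bradley-Terry model: P(prefer) = sigmoid(r_preferred - r_rejected)."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_preferred - r_rejected))))

# If the reward model already scores the preferred sample higher,
# the loss is small; if it scores it lower, the loss is large,
# pushing the model's scores toward the labelers' rankings.
low = preference_loss(2.0, 0.0)   # model agrees with the labeler
high = preference_loss(0.0, 2.0)  # model disagrees with the labeler
```

Minimizing this loss over many labeled comparisons yields a reward function that reflects the labelers' preferences, which is then used to fine-tune the language model itself.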
