We’re exploring the use of LLMs to address these challenges. Our large language models like GPT-4 can understand and generate natural language, making them applicable to content moderation. The models can make moderation judgments based on policy guidelines provided to them.
With this system, the process of developing and customizing content policies is trimmed down from months to hours.
- Once a policy guideline is written, policy experts can create a golden set of data by identifying a small number of examples and assigning them labels according to the policy.
- Then, GPT-4 reads the policy and assigns labels to the same dataset, without seeing the answers.
- By examining the discrepancies between GPT-4’s judgments and those of a human, the policy experts can ask GPT-4 to come up with the reasoning behind its labels, analyze the ambiguity in policy definitions, resolve confusion, and provide further clarification in the policy accordingly. We can repeat steps 2 and 3 until we are satisfied with the policy quality.
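The label-and-compare loop above can be sketched in a few lines. The policy wording, label names, and helper functions below are hypothetical illustrations, not OpenAI's actual tooling; in a real run, the output of `build_prompt` would be sent to GPT-4 via the API, and the returned label recorded for each example.

```python
def build_prompt(policy: str, example: str) -> str:
    """Ask the model to label one example strictly according to the policy."""
    return (
        "You are a content moderator. Apply the policy below and reply with "
        "exactly one label (e.g. ALLOW or VIOLATION) and a one-line reason.\n\n"
        f"Policy:\n{policy}\n\nContent:\n{example}\n\nLabel:"
    )

def find_discrepancies(human: dict, model: dict) -> list:
    """Return the IDs of examples where GPT-4's label differs from the golden label."""
    return [ex_id for ex_id in human if model.get(ex_id) != human[ex_id]]

# Golden set labeled by policy experts (step 1) vs. GPT-4's labels (step 2):
human_labels = {"ex1": "ALLOW", "ex2": "VIOLATION", "ex3": "ALLOW"}
model_labels = {"ex1": "ALLOW", "ex2": "ALLOW", "ex3": "ALLOW"}

disagreements = find_discrepancies(human_labels, model_labels)
# Each disagreement is fed back into step 3: ask GPT-4 for its reasoning on that
# example, then clarify the ambiguous part of the policy and re-run the loop.
```

In this toy run, `disagreements` is `["ex2"]`, the one example where the model and the experts diverge; that discrepancy is what drives the next policy revision.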
This iterative process yields refined content policies that are translated into classifiers, enabling the deployment of the policy and content moderation at scale.
Optionally, to handle large amounts of data at scale, we can use GPT-4’s predictions to fine-tune a much smaller model.
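The distillation idea can be illustrated with a deliberately tiny stand-in: GPT-4's labels serve as training data for a small "student" model. The data and the bag-of-words centroid classifier below are toy assumptions for illustration; in practice the student would be a fine-tuned lightweight transformer.

```python
from collections import Counter

def featurize(text: str) -> Counter:
    """Toy bag-of-words features."""
    return Counter(text.lower().split())

def train(examples):
    """Build one word-count centroid per label from (text, gpt4_label) pairs."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(featurize(text))
    return centroids

def predict(centroids, text):
    """Pick the label whose centroid shares the most words with the input."""
    feats = featurize(text)
    def overlap(centroid):
        return sum(min(feats[w], centroid[w]) for w in feats)
    return max(centroids, key=lambda label: overlap(centroids[label]))

# Hypothetical examples labeled by GPT-4, used as the distillation set:
gpt4_labeled = [
    ("buy cheap pills now", "VIOLATION"),
    ("great recipe for soup", "ALLOW"),
    ("cheap pills for sale", "VIOLATION"),
    ("my favorite soup recipe", "ALLOW"),
]
student = train(gpt4_labeled)
print(predict(student, "pills for sale cheap"))  # prints "VIOLATION"
```

The point is the data flow, not the classifier: the expensive model produces labels once, and a cheap model trained on those labels handles the high-volume traffic.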