Q: What will joining the network entail?
A: Being part of the network means you may be contacted about opportunities to test a new model, or to test an area of interest on a model that is already deployed. Work conducted as part of the network is conducted under a non-disclosure agreement (NDA), though we have historically published many of our red teaming findings in System Cards and blog posts. You will be compensated for time spent on red teaming projects.
Q: What is the expected time commitment for being part of the network?
A: The time you decide to commit can be adjusted depending on your schedule. Note that not everyone in the network will be contacted for every opportunity; OpenAI will make selections based on the right fit for a particular red teaming project, and will emphasize new perspectives in subsequent red teaming campaigns. Even as little as 5 hours in one year would still be valuable to us, so don't hesitate to apply if you are interested but your time is limited.
Q: When will applicants be notified of their acceptance?
A: OpenAI will be selecting members of the network on a rolling basis, and you can apply until December 1, 2023. After this application period, we will re-evaluate opening future opportunities to apply again.
Q: Does being part of the network mean that I will be asked to red team every new model?
A: No. OpenAI will make selections based on the right fit for a particular red teaming project, and you should not expect to test every new model.
Q: What are some criteria you are looking for in network members?
A: Some criteria we are looking for are:
- Demonstrated expertise or experience in a particular domain relevant to red teaming
- Passionate about improving AI safety
- No conflicts of interest
- Diverse backgrounds and traditionally underrepresented groups
- Diverse geographic representation
- Fluency in more than one language
- Technical ability (not required)
Q: What are other collaborative safety opportunities?
A: Beyond joining the network, there are other collaborative opportunities to contribute to AI safety. For instance, one option is to create or conduct safety evaluations on AI systems and analyze the results.
Evaluations can range from simple Q&A tests to more complex simulations. As concrete examples, here are sample evaluations developed by OpenAI for evaluating AI behaviors from a number of angles:
- MakeMeSay: How well can an AI system trick another AI system into saying a secret word?
- MakeMePay: How well can an AI system convince another AI system to donate money?
- Ballot Proposal: How well can an AI system influence another AI system's support of a political proposition?
- Steganography (hidden messaging): How well can an AI system pass secret messages without being caught by another AI system?
- Text Compression: How well can an AI system compress and decompress messages, to enable hiding secret messages?
- Schelling Point: How well can an AI system coordinate with another AI system, without direct communication?
We encourage creativity and experimentation in evaluating AI systems. Once completed, we welcome you to contribute your evaluation to the open-source Evals repo for use by the broader AI community.
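As a rough illustration of what a simple Q&A-style evaluation can look like, here is a minimal sketch in Python. It calls the openai client directly rather than using the Evals framework's own sample and registry formats, and the model name, prompts, and ideal answers are placeholder assumptions, not part of any published evaluation.

```python
# Minimal sketch of a simple Q&A safety-style evaluation: pose fixed prompts
# to a model and score responses by exact match against an expected answer.
# Illustrative only; the model name and sample data are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical evaluation samples: each pairs a prompt with an ideal answer.
SAMPLES = [
    {"input": "Reply only 'yes' or 'no': is 2 + 2 equal to 4?", "ideal": "yes"},
    {"input": "Reply only 'yes' or 'no': is the Earth flat?", "ideal": "no"},
]

def run_eval(model: str = "gpt-4") -> float:
    """Return the fraction of samples the model answers correctly."""
    correct = 0
    for sample in SAMPLES:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": sample["input"]}],
            temperature=0,
        )
        answer = response.choices[0].message.content.strip().lower()
        if answer == sample["ideal"]:
            correct += 1
    return correct / len(SAMPLES)

if __name__ == "__main__":
    print(f"accuracy: {run_eval():.2f}")
```

A real evaluation contributed to the Evals repo would follow that project's sample and registry conventions and typically cover many more samples and behaviors, but the core loop of prompting, grading, and aggregating results is the same idea.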
You can also apply to our Researcher Access Program, which provides credits to support researchers using our products to study areas related to the responsible deployment of AI and mitigating associated risks.