Last month, we started previewing DALL·E 2 to a limited number of trusted users to learn about the technology's capabilities and limitations.
Since then, we've been working with our users to actively incorporate the lessons we learn. As of today:
- Our users have collectively created over 3 million images with DALL·E.
- We’ve enhanced our safety system, improving the text filters and tuning the automated detection & response system for content policy violations.
- Less than 0.05% of downloaded or publicly shared images were flagged as potentially violating our content policy. About 30% of those flagged images were confirmed by human reviewers to be policy violations, leading to an account deactivation.
- As we work to understand and address the biases that DALL·E has inherited from its training data, we've asked early users not to share photorealistic generations that include faces and to flag problematic generations. We believe this has been effective in limiting potential harm, and we plan to continue the practice in the current phase.
Learning from real-world use is an essential part of our commitment to developing and deploying AI responsibly, so we're starting to widen access to users who joined our waitlist, slowly but steadily.
We intend to onboard up to 1,000 people every week as we iterate on our safety system and require all users to abide by our content policy. We hope to increase the rate at which we onboard new users as we learn more and gain confidence in our safety system. We're inspired by what our users have created with DALL·E so far, and excited to see what new users will create.
In the meantime, you can get a preview of these creations on our Instagram account: @openaidalle.