The Glaze Project (including Glaze, Nightshade, WebGlaze, and others) is a research effort that develops technical tools with the explicit goal of protecting human creatives against invasive uses of generative artificial intelligence (GenAI). Our team is composed of computer science professors and PhD students at the University of Chicago. We conduct research studies and build tools that artists can use to disrupt unauthorized AI training on their work. Ultimately, our goal is to ensure the continued vitality of human artists, and to restore balance toward a healthy coexistence between AI and human creatives, one in which creatives retain agency and control over their work and its use. Since 2022, our team has released multiple tools: Glaze, which disrupts art style mimicry; Nightshade, which disincentivizes training on scraped images without consent; and WebGlaze, a free web service that makes Glaze accessible to artists with limited computing resources. All of our tools are free for artists to use, and will never be used to generate profit. All of our research and advocacy expenses are covered by research grants and donations (thank you, National Science Foundation, DARPA, Amazon AWS, and C3.ai).
Our Values and Our Mission. Art is inspired by, and an expression of, our experiences, emotions, pain, and trauma. It connects us and defines much of what it means to be human. We believe that human creativity is unique and ever-evolving, and that today's generative AI systems can only produce poor approximations of the human creative works they are trained on. Our goal is to help human creativity continue to thrive by providing technological tools that enable all human creatives (artists, musicians, authors, journalists, voice actors, dancers, choreographers, ...) to protect their creations from unwanted AI training and AI mimicry.
Artists across the globe have downloaded Glaze more than 3.75 million times since March 2023, and Nightshade more than 950,000 times since January 2024. Our projects have been covered by major newspapers and news networks on every continent, from the Americas to the EU, from Asia to South Africa. We have given talks to artist groups around the world, advocated for ethical AI with legislators in the US and EU, and are actively engaged with the US Copyright Office, the FTC, and multiple other federal agencies.
Of our lab's recent peer-reviewed publications, those most closely related to the Glaze project include:
- Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models.
  Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, Ben Y. Zhao. Proceedings of the 32nd USENIX Security Symposium, August 2023.
- Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models.
  Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao. Proceedings of the 45th IEEE Symposium on Security and Privacy, May 2024.
- Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?
  Anna Yoo Jeong Ha*, Josephine Passananti*, Ronik Bhaskar, Shawn Shan, Reid Southen, Haitao Zheng, Ben Y. Zhao. To appear: Proceedings of the ACM Conference on Computer and Communications Security (CCS), October 2024.
- Understanding Implosion in Text-to-Image Generative Models.
  Wenxin Ding, Cathy Yuanchen Li, Shawn Shan, Ben Y. Zhao, and Haitao Zheng. To appear: Proceedings of the ACM Conference on Computer and Communications Security (CCS), October 2024. (Final camera-ready on the way...)
Our work has also received the following recognition:
- TIME Magazine Best Inventions of 2023, Special Mention
- Chicago Innovation Award 2023
- Distinguished Paper Award, USENIX Security 2023
- 2023 USENIX Internet Defense Prize