Google has updated its Generative AI Prohibited Use Policy to clarify the proper use of its generative AI products and services.
The update simplifies the language and lists prohibited behaviors alongside examples of unacceptable conduct.
The updated policy clarifies existing rules without adding new restrictions.
It specifically bans using Google’s AI tools to create or share non-consensual intimate images or to conduct security breaches through phishing or malware.
The policy states:
“We expect you to engage with [generative AI models] in a responsible, legal, and safe manner.”
The policy prohibits using Google’s generative AI for an expansive range of dangerous, illegal, and unethical activities. These include sexually explicit, violent, hateful, or deceptive conduct, as well as content related to child exploitation, violent extremism, self-harm, harassment, and misinformation.
New language in the policy carves out exceptions for some restricted activities in particular contexts.
Educational, documentary, scientific, artistic, and journalistic uses may be permitted, as well as other cases “where harms are outweighed by substantial benefits to the public.”
The policy update addresses the rapid advancement of generative AI technologies that create realistic text, images, audio, and video.
This progress raises concerns about ethics, misuse, and societal impact.
Google’s updated policy is now in effect, and the old and new versions are publicly available.
Leading AI companies like OpenAI and Microsoft have released their own usage rules. However, awareness of these rules and their consistent enforcement remain works in progress.
As generative AI becomes more common, creating clear usage guidelines is essential to ensure responsible practices and reduce harm.
Featured Image: Algi Febri Sugita/Shutterstock