Google to Require Android Apps to Better Moderate AI-Generated Content

Google announced on Wednesday that it will require Android apps to better moderate AI-generated content. The new policy will go into effect in early 2024 and will apply to all apps that let users create AI-generated content, such as text-based generative AI chatbots and AI image generators.

Key Highlights:

  • Google will require Android apps to let users report scammy or inappropriate AI-generated content without exiting the app.
  • The new policy will go into effect in early 2024.
  • Google says the policy is designed to protect users from harmful content and ensure that AI-generated content is used responsibly.


Under the new policy, app developers will be required to provide users with a way to report or flag offensive AI-generated content without having to leave the app. Developers will then be responsible for reviewing and taking action on these reports.

Google says the new policy is designed to protect users from harmful content and to ensure that AI-generated content is used responsibly. The company has seen a recent increase in AI-based apps being used to create and distribute scammy, hateful, and otherwise harmful content.

“We believe that AI-generated content has the potential to be a powerful tool for creativity and expression,” said Google in a blog post announcing the new policy. “However, we also recognize that AI-generated content can be used to create harmful content. That’s why we’re taking steps to ensure that AI-generated content is used responsibly on the Google Play Store.”

The new policy is part of a broader effort by Google to crack down on harmful content on its platforms. In recent months, the company has also announced new policies to address issues such as misinformation and hate speech.

What does this mean for Android app developers?

Android app developers who use AI to generate content will need to update their apps to comply with the new policy. This will involve adding a way for users to report or flag offensive AI-generated content. Developers will also need to put in place a process for reviewing and taking action on these reports.

Google has provided some guidance for developers on how to implement the new policy. For example, developers can add a report button to their app’s interface or create a dedicated email address where users can report offensive content. Developers should also have a clear policy in place for how they will handle reported content. This policy should be published on the app’s website or within the app itself.
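As a rough illustration of what an in-app report flow might look like, here is a minimal Kotlin sketch. Everything in it is a hypothetical placeholder rather than part of Google's guidance: the `ContentReport` model, the `ReportClient` helper, and the `https://example.com/api/reports` endpoint would all be defined by the developer's own moderation backend. The only point it demonstrates is the policy's core requirement, letting users flag offensive AI-generated content without leaving the app.

```kotlin
// Minimal sketch of an in-app "report AI-generated content" flow.
// The endpoint, data model, and report reasons below are hypothetical
// placeholders for a developer's own moderation backend.

import android.app.AlertDialog
import android.content.Context
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext
import java.net.HttpURLConnection
import java.net.URL

// Which piece of generated content is being flagged, and why.
data class ContentReport(val contentId: String, val reason: String)

object ReportClient {
    // Hypothetical endpoint; replace with your own backend.
    private const val REPORT_URL = "https://example.com/api/reports"

    // Posts the report as a small JSON body on the IO dispatcher,
    // so the network call never blocks the UI thread.
    suspend fun submit(report: ContentReport): Boolean = withContext(Dispatchers.IO) {
        val body = """{"contentId":"${report.contentId}","reason":"${report.reason}"}"""
        val conn = URL(REPORT_URL).openConnection() as HttpURLConnection
        return@withContext try {
            conn.requestMethod = "POST"
            conn.setRequestProperty("Content-Type", "application/json")
            conn.doOutput = true
            conn.outputStream.use { it.write(body.toByteArray()) }
            conn.responseCode in 200..299
        } finally {
            conn.disconnect()
        }
    }
}

// Shows a simple report dialog in-app, so the user can flag offensive
// output without exiting the app.
fun showReportDialog(context: Context, scope: CoroutineScope, contentId: String) {
    val reasons = arrayOf("Scam or fraud", "Hateful content", "Other harmful content")
    AlertDialog.Builder(context)
        .setTitle("Report AI-generated content")
        .setItems(reasons) { _, which ->
            scope.launch { ReportClient.submit(ContentReport(contentId, reasons[which])) }
        }
        .setNegativeButton("Cancel", null)
        .show()
}
```

A real app would also need the INTERNET permission and, as the policy requires, a process on the developer's side for reviewing the reports that come in and acting on them.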

What does this mean for Android users?

Android users will have a new way to report offensive AI-generated content from within the apps they use. This gives users a direct way to flag harmful content and helps ensure that AI-generated content on the Google Play Store is used responsibly.

About the author

William Johnson

William J. has a degree in Computer Graphics and is passionate about virtual and augmented reality. He explores the latest in VR and AR technologies, from gaming to industrial applications.