

Google Announces Seven AI Principles, Pledges Not to Deploy AI in Weapons or Technologies That Cause Overall Harm

Moupiya Dutta


On Thursday, June 7, Google announced a set of principles to guide its work going forward and to benefit society in general. The company pledges not to use Artificial Intelligence (AI) in weapons or in technologies that are likely to cause overall harm or violate human rights.

Pichai describes AI as computer programming that learns and adapts. Though it cannot solve every problem, it can meaningfully improve our lives, and Google already uses AI to make its products more useful, for example, filtering spam from email, powering digital assistants, and improving photos.

“So today, we’re announcing seven principles to guide our work going forward.” – Sundar Pichai, CEO, Google

The search giant has been formulating policies in this area for years, and the principles now apply across Google, including Google Cloud. The company stresses that the seven principles are not just theoretical concepts but concrete standards that will guide its future product development and shape its business decisions.

Google’s foremost belief is that AI should be beneficial for society as a whole. “As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors,” Pichai wrote. The company will continue to use AI to make high-quality information available to the public while respecting the cultural, social, and legal norms of the countries where it operates.

Pichai confirmed that the latest principles will help the company take a long-term perspective “even if it means making short-term trade-offs.” They will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

In addition, the tech giant has pledged not to deploy AI where there is a material risk of harm. The company made its position explicit: “While we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas.”

Google believes these principles form the right foundation for the company and for the future development of AI, consistent with the values laid out in its original Founders’ Letter.
