Google Announces Seven AI Principles, Pledges Not to Deploy AI in Weapons That Cause Harm


Moupiya Dutta

On Thursday, June 7, Google announced a set of principles to guide its work going forward and to benefit society at large. The company pledged not to use Artificial Intelligence (AI) in weapons or in technologies that are likely to cause overall harm or violate human rights.

Pichai describes AI as computer programming that learns and adapts. It cannot solve every problem, but it can meaningfully improve our lives, and Google uses it to make its products more useful, for example, spam-free email, digital assistants, and photos that automatically surface highlights.

“So today, we’re announcing seven principles to guide our work going forward.” – Sundar Pichai, CEO, Google.

The search giant has been formulating policies in this area for years, and has now formalized them as standards that apply across Google, including Google Cloud. The seven principles are not theoretical concepts but concrete standards that will govern the company’s future product development and shape its business decisions.

Google’s foremost belief is that AI should benefit society as a whole. “As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors.” – Sundar Pichai. The company will continue to use AI to make high-quality information widely available, while respecting the cultural, social, and legal norms of the countries in which it operates.

Pichai confirmed that the latest principles will help the company take a long-term perspective “even if it means making short-term trade-offs.” They will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

In addition, the tech giant pledged not to deploy AI where there is a material risk of harm. It stated plainly: “we are not developing AI for use in weapons,” while noting that “we will continue our work with governments and the military in many other areas.”

Google believes these principles are the right foundation for the company and for the future development of AI, consistent with the values laid out in its original Founders’ Letter.
