Google announces seven AI principles, pledges not to deploy AI in weapons that can cause overall harm

Sundar Pichai

On Thursday, June 7, Google announced a set of principles that will guide its work going forward and benefit society in general. The company pledged not to use Artificial Intelligence (AI) in weapons or technologies that are likely to cause overall harm or that violate human rights.

Pichai describes AI as computer programming that learns and adapts. It cannot solve every problem, but it can potentially improve our lives, and Google already uses it to make its products more useful, from spam-free email and digital assistants to photos that pop the moments that matter.

So today, we’re announcing seven principles to guide our work going forward. – Sundar Pichai, CEO, Google.

The search giant has been formulating policies in this area for years and is now committing to getting it right across Google, including Google Cloud. The seven principles it announced are not just theoretical concepts but concrete standards that will guide its future product development and shape its business decisions.

Google’s foremost principle is that AI should benefit society as a whole. “As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors,” Pichai wrote. The company will continue to use AI to make high-quality information widely available while respecting cultural, social, and legal norms in the countries where it operates.

Pichai confirmed that the new principles will help the company take a long-term perspective “even if it means making short-term trade-offs.” Google will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

In addition, the tech giant has pledged not to deploy AI where there is a material risk of harm. It stated clearly: “we are not developing AI for use in weapons; we will continue our work with governments and the military in many other areas.”

Google believes that these principles are the right foundation for the company and the future development of AI, consistent with the values laid out in its original Founders’ Letter.

Source: Google Blog
