Artificial intelligence is not a topic free of controversy, but Google is now hoping to use AI for the good of humanity. Earlier today, the company announced the AI Impact Challenge, promising up to $25 million for AI projects that can help alleviate environmental and societal problems.
Although AI is often portrayed in popular consciousness as a superhuman intelligence with immense capabilities, the reality is more modest: machine learning algorithms process huge data sets and make predictions or suggestions based on the available information. That doesn't mean they aren't immensely useful, though. In its blog post, Google gives examples of AI being used for wildfire prevention, flood prediction, and wildlife conservation, among many others.
This is why Google.org, the company's charitable arm, wants to recruit and hear from more nonprofits, academics, and social enterprises. The goal is not automation for its own sake but solutions to tough societal challenges. Applications for the contest are now open, and participants are not expected to be AI experts. Google promises to provide education and other resources to those with great proposals:
"We’ll help selected organizations bring their proposals to life with coaching from Google’s AI experts, Google.org grant funding from a $25 million pool, and credits and consulting from Google Cloud. Grantees will also join a specialized Launchpad Accelerator program, and we’ll tailor additional support to each project’s needs in collaboration with data science nonprofit DataKind. In spring of 2019, an international panel of experts, who work in computer science and the social sector, will help us choose the top proposals."
AI can be more human than you'd think
Yet AI is not infallible and can easily inherit human biases if not monitored carefully. In its blog post, Google mentioned employment as one of the areas AI can address, pointing to the success of the Harambee Youth Employment Accelerator, which uses AI to connect young unemployed people with entry-level positions.
However, Amazon had quite a different experience with artificial intelligence and recruitment. The billion-dollar company's hiring algorithm exhibited a strong bias against women. The bias arose because the algorithm learned its selection criteria from patterns in resumes submitted to the company over a 10-year period. Despite efforts to make the AI more neutral, Amazon ultimately decided to kill it off entirely. This is just one case among many demonstrating how susceptible AI is to human prejudices; Microsoft's Twitter chatbot turning racist is another infamous example.
This is why the ethics of AI are so complicated. Earlier this year, Google also pledged not to develop AI-powered weapons and to follow ethical standards in accordance with international law. Yet the company is still considering collaborating on military projects, so we don't know what the future holds. What do you think: is Skynet coming for us all, or is AI actually going to work for the good of humanity? Let us know in the comments.
Source: The Verge