
Google reportedly asked researchers to strike a positive tone in research papers

Google has reportedly added a layer of review for academic papers on sensitive subjects, including gender, race, and political ideology. A senior manager also instructed researchers to "strike a positive tone" in a paper this summer. The news was first reported by Reuters.
 
"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," the policy reads. Three employees told Reuters the rule began in June.
 
The company has also directed employees on several occasions to "refrain from casting its technology in a negative light," Reuters reports.
 
Employees working on a paper about recommendation AI, which is used to personalize content on platforms like YouTube, were told to "take great care to strike a positive tone," according to Reuters. The authors later updated the paper to remove all references to Google products.
 
Another paper, on using AI to understand foreign languages, softened a reference to mistakes made by Google's Translate product, Reuters wrote. The change was made in response to a request from reviewers.
 
Google's standard review process is designed to ensure researchers don't inadvertently reveal trade secrets. But the "sensitive topics" review goes further than that. Employees who want to evaluate Google's own services for bias are asked to consult the legal, PR, and policy teams first. Other sensitive topics reportedly include China, the oil industry, location data, religion, and Israel.
 
The search giant's publication process has been under scrutiny since the firing of AI ethicist Timnit Gebru in early December. Gebru says she was terminated over an email she sent to Google Brain Women and Allies, an internal group for Google AI research employees.
 
In it, she described Google executives pressuring her to retract a paper on the risks of large language models. Jeff Dean, Google's head of AI, said the paper had been submitted too close to the deadline. But Gebru's own team pushed back on that claim, saying the policy was applied "unevenly and discriminatorily."
 
Gebru contacted Google's PR and policy teams about the paper in September, according to The Washington Post. She recognized that the company might take issue with certain aspects of the research, since Google uses large language models in its search engine. The deadline for making changes to the paper wasn't until the end of January 2021, leaving researchers enough time to respond to any concerns.
 
A week before Thanksgiving, however, Megan Kacholia, a VP at Google Research, asked Gebru to retract the paper. Gebru was fired the following month.
 
Google did not directly respond to a request for comment from The Verge.