Google tests changes to its search algorithm; how search works


Google has hard data to show why it approves the changes it does.

But its process of choosing experiments in the first place is less straightforward.

Google listens to user feedback, including from big, ugly screw-ups, like when people discovered that Google was surfacing a white supremacist website as the top result for “Did the Holocaust happen?” When there’s a glaring problem, Google doesn’t just eliminate the bad search result and consider its work done. More often, it tries to figure out how to change both its algorithm and its rater guidelines to avoid similar mistakes.

Other times, ideas for algorithm changes come from broad company directives or priorities. For example, some employees have long argued that Google search results should be more personalized, Nayak said. Right now, there is very little search personalization and what exists is focused on a user’s location or immediate context from a prior search. (If you Googled something related to baseball followed by “The Giants,” the results wouldn’t surface the football team, for example.)
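To make that kind of context-following concrete, here is a minimal, purely illustrative Python sketch of how the topic of a prior query might steer the reading of an ambiguous follow-up. The candidate table, the disambiguate function and the previous_topic signal are hypothetical stand-ins, not anything Google has described.

# Purely illustrative sketch -- a toy disambiguation step, not Google's code.
# The candidate table and the "previous_topic" signal are hypothetical.

CANDIDATES = {
    "the giants": [
        "San Francisco Giants (baseball team)",
        "New York Giants (football team)",
    ],
}

def disambiguate(query, previous_topic=None):
    """Prefer the candidate whose description matches the prior query's topic."""
    options = CANDIDATES.get(query.lower(), [query])
    if previous_topic:
        for option in options:
            if previous_topic.lower() in option.lower():
                return option
    return options[0]  # no useful context: fall back to the default reading

print(disambiguate("The Giants", previous_topic="baseball"))
# -> San Francisco Giants (baseball team)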

But after a lot of effort to test personalization, Google has found that it seldom actually improves results.

“A query a user comes with usually has so much context that the opportunity for personalization is just very limited,” Nayak said.

By not personalizing search results, Google has been able to escape a lot of the criticism that Facebook and Twitter have received for creating “filter bubbles,” where people see only information they were already predisposed to believe or like. (Google’s video product, YouTube, has not been able to avoid this criticism, particularly in how it recommends related videos. The two algorithms are totally separate and not created or maintained by the same team.)

Personalization could cause people to lose trust in Google, too. While Google doesn’t personalize most of its search rankings, its advertisements are extremely personalized because of the vast swathes of data it collects. (Google allows users to manage privacy settings around what data it collects, but its methods have been misleading in the past.)

In all, Google’s search results have come a long way from the original “10 blue links” to other websites. As voice search becomes increasingly important for Google and other big tech companies, Google has relied more on its Knowledge Graph, a database of more than a billion entities with 70 billion connections between them, and on “featured snippets,” which surface answers extracted from webpages at the top of search results. There are much higher stakes to getting those answers wrong.
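For a rough sense of what “entities with connections between them” means in practice, here is a small, purely illustrative Python sketch of a graph of entities linked by labeled relations. The EntityGraph class and the sample facts are hypothetical and are not a description of Google’s actual Knowledge Graph.

# Purely illustrative sketch -- a toy entity graph, not Google's Knowledge Graph.
from collections import defaultdict

class EntityGraph:
    """Entities are nodes; each fact is a labeled edge between two of them."""

    def __init__(self):
        self._edges = defaultdict(list)  # entity -> [(relation, other entity)]

    def add_fact(self, subject, relation, obj):
        self._edges[subject].append((relation, obj))

    def facts_about(self, entity):
        return list(self._edges.get(entity, []))

graph = EntityGraph()
graph.add_fact("San Francisco Giants", "is_a", "baseball team")
graph.add_fact("San Francisco Giants", "home_venue", "Oracle Park")

for relation, obj in graph.facts_about("San Francisco Giants"):
    print(relation, "->", obj)

A featured snippet, by contrast, is drawn from text extracted from a webpage rather than from a structured store of facts like this.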

For all its user testing, Google knows mistakes will still appear, sometimes because of intentional vandalism, sometimes because of a problem with the algorithm, sometimes because results reflect societal biases.

“We are under no illusion that search is perfect,” Nayak said. “But we have an absolute commitment to addressing the challenges that we have and continuing to improve it. That’s what people are here to do.”
