SMX London 2012 kicked off this morning when Chris Sherman took to the stand and introduced Amit Singhal. For those cave-dwellers amongst us, Amit is Senior VP at Google and a ‘Google Fellow’ – a fancy title awarded to those select few who have dramatically impacted the world of search. In this case, Amit re-wrote the Google algorithm in 2001 and dedicated the following years to improving, fine-tuning and experimenting with Google search.
He was the perfect person to begin a leading search conference. The interesting Q&A was led by Danny Sullivan and Chris Sherman, who took it in turns to ask Amit questions about his history, his goals, Google Panda and Penguin updates, G+, search relevance and more.
Growing up as a boy in Uttar Pradesh, India, Singhal remembers watching old Star Trek episodes on his black and white TV, dreaming that one day he could have ‘the Star Trek machine’, which you can talk to and which gives you any information you want. It was this, he says, that drove him to re-write the Google algorithms. And after studying with Gerard Salton, whom some deem the father of modern search, this goal became much closer.
Of course, as we all know, this dream is not quite realised. As Amit said in the Q&A, “computers don’t understand ‘things’, but work on ‘strings’”.
From his background in Information Retrieval, Amit soon recognised that search engines had to implement stemming: reducing each search term to its root to capture what is actually being talked about. From there Google could move on to synonymy and then Universal Search (incorporating videos, images etc. to give the user everything they could want). All of this is in the pursuit of having search engines actually ‘understand’ your request and provide relevant, helpful information based on it.
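To make the stemming idea concrete, here is a toy suffix-stripper in Python. It is purely illustrative – real systems use proper algorithms such as the Porter stemmer, and this is in no way Google's implementation – but it shows how "running" and "runners" can collapse to the same root:

```python
# Naive stemming sketch: reduce query terms to a shared root so that
# variants like "running", "runs" and "runners" match the same concept.
# The suffix list and 3-character minimum are arbitrary assumptions.

SUFFIXES = ("ing", "ers", "er", "ed", "es", "s")

def naive_stem(word: str) -> str:
    """Strip the first matching suffix, keeping at least 3 characters."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def stem_query(query: str) -> list[str]:
    """Stem every term in a search query."""
    return [naive_stem(term) for term in query.lower().split()]

print(stem_query("running shoes for runners"))
# -> ['runn', 'sho', 'for', 'runn']: "running" and "runners" now match
```

Note how crude suffix-stripping produces non-words like "runn" – which is fine for matching purposes, since both query and document terms are stemmed the same way.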
Amit acknowledged that Google are now investing in ‘things’, or ‘entities’. This is why they bought Freebase (aka Metaweb) back in 2010, which had 12 million entities in their database at the time.
Amit Answers Some Questions
After the introduction Danny and Chris took it in turns to ask Amit some questions. He gave a mix of both revealing and politically guarded responses to some great points, as could be expected from a leading Google chap. Here are the topics they covered and the Q&A specifics:
Danny first wanted to lead on from Amit’s mention of Universal Search. Despite the flak from the press (especially since the launch of G+), he assumed ordinary users must be benefitting from Universal, since Google’s market share continues to grow.
Amit explained that the thought behind Universal Search was to give users everything they wanted in one window, whether that be images, videos, news articles, products or information. This worked to some extent, but Universal was ultimately incomplete since it didn’t include information shared between your private networks.
The motivation behind Search Plus Your World (SPYW) was therefore a system where Google can index and serve your information privately, helping to surface relevant and helpful information in one place. The product is essentially the first step towards indexing everything in your universe, from news articles to pictures of your friend’s dog. From their perspective, SPYW allowed them to build a great infrastructure to build on. As Danny suggested, Amit said he has seen that users like the social results, despite press complaints.
They have seen CTR increase in searches with personalised results present and, with one click, you can remove personalisation. Amit reinforced that the product is changing and developing behind the scenes.
The Filter Bubble
Chris Sherman raised a good point. Invoking Eli Pariser’s book The Filter Bubble, he asked Amit whether too much filtering restricts your ability to find relevant material. Does personalisation actually give you the information you’re looking for?
Context and personalisation were the two aspects, Amit responded, that come into play here. Context is critical, otherwise search returns irrelevant results. If you are in New York searching for pizza you don’t want to find results from Roman pizzerias. The second aspect, personalisation, helps make that information relevant to you. Do you know any friends who recommended pizza in NY? What do the experts say in comparison?
Amit agreed, however, that there should be serendipity. Personalisation should not overtake the SERPs, but it should be present.
After some back and forth about user testing (to be returned to later) Chris raised the point that personalisation obviously plays into social. Bing recently announced that they have separated social more than Google, who focus more on integration.
From Amit’s perspective relevance is king. The key problem with personalisation is that no one can judge the relevance except for the user, once the results have already been served. Then Google and SEOs are left looking at engagement metrics to determine, ‘was that result good for this individual user’? and ‘What can that tell us going forward?’
Where Bing separates search and social so you can “interact with friends and experts without compromising the core search experience”, Google argues that this is core to the search experience.
Facebook and Twitter
Chris brought up the point that Bing now incorporate Facebook results. What have Google done and what do they plan to do along these lines?
Amit noted that Google can’t crawl Twitter at the rate it produces information and that they don’t have a deal with Facebook. It has therefore been tough to build a system to deal with this. Earlier this year when Google launched G+, Twitter blocked them for a month to experiment with other options. But they now have access again and are trying to deal with the information in a relevant way. Amit sidestepped specifics, so there wasn’t really anything new to take away here.
Chris and Danny asked some questions about how Google decides what to focus on and how changes are experimented with before they’re rolled out (perhaps to protect against ‘did you even test Penguin/Panda’ questions?!).
In his response, Amit said that ideas are born from listening to the community, thinking of how to solve their problems and then writing something they can put through internal testing and experiments. After internal testing they do blind A/B testing to see how the idea could impact the SERPS. They send this to human raters and mix it up so no one rater can decipher upcoming algorithms. From these results they can judge the success.
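The blind-rating process described above can be sketched in code. This is a hedged illustration of the general technique, not Google's actual tooling: each query is randomly assigned a rater and a hidden control/experiment variant, and tasks are shuffled so no single rater sees enough to reconstruct the upcoming change.

```python
# Sketch of a blind A/B rating pipeline (function names, the task layout
# and the scoring scheme are all illustrative assumptions, not Google's).

import random

def assign_blind_tasks(queries, raters, seed=42):
    """Randomly pair each query with a rater and a hidden variant label."""
    rng = random.Random(seed)
    tasks = []
    for query in queries:
        tasks.append({
            "query": query,
            "rater": rng.choice(raters),
            "variant": rng.choice(["control", "experiment"]),  # hidden from rater
        })
    rng.shuffle(tasks)  # mix tasks so no rater can infer the experiment
    return tasks

def judge_experiment(tasks, ratings):
    """Compare mean rater score per variant (ratings keyed by query)."""
    totals = {"control": [0, 0], "experiment": [0, 0]}  # [sum, count]
    for task in tasks:
        score = ratings[task["query"]]
        totals[task["variant"]][0] += score
        totals[task["variant"]][1] += 1
    return {v: s / c for v, (s, c) in totals.items() if c}

tasks = assign_blind_tasks(["pizza nyc", "star trek", "seo tips"], ["r1", "r2"])
```

Comparing the per-variant means then gives a rough signal of whether the candidate change improved perceived relevance before it ever reaches live SERPs.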
After some brief technical difficulties, which had Danny and Amit shouting at each other and at us, Danny brought things round to the inevitable topic of Penguin. This is the most recent update from Google designed to eliminate spam tactics.
He asked: how do you know if it’s going to work? How do you test the algorithms and how do you judge the outcome?
Wanting to answer the last question first, Amit said that at the end of the day users will go back to the search engine that’s most relevant. That is who judges whether relevance is up.
In terms of Penguin, like Panda, things have gone well (this is according to Amit, remember!). Google’s objective is to reward high-quality sites, to reward users and to reward authors of great-quality content. With the Penguin changes they have reportedly “significantly improved” the quality of sites appearing. In terms of independent testing, a search engine that can run well-tested experiments (using the methods mentioned earlier) can best understand what users like and fine-tune its algorithms better than competing engines.
Social Shares as the New Links
This is a popular topic emerging on the circuit. Danny noted that one of the big changes is that Penguin goes after bad links. Google initially stood out because it counted links as votes, but now that we see increasing abuse of this system, have we gone as far as we can with links? Will social signals become more important?
According to Amit, Google use a number of signals, combined in a single algorithm – apparently “more than 200” of them. They want to make sure sites are high quality, that they earn links, that the content is great and that there are social signals coming from the page.
Rather than link signals being the most important, Amit said that the reinforcement of various signals is how Google determine relevance and quality.
Beyond the Link Graph
Chris suggested that everyone in the industry understands the link graph, but wondered whether the knowledge graph – the relations between objects, entities and so on – is the next step in the evolution. This also ties into Direct Answers, since Google have started reporting answers to certain questions at the top of the SERPs. Does Google plan to keep promoting Direct Answers?
In Google’s quest to answer user queries, says Amit, they first have to understand meaning. This is where entities and semantic search come in. Once they understand the meaning, they have to find the answers on sites and serve them up.
An audience member later came back to this and asked a very valid question. If Google are going to take content from our sites to provide their own answers, how do they reward us? Amit said that the only thing an SEO can’t create is time. When a searcher can find information straight away, they have more time to spend looking at other more complex questions or asking new questions. Whilst this presumably doesn’t help CTR, I imagine it might help the length of visits.
Danny wanted to know more about Sponsored results: what they mean, how they came about and so on. Amit reiterated that Sponsored isn’t really an advert, because sometimes no one pays for the information to appear. But they wanted to be honest about it, since sometimes people do pay for inclusion. It is basically a deal with select brands to help relevance. They think it works well, and Amit didn’t rule out the possibility of it appearing in other sections of Google.
The Spoken Word
Finally (have a cookie if you made it this far!) Amit bounced off a question from Danny, explaining that he hopes to expand spoken search more in the future. This will evolve as more queries come in and they can fine tune the results.
Amit closed the Q&A by asking the audience how they would do things differently if in his shoes. But apparently we’re not allowed to put our own sites at number one!
More from the Search Conference(s)
Thanks to Danny and Chris for putting together a great event. I can’t wait to cover more from the floor at SMX! And of course thanks to Amit for providing some great information about how Google works and what it has in store for us. Keep your eye on the blog for more information out of SMX and SAScon later this week.