Mobile-First Indexing or a Whole New Google? Is Mobile-First the Same as Voice-First? Article 1 of 4


By: Cindy Krum 

In the past month or so, there have been some significant signals that Google is getting much closer to launching the Mobile-First Index, but there have also been other potential signals that Mobile-First Indexing may coincide with, or support, a much larger shift at Google. This is all good news; however, the slow rollout has given me time to speculate about the bigger picture. Most people seem to believe that the changeover to the Mobile-First Index will be a simple, quick, and easy switch with minimal impact, but now I'm beginning to wonder if the launch of Mobile-First Indexing will simply set the stage for much larger changes, as Google embeds itself more deeply into the consumer-facing Internet of Things. Google's Mobile-First Indexing may be the beginning of a much larger plan focused on consolidating Google properties into a more unified, cross-device experience that helps bring voice search and interaction deeper into the mainstream norms of how people engage with technology.

Google has been spreading the message about the importance of 'mobile' UX and the impending launch of Mobile-First Indexing for a few years. With time, it seems that some in the industry are becoming numb to the idea that anything substantial could change. Google has made it clear that mobile-friendly and/or responsive websites should not need to worry about Mobile-First Indexing, but they have also said that the new mobile index will use mobile link profiles and mobile load times as core ranking factors. These clarifications could be a bit more ominous for complex sites that already struggle with proper indexing, or that lean heavily on server-side rendering and JavaScript, but in general, the tone of Google's communication has been intended to soothe, rather than alarm, the SEO community. It is obvious that Google wants this to be a smooth transition, but that does not mean it isn't going to be a major change.

This article is the first in a four-part series about the mergers and consolidations that might go along with Mobile-First Indexing and make it more impactful than many seem to expect. In particular, it will discuss how Google is pushing its way boldly into a consumer version of the Internet of Things (IoT) as a new, rich source of data, engagement, and revenue. This first installment will discuss how Google Actions fit into the larger picture of Google Assistant and Google Home, and it will speculate about how the launch of Mobile-First Indexing may align with Google's attempt to reorganize its index around entities, shaped primarily by machine learning and AI trained on a variety of human interactions. The second article will continue the discussion of how Mobile-First Indexing may help fuel consolidation of Google technology, focusing on PWAs, apps, and the digital media space. The third article in the series will provide further context and support for the theory, specifically focusing on e-commerce, where there is considerable room for Google to gain market share and revenue from top competitors. The fourth and final article will focus on how location will impact search in the context of Mobile-First Indexing, from both a local and an international perspective.

NOTE: If you want to see how Mobile-First Indexing is impacting your site in Google rankings around the world, try our free mobile SERP test!

How Do Google Actions & Google Assistant Fit with Mobile-First Indexing?

Last year at the annual Google I/O conference, Google launched a new digital asset that can be surfaced in rankings, called a 'Google Action.' Google Actions are part of the larger Google Assistant software that powers Google Home, the stand-alone speaker-style device that you can use to interact with Google using your voice. More specifically, Google Actions are what allow Google Home to be combined with Google Assistant to create a more robust, useful, connected AI experience. They function as cloud-hosted utilities that can guide users through a short series of questions before they execute some programmatic action, like booking a reservation, purchasing a ticket, or playing a song. Google Actions can be integrated with existing apps or websites, or can work completely on their own.
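To make the 'cloud-hosted utility' idea concrete, here is a minimal sketch of what an Action's fulfillment webhook can look like, written with the public actions-on-google Node.js library and Dialogflow. The intent names, the table-booking scenario, and the reservation step are hypothetical placeholders, not anything Google has published for a specific Action.

```ts
// Minimal sketch of a Google Action fulfillment webhook (actions-on-google v2 + Dialogflow).
// Intent names and the booking scenario are hypothetical placeholders.
import { dialogflow, Suggestions } from 'actions-on-google';
import * as express from 'express';
import * as bodyParser from 'body-parser';

const app = dialogflow();

// Triggered when the user invokes the Action, e.g. "Ok Google, talk to My Bistro".
app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Welcome to My Bistro. How many people should I book a table for?');
  conv.ask(new Suggestions('2', '4', '6'));
});

// A hypothetical follow-up intent that captures the party size and completes the booking.
app.intent<{ partySize?: string }>('book.table', (conv, params) => {
  const size = params.partySize || '2';
  // This is where the programmatic step would happen in a real Action,
  // e.g. a call to the restaurant's own reservation API.
  conv.close(`Done. I booked a table for ${size}.`);
});

// Expose the webhook over HTTP so Dialogflow can reach it.
express().use(bodyParser.json(), app).listen(8080);
```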

Google is doubling down on Google Home and Google Assistant technologies, so it seems wise for SEOs to pay close attention to this trend. Google has made a significant push to embed Google Home and Google Assistant technology into many devices, far beyond the stand-alone hardware that Google manufactures and sells directly. Google Assistant is now on more than 400 million devices.

Thesis: Google Assistant was built primarily to surface content in the Mobile-First Index. Google Actions work within Google Assistant to complete certain programmable tasks. Both Google Assistant and Google Actions are explicitly intended to work with or without a screen, which makes them ideal for Google Home and the larger Internet of Things (IoT). Google uses the term 'eyes-free' to describe devices that have no screens or keyboards, or only very limited ones.

Google Actions can be built intentionally by companies to work with their systems. In some cases, Google is actively finding ways to build Actions around existing content. This could include reading the text of articles or recipes, or playing podcasts, audiobooks, or music. In the middle of January 2018, Google officially launched its Google Assistant store (oddly not called the Google Action Store). The new directory already lists more than 1 million Actions, and more will probably be added quickly. It is important that companies act quickly to protect their brand and trademark terms as a preventative measure, even though Google's policy for Actions officially does not allow infringement of intellectual property.

At about the same time, Google sent out a letter to webmasters in Google Search Console, encouraging them to claim their Google Action Name ASAP (instructions here). The Action Name is the voice-oriented keyword that triggers the Google Action, so securing the Name is the first step that most companies will need to take to build their own Google Action. Securing short, meaningful Action Names is a great way to make sure that the branding of any Google Action is clear from the beginning, even if the interaction is only a Q&A and does not take an offline action like booking travel or canceling an appointment. Google is being proactive about this because they have an interest in preventing a land-grab for Action Names, and the kind of 'Action Name squatting' that would mirror the historical monetization of domain names. In the Action below, the Action Name is 'Call Santa,' so it is triggered by saying to a Google Home or Google Assistant, "Ok Google, Call Santa." Then the AI leads you through a series of fun interactions that all take the format of multiple-choice questions.

[Screenshot: the 'Call Santa' Google Action]


What Kind of Search Results Come from Google Home & Google Assistant:

First off, it is important to understand that Google Assistant readily conflates ‘searches’ for information and ‘commands’ for actions. We have seen this indicated many times in conversations with Googlers like John Mueller about voice search and voice queries.

He says that a 'query' could include a request to dim the lights or change the temperature in a connected home just as easily as it could include a request for a sports score, a recipe, or a restaurant recommendation. We know that the devices' AI can be trained to recognize different users' preferences based on their voices, so it doesn't seem like a stretch to assume that the system will also be able to deduce different intents based on the different surfaces where the search interaction begins.

[Screenshot: tweet exchange between Bill Slawski and John Mueller about voice queries]

With this in mind, it also seems very likely that the Mobile-First Index is going to rely more heavily on DIFFERENT algorithms that all respond differently based on search context and personalization. Perhaps switching to the Mobile-First Index won't change desktop, tablet, and mobile results all that much, at least at first, but with the same index supplying search results and AI feedback for all the other voice-only search options, the search experience as a whole could change quite dramatically. Since both the search intent and the AI response format will be different from device to device, the results will have to change, and also normalize for those new variations in the signal feedback loop.

It's clear that Google is all over this. A recent study from Moz demonstrates that Google has been pretty bullish about displaying no-click search results, especially on mobile. Examples of these results include map packs, answer boxes, Knowledge Graph results, and even sounds, but now they also include the visual representation of Google Actions. This research was echoed in a recent study by ROAST, which indicated that a very significant portion of Google Home search responses came directly from Google's 'Answer Box' results. While the Answer in a Google Home search did not always match up exactly with the Answer Box ranked in a regular search result (only 80% fidelity), this could simply be a function of two segregated indexes using segregated AIs.

What is important to recognize about all of the 'position-zero,' 'click-free' results is that they are easy to surface with or without a screen, because they can all be read out loud quickly. If I had to guess, I would say that these are a great example of Mobile-First results, and a live example of top content from the Mobile-First Index. If this is accurate, it makes Google's assurances about Mobile-First Indexing feel less reassuring, because it leaves open the possibility that a large percentage of queries will be assigned a high mobile or voice intent and surface these kinds of 'click-free' results, rather than anything resembling the websites our businesses have relied on for years.

Google may claim that Mobile-First Indexing does not have an algorithmic impact, in the same way that they claimed AMP didn't have an algorithmic impact, which was a bit of a farce! AMP results ranked at the very top of the page, with a big picture and a branded logo, which of course did have a significant impact on SEO traffic and rankings, whether you were there or your competitor was there instead. Though it may not have been an 'algorithmic' ranking factor in our traditional understanding of the algorithm, AMP did impact SEO traffic and rankings, and it was a huge deal. My suspicion is that Google's guidance on Mobile-First Indexing will have the same possibly well-meaning but ultimately misleading result. The old web results will still be there, and will still have many of the same old algorithmic ranking factors impacting their placement, but traditional websites will be second-class citizens compared to more mobile-first options: they may rank okay today, but they won't be considered ideal for an 'eyes-free' interaction and may lose long-term viability in the Mobile-First Index. It is actually a very exciting prospect for me, but if web results will consistently be buried by 'click-free' results, that seems like something Google should be more upfront about.

Beyond being surfaced by a keyword-relevant search query, a Google Action can also be activated directly using its registered keyword, which initiates the verbal Action dialogue with Google Home or Google Assistant. This is the 'command' side of the Assistant, and this kind of interaction is also incredibly important to Google, because it signals their easy entry into the next wave of the internet: the Internet of Things. A query only allows a user to find information, but a command allows a user to do something with the information. So not only can you find out what time a movie is playing at a theater using only your voice, but you can also buy a ticket and have it sent to your phone using only your voice. And in a private IoT context, you can check whether you remembered to turn off the light, shut the garage door, or change the temperature on the thermostat, and take any necessary action (as long as the right technology is in place), still using only your voice.
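As a rough illustration of that 'command' side, here is a minimal sketch of a smart home fulfillment handler using the actions-on-google library's smart home support, the kind of thing that would let the Assistant answer a "did I leave the kitchen light on?" style question. The device ID and the stubbed state are hypothetical placeholders for a real device cloud; this is a sketch, not an implementation Google has published.

```ts
// Minimal sketch: answering a smart home QUERY intent so Google Assistant can
// report device state by voice. Device IDs and states are hypothetical stubs.
import { smarthome } from 'actions-on-google';

const app = smarthome();

app.onQuery(async (body) => {
  // In a real integration, the current state would be looked up in the
  // device cloud; here it is hard-coded for illustration.
  return {
    requestId: body.requestId,
    payload: {
      devices: {
        'kitchen-light-1': { online: true, status: 'SUCCESS', on: false },
      },
    },
  };
});

// SYNC and EXECUTE intents would be handled with app.onSync() and
// app.onExecute(), so the same voice surface can both report state
// ("the kitchen light is off") and change it.
```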

This kind of screen-optional interaction was probably the goal of the Mobile-First Index from the beginning, because it allows Google to leverage AI and the Internet of Things, which is where they have actually been focusing most of their innovation and research in the past couple of years. In January 2018, Google's CEO said that AI is more profound than electricity or fire, which underscores the significance of AI for Google. Google first really started trying to integrate AI into search when they launched their predictive search features, Google Now and later Google Now on Tap. Voice search requires a more diverse set of contextual signals to disambiguate the meaning of a query, especially because Google Assistant might have to determine if the user wants to search, open an app, or take an immediate action. The stronger the understanding of the relationships between entities and their related actions, the easier and more accurate this becomes, so these technologies rely heavily on entity understanding. This concept, and how it works with context and predictive search, is well illustrated in Bill Slawski's 2016 breakdown of a Google patent. While the 'circling' gesture described in the patent was abandoned for a 'long-tap,' the rest feels very similar to the technology we see today.

So in Mobile-First Indexing, different types of search results will probably have different relevance based on the query and the search surface (device + mode of input). Since Google Assistant is designed to work on so many different devices, with and without screens, keyboards, or touch input, we can assume that eventually the results might become somewhat self-aware, cooperative, and interoperable, so to speak. This is true to some degree already: a recipe search result on a phone asks if you would like to send it to a Google Home device, a podcast from Google Play on your phone can be played on a Google Home speaker, and YouTube results ask if you would like to cast them to a Chromecast device (all shown below). In an ideal world, results that are compatible and/or useful with the technology present would rank at the top, and results that are not would be de-emphasized.

[Screenshots: cross-device prompts for recipes, podcasts, and YouTube casting]

This idea becomes even more important in a context where only voice dialogue is present or appropriate, such as with a Google Home, where the need for the 'position zero,' 'click-free' result is nearly an imperative. Most people will want quick information or Actions when they are using voice, so 'fact checks' or quick commands are most likely. Even at its best, voice search isn't ideal for focused research, like someone might do for a term paper, and you wouldn't want Google Assistant to read you a long list of search results or web URLs that it couldn't open. Instead, Google is using schema, machine learning, and AI to parse information down into a Q&A format, so that eventually searchers will be able to 'search' via conversation. This is especially easy to understand in a local business context:

For months, Google has been soliciting feedback about local businesses. It has happened as part of their Google Opinion Rewards program, where they incentivize users to answer questions like "Do they allow pets?", "Were the lines long?", and "Is this a good place to go with friends to share a small plate or snack?", but Google is also asking business owners directly in the Google My Business program.

[Screenshot: Google Opinion Rewards question about a local business]

The business finder is already being integrated into Google Maps under 'Explore,' along with the other vital business stats: busy hours, reviews, and interactive Q&A. This kind of Q&A, and really any similar process, especially where the questions are yes/no or multiple choice, is great for feeding an AI. This means that in the future, when someone searches, they can either include those business features in the original query or use voice-only commands to narrow the results. You can even add your own labels or share places, which will probably also make a place more likely to rank in your own future searches.
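To make the 'parse information down into Q&A format' idea more concrete, here is a minimal sketch of the kind of structured data a local business page could publish to expose a single question and answer. The schema.org Question and Answer types are real, but the example content is hypothetical, and Google has not said that this exact markup is what feeds the Assistant.

```ts
// A hypothetical Q&A, expressed as the JSON-LD payload a page might embed.
// schema.org Question/Answer are real types; the content is illustrative only.
const qaJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Question',
  name: 'Does the restaurant allow pets?',
  acceptedAnswer: {
    '@type': 'Answer',
    text: 'Yes, dogs are welcome on the patio.',
  },
};

// The payload would be embedded in the page as a JSON-LD script tag.
const jsonLdTag =
  `<script type="application/ld+json">${JSON.stringify(qaJsonLd)}</script>`;
console.log(jsonLdTag);
```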

It seems that Google is also planning to lean on multiple-choice options to feed the AI more information about the success of Featured Snippets. In a very recent post (1/30/18), Google's Public Liaison for Search, Danny Sullivan, drew attention to a recently launched feature that lets you interactively filter down to a Featured Snippet based on your situation (screenshot below). He also mentioned that Google has another Featured Snippet format coming soon that shows more than one snippet at a time, probably in a drop-down or carousel format, so that people can select the snippet that best meets their needs (screenshot below). This would allow them to prioritize the options based on users' clicks over time, as a means of determining which option is the best or most comprehensive entry. If you'd like to learn more, here is the full article: A reintroduction to Google's featured snippets.

[Screenshot: interactive Featured Snippet filtering and preview]

We know that mobile phones are capable of both public and private indexing of information. Private indexing focuses on individualized data, like personal travel itineraries, bookings, and other personal information, but the larger systems can still record generic data about what is being accessed and how it is being used. It is possible that personalization will change the rate at which the switch to the Mobile-First Index impacts people.

If new algorithms for the Mobile-First Index lean significantly more heavily on active input of preferences, or active engagement with the system, as many AI systems do, then the people who engage with Google more actively might get more accurate results more quickly, while people who have strict privacy settings, browse incognito, or limit their logged-in behavior could see fewer changes. It is likely, though, that Google will aggregate and anonymize personalized data to help fill in the blanks when people are logged out or data is otherwise missing.

But as Bill Slawski outlines in the tweet below, there are a lot of interesting factors in a voice search that could eventually be used to add personalization or intuit meaning. This will be the hardest part to account for in SEO, and this topic will be discussed more in subsequent articles in this series. 

[Screenshot: Bill Slawski tweet about signals in voice search]

Google Now on the Pixel phone appears to have already merged with Google Assistant, and I assume this is a close approximation of what we can expect with the Mobile-First Index. Even in the visual interface, the focus is on providing cards of information from feeds. The user can specify basic information about which feeds they will find interesting, and the AI system hones the information from there based on historical interactions. Even within the system, the AI training material looks and feels a lot like the Google Knowledge Graph, as you can see in the screenshot below. It focuses on the news topics you are most interested in and the kind of media you consume, and you can drill into different hierarchies of information, so you can assume that this is some approximation of their entity understanding.

[Screenshot: Google Assistant feed personalization settings]

Once they are engaged, these systems function as if they are in a perpetual conversation. When questions can't be answered with Answer Boxes, this is where Schema and the Knowledge Graph come in, allowing you to find the answer that you want by filtering on the information, facts, or relationships that are known. So, for instance, you can see that a G-wagon is a kind of Mercedes, and you can drill down there to learn more, or you can move laterally to learn about different kinds of Mercedes, or vertically to learn about different kinds of SUVs or 4x4s. This can be done visually, or it could be done with verbal prompts and multiple-choice questions, in much the same way you might engage with an automated phone system: "Would you like to learn more about G-wagons, learn more about a different kind of Mercedes, learn more about a different kind of SUV, or do something else?" In the worst-case scenario, in a voice-only search where a series of questions can't get you to the piece of information you are searching for, Google may default to suggesting you cast the information to a screen or save the interaction for later, when a screen becomes available.
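For a sense of what this kind of entity data looks like from the outside, here is a minimal sketch that queries Google's public Knowledge Graph Search API for an entity and prints its name, schema.org types, and description. This public endpoint is separate from whatever internal systems power the Assistant, so it is only an illustration of entity/type relationships; YOUR_API_KEY is a placeholder.

```ts
// Minimal sketch: look up an entity in Google's public Knowledge Graph Search API
// and print the entity/type relationships it returns. YOUR_API_KEY is a placeholder.
async function lookupEntity(query: string): Promise<void> {
  const url =
    'https://kgsearch.googleapis.com/v1/entities:search' +
    `?query=${encodeURIComponent(query)}&limit=3&key=YOUR_API_KEY`;
  const response = await fetch(url);
  const data = await response.json();

  // Each result carries a name, a list of schema.org @type values, and a short
  // description, e.g. "Mercedes-Benz G-Class" typed as a kind of ProductModel.
  for (const item of data.itemListElement) {
    console.log(item.result.name, item.result['@type'], item.result.description);
  }
}

lookupEntity('Mercedes G-Wagon').catch(console.error);
```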

Conclusion:

There are significant changes afoot at Google. They will try to find efficiency wherever they can, so this will probably mean using the Mobile-First Index as the canonical index for all searches and behaviors, on all devices, with Google Assistant as the entry point. In this scenario, the Mobile-First Index will be the singular, primary index, but multiple algorithms and AIs will have to adapt results based on the device and context where a query is initiated. The concept of a search query may become much broader, and the final destination may become much more curated by Google. This is the only way that Google will be able to provide a good user experience.

As we will discuss in subsequent articles in this series, there will also be private indexes, associated with individual users, to help further personalize the content and potentially even the weighting of the search results. Consolidations in Google's media and shopping outlets will reduce friction and increase opportunities across the board, if Google can capture enough market share ahead of its competitors and keep consumers re-engaging with the technology. More than ever, if we are engaging with Google, we will all be feeding the AI, whether we like it or not. The job of an SEO will be more about anticipating the context and the AI than anticipating the algorithm. Is this all part of Mobile-First Indexing, or is it something else? Maybe the more important question we have to ask ourselves is: can we make it work for SEO, and how? We will discuss these important questions and more in the remaining articles of this series.


Other Articles in this Series:
Is Mobile-First the Same as Voice-First? (Article 1 of 4)
How Media & PWAs Fit Into the Larger Picture at Google (Article 2 of 4)
How Shopping Might Factor Into the Larger Picture (Article 3 of 4)
The Local & International Impact (Article 4 of 4)