International ASO Article 3: Execution – Nitty-Gritty Details About Keyword Research, Metadata Submission & Verification for ASO (3/4)

By: Kathryn Hillis & Cindy Krum 

International App Store Optimization (ASO) can be quite complicated, so it is important to know what you are getting yourself into. This is the third article in a four-part International ASO series that walks through the high-level concepts and nitty-gritty details you need to know when creating and implementing an International ASO strategy.

The first article of this series discussed the basics of International ASO and the strategic elements to consider when determining which countries to target with an ASO strategy. The second article detailed the major differences in how the iTunes App Store and the Google Play Store approach country and language combinations, and how those differences can impact your ASO strategy and rankings. This article will discuss keyword research for International ASO, and how to upload and verify your app metadata correctly in both app stores. The final article of this series (not published yet) will outline how to track your success and will also share details about how to tweak and test nuanced keyword variations to continue driving, and hopefully improving, your international keyword rankings.

Just a heads up, this article is geared towards readers who are already familiar with the fundamental concepts and Best Practices of ASO. It builds on International ASO concepts originally discussed in the first two articles of this series, so it would be helpful to check those articles out, if you haven’t already. If you’re new to ASO or want to brush up on the basics, we’ve included a list of great resources on the topic at the end of this article that should help you get up to speed on the basics quickly.

International ASO Keyword Research

Just like regular SEO, ASOs must do keyword research to determine which keywords people are searching for in the app stores, that might drive app downloads. When it comes to International ASO, keyword research tends to be more complex because it involves the additional steps of translation and localization. It is important to remember that translation and localization are different. Translation is purely linguistic, but localization is related to culture and style. Therefore, translating the app content and ASO metadata is only part of the job. Apps and their metadata also need to be updated to suit the local vernacular, customs, needs, measurement systems, and regional aesthetics, in order to make the best impression and encourage user retention. In some countries or languages, users’ search patterns may favor particular app features, product benefits, and other nuances that are not as highly favored in other countries. (To find more detailed information about how to select the best metadata languages for your ASO strategy, check out Article 2 of this series.)

EX: Some languages may have search patterns that suggest native speakers will tend to place higher value on an app’s accuracy and functionality, while other languages have search patterns that suggest speakers are more concerned about whether or not an app is free or ad-free. The impact of these language trends can vary based on the type of app you are promoting, so it’s important to do your own keyword research (discussed more below).

In the best international ASO strategies, the potential keyword targets are translated and localized before the keyword research begins. This can be a bit cumbersome if you are working on a region or a language that is not your native tongue, but the natural keyword patterns in the first round of keyword brainstorming and translation can help reveal the linguistic nuances that people who speak various languages value.

Some audiences may search in more than one language or alphabet, so with ASO, it is often valuable to include non-native keywords and all possible language permutations in the keyword research stage. In regions where English is a prevalent second language, it is especially common for international searchers to combine words from their native language with English words, particularly when they relate to tech or pop culture. In other cases, there are also nuanced alphabet, punctuation and accenting options that can be tested and optimized when you are fine-tuning ASO keyword targeting and rankings; these will be discussed more in the next article in this series.
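If you want to script the first rough pass of that translation step, the sketch below uses the Google Cloud Translation API (google-cloud-translate Python client) to machine-translate a seed keyword list. The seed keywords and target languages are placeholders, and machine output is only a brainstorming starting point – native speakers still need to localize the list before keyword research begins.

```python
# A rough first pass: machine-translate seed keywords as brainstorming input.
# Machine output is NOT localization -- native speakers should review and
# expand the list with local vernacular before any keyword research begins.
from google.cloud import translate_v2 as translate  # pip install google-cloud-translate

seed_keywords = ["photo editor", "free filters", "collage maker"]  # illustrative seeds
target_languages = ["de", "ja", "pt"]                              # illustrative targets

client = translate.Client()  # uses GOOGLE_APPLICATION_CREDENTIALS for auth

for lang in target_languages:
    for kw in seed_keywords:
        result = client.translate(kw, target_language=lang, source_language="en")
        print(f"{lang}\t{kw}\t{result['translatedText']}")
```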

Evaluating Keyword Options Using ASO Keyword Research Tools

Once you have prepared an initial list of all your possible translated and localized keyword ideas, it’s time to formally compare and evaluate them. Since the app stores don’t provide much information about the relative value of different keywords in their stores, it is important to use third-party keyword research tools to determine which of the keyword ideas from your list are the strongest performers in each particular store, language and country combination. The keyword search volume and competition data that you get from third-party ASO keyword research tools is generally provided in the form of a ‘score’. The scores are relative metrics (usually out of 10 or 100) that each of the tools calculates and estimates based on data from the stores and their own proprietary algorithms. EX: In SensorTower, keyword difficulty scores close to “10” mean the keyword is very difficult to rank for; difficulty scores close to “1” mean the keyword is easier to rank for.

Below are the top ASO tools that we have found useful for evaluating keyword targets, each described briefly with pros and cons:

Top Three Tools for International ASO Keyword Research:

SensorTower

Pros

SensorTower is a great tool for international ASO because it provides free search volume data and keyword difficulty metrics. It limits how many keywords you can track per account, but for apps with very lean budgets, creating multiple accounts with different keywords should give you the data you need. It provides ASO information for 84 countries, which is more than almost all of the ASO tools we’ve tested (excluding TheTool), making it great for International ASO. Lastly, it has features such as “Keyword Research” and “Keyword Suggestions” that can help with the brainstorming process.

Cons

The main downside to SensorTower is that it limits the number of keywords you can track, and it deletes your historical keyword data when you remove keywords from your account, so it does not keep a large historical database of keyword rankings (like App Annie does). Therefore, you must maintain an active account that monitors all of your keywords at all times, even keywords you are not currently targeting in your metadata. This limits the long-term value of the tool’s data and means you will have to store that data yourself in an external repository.
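If you do need that external repository, something as simple as a date-stamped CSV archive works. The sketch below assumes a hypothetical CSV export from your tool with “keyword” and “rank” columns – adjust the file and column names to whatever your tool actually provides.

```python
# Minimal sketch of an external archive for keyword rankings exported from an
# ASO tool (e.g. a CSV download). File names and column names are hypothetical.
import csv
from datetime import date
from pathlib import Path

ARCHIVE = Path("keyword_rank_history.csv")

def archive_snapshot(export_csv: str, store: str, country: str) -> None:
    """Append today's exported rankings to a permanent local history file."""
    new_file = not ARCHIVE.exists()
    with open(export_csv, newline="", encoding="utf-8") as src, \
         open(ARCHIVE, "a", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        if new_file:
            writer.writerow(["date", "store", "country", "keyword", "rank"])
        for row in reader:
            writer.writerow([date.today().isoformat(), store, country,
                             row["keyword"], row["rank"]])

# Example usage (hypothetical export file):
# archive_snapshot("sensortower_export.csv", store="ios", country="us")
```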

App Annie

Pros

App Annie has the biggest database of historical keyword-ranking data, so it’s possible to get a sense of how apps have performed for popular keywords over time, which is helpful for apps with a seasonal component. Keyword information is tracked and stored even before you create an account to track those apps.

Cons

App Annie does not provide free keyword difficulty metrics, so you will need to lean on other tools (like SensorTower) to get that data if you’re on a tight budget. If you have more to spend, they offer a premium competitive intelligence service that includes keyword volume and competition.

MobileMoxie Native App Tracker

Pros

The MobileMoxie Native App Tracker (beta) displays keyword-ranking data for iOS and Android Apps in multiple countries, in one view. This style of reporting makes it easy to see how an app is currently performing for relevant keywords in multiple countries and stores. It is also easy to detect keyword-ranking shifts with this tool because rankings are color-coded.

Cons

The MobileMoxie Tool does not have a large historical database (like AppAnnie does) and does not have keyword difficulty data (like SensorTower does). Its main function is International ASO reporting and improvement strategy, which we’ll discuss more in the next and final article of this series.

 

We endorse the three tools above as “Top ASO Tools for International Keyword Research” because we use them frequently. However, we also wanted to highlight a couple of other ASO tools that we haven’t used much, so we can’t fully endorse them yet, but they seem worth checking out. Here are those tools, with some basic perceived pros and cons described below:

Other ASO Tools for International ASO Research:

TheTool

Pros

TheTool provides data for keyword search volume, keyword difficulty, and keyword rankings for 91 countries, which is more countries than any other ASO tool we’re aware of. Like many other ASO tools, TheTool offers keyword suggestions to help with brainstorming, which can be helpful. TheTool also provides a “Global ASO Report” that shows how your ASO strategy is performing globally, which is great for International ASO.

It also provides “Installs per Keyword” data for apps in the Google Play Store (feature in beta), which shows the organic keywords that drive the most app downloads. This install data is taken directly from Google Play Console. It seems this data is currently only available to some developers, but according to TheTool, it is available to anyone who integrates their Google Play Console account with their toolset. As far as we know, other ASO tools don’t seem to offer this “Installs Per Keyword” feature.

Cons

You have to pay to access any keyword metrics, and if you want the Global ASO Tool, you have to pay for one of the higher-tier plans. Also, no historical data is provided. As mentioned above, you must integrate TheTool with your Google Play Console account to get “Installs Per Keyword” data, which could be an issue for ASOs who do not have access to Google Play Console. Additionally, “Installs” data is not split up by country, which is limiting for International ASO.

AppTweak

Pros

AppTweak monitors keyword search volume, keyword competition, and keyword rankings for 76 countries. It could be useful as a second data source, in case you want to verify keyword research data from other ASO Tools.

Cons

You have to pay to access any keyword metrics. Though AppTweak provides many “best suggested keywords,” it only includes single-word data, so it’s limited as a brainstorming tool. We also found that the user interface was busier than other ASO tools, which made navigation challenging.


The goal of keyword research is to pick terms with high traffic and low competition that your app will be able to rank for. Different tools call these metrics slightly different things, but they are usually some variation on ‘Keyword Volume’ or ‘Search Volume’ and ‘Keyword Difficulty’ or ‘Competition’. When you are comparing or evaluating Keyword Volume and Difficulty numbers from third-party tools, always remember that they are just the best guess of the tool company and not a real number from the stores.

The grid below shows Keyword Search Volume and Competition scores from the top tools in the industry, for the same app store, country and keywords, over the same period of time. You can see that the variation could cause you to make different decisions, based on the keyword tool that you are using. For this reason, it may be valuable to consult multiple tools whenever you are making high-stakes decisions about ASO keywords, for instance if you are choosing keywords for an app title, slogan or marketing tagline.

 

Keyword (iOS – US) | SensorTower (Search Volume, Competition – max 10) | App Annie (Search Volume, Competition – max 100) | TheTool (Search Volume, Competition – max 100) | AppTweak (Search Volume, Competition – max 100)
photo | 6.1, 5.4 | 61, 82.8 | 77, 96 | 60, 99
photo editor | 7.7, 5.3 | 76, 82.4 | 83, 69 | 77, 96
social network | 4.5, 7.0 | 95.5, 93.5 | 71, 40 | 45, 99
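Because the tools score on different scales (SensorTower out of 10, the others out of 100), it can help to put everything on one scale before comparing. Here is a minimal sketch of that arithmetic in Python; the equal weighting of the four tools is our own simplifying assumption, not something the tools themselves do.

```python
# Minimal sketch: put scores from different tools on a common 0-100 scale and
# average them, so keywords can be compared side by side. Equal weighting is an
# assumption -- each tool's score is its own estimate, not data from the stores.
def normalize(score: float, max_score: float) -> float:
    return score / max_score * 100

# (volume, competition, max_score) per tool, using the "photo editor" row above
photo_editor = {
    "SensorTower": (7.7, 5.3, 10),
    "AppAnnie":    (76, 82.4, 100),
    "TheTool":     (83, 69, 100),
    "AppTweak":    (77, 96, 100),
}

avg_volume = sum(normalize(v, m) for v, _, m in photo_editor.values()) / len(photo_editor)
avg_competition = sum(normalize(c, m) for _, c, m in photo_editor.values()) / len(photo_editor)
print(f"photo editor -> avg volume {avg_volume:.0f}/100, avg competition {avg_competition:.0f}/100")
```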

If your app has already launched, it may already be ranking for certain words, so this should also be noted. Knowing the current ranking of an app for a particular keyword will give you a better idea of how easy or hard it will be for your particular app to improve rankings on that term. An app that is ranking in position #200 for a keyword that has never been targeted may be able to create significant change just by targeting the keyword in the metadata; but an app whose metadata already targets the keyword (plural, singular, gerund) and is still ranking in position #200 may struggle more to achieve change.

Evaluating Keywords for Inclusion in Metadata

Once you have search traffic volume and keyword difficulty metrics for your potential keywords, you can select the best keyword targets. Ideal keyword targets for your app: 1) are relevant and accurate, 2) get high search traffic in the app stores, and 3) are words that your app can realistically rank well for, without competition that is too intense. Consider the unique elements of your overall strategy when reviewing and evaluating the keyword research to pick the best keywords to include in your International ASO metadata. (To learn more details about which app metrics may need to be improved to rank well for competitive keywords, check out this ranking-factor survey by TheTool: App Store Optimization Factors & ASO Trends for 2018.)

Remember, as discussed in Article 2, both app stores have some metadata localizations that seem to impact other metadata localizations, which gives you the opportunity to leverage additional keywords in different countries. Since you will have to choose the top-performing keywords, and eliminate ones that drive lower traffic or are too competitive, the best strategy is to use the keywords that don’t make the cut in the main metadata in the other localizations, whenever possible.

How to Submit International ASO Metadata

Once you have completed the keyword research for the translated and localized keywords, and used those keywords to write app metadata for the countries that you want to target, you simply need to upload that metadata to iTunes Connect and Google Play to make it live, so that it can start having a positive impact. This section will discuss how to correctly input localized metadata into iTunes Connect and Google Play. It will also discuss how to organize and coordinate multiple metadata submissions, which may be particularly helpful for complex, ongoing, enterprise-level International ASO projects.

iOS – App Store Metadata Submission

Here is the process for adding localized metadata to iTunes Connect:

  • Select “My Apps” from the iTunes Connect homepage, then select the app that you’d like to add localized metadata to.
  • Select “App Information” in the left navigation and select the blue link with your app’s primary metadata language, to reveal a drop-down with all localizable metadata languages. (Screenshot 1 below.)
  • Select the metadata language that you’d like to add localized text for. In “App Information,” you can localize the app name and subtitle. (Screenshot 2 below.)
  • Select the latest app version in development under “iOS App” in the left navigation. On this page, you can localize “What’s New,” Promotional Text, Keyword Tag, and Description. (Screenshot 3 below.)

How to Input iOS Metadata in iTunes Connect:

(Screenshots 1–3, referenced in the steps above.)

How to remove localized metadata:

  • Select “App Information” in the left navigation and select the blue link with your app’s primary metadata language to reveal a drop-down.
  • Hover over the localized metadata you want to remove and remove it by clicking the red circle displayed on the right of the language.

How to change the primary language of your app:

  • Select “App Information” in the left navigation and scroll down to the “General Information” section of the page.
  • Choose the new Primary Language from the Primary Language drop-down menu.
  • Click Save in the upper-right corner.

Android – Google Play Metadata Submission

Here is the process for adding international metadata to Google Play Developer Console:

  • Select your app from the “All Applications” list on the home screen. (Green box in the screenshot below.)
  • Select “Store Listing” in the left navigation. (Purple box in the screenshot below.)

(Screenshot: Access “Store Listing”)

How to add localized metadata:

  • Select “Manage Translations.”
  • Select “Add your own translated text.”
  • Select your target language(s), then add your localized text.

(Screenshots 1–3: Add Localized Metadata)

How to remove localized metadata:

  • Select “Manage Translations,” then “Remove translations.”
  • Select the languages you want to remove.

(Screenshots 1–3: Remove Localized Metadata)

How to change the default language for your app:

  • Select “Manage Translations,” then select “Change default language.”
  • Select a new default language.

(Screenshots 1–2: Change Default Language)
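The steps above all go through the Google Play Console UI. If your team manages a large number of localizations, the same store-listing fields can also be updated programmatically through the Google Play Developer Publishing API. The sketch below is only an illustration of that option – the package name, credentials file and listing text are placeholders, and the Console remains the simplest path for most teams.

```python
# Hedged sketch: updating a localized store listing via the Google Play
# Developer Publishing API (androidpublisher v3) instead of the Console UI.
# Package name, credentials file and listing text are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

PACKAGE = "com.example.myapp"
creds = service_account.Credentials.from_service_account_file(
    "play-console-service-account.json",
    scopes=["https://www.googleapis.com/auth/androidpublisher"],
)
play = build("androidpublisher", "v3", credentials=creds)

# All listing changes happen inside an "edit" that is committed at the end.
edit_id = play.edits().insert(packageName=PACKAGE, body={}).execute()["id"]

play.edits().listings().update(
    packageName=PACKAGE,
    editId=edit_id,
    language="de-DE",  # BCP-47 code for the localization being added/updated
    body={
        "language": "de-DE",
        "title": "Beispiel App",
        "shortDescription": "Kurzbeschreibung mit lokalisierten Keywords.",
        "fullDescription": "Lange, lokalisierte Beschreibung ...",
    },
).execute()

play.edits().commit(packageName=PACKAGE, editId=edit_id).execute()
```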

International ASO has many moving parts, and potentially multiple parties touch the metadata before the final product goes live. So once you have submitted the new metadata and pushed it live, it is important to use a verification process that confirms the correct metadata is showing in all locations.

How to Verify International ASO

Ideally, you’ll have access to iTunes Connect and Google Play Developer Console, which is the most straightforward way to verify the metadata updates. Below, we outline tactics for verifying user-facing metadata if you want to double-check that the metadata reported in iTunes Connect and Google Play Developer Console corresponds with what is live in the store. These methods are also helpful for verifying metadata when you do not have access to the app developer accounts, though the verification will be less comprehensive. (Most, but not all, parts of the metadata verification process can be completed even if you have limited or no access to iTunes Connect and Google Play Developer Console.)

Live iOS App Metadata Verification

The best way to verify your new iOS app metadata is to check it using a live web URL. It is important to use a desktop or laptop computer to complete this task, since mobile versions of the App Store do not show all of the metadata elements, and often limit access to regional versions of the app metadata based on phone and iCloud account settings. Article 2 in this series outlined how you can modify App Store URLs to launch the app landing pages in any country; a few examples are also included below. These app landing pages will allow you to verify all of the iOS metadata except the new Promotional Text, which is currently only shown in the mobile app version of the app landing page. You also will not be able to verify the keyword tag in either location; it is not user-facing and is only viewable in iTunes Connect.

Examples of iTunes App Store Web URLs (Localization + Language):
United States – Default (English [US]) https://itunes.apple.com/us/app/facebook/id284882215?mt=8
United States – Spanish (Mexico) https://itunes.apple.com/us/app/facebook/id284882215?l=es
Germany – Default (German) https://itunes.apple.com/de/app/facebook/id284882215
Germany – English (UK) https://itunes.apple.com/de/app/facebook/id284882215?l=en
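If you have many localizations to spot-check, a tiny helper that assembles these URLs can save time. The sketch below simply reproduces the URL pattern shown above; the app slug and ID are Facebook’s, taken from the examples.

```python
# Minimal helper for the iTunes App Store web URL pattern shown above:
# https://itunes.apple.com/<country>/app/<app-slug>/id<appId>?l=<language>
def itunes_url(app_slug: str, app_id: str, country: str, language: str = "") -> str:
    url = f"https://itunes.apple.com/{country}/app/{app_slug}/id{app_id}"
    return url + (f"?l={language}" if language else "")

# Reproduces the examples above (Facebook, id 284882215)
print(itunes_url("facebook", "284882215", "us"))        # US default (English)
print(itunes_url("facebook", "284882215", "us", "es"))  # US storefront, Spanish
print(itunes_url("facebook", "284882215", "de"))        # German storefront, default
print(itunes_url("facebook", "284882215", "de", "en"))  # German storefront, English (UK)
```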

Live Android Metadata Verification

Once you have submitted your Android app metadata updates in Google Play Developer Console, the best way to verify the international metadata is to use Google Play Store web URLs. Examples of a few international app URLs are included below, but for more details on how an international Google Play Store web URL is constructed, please see Article 2 in this series. There are a few cases in which Google Play web links seem to consistently display incorrect metadata for some non-US/UK English localizations. We have seen multiple instances of Google Play web landing pages showing English (UK) metadata on the web, even when the Google Play Console metadata is clearly set up for English (Canada) or English (India) localizations. In these instances, testing the same apps on a mobile device revealed the correct version of the localized English metadata. It is unclear why this happens, but if you encounter something unexpected on the web for non-US/UK English metadata, we recommend using your mobile device as an alternate method of verification.

Examples of Android Google Play Web URLs (Localization + Language):
English – Default https://play.google.com/store/apps/details?id=com.weather.Weather&hl=en
English – UK https://play.google.com/store/apps/details?id=com.weather.Weather&hl=en_GB
Spanish – Spain https://play.google.com/store/apps/details?id=com.weather.Weather&hl=es
Spanish – LATAM https://play.google.com/store/apps/details?id=com.weather.Weather&hl=es_419
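As with the iOS URLs, this check is easy to script. The sketch below builds the hl-parameterized URLs shown above and looks for an expected localized string on each page; the expected strings are placeholders for your own localized titles, and any page that can’t be matched automatically should simply be checked by hand.

```python
# Sketch: build the Google Play web URLs shown above and spot-check that an
# expected localized string appears on each page. The expected strings below
# are placeholders -- use your own localized titles or description snippets.
import urllib.request

def play_url(package: str, hl: str) -> str:
    return f"https://play.google.com/store/apps/details?id={package}&hl={hl}"

checks = {
    "en":     "The Weather Channel",  # placeholder expected text per localization
    "en_GB":  "The Weather Channel",
    "es":     "El Tiempo",
    "es_419": "El Tiempo",
}

for hl, expected in checks.items():
    url = play_url("com.weather.Weather", hl)
    html = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", "ignore")
    status = "OK" if expected in html else "CHECK MANUALLY"
    print(f"{hl:7} {status}  {url}")
```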

To verify metadata changes on a mobile device, start by changing the language on your Android device (in the on-device Settings) to the metadata language you want to check, then simply view the metadata that is live in the store by searching for the brand name of the app, or some other term that you know it will rank for. Google Play rankings have less complexity surrounding country and language settings, so this shouldn’t be hard. The primary limitation of verifying metadata through your mobile device is that you can only access and verify metadata for apps that have been published in the Play Store country associated with your device, which is determined by your billing address. You probably will not run into this restriction often when verifying metadata for enterprise-level apps, since they are likely to be available globally, including in the Google Play store for your country of residence, but this could come up in edge cases.

If you need to verify metadata for an app that you cannot access on your device due to country restrictions (EX: you’re a US-based marketer trying to verify metadata for a Lite App published only in India), we recommend using the Web URL verification method instead (discussed at the top of this ‘Android’ section). If it’s critical for you to verify metadata on a mobile device for an app that you cannot access due to country restrictions, you can use a VPN to change the Google Play country on your device to a Google Play country where the app you want to verify is available. (Here is a detailed answer on Quora about how to change your Google Play Store country: How can I change my Google Play Store country?) However, in most cases, going through the VPN process is likely more trouble than it’s worth.

Strategic Metadata Submission & Maintenance

Creating a metadata submission strategy can help keep things organized, especially if you’re juggling many apps and territories.  Below, we will discuss important components to consider when creating and maintaining a metadata submission strategy for International ASO.

Setting Keyword Ranking Benchmarks

If your task is to improve International ASO for an app that is already available in many countries, it is helpful to know how the app is currently ranking for relevant keywords in all the localizations you’re working on. It can be a bit tricky and time-consuming to do this ranking assessment because most ASO tools segment keyword-ranking data by app store and country, which can slow down the tracking process, especially if you want a quick, high-level global overview.

The MobileMoxie Native App Tracker tool (in beta) can help expedite this process because it displays keyword-ranking data for your iOS and Android app(s) together in all the countries where they are available, in one view – a true global perspective. For global, international and enterprise-level ASOs, it is also much easier to add new keywords that you want to track, because one keyword can be added to both iOS and Android apps, in multiple countries around the world, all from one screen with just a couple of clicks. Ranking data is color-coded, so it’s easy to spot positive and negative keyword-ranking trends, as shown in the screenshot of the Native App Tracker tool above.

Being able to see all of the data together can be very valuable, especially when comparing keywords between the iOS and Android versions of an app. Often, if a company has launched in one store and then launches on the other OS later, it will want to save time by re-using the keyword research and targeting from the first release in a different country. While we don’t recommend this strategy, and strongly suggest that unique keyword research is valuable for each country-OS combination, the translation and localization aspects of that process often slow down the release of the app, which is also problematic. Using previous keyword research from the opposite store, which is already translated and localized, is an okay interim strategy while waiting for the translation and localization process to be completed, especially when it threatens to delay the app launch.

NOTE: The MobileMoxie toolset does not yet include keyword search volume and competition like the tools described above, but these features will be added in future editions of the tool. For now, the MobileMoxie tool is just the best way to track and verify that the predictions of the other tools are impacting real-world metrics in your particular markets.

Coordinating App Metadata Releases & Updates

Being strategic about metadata updates makes it more likely that the correct metadata will go live at the right time. The timing of your metadata updates can also enhance how well the metadata performs. iOS and Android handle the timing and ability to update app metadata differently, so we should start there. (The table below summarizes which metadata fields require a new app version to change.)

iTunes requires a new app build to be submitted and approved before any metadata changes can be made. Given this requirement, development, marketing, and translation teams must work together closely in an organized workflow to coordinate iOS metadata updates. The only exception to this rule is the new “Promotional Text” field, which can be changed at any time; this field mainly impacts conversion rates and does not impact keyword rankings.

Google Play, on the other hand, allows metadata updates at any time, even if the app itself has not changed at all. This makes the overall process of localizing, managing and updating Android app metadata much easier and more flexible than it is for iOS, especially for more agile teams.

Metadata Field | iOS – New App Version Required for Changes? | Android – New App Version Required for Changes?
Title | Yes | No
Subtitle/Short Description | Yes | No
Keyword Tag | Yes | N/A
Long Description | Yes | No
What’s New | Yes | No
Promotional Text | No* | N/A

*This can be changed at any time but has no impact on keyword rankings.

Beyond these hard limitations from the stores, there are a couple of other things to consider when creating your international ASO release/update strategy:

  • Get familiar with how often your app will get new features or major technical improvements, and whether those updates will differ per territory. Generally, it makes sense to update metadata around the same time your app pushes a major update live, so it’s helpful to know about likely future updates for ASO planning purposes.
  • Determine which apps will require seasonal metadata updates, for planning purposes. Schedule the seasonal keyword research with enough time to incorporate the translation and localization processes so that they are complete before any seasonal versions of the app need to be launched in the stores.
  • Establish which apps and territories are in the iTunes App Store, and proactively establish a tight workflow to ensure that iOS metadata updates go live at the right time. All metadata that can impact rankings in the iTunes App Store requires a version update, so the schedules of the various teams will need to be coordinated, and a shared calendar may need to be created to keep communication open and make team planning and alignment easy.
  • Determine through testing how long it seems to take for your app to experience the full impact of metadata updates, and factor those findings into your update strategy. We generally recommend leaving the same metadata live for at least 3-4 weeks to verify the full impact of metadata changes. This information is particularly helpful for the Google Play Store, which seems to respond to metadata changes a little slower than the iTunes App Store. The impact of metadata adjustments on your app’s keyword rankings can vary per app and potentially per territory, so it’s important to do your own testing (a simple before/after comparison is sketched after this list). It can also change as new competitors enter or leave the market.
  • Take note of top competitors’ release cycles, and monitor them over time to see if they change. In some cases, you may be able to improve ASO by analyzing your top competitors’ release cycles and launching your version releases slightly ahead of their schedules, to get early traction. However, this tactic can take a lot of work and coordination, and the impact can vary based on your competition, so we generally don’t recommend investing a lot of time into it until the overall ASO strategy is on solid footing.
  • Monitor user reviews and leverage those insights in future metadata, which we will discuss more in Article 4 of this series.
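For the testing point above, a simple before/after comparison is usually enough. The sketch below assumes a date-stamped ranking history CSV like the hypothetical archive described earlier in this article, and compares average rank before a metadata update with average rank starting a few weeks after it.

```python
# Sketch: compare average keyword rank before a metadata update with the rank
# 3-4 weeks after it, using a date-stamped ranking history (columns match the
# hypothetical archive sketch earlier in this article).
import csv
from collections import defaultdict
from datetime import date, timedelta

def rank_deltas(history_csv: str, update_date: date, settle_weeks: int = 3) -> None:
    before, after = defaultdict(list), defaultdict(list)
    settled = update_date + timedelta(weeks=settle_weeks)
    with open(history_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            d = date.fromisoformat(row["date"])
            key = (row["country"], row["keyword"])
            if d < update_date:
                before[key].append(int(row["rank"]))
            elif d >= settled:
                after[key].append(int(row["rank"]))
    for key in sorted(before.keys() & after.keys()):
        b = sum(before[key]) / len(before[key])
        a = sum(after[key]) / len(after[key])
        # Lower rank numbers are better, so a positive delta means improvement.
        print(f"{key[0]} / {key[1]:<20} avg rank {b:.0f} -> {a:.0f} ({b - a:+.0f})")

# rank_deltas("keyword_rank_history.csv", update_date=date(2018, 5, 1))
```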

Coordinating Metadata Updates with Paid & Supportive Marketing

The app stores don’t separate paid metrics from organic metrics in their app quality evaluations. Since Download Velocity is an important App Quality metric, driving it up with organic or paid downloads from app store ads, Google PPC, email campaigns, social campaigns and anywhere else traffic and downloads can be generated will make it easier for the app to rank overall. In fact, some studies show that for every app download driven by a paid ad, as many as two additional organic downloads can be attributed to the same ad. Anyone who works in ASO should be familiar with how paid mobile ads work and how to leverage them to improve organic user acquisition for an app. Check out this informative blog post for more details about how to leverage paid app installs to improve organic downloads: Mobile marketing: How paid app installs impact organic downloads.
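To make the math concrete, here is a small worked example of how that multiplier changes your effective cost per install. The 2x organic multiplier comes from the studies cited above; the CPI and install numbers are purely illustrative.

```python
# Worked example of the paid-to-organic effect described above. The 2x organic
# multiplier comes from the studies cited in the text; the CPI is illustrative.
paid_installs = 1_000
cost_per_paid_install = 2.00           # illustrative CPI in USD
organic_multiplier = 2.0               # ~2 organic installs per paid install (per text)

total_spend = paid_installs * cost_per_paid_install
total_installs = paid_installs * (1 + organic_multiplier)
blended_ecpi = total_spend / total_installs
print(f"Spend ${total_spend:,.0f} -> {total_installs:,.0f} installs, "
      f"blended eCPI ${blended_ecpi:.2f}")   # roughly $0.67 vs. $2.00 paid-only
```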

It is important to know that the timing of paid campaigns can change their impact on organic ASO. In both stores, paid ads have been shown to have a much more significant impact when increases in budget are aligned with launches and updates to the app. This makes it important to ensure that your ASO team is in constant communication with the paid team, so that calendars and launches can be aligned.

An entire article series could be devoted to this topic, but we tried to summarize the most useful notes related to paid International ASO app promotion below:

  • iTunes App Store: As of January 2018, Apple search ads are available for Australia, Canada, Mexico, New Zealand, Switzerland, the United Kingdom, and the United States. Keep an eye on Apple’s Official Search Ads Page for updates about new Search Ad territories.
  • Google Play Store: Google created a “Lifetime Value Calculator” (or “LTV Calculator”) to help you understand and plan for paid user acquisition in the Google Play Store, linked here: https://developer.android.com/distribute/ltv-calculator.html. The calculator works for domestic and international markets, as long as you input data that is specific to the country or region you’d like to target. To learn more details about how the calculator works, check out this in-depth blog post by David Yin, one of Google’s Google Play Business Development Managers: Taking the guesswork out of paid user acquisition. (A simplified version of this kind of LTV math is sketched after this list.)
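To be clear, the sketch below is not Google’s LTV Calculator – it is just a back-of-the-envelope illustration of the kind of per-country math that paid user-acquisition planning involves, with made-up ARPU, lifetime and margin numbers.

```python
# Back-of-the-envelope sketch of the math behind paid UA planning -- NOT the
# formula used by Google's LTV Calculator (linked above), just a simplified
# model with illustrative, per-country numbers.
def simple_ltv(arpu_per_month: float, avg_lifetime_months: float) -> float:
    return arpu_per_month * avg_lifetime_months

markets = {
    # country: (monthly ARPU, average user lifetime in months) -- illustrative
    "US": (1.20, 6),
    "DE": (0.90, 7),
    "IN": (0.15, 4),
}

target_margin = 0.30  # keep a 30% margin after acquisition cost (assumption)
for country, (arpu, lifetime) in markets.items():
    ltv = simple_ltv(arpu, lifetime)
    max_cpi = ltv * (1 - target_margin)
    print(f"{country}: LTV ${ltv:.2f} -> max affordable CPI ${max_cpi:.2f}")
```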

It is also important to note that paid ads in the app stores are not subject to the same trademark restrictions and competitive-targeting limitations. This means that it is completely okay with the stores to bid on your top competitors’ brand names or app names, and rank for them. This can be a good way to build awareness for a competitive product in a tough space where only a few top apps dominate the awareness. Just be sure that if you choose this strategy, you are managing your budgets closely, since this tactic can drive up costs rather quickly without driving a high proportion of downloads. That can also have a secondary, negative impact on the health of the overall ad campaigns, so segment these as much as you can from other paid campaigns in the app stores.

On-Going App Metadata Management & Maintenance

If you followed the instructions in Article 1 of this series about researching the country needs and demographics to plan your app launch, you should have a pretty good idea about the level of priority for each of the countries that are included in your app launch plan. With this research in mind, you can take steps towards getting everyone on your teams organized and on the same page, to ensure a smooth, error free metadata submission process for all countries and languages.

It can be helpful for many organizations to establish a workflow where any necessary metadata updates are researched, developed and translated in parallel with app code version updates, so that they can easily be published live together without one team delaying the other. Here are some prioritization and management steps that can be helpful to incorporate into your international ASO strategy:

  • Get everyone on the same page about which apps and countries are a top priority and which are less important. Do this prioritization exercise early on to help set expectations and offer guidance about ASO resource allocation.
  • Create a shared calendar or invite ASO representatives to development sprints, to ensure that communication about deadlines and changes for launch dates are always up-to-date.
  • Consider circulating a sign-off sheet that must be approved by all parties before each app version release. This sign-off sheet should include any metadata changes that will go live with the version update if there will be changes. This process helps the team stay coordinated for each app update and helps clarify what metadata has been touched, by whom.
  • Work with developers to establish a release schedule for the app. The more systematic and predictable your app release schedule, the easier it is for everyone to work together to push app versions live with correct metadata.

This type of predictable, clear system will keep everyone on the same page and makes metadata maintenance less likely to fall off the team’s radar. It can also be really helpful if ASO has been added to people’s plates along with many other competing and unrelated priorities, which anecdotally seems to be the case at many companies.

Conclusion

International ASO can be complicated because it is an evolving discipline that often includes many moving parts. This four-part article series on International ASO discussed how to pick the best markets for international app expansion in Article 1, and provided details in Article 2 about how the app stores approach the various country and language combinations, and how those differences can impact your ASO strategy and rankings. After reading this third article in the series, you should be able to:

  • Leverage ASO tools to determine which translated and localized words are the best keyword targets.
  • Upload metadata to each app store, and set up each country/language appropriately.
  • Establish the best way for you and/or your agency to verify international metadata in the app stores.

The next and final article in this series will talk about how to track and measure the success of your international ASO efforts. It will also provide a bit more nuanced detail about the relationship between ASO and language and how minor changes to the way words are used or written can impact your ASO success.

Additional ASO Resources

This article focuses on the details and insights specific to International ASO, so it did not spend much time on the fundamentals of general ASO. Here are some great resources to check out if you’d like to learn more about basic Best Practices for general ASO:

The Entity & Language Series: Frameworks, Translation, Natural Language & APIs (2 of 5)

NOTE: Please use these links to catch up on the previous posts in the series: Article 1 / Article 2 / Article 3 / Article 4 / Article 5

By: Cindy Krum 

All of a sudden, we are seeing translation happen everywhere in the Google ecosystem: Google Maps is now available in 39 new languages; Google Translate rolled out a new interface with faster access to ‘Conversation Mode’; and Google Translate on Android switched to server-side functionality that allows it to be invoked within any app or website on the phone, as shown in the image on the right, where Google Translate has been invoked to translate a tweet in the Twitter native app. Google has clearly reached a new level in its voice and language capabilities!

We are also seeing Google find lots of new uses for its Cloud Natural Language API. Google just launched ‘Google Talk to Books,’ which uses the API to let you ask questions that it answers from knowledge gained by crawling and processing the contents of over 100,000 books. Google also just launched a new word-association game called Semantris, which has two modes that both have players work against time to guess at on-screen word relationships and clear the ever-increasing hurdles created as more and more words are added to the screen.

And the list goes on. We are also seeing some of this play out in search. Image search results are now clearly pulling from an international index, with captions that have been translated to fit the query language. Map results include translated entity understanding for some major generic queries, like ‘grocery store’ and ‘ATM,’ and they also auto-translate user reviews for local businesses into the searcher’s language.

The timing of all of these changes is not a coincidence. It is all a natural side-effect of the recent launch of Mobile-First Indexing, or as we call it, Entity-First Indexing. This is the second article in a multi-part series about entities, language and their relationship to Google’s shift to Mobile-First/Entity-First Indexing. The previous article provided fundamental background knowledge about the concepts of entity search, entity indexing and what it might mean in the context of Google. This article will focus on the tools that we believe Google used to classify all the content on the web as entities, organize them based on their relationships, and launch the new indexing methodology. Then we will speculate about what that might mean for SEO. The next three articles in this series will focus on research and findings we completed to show evidence of these theories in play internationally, as they relate to the functioning of the different translation APIs, how those impact Entity Understanding, and how personalization plays into Google’s Entity Understanding and search results.


Fuchsia & Why Entities are So Important to Google

To understand how language and entities fit into the larger picture at Google, you have to be able to see beyond just search and SEO. Google cares a lot about AI, to the point that it just made two people who previously worked on the AI team the heads of the Search team. Google also cares a lot about voice search – so much that Google Assistant has already shipped on more than 400 million devices around the world. Finally, Google cares about reaching what it calls The Next Billion Users – people living outside of North America and Western Europe who have historically not been on the cutting edge of technology, but are now getting online and becoming active, engaged members of the online community. All of these goals may be brought together with new software that Google is working on, currently under the code name Fuchsia.

Fuchsia is a combination of a browser and an OS. What is most important about it from a search perspective is that it works almost entirely on entities and feeds. The documentation and specifications are still very thin, but if this is the direction Google is headed, then we can be sure that search of some sort will be tightly integrated into the software. As SEOs, what we need to remember is that search is not just the web, and this is something Google now really seems to be taking seriously. Google knows that there are lots of different types of content that people want to surface and interact with on their phones and other devices, and not all of it can be found on websites. This is why entities are so important. They allow web and non-web content to be grouped together, and to surface when they are the most appropriate for the context – or to surface together, if the context is not clear, to let the user decide. This is where Google Play comes in.

Even if you are not sold on the idea of Fuchsia ever impacting your marketing strategy, it is worth looking at the Chrome Feed, which is a default part of all Android phones and part of the Google App on iOS. This customized feed, sometimes called ‘Articles for You,’ is almost entirely entity-based, and according to NiemanLab and TechCrunch, traffic from this source increased 2,100% in 2017. Users get to select the specific topics that they want to ‘follow’ and the feed updates based on those topics, but it also shows carousels of related topics, as shown below. Users can click on the triple-dot menu at any time to update the customization of their feed. If you don’t think this is a powerful way of getting news, realize that people can’t search for a news story until they at least have an idea of the topic or keywords that they want to search for – they have to be aware of it before they can put it in a query. You can also think of how Twitter and Facebook work – both are feeds that you customize based on who you are friends with or follow – but most of us wish we could customize those feeds more. Google is hoping to get us there with its own offering!

How Device Context & Google Play Fit In

Once Google launched app indexing, most SEOs probably thought that Google’s ultimate goal was to integrate apps into normal search results. For a while, it probably was, but deep linking and app indexing proved to be so problematic and complex for so many companies that it fell off of most people’s radars and Google changed course.

Either your app and website had exact web parity, all the time, and it was easy, or they didn’t and it was much more complicated. The problems generally stemmed from large sites with different CMSes running the back-ends of their web and app platforms – sometimes even different between Android and iOS. This made mapping between all three systems to establish and maintain the required web parity between the apps and the website a nightmare. Beyond that, whenever anything moved in the app or the website, content had to be moved everywhere else to mirror the change. We think this was one of the many good reasons that Google started advocating PWAs so strongly – it got them out of having to sort out the problems with deep linking and app indexing.

PWAs allowed one set of code to handle app and web interaction, which was brilliant, but what a lot of SEOs missed was the announcement that PWAs were being added to Google Play, Google’s app store. PWAs are essentially ‘websites that took all the right vitamins,’ according to Alex Russell from Google, so their addition to Google Play was a big deal! We have suspected it for a long time, but with the addition of PWAs (and Google Instant Apps) to Google Play, it is finally clear that apps are not being integrated into traditional web search at Google, like most SEOs suspected; rather, traditional web search is being integrated into Google Play – or at least into the Google Play framework. This fits perfectly into the concept of Entity-First Indexing, because Google Play already uses a cross-device, context-aware, entity-style classification hierarchy for its search!

 

Google Play can also handle the multi-media, cross-device content that Google probably wants to surface more in Mobile-First/Entity-First Indexing, including games, apps, music, movies, TV, etc., as shown below in the Google Play & Monty Python Google Play Search examples. All of that content is already integrated, populated and ranking in Google Play. It is also set up well for entity classification, since things are already broken down by basic classifications, such as whether they are apps, games, movies, TV shows or books. Within each of those categories there are sub-categories with related sub-categories. There are also main entities, like developer accounts or artists, from which multiple apps, albums and/or songs can be surfaced, and these also have relationships already built in – to other genres of related content – so this is all great for Entity Understanding.

Google Play is already set up to include device context in its search algorithm, so that it only surfaces apps and content that can be downloaded or played on the device that is searching. It is also set up to allow different media types in a SERP. As discussed in the first article in this series, context is incredibly important to Google right now because it is critical for the disambiguation of a searcher’s intent when it comes to entities.

Google’s additional focus on context could also make the addition of videos and GIFs to Google Image Search seem more logical. Perhaps this is now just a contextual grouping of visually oriented content, which would make it easier to interact with on devices like a TV, where you might use voice search or assisted search, casting or sharing your screen from a phone or laptop to the larger screen so that the viewing experience can be shared. Bill Slawski explains that many of Google’s recent patents focus on “user’s needs” and “context.” One of those was about Context Vectors, which Google told us “involved the use of context terms from knowledge bases, to help identify the meaning of terms that might have more than one meaning.” We think that the ‘knowledge base’ Google refers to in this patent documentation is actually Google Knowledge and similar data repositories that may have since been merged into the Knowledge Graph. The current state of Google Image Search could just be a middle-term result that will change more as more classification and UX positioning is added to the front-end side of the search interface.

From a linguistic perspective, Google Play was also a great candidate to use as a new indexing framework. For all the categories of content, but especially for apps, the categories that are available in the store stay the same in every language, though they are translated. More importantly, the metadata that app developers or ASOs submit to describe their apps in the store is auto-translated into all languages, so that an app can be surfaced for appropriate keyword searches in any language. So Google Play is already set up for a basic entity understanding, with all the hreflang information and hierarchical structure already in place.

Are Local Businesses Already Being Treated Like Apps?

If you are not focused on Local SEO, you might not be aware of the massive number of changes that have launched for Google My Business (GMB) listings in the past couple of weeks, since the March 17th update. In general, small business owners have recently been given a lot more control of how their businesses look in Google Knowledge Graph listings. This includes the ability to add and edit a business description that shows at the top of the listing, the ability to actively edit the menu of services that the business offers, and more.

Before March 17, Google had also quietly been testing Google Posts, which allowed small businesses to use their GMB accounts to publish calls to action, and allow searchers to take actions directly from the GMB – Knowledge Graph panel, including booking appointments and reservations. It is essentially a micro-blogging platform that lets business owners make direct updates to their business listing whenever they want, and this is a big deal. Joel Headley and Miriam Ellis do a great job of covering it on the Moz Blog.

All of this makes it seem very much like Google is empathizing with, and trying to fix, one of the biggest pains of small businesses – maintaining their websites. Another aspect of the Google Play store that fits well into the model we believe Google is going for is that proven entity owners, such as app developers, are able to edit their app listings at will, to help market them and optimize them for search. If Google can empower small business owners to build out their GMB listings and keep them current, it will save them a lot of time and money, and many of them would be just as happy, or happier, with that solution than with the burden and cost of maintaining a website.

From Google’s perspective, they just want to have the best and most accurate data that they can, as quickly and efficiently as they can. Google knows that small businesses often struggle to communicate business changes to web development teams in real time, and budget constraints may keep them from making changes as often as they would like. By empowering business owners to control the listing directly, and even allowing them to set up calls to action and send push notifications, Google is creating a win-win situation for many small businesses. There are some obvious SEO questions about how easy or hard it will be to optimize GMB listings in the complete absence of a website, but this is an area to watch. Google is likely using offline engagement data and travel radiuses to inform how wide a business’s ranking radius should be and how relevant it is for various queries, so we could be in all-new territory here, as far as optimization and success metrics are concerned.

Global Search Algorithms are Better than Local

The websites that Google currently ranks in search results are translated by the website creators or their staff, but this is not necessarily true of the other entities that are ranked, for instance Knowledge Graph results and the related concepts that are linked there, like apps, videos and music. For these, Google is often using its own tools to translate content for presentation in search results (as it does aggressively with Android apps) or actively deciding that translation is not necessary, as is common with most media. They do this translation with basic translation APIs and Natural Language APIs and sometimes, potentially, human assistance.

Without a language-agnostic, unifying principle, organizing, sorting and surfacing all the information in the world will just get more and more unwieldy for Google over time. This is why, in our best guess, Google is not translating the entire web – they are just doing rough translations for the sake of entity classification. From there, they are ranking existing translations in search results, and their language APIs make it possible to translate other untranslated content on an as-needed basis, which may become more important as voice search grows in adoption. For Google, it is actually easier to unify their index on a singular set of language-agnostic entities than it is to crawl and index all of the concepts in all of the languages, without the unifying, organizing principle of entities.

This synthesis of information necessary for entity classification may actually create more benefit than is immediately apparent to most SEOs; most SEOs assume that there is an appropriate keyword for everything, but in reality, language translation is often not symmetrical or absolute. We have probably all heard that Eskimos have more than 50 words for ‘snow.’ These 50 words are not all exact translations but have slight variations in meaning, which often do not translate directly into other languages. Similarly, you may have been exposed to the now-trendy Danish concept of ‘hygge,’ a warm, soft, homey feeling that one can create, usually including snacks and candlelight – but again, there is no direct translation for this word in English. If we required direct translation for classification, much of the richer, more detailed and nuanced meaning would be lost. This could also include the loss of larger data concepts that are valuable across international borders, as postulated in the example below:

EX: Say I am a Danish climate researcher, and we develop a method for measuring the carbon footprint of a community. We create a new keyword to describe this new ‘collective community carbon footprint measurement’ concept, and the keyword is ‘voresfodspor.’ This word exists only in Danish, but the concept is easily described in other languages. We don’t want the data and our research to be lost just because the keyword does not universally translate, so we need to tie it to a larger entity – ‘climate change,’ ‘climate measurement,’ ‘carbon measurement,’ ‘community measurement.’ Entity understanding is not perfect translation, but it is great for making sure that concepts don’t get lost or ignored, and for allowing further refinement by humans or by machine learning and AI down the road.

We know that the nature and content of languages in the world changes over time, much more quickly than the nature and number of entities (described at length in the previous article). Keying Google’s index off of a singular list of entities, in this case, based in English, makes surfacing content on the ever-growing web faster than it would be if entities had to be coded into the hierarchy of all languages individually. This is perhaps why in John Mueller’s recent AMA, John clearly said that Google wants to get away from having language and country-specific search algorithms. According to John, “For the most part, we try not to have separate algorithms per country or language. It doesn’t scale if we have to do that. It makes much more sense to spend a bit more time on making something that works across the whole of the web. That doesn’t mean that you don’t see local differences, but often that’s just a reflection of the local content which we see.”  

MarketFinder Tool is an Entity Classification Engine

In discussing Entity-First Indexing and the process by which Google may have approached it, we think it is useful to look at the tools that they have released recently, in case they can give us insights into what Google’s tech teams have been focusing on. The assumption here is that Google often seems to release cut-down versions of internal tools and technologies once they are ready to start helping marketers take advantage of the new options that Google has been focusing on in the background. The best example here is the PageSpeed Insights tool, which came out after the PageSpeed server utility became available and after the internal Google Page Speed team had spent a couple of years helping speed up Chrome and helping webmasters speed up their own web pages.

In the past couple of months, along with the many other translation- and language-oriented releases, Google has launched the MarketFinder and promoted it to their advertising and AdWords clients (big thanks to Bill Hunt, one of the most notable PPC experts in the industry, for pointing this out to me!). In this tool, you can input a URL and it will quickly tell you what advertising categories it believes are most appropriate for the URL, as you can see below in the www.Chewy.com example. From there, it will tell you which markets and languages show the most potential for marketing and advertising success in these topics, depending on whether you sell products on the site. It then gives you detailed information about each of the markets where it suggests you should advertise, including a country profile, economic profile, search and advertising information, online profile, purchase behavior and logistics for the country.

What is important to understand about the tool is that it is not telling you the value of the keyword, but the value of the keyword concept – the entity, based on the automatic categorization of the site – along with its related concepts, translated into all the relevant languages, in all the countries where people might be searching for this topic or related topics. It is ALMOST as if Google published a lite version of their ‘Entity Classification Engine’ and made it available to PPC marketers to help them find the best markets for their advertising efforts – regardless of language, currency and other ideas that are often tied to countries, currencies and languages, but are less tied to entities.

The other thing that is interesting about the tool, which could be a coincidence, or could be related to Mobile-First Indexing and Entity classification, is that it does not allow you to evaluate pages – only domains – but it evaluates domains very quickly. It is almost as if it is pulling the classification of each domain from an existing entity database – like Google already has all of the domains classified by what entities they are most closely related to. This part is still unclear, but interesting from an SEO perspective. If it is telling us exactly how a domain has been classified, we can verify that we agree with the classification, or potentially do things to try to alter the classification in future crawls.

Cloud Natural Language API Tool

The next somewhat newly released tool, and the one that much of the newest translation technology has been based on, is the Google Cloud Natural Language API, which uses natural language processing to help reveal the meaning of a text and show how Google breaks it down into different linguistic structures to understand it. According to Google, the API uses the same Machine Learning technology that Google relies on for Google Search and Google Assistant. When you visit the API documentation, you can interact with the API directly, even without a project integration, by dropping text into the text box halfway down the page. The first thing it does is classify the submitted text, based on its understanding of it, as entities! The tab is even called the ‘Entities’ tab in the tool. (Those who doubt the importance of entities probably also don’t realize how hard it must have been to develop this technology for all the languages of the world – the level of commitment to developing and honing a tool like this is quite impressive!)
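For SEOs who would rather script this check than paste text into the demo page, the same entity analysis is exposed through Google’s google-cloud-language client library. Here is a minimal sketch, assuming that library is installed and authenticated; the sample sentence is our own placeholder, not text from the tool, and exact field names can vary slightly between client versions.

    # Minimal entity-analysis sketch, roughly what the 'Entities' tab does.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()

    text = "The MobileMoxie Toolset helps marketers test mobile search results."
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    response = client.analyze_entities(document=document)
    for entity in response.entities:
        # Each entity comes back with a type (PERSON, CONSUMER_GOOD, OTHER, etc.)
        # and a salience score showing how central it is to the submitted text.
        print(entity.name, language_v1.Entity.Type(entity.type_).name, entity.salience)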

As you can see in the example below, with text taken from the MobileMoxie home page, our Toolset is somewhat correctly identified as a consumer good, though it might be better described as a ‘SaaS marketing service.’ A lot of keywords that the Cloud Natural Language API should be able to identify are classified as ‘Other,’ which might mean that it needs more context. It is also interesting that many of the words in the submission are dropped entirely and not evaluated at all. This probably means that these words are not impacting our entity classification, or at least not very much, because they did not add significant uniqueness or clarification to the text. What is interesting here is that many of these words are classic marketing terminology, so it is possible that they are only being ignored BECAUSE something in the text has already been identified as a Consumer Product.

For SEOs, this tool might be a great way to evaluate new page copy before it goes live, to determine how it might impact the evaluation and entity classification of a domain. If it turns out that a domain has been misclassified, this tool might be the best option for quick guidance about how to change on-page text for a more accurate entity classification.

NOTE: Changing the capitalization on ‘MobileMoxie Toolset’ did change that classification from ‘Consumer Product’ to ‘Other,’ but it did not change the number of words in the sentence that were evaluated; neither did removing the mention of the Toolset from the sentence altogether.

Beyond just entity classification, another way the API reveals meaning is by determining Salience and Sentiment scores for an entity. According to Google, “Salience shows the importance or centrality of an entity to the entire document text.” In this tool, salience can probably only be evaluated based on what is submitted in the text box, using a score from 0 to 1, with 0 representing low salience and 1 representing high salience. In any real algorithm, though, we are guessing that salience is measured as a relationship between multiple metrics, including the relationship to the page, to the entire domain, and possibly to the larger entity as a whole, if there is one.

Sentiment isn’t defined in the tool, but it is generally agreed to be the positivity or negativity associated with a particular concept; here, Google provides a score from -1.0, which is very negative, to 1.0, which is very positive. The magnitude of this score is described as the strength of the sentiment (probably in the context of the page, or potentially on a more granular, sentence level), regardless of the score.

The next part of the tool is a separate Sentiment Analysis section, which is a bit hard to understand because it uses new numbers and scoring, different from those used in the Entities section of the tool. There are three sets of Sentiment and Magnitude scores. They are not labeled, so it is not entirely clear why there are three or what each score is associated with. Since only one of the Entities warranted a score of anything but 0, it is hard to know where the scores of 0.3 to 0.9 are coming from, but a legend explains that -1.0 to -0.25 is red, presumably negative, -0.25 to 0.25 is yellow, presumably neutral, and 0.25 to 1.0 is green, presumably positive. Since this is different from the scoring used for Sentiment on the Entities tab, it is a bit hard to reconcile the two. Google offers more details about Sentiment Analysis values in separate documentation, but until the feedback from this tool is clearer, it will probably not be too useful for SEO.
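One way to make sense of the multiple score pairs is to call the API directly, since the raw response separates a document-level sentiment from per-sentence sentiments; it seems plausible that the unlabeled pairs in the demo map to the sentences in the submitted text. A minimal sketch, again assuming the google-cloud-language client and using invented sample sentences:

    # Sentiment sketch: one document-level score plus one score per sentence.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="The toolset is great. Setup was confusing. Support was helpful.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    response = client.analyze_sentiment(document=document)
    doc = response.document_sentiment
    # score runs from -1.0 (negative) to 1.0 (positive); magnitude reflects the
    # overall strength of emotion, regardless of direction.
    print("document:", doc.score, doc.magnitude)
    for sentence in response.sentences:
        print(sentence.text.content, sentence.sentiment.score, sentence.sentiment.magnitude)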

The next tab in this tool is very interesting – the Syntax evaluation. It breaks the sentences down and shows how Google understands each piece as a part of speech. Using this in conjunction with the information on the Entities tab will help you understand how Google believes searchers are able to interact with the entities on your site.
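The same breakdown is available programmatically through the syntax method of the API; a minimal sketch, with an invented sample sentence, assuming the same client library:

    # Syntax sketch: one token per word, tagged with its part of speech.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="MobileMoxie builds mobile SEO tools.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    response = client.analyze_syntax(document=document)
    for token in response.tokens:
        # e.g. NOUN, VERB, ADJ; the API also returns lemma and dependency info.
        print(token.text.content, language_v1.PartOfSpeech.Tag(token.part_of_speech.tag).name)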

After that is the shortest but, in my mind, most important piece of information – the Categories. This takes whatever you have put into the tool and assigns it a Category, essentially telling you where in Google’s Knowledge Graph the information that you submitted would be classified. A full list of the categories that you can be classified into can be found here: https://cloud.google.com/natural-language/docs/categories
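This classification can also be requested directly; a minimal sketch assuming the same client library (note that the classify method generally needs a reasonable amount of text – the sample copy here is invented):

    # Category sketch: returns category paths from Google's published taxonomy.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=(
            "MobileMoxie offers a toolset for mobile marketers, including mobile "
            "search result testing, page testing and other checks for SEO teams."
        ),
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    response = client.classify_text(document=document)
    for category in response.categories:
        # e.g. a path like '/Internet & Telecom/Web Services', with a confidence score
        print(category.name, category.confidence)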

Two Parts of an Entity Classification Engine

While the value of these two tools to marketers might be hard to understand, their value to Google, and what they represent, is huge. We believe that these two tools together make up parts of what made it possible for Google to switch from the old method of indexing to Entity-First Indexing. They are both essentially Entity Classification Engines that use the same core, internationally translated entity hierarchy – either to show how language and entity classification is done, in the case of the Natural Language API, or to show the financial results of entity classification for a business’s marketing plan in international markets, in the case of the MarketFinder. It is basically the upstream and downstream impact of entity classification!

How Marketers Can Start Getting Value from the Tools

The value of these new Google tools for digital marketers is still evolving but here are some steps SEOs can take to start better understanding and using them for thinking about entities in the context of their SEO efforts:

  • Make sure Google is categorizing your domain content correctly. Use the toolset to make sure that Google is classifying the most important pages on your site, like your homepage, as expected, since inaccurate classification could negatively impact your SEO. Google will struggle to display your page in the search results to the right people at the right time if Google has an incorrect and/or incomplete understanding of the page’s content. The MarketFinder tool can be used to determine how Google might be evaluating the domain as a whole, and the Cloud Natural Language API can be used to evaluate content on a page by page or tab by tab basis. If Google is classifying your site in an unexpected way, investigate which keywords on the page might be contributing to this misclassification.
  • Read Google’s Natural Language API documentation about Sentiment Analysis. As described earlier in this article, the Sentiment section in the Natural Language API is not labeled clearly, so it will likely be challenging for most SEOs to use it in its current form. Google has separate documentation with more details about Sentiment Analysis that is worth checking out because it offers a bit more context, but more clarity from Google about Sentiment would be ideal. We’ll be keeping an eye open for documentation updates from Google that may help fill in the gaps.
  • Learn as much as you can about “Entities” in the context of search. Entities can be a tough concept to understand, but we recommend keeping it top-of-mind. As Google moves into a new era that is focused much more on voice and cross-device interaction, entities will grow in importance, and it will be challenging to get the full value out of the Google tools without that foundational knowledge. Here are some great resources that will help you build that knowledge: the previous article in this series about “Entity-First Indexing,” this excellent article by Dave Davies about one of Google’s patents on entity relationships, this great patent breakdown by Bill Slawski, and Google’s official documentation about Analyzing Entities using the Google Natural Language API.
  • Understand alternate theories about Mobile-First Indexing. MobileMoxie recently published a four-part series investigating various changes in search results and other aspects of the Google ecosystem that seem related to the switch to Mobile-First Indexing, but have not been elucidated by Google. Most SEOs and Google representatives are focusing on the tactical changes and evaluations that need to be done on a website, but it is also important not to lose sight of the larger picture and Google’s longer-term goals, to understand how these changes fit into that mix. This will help you relate entities, entity search and entity indexing to your company’s larger strategy more readily.

Essential Entity Elements – Critical Requirements for Correct Classification of an Entity

Over time, Google will find new things that need to be classified as entities, or old things that need to be re-classified as different kinds of entities. SEOs will 1) need to know what type of entity they want to be classified as, and then 2) need to know what critical requirements Google has to find in order to classify something as that specific type of entity.

To do this, SEOs will need to determine what Google considers to be the essential elements for similar entities that are correctly classified and ranking well in their top relevant searches. The various types of entities that Google recognizes and highlights in Knowledge Graph panels will have unifying elements that change from entity type to entity type, but are the same for similar groups of entities or types of content. For instance, local businesses have had the same requirements for a long time, generally abbreviated as NAP – Name, Address and Phone number. This could be built out to include a logo and an image of the business. In other cases, like for a movie, most movie Knowledge Graph entries have a name, cast list, run time, age rating, release date, promo art and a video trailer. If your business is not classified as a particular kind of entity, and would like to be, then this will be an important step to take.
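To make the ‘essential elements’ idea concrete, here is a hedged sketch of what the NAP requirements for a local business look like when expressed as schema.org LocalBusiness markup, generated from Python; the business details are invented, and this only shows standard schema.org properties, not a definitive list of what Google requires.

    # Sketch: the classic NAP elements (plus a few extras) as LocalBusiness JSON-LD.
    import json

    local_business = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": "Example Coffee Roasters",                  # Name
        "address": {                                        # Address
            "@type": "PostalAddress",
            "streetAddress": "123 Main St",
            "addressLocality": "Denver",
            "addressRegion": "CO",
            "postalCode": "80202",
            "addressCountry": "US",
        },
        "telephone": "+1-303-555-0100",                     # Phone number
        "logo": "https://example.com/logo.png",             # Optional extras
        "image": "https://example.com/storefront.jpg",
    }

    # This is the JSON-LD that would go in a <script type="application/ld+json"> tag.
    print(json.dumps(local_business, indent=2))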

Conclusion

In the long run, this model could be difficult for publishers and companies that are not original content creators, but that is probably by design. Websites that use an ‘aggregation and monetization’ model, or that survive primarily on ad revenue, will struggle more; this is Google’s own model, they don’t appreciate the competition, and it also hurts their users’ experience when they search! Google wants to raise the bar for quality content and limit the impact that low-quality contributors have on the search ecosystem. By focusing more on entities, they also focus more on original, authoritative content, so this is easily a net-positive result for them. In the short term, it could even reduce the urgency around Google’s efforts to provide more safety and security for searchers and to minimize the negative impact of ads, popups, malware and other nefarious online risks.

While many SEOs, designers and developers will see moves in this direction as a huge threat, small business owners and users will probably see them as a huge benefit. Perhaps it will make the barrier to entry on the web high enough that nefarious actors will look elsewhere for spam and easy-money opportunities, and the web will become a more reliable, high-quality experience on all devices. We can only hope. In the meantime, don’t get caught up in old SEO techniques and miss what is at the top of all of your actual search results – the Knowledge Graph and entities.

This is the second article in a five part series about entities and language, and their relationship to the change to Mobile-First Indexing – what we are calling Entity-First Indexing. This article focused on the tools that Google used to classify the web, and reindex everything in an entity hierarchy. The next three articles will focus on our international research, and how the various translation APIs impact search results and Entity Understanding around the world, and how personal settings impact Google’s Entity Understanding and search results on an individual basis.

 

Mobile-First Indexing or a Whole New Google? The Local & International Impact – Article 4 of 4

 


By: Cindy Krum 

Many technologists theorize that people will look back on the time period we are living in now and describe it as a modern Industrial Revolution. Surely the massive expansion of trackable data will create a significant shift in how many aspects of business are approached, and the big tech companies – Apple, Google (Alphabet), Microsoft, Facebook and Amazon – have a huge amount of power in the marketplace because they control the flow of data. They fight to stay ahead, to one-up each other and even to disrupt their own business models with newer, more innovative options and opportunities. ‘So, what does this have to do with SEO and Mobile-First Indexing?’ you might ask – actually, a lot!

For a while now, Google has been talking about micro-moments. Google describes these as instances where searchers are making a decision about their ultimate goal in a search. According to Google, micro-moments mostly fall into four categories: “I want to know,” “I want to go,” “I want to do” and “I want to buy.” While one entity or topic can easily fit into more than one micro-moment, each of the articles in this series has focused primarily on one of them. The first article focused on information search, or “I want to know.” The second article focused on media, which fits with “I want to do,” as in, “I want to watch a video,” “I want to play a podcast,” or “I want to listen to a song.” It could also include, “I want to turn on the lights,” or “I want to cast this image to the TV.” The third article was about shopping, so it obviously fits with “I want to buy.” This last article will focus on location and maps, so it is mostly about the “I want to go” micro-moment, but it also takes a more global perspective and will discuss how language and location fit into all of the other micro-moments.


Entity Understanding & How CCTLDs Might Factor In
Let’s start with global location as a concept in search. Historically, different country versions of Google have operated with different algorithms that were updated at different times. As far as we know, the US algorithm was always the first to receive new updates, and other countries’ algorithms would be updated later, as Google was able to modify each update to fit the local and linguistic needs of each country. Google would geo-detect a user’s location, either by IP block or using other localization signals, and then often redirect the user to the local country version of Google that fit their specific location. As they got more sophisticated, they would geo-locate a person based on the GPS location of their phone to determine which Google ccTLD was most appropriate, and pass that information off to all other logged-in devices. This all changed a few months ago when Google announced that it would now serve the same algorithmic results from every ccTLD of Google, and that the results would instead vary based simply on the location of the searcher – so now, theoretically, starting a search on Google.nl is the same as starting a search at Google.com or Google.mx. This may indicate a substantial milestone completion for Mobile-First Indexing and the entity understanding of the web.

It makes a lot of sense for Google to leverage the GPS location of a phone in a Mobile-First world, so this should be no surprise. But there are actually a lot of places where language settings can be changed in Google properties, so the increased reliance on these settings as algorithmic variables could complicate the predictability of SEO – something that is fine with Google as long as it also creates good results for searchers. As you can see in the search settings and troubleshooting guide below, Google is getting hints about languages from both Chrome and Android settings, which are not necessarily always in sync.

If a search is performed ‘Incognito,’ or the primary phone on the Google Account is turned off, logged out, or otherwise not giving a good location signal, Google reverts to the IP address, as shown below. This may seem inconsequential, but it is definitely not.

The easiest way to understand the long-term potential impact of this change is within Google’s Image Search. Image search has historically been quite disconnected from true user intent, because it focused on the keyword a user searched for, in the specific language that they searched in – not the actual goal of the search, which was, of course, to find a picture of something. In reality, if someone is searching for an image of a ‘blue chair,’ they are searching for a concept, not a keyword. The language of the text surrounding the image is inconsequential. Any picture of a ‘blue chair’ would be a good result, regardless of the language that the keyword ‘blue chair’ is written in. A search for ‘blauwe stoel’ (Dutch), ‘blauer stuhl’ (German), or ‘青い椅子’ (Japanese) should all return basically the same image results for a blue chair, but historically, with the keyword-relevance algorithms, they did not. Now, with an index based on entity understanding, along with Google’s translation APIs (already available for many languages), the intent of the search and the related entity understanding will become much more critical than the un-translated keyword relevance.

As expressed in the previous article in this series, most image search results appear to already be derived from the Mobile-First Index. This change to entity understanding is easiest to see with images, because the goal of an image search is visual, and thus, not focused on language or location at all. In a web result, the language matters much more, because the intent would more likely include opening a web page and reading text. The same is true of a voice search, unless we assume that translation APIs would be used to live translate any results for a voice output. This seems unlikely in the short term, especially for long web pages, but potentially much more likely for search results with limited text, such as a Google Action in the near-term.


The Impact Schema & JSON-LD Have on Mobile-First Indexing
Whenever entity understanding comes up, especially in SEO circles, it ultimately leads you to a conversation about the importance of Schema markup and JSON-LD. These are coded methods of describing the content of a website, app, database, etc., that are easy for search engines to parse and understand. Historically, this type of markup has been used to help companies describe local addresses for maps and provide basic information about recipes, so that Google can show images, calories and cooking times directly in a search result. Schema also helps product pages display star rankings and other great things like that directly in SERPs.

What many SEOs might not realize is that Schema and JSON-LD are English-based: even when they are used internationally, where the content is written in a different language, the markup code is still in English. This makes international entity understanding easier for Google, because it converts the basic information for categorizing entities on websites around the world into English. This is HUGE in terms of potential impacts on international SEO, but also expeditious for Google and entity understanding as a whole.
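A quick hedged illustration of this point: in the JSON-LD below, the page content and values are Dutch, but every schema.org type and property name stays in English, which is what gives Google a language-independent handle on the entity. The recipe itself is invented for the example.

    # Dutch content, English schema.org vocabulary.
    import json

    dutch_recipe = {
        "@context": "https://schema.org",
        "@type": "Recipe",                      # English type name
        "name": "Appeltaart van oma",           # Dutch value
        "inLanguage": "nl",
        "recipeCuisine": "Nederlands",
        "cookTime": "PT60M",                    # ISO 8601 duration, language-neutral
        "recipeIngredient": ["appels", "bloem", "boter", "suiker", "kaneel"],
    }

    print(json.dumps(dutch_recipe, ensure_ascii=False, indent=2))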

Beyond using Schema and JSON-LD as the Rosetta Stone of the web, English-based entity understanding has a secondary implication. Google’s most successful search algorithm has always been in English, so when all content is linked together with a unified ‘schema’ for entity understanding, it is easier to base algorithmic search on the US/English version of the algorithm. While the effort to get this all set up and working properly is very large at the beginning, in the long run this move could save Google a lot of time and make surfacing search results faster, especially as the content of the web continues to grow at (near) exponential rates. It should get them out of maintaining different versions of the search algorithm for each language and country combination, so in the long run it will most likely become even more important for webmasters to mark up their content with the appropriate Schema when the content is not natively in English.


Entity Understanding & Dynamic Links in Google Maps
Beyond just Image Search results, entity understanding currently seems to be more prevalent for Google outlets that address entertainment-oriented queries in Google, Google Play and Google Now, as discussed in the second article in this series, but we are also seeing strong examples of entity understanding in Google Shopping and Google Express as well as Google Maps. For all of these topics, language is less relevant or potentially even a hindrance for surfacing what a user wants. For instance, if you are in the Netherlands, searching Google Maps for a ‘grocery store’ in English, your intent and final destination should be the same as if you searched for the Dutch version of the keyword (markt/supermarkt). In fact, using entity understanding in map search solves a huge problem for travelers, because needing to translate a keyword like this into a local language before searching for it in Google (or Google Maps) is a slow and error-prone process.

You can already see the beginning of entity understanding in some versions of Google Maps now, with buttons that represent the most common entity searches that Google would like to understand and surface: Restaurant, Grocery Store, Pharmacy, Cafe, Gas Station and ATM, shown below. Notice in the image on the right that synonyms for the entity understanding of the query are listed with each location that is surfaced, boxed in red. This does not happen in every map query for every entity – it appears to happen only where the entity understanding is strong enough that Google has hit a certain confidence threshold. At this point, ‘supermarket’ is the only entity that is consistently showing the entity keywords with the specific locations in Map search results. Google seems to be using their AI to build out the entity understanding in Maps over time, so sometimes you can click one of the standard buttons, like ‘Pharmacy,’ for an entity concept that Google is still learning, and see that the keywords are NOT included, possibly indicating that the confidence in the result or the entity understanding is not as strong.

This is important, because some concepts, like ‘pharmacies,’ are harder to define, or don’t translate as readily across borders, as illustrated with the Amsterdam and Denver example searches below. In the US, pharmacies like Walgreens, CVS and Duane Reade are all-purpose stores where you can pick up prescription drugs but also many other things, including snacks, toiletries and makeup; in many other countries, including the Netherlands, where one of the example screenshots was taken, pharmacies are much more limited, focusing only on prescription drugs. Google may be trying to disambiguate the query intent, deciding whether an American would be just as happy with the Dutch equivalent of a ‘corner store,’ ‘convenience store’ or ‘bodega,’ even though they clicked on the ‘Pharmacy’ button. What is interesting here is that the English search from Denver does not appear to have entity understanding either, even in the US. This indicates that Google is insisting on multilingual entity understanding in all cases, including in English, where it has the greatest native understanding already, before entities can achieve the same keyword-inclusion level of confidence that grocery stores are currently getting.

Restaurants and tourist destinations are more universal, so Google’s AI is generally more robust for these types of locations (though there is sometimes confusion with supermarkets that have eat-in delis). You can see in the image above, that in a Maps browse screen, Google is not only highlighting different types of location-groupings especially for restaurants, but is showing the time of day and weather at the top of the screen, which also impacts the results that are being suggested. It is noon, so we are being shown lunch restaurants, and the weather is cold, but not raining, so outdoor activities are ok, but not preferred. Most likely, these recommendations are based on simple logic like this, as well as crowd-sourced data about what other people have historically chosen to do on days like this.

The inclusion of a dynamic (shareable) link associated with the map itself (the three-dot, triangular share icons), with each of the entity results in the map, and with each of the specific locations suggested in the map should give us a pretty clear idea of how this part of the Mobile-First Index is organized. There are entities, they are grouped, and other entities live within those entities. The entities are more or less relevant to different contexts, like time of day and weather, and they don’t require unique domains to be surfaced. The dynamic links allow them to be indexed, surfaced and shared without necessarily needing a website or traditional links. This concept will be critical moving forward into the future of Mobile-First Indexing.


Local Search, Shopping, Inventory Control & Just in Time Delivery
Rob Thomas, a leading software analyst from IBM, says that “The rise of the Data Era, coupled with software and connected device sprawl, creates an opportunity for some companies to outperform others. Those who figure out how to apply this advantage will drive unprecedented wealth creation and comprise the new S&P 500.” His prediction is that there will be no more ‘tech companies’ per se, but that all companies will be tech companies by default, and that tech and data will be deeply integrated into the fabric of every company. Benedict Evans, a well-known thought leader in the technology space, agrees, saying simply that “It is easier for software to enter other industries than for other industries to hire software people.” I agree, and think that there is a big risk that the largest tech companies will invade more industries, empire-building with conquests in the form of M&A in offline sectors; we have seen this already with Amazon’s purchase of the Whole Foods grocery chain. But it could be more complicated than that; it seems likely that Google sees itself as a potential middle-man and will position itself to help businesses harness the data and stave off their own wholesale takeover by tech companies in offline industries.

Cross-device, multi-sensor data will be revolutionary in many ways, and in the long run it will allow Google to directly index offline goods, further bridging the gap between online and offline information. This will be powerful and revolutionary in its own right. Many online retailers have tried to launch websites tapped into local store inventory systems, to allow shoppers to find items online and reserve them for pickup in the store rather than paying for shipping. Target, Best Buy and DSW have all tried, but most encounter significant struggles because current inventory control systems are so bad. Google could easily fix this problem with cheap data sensors. It is possible that Google’s next move will be towards better Just-In-Time (JIT) inventory control systems that help businesses know the reality of their inventory, rather than simply knowing rough estimates that are only updated every 24 hours, as they do now.

Google may also be hedging its long-term bets on their ability to manage fleets of driverless cars, which they have been working on for many years. This would allow them to pull together information from product and inventory-oriented sensors, as well as information about maps, traffic and orders to seamlessly execute Just-in-Time deliveries for all kinds of stores, with maximal efficiency and minimal incremental cost. The new expectation will be set, where potential customers can order something that they are interested in, have it delivered in 24 hours, and return it in the next 24 hours if it does not meet their expectations, all using the internet, without leaving their house. No more waiting in line or trying things on in store dressing rooms.

Similarly, grocery delivery apps have been around for a number of years, but none of the stores made it easy enough to casually add things to the list or place orders. Being able to update a weekly grocery list using just your voice may change all that. When voice search is combined with AI, cloud data and mapping or potentially even driverless cars for Just-in-Time delivery services, we get something that Google would really be interested in. Looking into the future, if Google’s ultimate goal is to use sensors to help companies index offline inventory for Just-in-Time delivery, potentially using fleets of Google’s driverless cars, the long-term result of Mobile-First Indexing could ultimately help smaller, more local businesses, empowering small-scale retailers to compete more directly with the large, enterprise e-commerce sites. The model might help people easily get food that is locally grown, or organic and in-season, delivered on a regular basis without the markup that is associated with most retail stores; small start-ups could spend the money they would have invested in the physical store on inventory planning and order management systems. Shopping may resume a local focus, leveraging the internet to compete with the larger, global retailers and this would fit well with the direction of society, the demands of Millennials, and Google’s goals to reach the next Million Users. 


Is this A Whole New Google?
With all of this consolidation at Google, and the movement towards Mobile-First Indexing, what should we expect next – is this a whole new Google? Well, in short, yes – I think so. The next thing to come will be Fuchsia – Google’s new cross-device, OS-browser combo, and one of the most important rumors that no one in the SEO community is really talking about. The rumor that Google Chrome could merge with the Android OS and launch a web-OS has been around for a while, and it most certainly would fit well with Mobile-First Indexing. At first it was unclear if Fuchsia was just another mobile OS, but according to Kyle Bradshaw, author of the new Fuchsia Friday series at 9to5Google, “Now that we’ve seen it up and running on a Pixelbook, it seems more likely Fuchsia could eventually supplant both Android and Chrome OS.”

Google has wanted a unified experience that pulled together the web browser, app stores and operating system for a long time. Google’s first try at unifying the browser and OS was with Chromebooks, which ran the Chrome operating system and allowed users to access and download software from the Chrome Web Store. The Chrome Web Store offered Chrome plugins and apps that function somewhat similarly to a PWA, leveraging the browser code for the core functionality – plugins using the normal Chrome layout, and apps that could install minimal software but also re-style the browser display to suit the needs of the app. (It is rumored that much of the team working on PWAs at Google came from the old Chrome Web Store team. My guess is that they pioneered the concept for PWAs there. In my mind, it is still easiest to understand a PWA as a browser plugin that gets to re-style the browser window.) It is possible that the next major handset launch will happen at Google I/O 2018 and showcase Fuchsia. Perhaps it is telling that the branding for the event seems to depict a wind or current map – like ‘The Winds of Change’? (Though I think if they were really going to go for it, they would have done it all in bright pink, or fuchsia.)

Unsurprisingly, Fuchsia focuses heavily on entity understanding. From his review of the Fuchsia documentation, Kyle suggests that “Entities are created and shared in JSON, a format which is designed to be human-readable and is nearly universal with parsing available in most modern programming languages. We also briefly learned last week, that Ledger [individualized Google device software that enables quick task and device switching for one user] is also designed to handle JSON objects well. This is certainly no coincidence. Ledger will almost certainly directly keep track of entities, among its other duties ….  Improved Dynamic Entity Extraction: The new entity extractor adds support for Microdata and listens for mutation events so that entities can be re-scraped. When the entities in the page change an event is triggered and the web view updates the context store’s list of entities.” This all fits extremely well with how we have described Mobile-First Indexing’s heavy reliance on JSON markup and other methods of entity understanding for dynamic presentation of data and ongoing AI.

Additionally, as another indication of the good fit with Mobile-First Indexing, Kyle explains that Fuchsia leans heavily on Android Instant Apps. “Android Instant Apps allow users to try out new apps before committing to install. To do this though, developers have to create a specially-made and stripped-down modular app using the Instant Apps SDK. Where Fuchsia differs however, is that there is seemingly no distinction between an installed app and an “Instant” version. Whether you install it manually or run instantly from a suggestion, the app is the same… The most important thing though, is that these processes will be completely transparent. It looks like Google is building Fuchsia so that when users know what they want to do next, Fuchsia will be happy to accommodate. You won’t have to worry about whether or not you have the app you want installed, saving you from filling your devices with apps you “might need someday.” If Fuchsia is a success, this could be a monumental change for SEO, and real progress for entity understanding and Google’s AI. Traditional ASO will fall into the past and be replaced with context-aware surfacing of Instant Apps.

From a more pragmatic and immediate SEO perspective, Fuchsia is still important. Remember that Google’s ‘Speed’ update is set to launch ‘this July,’ and the update to fix the problematic cached AMP URLs is slated to launch ‘sometime in the middle of the Summer,’ so both could be timed to coincide with a more full-throated launch of Mobile-First Indexing at Google I/O. Beyond that, we have already seen movements in the Chrome browser announcements that fit well with Mobile-First Indexing, and with a unified browser-OS like what is being tested with Fuchsia: Chrome will not have a visible address bar, which is important if it is launching content without URLs and for PWAs launching full-screen; Chrome will focus more on offline caching (important if it is running as both OS and browser); Chrome will auto-adjust for zooming in DevTools (if they can do it in DevTools, they can do it elsewhere, and this is an important feature for any OS-browser combo that might work on a non-standard device display, like TV and car displays); and Chrome will focus much more on HTTPS, which is very important if it is running as both OS and browser.


Conclusion
In the same way that ‘tech’ may no longer be as relevant or descriptive a classification for any company, ‘the internet’ may no longer be as relevant a concept for where people spend their time, money and attention. It will just be a necessary part of the equation that fades into the background and becomes a ‘given’ or a ‘constant.’ Access to high-tech data processing and analysis on the internet will become analogous to having access to a unique skill, material or machine that was needed in the previous century. This goes a long way toward breaking down barriers, and from the perspective of SEO, the most important barrier being broken down is a linguistic one. Mobile-First Indexing and entity understanding, along with translation APIs, are allowing Google to index the world based on a single set of ideas, which will speed up the organizing and surfacing of information and help expedite the management of algorithms. It will also make international AI and machine learning for voice search more robust and meaningful much faster, with more data feeding the same system, instead of having the systems segmented by country or language.

Technology has driven us to a new ‘on-demand’ economy, but the newest, most innovative opportunity might actually be the one that is closest to home. On-demand goods and services that are organized and ordered on the internet, but then appear nearly immediately, might be the next big thing. The concept of tech companies using ubiquitous cross-device internet, data and sensors to radically change non-tech industries and the concept of a Mobile-First Index could go hand-in-hand, especially if indexing of offline products and inventory becomes a reality. When searchers seek out these goods and services, their end goal is not the website, but the good or service.

The prospect of Fuchsia could be huge, both for Mobile-First Indexing and for AI and the Internet of Things. If it is a success, it will fundamentally change the job of marketers and SEOs, hopefully for the better. Unfortunately, marketers have gotten so used to marketing on the website or in the physical store that they can’t imagine their jobs without those options, but Mobile-First Indexing could help make this part of the new reality. This is the fourth and final article in a series about Mobile-First Indexing. It explained how the entity understanding described in the previous articles, which focused on information searches, media searches and shopping searches, comes together globally, and how that will change SEO in the long run. Finally, it also covered how this change to Mobile-First Indexing may indicate, at least from an SEO perspective, that soon we really may be dealing with a whole new Google.

 

Other Articles in this Series:
Is Mobile-First the Same as Voice-First? (Article 1 of 4)
How Media & PWAs Fit Into the Larger Picture at Google (Article 2 of 4)
How Shopping Might Factor Into the Larger Picture (Article 3 of 4)
The Local & International Impact (Article 4 of 4)

Mobile-First Indexing or a Whole New Google? How Shopping Might Factor Into the Larger Picture – Article 3 of 4

By: Cindy Krum

E-commerce has always been a cornerstone of SEO. Sellers need to rank well for keywords that are related to the products that they sell so that searchers can find the products and buy them. The switch to Mobile-First Indexing and entity understanding for search could be a huge opportunity for retailers, or it could be a threat – especially if the voice-search and ordering component is limited or optimized for a certain set of retailers, or those who are willing to cut Google into the action. Voice-only online payment and offline delivery add a bit of complexity to the mix, especially if less-durable items like groceries are being delivered on demand. SEOs will need to prepare themselves for potential changes in e-commerce, and this could have a lot to do with voice search and Mobile-First Indexing.

The previous two articles in this series focused on major consolidations that are happening across a number of the Google brands that all seem related to Mobile-First Indexing. These new innovations facilitate frictionless movement of information and experiences from one device to another, regardless of the differing capabilities of the various devices. The first article focused on Mobile-First Indexing in a basic information retrieval context and the second focused on the media and entertainment context. This is the third article in the series, and it will tie Mobile-First Indexing to current and potential future changes with Google’s online payment options, Google Wallet and Android pay, as well as changes in Google Shopping and Google Express. It will also discuss the challenges and opportunities that e-commerce sites might face as Google fights to reclaim its market share in online shopping and protect its ad revenue from Apple, Amazon and other potential threats. The fourth and final article in the series will outline the geographic implications of Mobile-First Indexing, both international and local, especially as they occur in Google maps. Then it will speculate more about how the preponderance of evidence in these four articles strongly suggests that we may soon be dealing with a whole new Google, and what it will mean for SEO.


Consolidation in Mobile Payment

At Google I/O this year, Google representatives were very enthusiastic about a new one-click, cross-device registration, authentication, and payment that will be available soon, to make PWA shopping experiences much better. According to the speakers, the same technology that Google uses to make cross-device media consumption seamless will also be used to make any kind of cross-device authentication secure and seamless. Google’s new PWA-enabled cross-device payment system will work from the web, so will be easily accessible and secure on any device with a browser including iOS devices.

Historically, Google Wallet has been associated with Google at large and included by default in Google Chrome. Android Pay, on the other hand, has always been associated with the Android mobile operating system, as the default payment management system on the phone. As you can see on the right, it can hold multiple credit cards and even includes options for mobile carrier billing (T-Mobile) and PayPal. From what we can tell, the main thing that made both systems necessary was Google’s desire to enable payment on iOS devices. These two options could continue to be maintained as separate systems, or they could be combined, aggregating all the users that have signed up with the different services into one.

With one unified payment system, it will be easier for Google to integrate voice ‘buy’ commands into Google Home and Google Assistant for all of the cross-device shopping experiences in and outside of the Google ecosystem. With the increased focus on Shopping, and media, the ability to use voice commands to execute purchases without users needing to touch a device will be an especially big deal. The result of this strategy will be a seamless, frictionless shopping experience that is secure and deeply integrated into all of Google’s offerings.

When payment systems are deeply integrated and frictionless across all devices, it removes one of the biggest hassles for online shopping and makes people more likely to buy. It will go a long way to helping combat Google’s loss of users to Amazon Video, Netflix, Hulu, iTunes and other media outlets, as discussed for the relationship to Mobile-First Indexing in the previous article. It will also help protect Google from losing more mobile and desktop shopping market share to Amazon Prime, Amazon Fresh, Whole Foods and other future e-commerce competitors. Google believes that they can match their competitors in terms of their inventory and pricing – especially for digital goods, but eventually, they also hope to match Amazon Prime with cheap and immediate shipping of physical goods. This will be especially viable in the future if Google is able to use driverless cars for the automated pick-up and delivery of goods. This concept is discussed more, as it relates to Google maps, in the next article in this series.


Mobile & Voice Search Technology for Commerce

Google’s Mobile-First Indexing will enable users to interact in a voice-only way to find the information that they need about products and services without a web search – at least the way we conceive of web search today. Either your brand will be strong enough in the mind of the consumer that they will ask for it by name, or your product and its specifications will be the best fit for the needs of the consumer. Brands that can demonstrate that they are the best fit for the needs of the consumer will win the business through (hopefully) honest and unbiased direct comparison, based on features and specifications that are comparable because they are well marked up in Schema. The internet has already led to decreasing viability for over-saturated offline retail stores, as shown in the graph below, and Mobile-First Indexing may further decrease their importance, especially in industries that are more low-risk, repeat-purchase and voice-oriented, like groceries.

Online shopping is changing the offline shopping world in pretty significant ways. The massive growth and adoption of Amazon Prime have removed many of the barriers that kept people from buying things online. Similarly, regular and variable subscription services like Amazon Subscribe & Save, StitchFix, ShoeDazzle, Ipsy, BirchBox, BarkBox, and the like are changing how people prefer to shop, as is on-demand product delivery enabled by sharing-economy apps like Postmates and TaskRabbit, both of which can be used for food or small-purchase product delivery. The ability to casually shop, or even narrow down potential shopping options, using your voice could be a game-changer for all of the online shopping business models in the near future, as adoption grows. Voice-only ordering, like what is already available and growing in adoption with the Amazon Alexa integration, allows people to make quick, low-risk purchases with minimal hassle, but could also allow people to do some simple research and filtering for more complex decisions.

Understanding how voice search will be used for shopping, and how to optimize content to work well in that context, is a huge topic, so we moved the details into a separate post, linked here. The essence of it is that people will search by store, by brand, and by features. The likelihood that the search will start with one or the other is based on the price, risk, and regularity with which the user purchases the product. Success in optimizing for voice search will be based on an SEO’s ability to express and manage inventory-level information about the products, to help them surface in store-specific searches, capitalize on and reinforce brand loyalty, and filter appropriately for advanced searches based on the features of the product.

Ultimately, voice search should make simple purchases easier and add confidence and satisfaction to more complex purchases. When the stakes of a purchase are higher, voice filtering and comparison could help users more accurately compare and evaluate a larger variety of options and information, actually giving them the opportunity to ‘talk through’ the complicated purchases and consider even more products and points of comparison. An automated system that passes no judgment about your questions, priorities, requirements, or how long you take to make a decision could even improve the shopping experience and the level of satisfaction that some users feel with their purchase.

 

Changes to Google Shopping & Google Express

Google has two shopping portals: Google Shopping, which has been around for many years, and Google Express, which is relatively new. Google Shopping allows users to search and compare product offerings from many online retailers who have agreed to share part of the revenue from each purchase with Google. Google Express is similar, but in Google Express, product availability is determined by your zip code, and more quick-shipping options are available. It does not appear that Google is building huge AI-driven warehouses like Amazon in order to facilitate this quick shipping; instead, they seem to be delivering items directly from retail stores or small warehouses. If Google Express is eventually able to further decrease its delivery time, it could be a huge boon for local supermarkets that get on board. Offline retailers (grocery stores and others) could use Google Express as a replacement for individual store-run and app-run delivery services.

This is relevant for Mobile-First Indexing because Google Home and Google Assistant’s voice-controlled shopping list is already integrated with Google Express, making it much easier for people to place one-time or recurring orders for products that they need using only their voice. This makes Google Express at least somewhat competitive with Amazon Alexa’s voice ordering capabilities, so Google might be able to take back some of the market share for product-oriented searches from Amazon. It is important to note that the Shopping List was originally part of Google Keep, which is already integrated with Google Docs, so there appears to be a more holistic, base-level integration happening there.

To this end, I expect we might see a merging of Google Shopping with Google Express in the next two years, to shift power away from Amazon Prime. Google may even be willing to take a financial hit or break even to accomplish this task, just as Amazon did in the early days of its existence and of the launch of Amazon Prime, to build momentum and buy-in for the brand. NOTE: We don’t work much with PLAs or Google Shopping, so this is all just based on external observations.

To make this idea really profitable, Google will have to isolate a new advantage that consumers want. Google Express does offer free shipping, but the total value of your order from each different store has to meet variable thresholds for each, as you can see in the image below. This might be fine for big shoppers, or shoppers who really just want easy online ordering and quick delivery from companies who are not known for those services, and most of the stores in the list are not. The upside is that Google Express will store and enable information for each of the stores’ loyalty programs, so that you can still accumulate loyalty points, now without leaving the house.

If you already have logins and account details for a site, beyond a simple loyalty card, it will store and manage those as well. When you add this loyalty and account management system to what we suspect is happening with the Google payment utilities, there appears to be an even more powerful consolidation opportunity on the horizon. Once the two payment utilities are merged, the resulting product can be rolled into the Google Express payment system to manage all payment options as well as loyalty accounts and logins, making the long-term prospect for automated and voice-only ordering from Google Express that much more appealing. Loyalty points are good, but voice-only, same-day delivery and automated repeat delivery and billing, like Amazon Subscribe & Save offers are great.

Companies originally seemed slow to onboard with Google Express, but there appears to have been a recent surge in the number of major department stores that are integrating with it. The current list of brands includes Target, Walmart, Overstock.com, Wayfair, Whole Foods, Costco, Kohl’s, Fry’s, The Home Depot, Walgreens, Bed Bath & Beyond, Ulta Beauty, Guitar Center, Hayneedle, JOANN, Payless Shoes, ACE Hardware, Pier 1, Sur La Table, Toys’R’Us, REI, The Vitamin Shoppe and more. As you can see in the list on the right, taken from Google Shopping, most of the top participants in Google Shopping are already part of Google Express. Some stores, like Pier 1, Sur La Table, Fry’s, Costco, Whole Foods, Ulta Beauty, REI and Payless Shoes, appear to be part of Google Express but do not appear to be part of Google Shopping; still, there is definitely a large amount of crossover, and this may be further corroboration of an impending merger.

A potential consolidation of Google Express and Google Shopping may seem tangential for SEO, because technically these are both sponsored search options, not organic search. While this is true, I believe that it will still be important for SEOs to focus here, or at least be aware of it. Voice ordering may be compelling enough for consumers that they change their search behavior to only shop with utilities that facilitate it, like Amazon and Google. This is exactly what Google wants, because they get revenue from each transaction.

In the future, most shopping with Google may be ‘Sponsored’ to one degree or another, which could be further altered to help get them out of future antitrust problems in the EU, where they are paying a €2.42 billion (~$2.73BN) fine. In that case, it was proved that the inclusion of products in Google Shopping made it harder for organic products to surface, so if Google could make all product inclusion the result of submission, and not rank products organically at all, that would be a win-win for Google. It is hard to know exactly how this change might happen, but it seems likely (or at least possible) that products which are not available in Google Express or Google Shopping will become even harder to find, especially in a voice search context. Items that cannot be easily surfaced or compared in voice search would suffer. Changes like this may force us to redefine the distinction between paid search engine marketing (SEM) and organic search optimization (SEO), as the base requirements for success in SEM will begin to look more and more like skills that have historically been associated with SEO.


Addition of Product Information in Google Images

Moving out of Sponsored product optimization and into a more pure-play organic optimization discussion about e-commerce and Mobile-First Indexing, we have to start with Google Images. It is quite possible that Google Image Search is already coming from the Mobile-First Index and has been for about the past 6 months. The images all appear to have been re-indexed, based on schema and entity understanding, rather than being indexed exclusively on the text content of the page where the image is from. With this new indexing, images are identified with special symbols when they are GIFs, still images that represent videos, news, recipes, and products, as shown below.

The change was most noticeable when product images got marked up with a little icon indicating that they were a product, and all the vital stats for the product got lifted into the image search results’ ‘details’ information, as shown below. This allows images to be filtered based on color, price, size, seller and other shopping-oriented criteria, which is also shown below, though unfortunately the sorting feature seems to have been disabled for now.
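For reference, the kind of markup that could supply those ‘details’ is standard schema.org Product/Offer data; below is a hedged sketch with an invented product – Google has not spelled out exactly which fields feed the Image Search filters, so this only shows the obvious candidates (color, price, seller).

    # Sketch of Product markup with the attributes that show up as image 'details.'
    import json

    product = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Blue Velvet Armchair",
        "image": "https://example.com/blue-chair.jpg",
        "color": "Blue",
        "offers": {
            "@type": "Offer",
            "price": "249.00",
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock",
            "seller": {"@type": "Organization", "name": "Example Furniture Co."},
        },
    }

    print(json.dumps(product, indent=2))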

This kind of image tagging shows that Google is not drawing a hard line between image results, and shopping results, news results, and any other image content. It seems like the entity understanding of the new Mobile-First Index allows these concepts to ‘sit together’ despite coming from different types of sources, which is nice because it adds utility and fits with how many people already shop online; finding items that they are interested in, then searching Google Images to see examples of how bloggers have styled an item of clothing, how the furniture or fixture looks in various rooms, or how the sporting gear looks after a real workout.

This presentation of image search results is also great for comparison shopping, allowing competitors like eBay to filter into the consideration set, where they might not previously have been.


INFO: Historically, Google Shopping has used image recognition to populate ‘Related Items,’ and has been doing so for a while. I wrote about this phenomenon over two years ago (Nov 2015): When searchers click on an image of a garment on a mannequin, all of the ‘Related Images’ are on mannequins. Searchers who click on an image of an item on a human model are only presented ‘Related Images’ on human models. This is still true. (So having product images that look a lot like higher-end competitor product images is still a great way to show up in the Related Items results of Google Shopping.)


Google began adding entity-style filters to the top of Google Images about 18 months ago, and these may have been the beginning of Google’s entity understanding for images. Before that, Google Shopping and Google Images had both been using their image matching algorithms to recommend similar items, as described in the info box to the right, but this new image indexing seems much more driven by Schema and entity understanding, as discussed in Article 1, rather than image recognition. Image recognition and matching is still very important to Google, but it has to fit into their new entity understanding of the web. Google is even asking users to help with AI image recognition and categorization training tasks, using a crowdsourcing app that asks users questions about images, to help Google categorize images more accurately and improve its entity understanding of them.

The other indication that Google Images is already using the Mobile-First Index is the Dynamic ‘Share’ Links that are now included with all the images. Remember, Google uses these links when they ingest an entire database of information without requiring that URLs are available to associate with each concept, idea or entity. The share links are visible with the new save button and vertical triple-dot ‘more’ button with each image, but these may become more prominent over time. Currently, most of the images in Google’s Image search results do appear to have URLs, but this may not always be the case, especially if Google begins ingesting entire product catalogs directly from a database that is marked up with JSON-LD, instead of crawling e-commerce websites.

Even though Google Images is pulling in information from e-commerce sites, it is not pulling in Google Shopping results, aside from the Sponsored carousel at the top of the page. The images always appear to link to their original seller, so unlike with Google Shopping and Google Express, Google is not getting a cut of any of the transactions in Google Images.

Since Image Search results are organized with entity understanding, another interesting phenomenon related to image search is starting to pop up. The ‘People Also Searched For’ boxes that were just recently added to Google desktop results are now also appearing in image search results, in the image grid, as if they were an image, but each of the options is clickable and will allow users to filter down to those related images. Examples are shown below:

These boxes could be hard to understand at first, but in a cross-device context, they make plenty of sense. Image searches will obviously struggle in a voice-only/eyes-free context, but they could be very useful in a voice-assisted scenario, such as a person who wants to search images on a voice-enabled TV. Adding a multiple-choice option for drilling down into the images will probably improve that user experience significantly, and it also helps the voice recognition AI, because it prompts the user to respond with related options that the system is already expecting, much like a multiple-choice question in a voice-only experience.


Changes to Google Search Console & Google Analytics

Last but not least is tracking. Tracking is incredibly important for e-commerce SEO but may get much more complicated after Mobile-First Indexing. By now, everyone in the industry is aware that there is a new version of Google Search Console available, but there are some things that we might be able to glean from this about the Mobile-First Indexing update. The New Search Console has access to data that the Old Search Console could not access; it is not just a re-skin of the same data. It seems to have been launched separately, on a different subdomain, in order to keep the information about the two indexes from overlapping and creating duplication or confusion. Google may also have used statistics from the New Search Console to compare with the Old Search Console, until they achieved some threshold of similarity in reporting, to know when the rankings and behavior in the Mobile-First Index were starting to match up with the rankings and behavior of the traditional Desktop-First Index.


NOTE: At the Friends of Search conference last week, Gary Illyes did not deny a relationship between the new Search Console and the Mobile-First Index, and said that there was some relationship.


The main signal that indicates the relationship between Mobile-First Indexing and the New Search Console is the section that measures, …wait for it… indexing! Since Mobile-First Indexing is new and will work for content that does not require a static web URL, which can’t be tracked by Google Analytics, Google needs a tool to report on the new non-URL’d content; hence the Indexing Report in the New Search Console. Most likely, webmasters are only seeing URLs in their accounts because they only have websites set up to track in Google Search Console. It seems likely that once Mobile-First Indexing is fully launched, Google will start providing more information about how other assets like native apps, Instant Apps, single-URL PWAs, databases and other content are doing in terms of Mobile-First Indexing.

Conclusion

E-Commerce is an important part of the internet economy, but changes spurred by Google’s transition to Mobile-First Indexing could change how shoppers find what they are looking for online, and how SEOs manage and optimize their online sales. The potential for voice-only shopping and ordering of products could be a huge opportunity for companies that are able to move quickly and adapt to the changing landscape in this space. Capitalizing on existing systems from Google, or finding your own way to ensure secure payment and quick delivery, will be critical to competing for shoppers’ business.

The first article in this series focused on how Mobile-First Indexing might change basic information retrieval, and the second article focused on surfacing and serving media and entertainment across different devices with voice search. This article tied Mobile-First Indexing to current and potential future changes with Google’s online payment options, Google Wallet and Android Pay, and also discussed the potential impact of Google Shopping and Google Express. The fourth and final article in the series will outline the geographic implications of Mobile-First Indexing. It will focus more on Google Maps, but discuss local and international implications for Mobile-First Indexing. It will speculate more about how all the changes discussed in this article series indicate that we may soon be dealing with a whole new Google.

Other Articles in this Series:
Is Mobile-First the Same as Voice-First? (Article 1 of 4)
How Media & PWAs Fit Into the Larger Picture at Google (Article 2 of 4)
How Shopping Might Factor Into the Larger Picture (Article 3 of 4)
The Local & International Impact (Article 4 of 4)

Mobile-First Indexing or a Whole New Google? How Media & PWAs Fit Into the Larger Picture at Google – Article 2 of 4

By: Cindy Krum

In the past couple of years, the internet has seen massive growth in the streaming and downloading of high-quality digital media. This is especially true for legitimately licensed (not pirated), legal, on-demand content including music, podcasts, audiobooks, video, animation and games. There is a lot of money to be made in this part of the Internet, and the potential is only growing, as people continue to abandon static media, cut cables and transition to online subscription services and on-demand options instead. According to a study by SensorTower last year, U.S. consumer spending in the top 10 subscription video-on-demand (SVOD) apps surged 77% year-over-year to $781 million on iOS and Android. The final quarter of the year was even stronger, with an 88% jump to $242 million for the same group.

With that, Google has a significant vested interest in securing its position as a digital media provider during this transition. The new Internet of Things (IoT) and Connected Home technologies increase the stakes and the potential gains for Google immensely. Google Home and Google Assistant are already used heavily for media consumption. Often, they’re combined with Chromecast to make the mobile device the most capable TV remote available on the market. Now with Google Assistant, the phone becomes the TV remote; one that can even leverage voice commands, much like less-capable but competitive remotes from the cable companies.

The phone, of course, can do much more than cable remotes. Lots of big brands are already jumping on the Google Assistant bandwagon. Dish and Qualcomm technology are both getting Google Assistant (Dish already works with Alexa), and you will soon also be able to get Sony headphones and Bose noise-canceling headphones with Google Assistant built in as well. There are also rumors that Google has begun a renewed round of efforts with Google Glass, which would presumably function using Google Assistant.

With all the consumer demand for media on so many different devices, it will be important for Google to help users find the exact content that they are looking for, or find something new, and get it to the right device quickly and easily. It will be important for users to be able to do this seamlessly, even if they are transferring between devices with different levels of connectivity or processing power, or no visual interface at all – even if they are in the middle of a streaming session. Consumer demand for media across different devices is creating unique technological challenges, and Google needs to be proactive about establishing a framework in the search engine for these evolving demands.

This article series dives into how Mobile-First Indexing, Google Actions, and PWAs may all be part of Google’s framework. The previous article in this series talked about Mobile-First Indexing and how it was probably related to Google Home and Google Assistant. This article will take that further, and put Mobile-First Indexing into a media-search and consumption context. It will talk about the pressures in the marketplace that Google is trying to address and how they are trying to address them using PWAs and other similar technology, even when in a ‘voice-only’ or ‘eyes-free’ context. The next article in this series will speculate about Mobile-First Indexing in an e-commerce, shopping and payment context. The final article in the series will give information about how Mobile-First Indexing may impact local and international searches in maps and Google at large.

All in all, my belief is that Google’s shift to the Mobile-First Index is meant to lay the groundwork for many more significant and strategic changes in the future. Most of these changes are already in the works, and they will be based on leveraging the Mobile-First Index, AI and the Internet of Things (IoT). The Mobile-First Index and the search technology that uses it are just what Google needs to create a constant stream of data to feed the many new AI-driven technologies that are all currently being developed.

More Details about the Fundamentals of Mobile-First Indexing

-Mobile-First Index Basics

One of the most misunderstood points about the Mobile-First Index is that Google intends it to be the primary index – for both mobile and desktop searches. It is called ‘mobile-first’ because the crawler is evaluating the content as a mobile phone rather than a desktop computer, so content that would not be linked or visible for a mobile phone might not get included in this index. The idea is that Google wants us to consider the content in a ‘mobile-first’ setting, but not necessarily a ‘mobile-only’ setting.

In terms of algorithms and ranking, Google has confirmed, at least in a pagespeed context, that results will be device-specific – faster pages will rank better on mobile, but the speed guidelines will be less strict on desktop – as illustrated in the Twitter conversation with John Mueller from Google shown to the right. This lends itself well to the theories presented in the previous article in the series, which suggested that while the index may stay the same across the many different devices and search options, the algorithms will likely adapt to the device, context and personalized needs of the searcher.

-The Importance of Collaborative Use & AI in the Mobile-First Index

Google seems to be banking on the idea that users will use their devices in a collaborative way, as a singular, personal unit, and as a larger societal unit, providing feedback to the search engine in order to refine the information they receive. There are hints that in a voice-only search context, websites might not be present at all, replaced only by rich answers, Featured Snippets, and other Mobile-First, ‘eyes-free’ style content, with the web interaction saved for later in the Google Home or Google Assistant app. This is how Danny Sullivan suggested a user might get more information from a Featured Snippet that was conveyed by voice, but still accessible later, archived in the app. It may also be that in some cases Google will try to read out web results in ‘eyes-free’ search experiences. This could be the incentive for the increased length of HTML description tags in search results that Dr. Pete observed in his research. In edge cases where a voice search needs to surface a web result without a screen, Google could read the title tag and description tag to the user, and then just save the URL in the app for later viewing, or potentially cast it to a screen, to be viewed immediately.

The more devices that are enabled with Google Assistant, the better coverage and penetration Google will eventually have with the new Mobile-First Index. High-dollar purchases like cars, TVs and stereo systems that are enabled with Google Assistant from the beginning lay the foundation for the Google Assistant software to be available whenever the user is ready to integrate or interact with the technology. This is even true if the Google Assistant capabilities that are built into these many devices are not leveraged right away, but instead lie dormant until Google is really ready to start pushing for more engagement, or until the user chooses to engage on their own. At that point, Google can always remotely update the hardware with the latest software, and encourage engagement through marketing on all the other connected channels. The fact that these kinds of devices are generally kept for longer than a typical mobile phone makes the integration that much more valuable for Google.


What are PWAs & How do they Solve the Deep Linking & App Indexing Problem for Mobile-First Indexing?

Before we dive deep into media SEO, it is important to get a basic understanding of PWAs. Put simply, a PWA is a website that looks and behaves like an app. It can do this primarily because of two files that are hosted on the domain with the website code:

  • App Manifest: An ‘App Manifest’ holds the app icon, start screen and basic app-shell styling information.
  • ServiceWorker: The ‘ServiceWorker’ is more complicated than the App Manifest. Basically, it helps the website look like an app by saving local copies of critical assets, to make the website experience feel faster and more seamless on the phone – in other words, more like an app. With a ServiceWorker, once important content has been loaded on the phone the first time, it does not have to be downloaded again. It can even allow the app to work offline; then, when the web connection is restored, the ServiceWorker syncs all the assets in the background, to make sure everything on the server and the user-facing side of the app is up to date. (A minimal sketch of both files follows this list.)
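A minimal sketch of both files, assuming a hypothetical PWA (the file names, icon paths and colors here are illustrative, not requirements):

  // manifest.json – the App Manifest, referenced from the page with
  // <link rel="manifest" href="/manifest.json">. Shown here as a JavaScript object:
  const manifest = {
    "name": "Example Store",
    "short_name": "Example",
    "start_url": "/?source=pwa",
    "display": "standalone",
    "background_color": "#ffffff",
    "theme_color": "#0a84ff",
    "icons": [{ "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }]
  };

  // In the page's JavaScript, the ServiceWorker file (sw.js) is registered once per origin:
  if ("serviceWorker" in navigator) {
    navigator.serviceWorker
      .register("/sw.js")
      .then(() => console.log("ServiceWorker registered"))
      .catch((err) => console.error("ServiceWorker registration failed", err));
  }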

To get a better understanding of how PWAs fit into Google’s larger plans, you have to understand that one of the biggest struggles Google has faced has been crawling and indexing native app content. It has been tough with Android apps and nearly impossible with iOS apps, because they are built in a different code base than Android apps. To crawl and index native apps, Google has always had to use APIs to help translate and verify the content, but it has never been an effective or efficient solution.

Instead of actually understanding the app content, Google has always ended up relying heavily on associating app content with web content, which can be more easily understood. This association process made it easier for Google to understand what was going on in the native app so that it could be indexed, but it also meant that anything in the native app also needed to exist on the web to be indexed by Google. It also meant that native app content had to be actively and intentionally mapped back to web content, which sometimes proved quite difficult.

In large companies, with large websites and apps, the iOS app team, Android app team and web teams often work as independent bodies, with minimal overlap. Not only do they not know how the other systems work, they often don’t even know what content is available in the other systems or where it would be found. Most large companies – especially those whose websites operate on a different CMS than their apps – still have not been able to achieve full deep linking or app indexing, even after significant effort. PWAs diminish the need for deep linking and app indexing in the short term, and potentially eliminate the need for many native apps in the long run, hopefully making the deep complexity that deep linking caused for some companies simply go away.

Just like most software can now be converted to web-ware, most native apps can now be converted to PWAs. This makes the creation, maintenance and distribution of ‘app-like’ content much cheaper for everyone. Once a website has an app manifest and a ServiceWorker, it is technically a PWA. From there, it is a matter of adding app-like behaviors and designs to the website and this can be done in small changes as things are updated over time.

The main thing when creating a PWA is to think like an app developer and leverage the server, JavaScript and APIs more than regular web code to accomplish your normal tasks. PWAs use APIs to accomplish many of the complex and data-heavy tasks that previously only native apps could accomplish, including heavy transfer of video and animation, use of accelerometers and GPS, and integration with OS-level utilities like the phone, camera and speakers. Since they provide an app-like experience without requiring a large download, Google has also found that users prefer PWAs to websites. This shows up in longer time spent on site, higher conversion numbers and more engagement.
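As a small illustration of that API-first approach (these are standard web APIs, but the handlers and logging here are just placeholders), a PWA can reach GPS and accelerometer data directly from the browser, with feature detection in place because not every device or browser exposes every API:

  // GPS through the Geolocation API:
  if ("geolocation" in navigator) {
    navigator.geolocation.getCurrentPosition(
      (position) => console.log("GPS fix:", position.coords.latitude, position.coords.longitude),
      (error) => console.warn("Location unavailable:", error.message)
    );
  }

  // Accelerometer readings, delivered as device-motion events:
  if ("DeviceMotionEvent" in window) {
    window.addEventListener("devicemotion", (event) => {
      console.log("Acceleration:", event.acceleration);
    });
  }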

Android Instant Apps (previously Google Streaming Apps) have not gotten much attention, but were an interesting attempt at allowing app indexing for app-only content without requiring web parity. These were launched at about the same time as something Google called Dynamic Links (and also at about the same time that Google launched Google Now-on-Tap, which suggests a potential connection with AI, entity understanding and the contextual awareness of the phone – as discussed in the last article). Basically, Android Instant Apps were originally designed to be a way to have an app be indexed without a website.

Similarly, Google Dynamic Links were a way to link to and share app content without having to build a website. They were essentially temporary short-links that included a link-disambiguation engine. If an app had iOS and Android versions, those could both be stored in the Dynamic Link and shared. Since the Instant App lived in ‘chunks’ on the web, the Dynamic Link would take people to the deep content in the native app, and people without the app would get a web-view that showed just the small portion of the native app that was deep-linked to. The idea was to achieve cross-device sharing without requiring web parity, but it never really caught on. The process of making a native Android app ready to be ‘chunked’ into Action-oriented Feature Modules is still very hard, and still requires labor-intensive mapping between iOS and Android apps, even though the iOS app can never be updated into an Instant App.

What is interesting though, is that Google now appears to be using Dynamic Links all over the place in their content – especially with things like Knowledge Graph content that have no URL at all. It is almost like the reverse process of creating an Android Instant App, and it seems to be much easier. It appears that Google is simply ingesting databases or data sets that are marked up with JSON-LD – they published the instructions on how to do this about 8 months ago – and allowing these to index on their own, with no URLs and using styling instructions that come directly from the browser. At least, it appears possible with Google-owned assets like Knowledge Graph and similar database style assets.

With this concept of the technology, when a database is properly marked up and modified, it becomes an Instant App – not only because of how fast the app experience is without extraneous code, but because, with minimal additional code or development, the database itself becomes the app, nearly instantly.

NOTE: This description of how “Instant Apps” might theoretically be created in the future sounds very similar to Google’s theoretical description of PWAs. According to Google, “PWAs are websites that progressively become apps.”

This kind of indexing is especially great for media databases, because there is no crawling involved. That makes it very fast, and items are surfaced in a search based on their relationships to other items, which is exactly how most media, like music, TV, movies, books and art, is consumed. As long as the media can be easily surfaced and played, that is what matters in Mobile-First Indexing. As discussed in the first article of this series, it seems that URLs are not required for the Mobile-First Index because it leans heavily on databases for content. In fact, URLs may be one of the least efficient ways that Google finds and indexes content. If a user wants to share this non-URL’d content, they can click on the triangular ‘share’ icon to create a Dynamic Link for it. Then, the asset is uploaded to a server and a temporary link is created and copied to the clipboard. It is clear that this is happening on the fly, because it can take time if a lot of content is included in the ‘share.’

While it is not as fast or exciting, Google has been encouraging large media sites to mark up their content with JSON-LD ‘watch’ and ‘play’ action schema for more than a year. This markup was originally designed to be included in the body of the HTML or in the head tag of the HTML. However, it seems more and more likely that Google will begin suggesting that developers build external JSON-LD files that are linked from the head tag in the future. Currently this option is in ‘Partner Only’ mode for websites, so you have to be approved, and then you will be given more comprehensive instructions; for native Android apps to leverage this same option, they use the Voice Interaction API. This separation of files reinforces the idea that Mobile-First Indexing is more about avoiding burdensome crawling in favor of ingesting content whenever Google can.
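To illustrate the general idea (this is a hypothetical sketch using public schema.org vocabulary, not the partner-only feed specification mentioned above), ‘watch’ action markup pairs a media entity with a potentialAction that tells Google where the title can be played:

  // Hypothetical WatchAction markup for a made-up title, expressed as a JavaScript object.
  // On a site, it would be serialized into a JSON-LD script block or an external file.
  const watchActionJsonLd = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "@id": "https://www.example.com/titles/example-documentary",
    "name": "Example Documentary",
    "potentialAction": {
      "@type": "WatchAction",
      "target": {
        "@type": "EntryPoint",
        "urlTemplate": "https://www.example.com/watch/example-documentary",
        "actionPlatform": [
          "http://schema.org/DesktopWebPlatform",
          "http://schema.org/MobileWebPlatform",
          "http://schema.org/AndroidPlatform"
        ]
      }
    }
  };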

What Makes PWAs & Instant Apps Great for Cross-Device Media?

PWA’s create a more seamless experience across the different devices by ensuring simple, web-based transitions, that don’t require software, app or OS-specific programming. PWA’s use ServiceWorkers to help modulate what data is stored to the phone, and what data is stored to the server. That functionality is also what makes them so great for media.

PWA’s also are able to store app-states in the cloud, including the location of ‘stop’ and ‘start’ in a media file, so that the user can pick up where they left off in an PWA-media interaction, even if it is on a different device. Google Home and Google Assistant have continued to build out the software with new voice command capabilities that you can use to control media. For instance, now you can use Google Home to start and stop shows in Netflix, query TV schedules, set reminders for TV shows or associate specific artists, songs or playlists with alarms, so these kinds of integrations may be important as the Google Assistant becomes more able to interact with media from websites and especially PWAs..

PWA’s require less data transfer and less storage space, and are built to be device independent, so they are great for devices that were not built with lots of storage and data transfer in mind, like web-enabled TV’s, Android Auto devices, ChromeBooks, watches and lots of other devices. Since PWAs are hosted on the web and require Responsive Design, any device with a browser or OS that supports ServiceWorkers will do. With that minimum requirement met, PWAs can work on the largest group of devices possible, regardless of OS, including low-cost, web-enabled dummy-terminals created by Google and other manufacturers as well. Let’s dig in to those devices here:

Windows & Other Desktop OS: Since PWAs are just websites with extra capabilities, all desktop browsers can display them. Many desktop browsers are starting to support more features of PWAs – especially push notifications – and some may also begin including some support for ServiceWorkers. Remember, the Windows App Store was the first store to list PWAs alongside regular OS apps, and it uses the same app store for desktop and mobile. The integration of PWA functionality on desktop as well as mobile is an important step toward making PWAs the standard for development, and toward making them more efficient and effective than device-specific options like native apps or non-responsive websites. Once all of the major tech players are able to effectively support PWAs in mobile, desktop and tablet scenarios, it will be easier and more likely that they will begin pushing them for use on less traditional devices like web-connected car systems, game systems and especially TVs, where web display and interactivity have historically been problematic and unwieldy.

Apple Desktop & Mobile OS: Apple was a long hold-out on PWAs. Most assumed that Apple’s previous resistance to supporting PWAs was likely due to their desire to maintain mobile market share for their products and services in the App Store, and to protect against Google’s encroachment, since Google has always thrived on the web and Apple has always done best in their ‘walled garden.’ Now Apple has announced support for ServiceWorkers in iOS 11.3 and macOS 10.13.4 (although top names in the space like Maximiliano Firtman have already found some technical issues with PWAs on iOS that still need to be ironed out).

Apple also recently made a surprisingly bold move by suggesting that PWAs were a better option for developers than using app templating software to build low-quality native apps. Since Apple’s revenue has always focused more on iTunes and the App Store, their sudden support of PWAs likely means that Apple has figured out how it can monetize PWAs. It also likely indicates that PWAs will be accessed through iTunes, the App Store or some aspect of Apple search (Siri, Safari, Spotlight). Recent announcements even indicate that Apple could let you run iOS apps on your Mac soon – a further indication of convergence across multiple ecosystems – so this change seems likely to further broaden the appeal of PWAs as well.

Low-Cost, Web-Enabled Smart Displays: The last important concept here is Google Smart Displays, which allow Google Assistant and Google Home to cast video content. This includes all different types of screens, including TVs and computer screens, but it also seems to include Android Auto screens and ‘Smart Displays’ designed exclusively for this purpose. Different from TVs that might be associated with a variety of other components, dummy screens might only be used for receiving cast information from other devices. It seems likely that these ‘Smart Displays’ will actually be more like dummy terminals that have minimal processing power and which lean heavily on Google Assistant and the cloud for most or nearly all of their functionality. This might be an interesting middle ground where, even though there is a visual interface, there is no capability to download apps or even PWAs, and the screens are more like projectors that can never be altered or personalized.

How do PWAs Fit with Google Actions, Media & Mobile-First Indexing?

As described in the previous article in this series, the Google Assistant and the Google Now utility currently pull information from a variety of sources that include News Feeds, Knowledge Graph, Google Actions and website results. Until a search is submitted, the screen focuses on information cards that it determines you might find useful based on your established preferences and habits. This includes information about the weather in your location, sports scores, news and entertainment information. When a voice-oriented search is submitted, search results focus mostly on similar results that include Google-owned content like news, weather and sports scores, but also might include Featured Rich Snippets, Answer Boxes or a Knowledge Graph. When a more traditional query is submitted, it is more likely to generate a list of website results with blue links. The Schema markup that Google has been pushing websites to add is equally apparent here, but not more-so than it might be in a regular desktop or mobile search result.

Google has begun turning many of the stock Android apps into PWAs. This includes Contacts, Photos, News, Maps and YouTube. This gives a couple of good examples of what PWAs can do, but it also gives Android users the added benefit of allowing account-specific information to sync to the cloud across multiple devices. There are also some more niche, non-stock Google apps like Google+, Google My Business and the AMP Playground that are available as PWAs, probably for similar reasons.

We can probably expect Google to continue spinning off entity-specific parts of the Knowledge Graph into their own PWAs as the markup is finalized and the assets become available.

There are also some PWAs with app manifests, but no unique web URLs to link to, including: Weather, Sports, Dining, Drive, GooglePlay TV and Traffic. The process for adding the Google Dining App/PWA to an Android phone is illustrated below. It is possible that these are examples of Google Instant Apps or something else that Google has yet to document for public consumption. The idea that these are PWAs is based on how the app download is triggered, and how quickly the download is complete (adding only an app manifest and ServiceWorker rather than an entire app takes only about one full second). There are also some Google search results that have special formatting and PWA-like qualities in the search results, but for which there are no URLs or app manifests for a PWA, including: Events, Calculator, Knowledge Graph. These apps may leverage ServiceWorkers and app manifests that are part of the Chrome app itself to enable this behavior without a separate download.

The voice-only capability of a Google Action is a bar that most apps and websites can’t clear yet, but this could quickly change for PWAs, making them a great potential option for Mobile-First Indexing. Google has already announced that they will begin repurposing existing web content – especially recipes, podcasts and articles – into Actions for Google Assistant. (h/t to Aaron Bradley (@aaranged) for calling this out on Google+ and Twitter.) ServiceWorkers force PWAs to do a very good job of separating text content from the rest of the code for caching, so to generate a voice-only interaction, Google might be able to read the text of a PWA directly from the ServiceWorker cache. This might even be made easier with the announcement that Chrome will be automatically lazy-loading images, preventing them from slowing down access to text content that could be conveyed through voice. In the long run, the same may also be true of AMP’d content and Instant Apps, but this is a bit harder to envision.


PWAs & Instant Apps in Google Play

Google has already indicated that they will include PWA’s and Android Instant Apps in the Google Play Store, so this means that in both web and app search results, they will be expected to compete side by side. While this is not an overt merger (yet), it is a clear change of direction to allow web content to cross into their app store, where the web was previously forbidden. PWA’s also use ServiceWorkers to manage caching of the most important content on the site, and this is likely how Google will crawl and index this content – saving considerable time over traditional crawling. In this, PWA’s have another significant advantage over both native apps and websites, in that they might be much faster and easier for Google to crawl and index, because it is all done with the ServiceWorker API.

Google Play’s recent focus on evaluating and ranking apps based on technical factors like crash rate and errors, as well as the recent massive purge of 700,000 low-quality apps from the store, is probably an important signal of the preparation for changes here. Adding PWAs and Instant Apps to Google Play is a signal not only that Google Play is capable of listing web content in the store, but that it is capable of comparing the web content with app content, and determining an appropriate ranking of all the relevant options. It is an indication that anyone in the App Search Optimization (ASO) space should expect some significant changes soon, but it is also likely an indication that Google is prepared to monetize all of these different offerings simply, through a unified system.

The Mixing of Google’s Media Assets

In terms of media assets, Google owns YouTube, which is the largest media search engine in the world, and the second largest search engine in the world in terms of volume. It also has Google Play, which offers TV, movies, books, audiobooks, games, music, podcasts and news. Both have been coming out with a number of new subscription options and announcements, which makes it seem like this type of asset is explicitly intended to be part of the long-term picture for Mobile-First Indexing and Google’s movement into IoT, along with casting, playlists and lists of favorites. It seems that both might soon be capable of being used as DVR and set-top-box replacements. You can see in the progression below that Google is really ready to monetize digital media, so Mobile-First Indexing will be the critical part of surfacing it quickly and easily.

It seems like there may be some major consolidation brewing between YouTube and Google Play. Google Play has historically been targeted at Android users, but as described above, it can now take web content in the form of PWAs. Potentially, this is true for the entire store, and not just the ‘app’ aspect of the store. YouTube, on the other hand, has always been more web-focused, reaching both Android and iOS users, but is now encouraging people into a subscription service – which was something previously only seen in Google Play. Consider the merger of YouTube Red and Google Play Music – YouTube Red is Google’s video subscription service for cable-cutters, and Google Play Music is where Google’s streaming and music library solution lives.

The real benefit that Google has over its main competition (Apple, Bing and Amazon) is search. It is what they have always been good at, but as any seasoned developer knows, legacy can be a mixed blessing. The motivation to create a new Mobile-First Index may have come from their desire to leverage their biggest advantage over their biggest rivals – Apple and Amazon have their own indexes, which are focused heavily on media, but Google has the rest of the internet, and that will be quite powerful. Google also needs to compete with companies like Netflix and Hulu, which have their own subscriber bases and repositories of content, but need partnerships because they don’t have the voice-oriented or sophisticated search technology.
 

As the two platforms become nearly interoperable, it becomes more likely that they will consolidate to build the power of the brand and reduce the friction in cross-device interactions. There is also suspicious duplication and cross-linking of the Google Play and Google Play Movies & TV apps, as shown on the right. The Google Play Movies & TV app appears to use some Instant App or PWA capabilities, though it is not clear which – it seems that the transition may be only partially complete. Regardless, consolidation of media assets is an important way that Google will protect and grow its revenue and market share, to pull customers away from strong competitors like Netflix, Hulu, Pandora and Amazon Video/Music.

For years, Google Play video purchase records have been cross-populated directly with YouTube. Before the MyAccounts disambiguation system was put in place YouTube seemed to host Google’s account management arbitration system, and this is important because YouTube was not just focused on the Android audience like Google Play was – it has always been one of the stock apps on iOS devices. This helped get iOS users into Google’s account funnel (if they didn’t already have a Gmail account or some other Google account set up.)

YouTube and Google Play also both seem to be enabling future in-roads for further social connections in the platforms. While rating, commenting and sharing have always been part of both systems, they both now allow Android users to search their Google contacts to find friends and to share content to other users, in any other apps on the phone, using Dynamic Links. It is possible that once YouTube and Google Play are merged into one, they could also bring Google+ into the fold, in another (possibly ill-fated) attempt to launch a viable social network. This probably wouldn’t be successful, but as the level of media that is shared in social networks continues to increase, it remains firmly in the realm of possibilities.

Conclusion

It feels like we are in the midst of a land-grab for voice-oriented technologies that allow consumers to interact directly with the internet. This is an exciting time to be in search – but potentially scary if you are not ready for change. Many SEOs have settled into the idea that their job will always be about optimizing website content around keywords, URLs and links so that it can be found. While those strategies may work for a bit longer, it seems like in the near future, those types of search results may be relegated to a very small portion of the total number of results that are served to users. As more and more devices go online, and users can search using only their voice, the types of results that they want will change dramatically. Queries will become more action-oriented and conversational. It may be that only a small set of research-style circumstances will make a blue-link web result the appropriate type of result.

The Mobile-First Index will be the index of all content, but I believe that different algorithms will rank the content differently, based on the queries, but also based on the user intent, circumstances, and history. Some devices may have a strong understanding of a user’s tastes and preferences, and will be able to personalize recommendations based on entity relationships of content in their index. Databases and feeds of content – especially media content – will be preferred over HTML, because they can be easily ingested through APIs rather than crawled. The existing structure of a database, along with JSON-LD markup, can be used to build on entity understanding for the content, rather than creating it from scratch in the index. With the intense competition for users’ attention in the media space, this kind of capability and agility will be heavily in Google’s favor.

Mobile-First Indexing is a change that is laying the foundation for much greater changes at Google, and SEOs who are not prepared for the change could be left in the dust. Where our old jobs focused on optimizing websites, HTML tags and content, our new jobs might focus much more heavily on optimizing JSON-LD markup in databases and feeds, and training AI. The first article in this series focused on the importance of voice for Mobile-First Indexing, and this article focused on how Mobile-First Indexing could change our interaction with media in the near future. The next article in this series will focus on e-commerce, and how Mobile-First Indexing may change how we shop and pay for physical goods, and the final article in this series will focus on the impact that Mobile-First Indexing is expected to have on local and international SEO.

Other Articles in this Series:
Is Mobile-First the Same as Voice-First? (Article 1 of 4)
How Media & PWAs Fit Into the Larger Picture at Google (Article 2 of 4)
How Shopping Might Factor Into the Larger Picture (Article 3 of 4)
The Local & International Impact (Article 4 of 4)

Enterprise-Level International ASO in iTunes App Store & Google Play (2/4)


By: Kathryn Hillis & Cindy Krum 

As with all App Search Optimization (ASO), International ASO is an evolving discipline. Neither the Apple App Store nor the Google Play Store has shared the nuances of their internal app store search algorithms, so a lot of what we know is based on experimentation, trial and error, and case studies. The complexity of International ASO is compounded by the fact that international apps can be available in many languages and many countries; they can also be available in ‘Paid’ and ‘Free’ versions and/or in ‘Lite’ and ‘Full’ editions. Each version of the app will need locally focused ASO assets and a process that must be maintained semi-individually, especially when the content and KPIs for the various versions of the app change.

This is the second article in a four-part series about International ASO. The first article described the strategic elements to consider when determining which countries to target with an ASO strategy. This article will detail the major differences in how the iTunes App Store and the Google Play Store approach the various country and language combinations, and how those differences can impact your ASO strategy and rankings. The next article will explain how to perform international ASO keyword research and then upload and verify the metadata correctly in the stores. The final article in the series will outline how to track your success and how to use the nuances of translation and localization to tweak and test minor keyword variations to drive ASO rankings improvements.

Understanding the International Architecture of the App Stores

If you are just getting started, NationsOnline.org is a great resource for finding out which languages are spoken in countries around the world. Some countries have a population that speaks many languages, and some languages are spoken by people in many countries, which makes the job of organizing app stores quite difficult. That being said, there are a few fundamental things both the iTunes App Store and Google Play have in common:

  • First, both iTunes and Google Play allow you to control which countries your app is sold in, and won’t rank apps for countries that they are not intentionally published in. One app can support many languages within the app, and one app can be supported by multiple languages of metadata (though this is more restricted for iTunes than Google Play).
  • Second, both stores let you deploy your app metadata in any of the potential metadata languages listed within iTunes Connect or Google Play Console, although the stores distribute and utilize metadata differently, which we’ll discuss in-depth in this article.
  • Third, both stores attempt to tailor their search experience to the user based on the device settings. They each try to match the query exactly, and while they may pull in apps from other countries that have metadata that matches your query (as long as the app is permitted to show in the searcher’s country), the stores will not translate the query to rank other apps in different languages.
  • Fourth, though it is not the focus of this article, ASO’s can always submit different app builds to different countries. Different app builds will have different app IDs and will not see any strong benefit from other apps by the same brand (though both stores do offer a small benefit when apps come from strong developer accounts with good reputations.) The ‘separate build’ strategy is especially useful if the app content, design or functionality changes significantly from one place to another. The most common example of this might be a ‘lite’ version of the app that is made available in places where cellular data connectivity is less reliable, and the reliance on web access needs to be minimized.
  • Finally, to address the complexity of country and language combinations for users, app stores adapt the store experience on every phone to match the country and language settings of the phone’s operating system (where they can). Most phones require this to be set up in the activation process, so this creates the best and most reliable search experience for the users. If you are hand-testing ASO search results, it is important that you make these changes in your phone settings before you begin testing. Problems only come into play here if the phone OS is set to a language that is not supported by the store, in which case both stores seem to fall back to English app results; iOS usually defaults first to UK-English, and Google Play usually defaults to US-English, but we have seen both included when no store-supported OS language is detectable from the phone.

Unfortunately, this is where the similarities end. There are many differences between the two stores. The Apple App Store and Google Play Store handle countries and languages differently from each other, which becomes particularly clear when doing International ASO. One of the most important differences between the two stores is their international architecture and the impact it has in each store. The iTunes App Store is segmented by country or territory, and then by languages within that territory. The metadata languages that can help you in terms of ASO are limited by country/language combinations, due to Apple’s restrictions within this segmentation.

In the Google Play Store, on the other hand, any metadata language can rank in any country your app has been launched in. It is segmented first by language and then sometimes by dialects for particular countries or territories. This granularity is helpful for targeting local markets, since different dialects of the same language are common in different regions. Google expects users to self-select through language preferences on their device. This fundamental difference in segmentation between stores can have a significant impact on strategy and prioritization:

Language-First (Google Play): In Google Play, apps are submitted and sorted by language first, then ASOs can limit which countries the app is sold in. Sometimes specific languages or dialects are associated with specific countries, and the submitter chooses whether to publish in those dialects/countries or not in the ‘Pricing & Distribution’ settings. There are specific languages or dialects that are used by most people in particular countries or territories (like English [US] is used by most people in the United States). Therefore, users based in the United States with their phone set to French (for example) will see French metadata in Google Play for apps that have provided French metadata, but they will only be shown apps that are published in the US.

Google will automatically translate the app title and short description that you submit to the store into a number of different languages, so that most people can read about an app, even if the app itself is not available in the language that they are reading. Android app metadata can be added in any of the languages listed below; the languages that are auto-translated are shown with the ‘hreflang’ value that the Google Play store uses in its rel=alternate tags to link to these variations, and languages listed without an hreflang value are not auto-translated.

Google Play Language Support

Afrikaans – af (hreflang=”af”)
Amharic – am
Arabic – ar (hreflang="ar")
Armenian – hy-AM (hreflang="am")
Azerbaijani – az-AZ
Basque – eu-ES
Belarusian – be (hreflang=”be”)
Bengali – bn-BD
Bulgarian – bg (hreflang=”bg”)
Burmese – my-MM
Catalan – ca (hreflang=”ca”)
Chinese (Hong Kong) – zh-HK (hreflang="zh_HK")
Chinese (Simplified) – zh-CN (hreflang="zh_CN")
Chinese (Traditional) – zh-TW (hreflang="zh_TW")
Croatian – hr (hreflang="hr")
Czech – cs-CZ (hreflang=”cs”)
Danish – da-DK (hreflang=”da”)
Dutch – nl-NL (hreflang="nl")
English – en-AU
English – en-CA
English – en-IN
English – en-SG
English (United Kingdom) – en-GB (hreflang="en_GB")
English (United States) – en-US (hreflang=”en”)
Estonian – et (hreflang=”et”)
Filipino – fil (hreflang=”fil”)
Finnish – fi-FI (hreflang="fi")
French – fr-FR (hreflang=”fr”)
French (Canada) – fr-CA (hreflang="fr_CA")
Galician – gl-ES
Georgian – ka-GE
German – de-DE (hreflang=”de”)
Greek – el-GR (hreflang="el")
Hebrew – iw-IL
Hindi – hi-IN (hreflang="hi")
Hungarian – hu-HU (hreflang="hu")
Icelandic – is-IS
Indonesian – id (hreflang="in")
Italian – it-IT (hreflang=”it”)
Japanese – ja-JP (hreflang="ja")
Kannada – kn-IN
Khmer – km-KH
Korean (South Korea) – ko-KR (hreflang="ko")
Kyrgyz – ky-KG
Lao – lo-LA
Latvian – lv (hreflang=”lv”)
Lithuanian – lt (hreflang="lt")
Macedonian – mk-MK
Malay – ms (hreflang=”ms”)
Malayalam – ml-IN
Marathi – mr-IN
Mongolian – mn-MN
Nepali – ne-NP
Norwegian – no-NO (hreflang="no")
Persian – fa
Polish – pl-PL (hreflang="pl")
Portuguese (Brazil) – pt-BR (hreflang="pt_BR")
Portuguese (Portugal) – pt-PT (hreflang="pt_PT")
Romanian – ro (hreflang=”ro”)
Romansh – rm
Russian – ru-RU (hreflang=”ru”)
Serbian – sr (hreflang="sr")
Sinhala – si-LK
Slovak – sk (hreflang="sk")
Slovenian – sl (hreflang="sl")
Spanish (Latin America) – es-419 (hreflang="es_419")
Spanish (Spain) – es-ES (hreflang="es")
Spanish (United States) – es-US
Swahili – sw (hreflang="sw")
Swedish – sv-SE (hreflang="sv")
Tamil – ta-IN
Telugu – te-IN
Thai – th (hreflang="th")
Turkish – tr-TR (hreflang="tr")
Ukrainian – uk (hreflang="uk")
Vietnamese – vi (hreflang="vi")
Zulu – zu (hreflang="zu")

Relying on the auto-translation feature is not a strong ASO strategy because it is primarily based on Google Translate results, which may not choose the most optimized or ideal keywords for ASO search and discovery. The auto-translate feature also ignores the Long Description entirely.  If you prefer not to use the auto-translate results from Google, app metadata translations can also be submitted, along with translated videos and screenshots for inclusion in the store.

From an ASO perspective, the best option is to do localized keyword research for each metadata language you need, and work with your own translators to incorporate that research when writing your own local versions of the metadata for each language or localization. Then you can add that metadata individually in the Google Play Console, by going to ‘Manage Translation/Add Your Own Translation Text’ to change the metadata for each specific language. If you are struggling to find translators, Google offers a fee-based service for translation of the Long Descriptions. This fee-based service is just for translation, so you will not get the full benefits of strategic, ASO-focused localization.

You can see Google Play’s Language-First orientation play out in the URLs of the online version of the Google Play Store. The ‘hl=’ designation in the URL shows the app metadata language. These mostly follow standard ISO web language abbreviations, but when there is a specific localization available, they can change, as outlined in the table below. Some languages are designated ‘default’ languages (marked ‘Default’ in the table below), and this means that they are available in multiple locations without further localization. Other languages, like Portuguese for Brazil and Portuguese for Portugal, are only a default setting in one country, and have no broader options that could cover other countries.

Google Play Web URL/App URI – Language + Localization

https://play.google.com/store/apps/details?id=com.weather.Weather&hl=en – English - Default
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=en_GB – English - UK
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=es – Spanish - Spain
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=es_419 – Spanish - LATAM
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=fil – Filipino
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=fr – French - Default
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=fr_CA – French - Canada
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=pt_PT – Portuguese - Portugal
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=pt_BR – Portuguese - Brazil
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=be – Belarusian
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=zh_HK – Chinese - Hong Kong
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=zh_CN – Chinese - Mainland China
https://play.google.com/store/apps/details?id=com.weather.Weather&hl=zh_TW – Chinese - Taiwan
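If you need to spot-check your own listings across localizations, it is easy to generate these URLs programmatically. The snippet below is a minimal sketch in JavaScript; the package name is hypothetical, and the locale list is just a sample you would swap for the ‘hl=’ values that matter to your app:

  const packageId = "com.example.app"; // hypothetical app ID – replace with your own
  const locales = ["en", "en_GB", "es", "es_419", "fr", "fr_CA", "pt_PT", "pt_BR", "zh_HK", "zh_CN", "zh_TW"];

  const listingUrls = locales.map(
    (hl) => `https://play.google.com/store/apps/details?id=${packageId}&hl=${hl}`
  );

  // Open each URL in a browser (or fetch it) and confirm that the title and
  // descriptions shown match the metadata you submitted for that language.
  listingUrls.forEach((url) => console.log(url));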

Country-First (iOS App Store): In iTunes, you can submit one app to multiple countries, with multiple app content and metadata translations. When submitting the app, you can supply any metadata localizations you would like, but Apple restricts which localized metadata will appear in store listings based on the App Store country, so only some country and language combinations will actually display the localized metadata in the App Store or help your keyword rankings. (We discuss these limitations in more detail later in this article, in the “Regional & Default Languages” section.)

A full list of countries that are supported in the App Store is available online, and this is a great place to start if you are trying to decide where in the world you would like to launch your iOS app. It is also useful for finding the country abbreviations that the App Store uses in their URL structure, if you are trying to get more information about their support, or what to expect for the country or region. iTunes will populate the metadata for each country’s store differently, according to the hierarchy of default languages that is specific to each country’s store, and the available metadata localizations it can use. In countries where more than one language is spoken, a user’s device settings may also impact which metadata localization iTunes will display for them.

This metadata hierarchy plays out in the structure of URLs for the Apple App Store. The country is the first subfolder in the URL. Sometimes there are language parameters at the end of URLs, which display metadata in different languages. URLs with language parameters will only display that language if that metadata is supported in that country’s App Store; if it is not supported, the default metadata language for the country will be shown. This makes App Store web URLs a helpful test for ASOs who are researching potential metadata restrictions for their app. Below is a grid with some examples of URL structures for the Apple App Store, and the metadata that displays, or doesn’t, in the case of restrictions. Localization and language combinations that do not display matching metadata are flagged as restricted or not supported.

Examples of Apple App Store Web URLs | Localization + Language
https://itunes.apple.com/de/app/facebook/id284882215 | Germany - Default (German)
https://itunes.apple.com/de/app/facebook/id284882215?l=en | Germany - English (UK)
https://itunes.apple.com/de/app/facebook/id284882215?l=fr | Germany - French [Restricted - Displays German metadata, not French]
NOTE: The Apple App Store in Germany seems to restrict metadata to German and English. We see this restriction at work when we try to view French metadata in the German store by adding a French language parameter (“?l=fr”), and the store listing displays the default metadata language (German) instead.
https://itunes.apple.com/us/app/facebook/id284882215 | United States - Default (English [US])
https://itunes.apple.com/us/app/facebook/id284882215?l=es | United States - Spanish (Mexico)
https://itunes.apple.com/us/app/facebook/id284882215?l=de | United States - German [Not Supported - Displays UK-English metadata, not German]
NOTE: As expected, the Apple App Store in the United States limits the possible metadata to English and Spanish. You can see this restriction at work when you try to view German metadata in the United States store by adding a German language parameter (“?l=de”), and the store listing displays the default metadata language (UK-English) instead.
https://itunes.apple.com/in/app/facebook/id284882215 | India - Default (UK-English)
https://itunes.apple.com/in/app/facebook/id284882215?l=hi | India - Hindi [Not Supported - Displays UK-English metadata, not Hindi]
NOTE: As expected, this request shows UK-English instead of Hindi, since Hindi is not supported in the App Store.
https://itunes.apple.com/ca/app/facebook/id284882215 | Canada - Default (US English)
https://itunes.apple.com/ca/app/facebook/id284882215?l=fr | Canada - French (Canadian French)
https://itunes.apple.com/tn/app/facebook/id284882215 | Tunisia - Default (UK English)
https://itunes.apple.com/tn/app/facebook/id284882215?l=fr | Tunisia - French [Not Supported - Displays UK-English metadata, not French]
https://itunes.apple.com/sa/app/facebook/id284882215 | Saudi Arabia - Default (UK-English)
https://itunes.apple.com/sa-ar/app/facebook/id284882215 | Saudi Arabia - Arabic [Not Supported Yet - Displays UK-English metadata, but we expect this to change soon!]
NOTE: This URL still returns a 404, but it is linked to from the UK-English version of the page. We believe this may indicate that the App Store may be about to start supporting Arabic, not only in Saudi Arabia, but also in many other countries where the primary language is Arabic. This would be a big deal, since many people whose primary language is Arabic are less likely to speak, write or search in secondary languages.*
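
If you want to spot-check many country and language combinations at once, a small script can generate these App Store web URLs and flag any that return a 404 (like the Saudi Arabia example above). This is a minimal sketch, not an official Apple API; which metadata language actually renders still needs to be verified by eye, and it assumes the third-party requests package is installed.

```python
# Minimal sketch: build App Store web URLs (country subfolder + optional ?l=
# language parameter) and report their HTTP status codes.
import requests  # third-party package: pip install requests

APP_ID = "284882215"  # Facebook, the app used in the grid above

def store_url(country, language=None):
    """Return an App Store web URL for a country, with an optional ?l= parameter."""
    url = f"https://itunes.apple.com/{country}/app/facebook/id{APP_ID}"
    return f"{url}?l={language}" if language else url

# A few of the country/language combinations from the grid above.
checks = [("de", None), ("de", "en"), ("de", "fr"), ("us", "es"), ("in", "hi"), ("sa", None)]

for country, language in checks:
    url = store_url(country, language)
    status = requests.get(url, timeout=10).status_code  # a 404 flags an unsupported URL
    print(status, url)
```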

 

*Countries Where Arabic Is Spoken, & Where We Expect Arabic Might Soon Be Supported in the App Store

Africa (Arabic-speaking countries already in the App Store, plus those not yet added but which could now be): Egypt, Algeria, Chad, Djibouti, Comoros, Eritrea, Libya, Mauritania, Morocco, Sudan, Tunisia, Tanzania

Middle East (Arabic-speaking countries already in the App Store, plus those not yet added but which could now be): Bahrain, Israel, Jordan, Kuwait, Qatar, Saudi Arabia, United Arab Emirates, Iraq, Lebanon, Oman, Palestinian territories, Syria

Conceptualizing Your International ASO Strategy

Knowing the store-specific details will allow you to conceptualize, research, submit, verify and set KPIs for ASO metadata effectively for multiple countries. To make the concepts a bit simpler, we have broken the discussion up into the ‘Store-Side Elements of International ASO’ and the ‘User-Side Elements of International ASO.’ The ‘Store-Side’ elements are basically the metadata and category selection for the app. The ‘User-Side’ elements of ASO are the app content itself, and the secondary impact that user device settings can have on the search results users see in the store. Since the ‘User-Side’ parts of ASO are driven by the ‘Store-Side’ elements, we will explain the ‘Store-Side’ elements first, and then talk about how they impact the user.

Country-Specific Dialect Designation
(Unique Metadata Option Available)

Country - Dialect | iTunes App Store | Google Play Store
India - Hindi | NO | YES
India - English | NO | YES
Mexico - Spanish | YES | NO
Portugal - Portuguese | YES | YES
Brazil - Portuguese | YES | YES
UK - English | YES | YES
Australia - English | YES | YES
Canada - English | YES | YES

Store-Side Elements of International ASO

To help encourage localization, both of the app stores have separate metadata for dialects of the world’s most-used languages. For example, both the iTunes App Store and Google Play Store include language designations like UK-English, Canadian English and Brazilian Portuguese. The two stores don’t always support the same languages and dialects; for example, Google Play supports separate metadata for Indian English, but iTunes does not, as illustrated in the chart above. Conversely, both iTunes and Google Play support separate metadata for the UK, Australian, and Canadian versions of English, also shown in that chart.

In Google Play, it has historically been strategic to have an app and metadata in one common language like English, so your app can theoretically be ready to launch in all countries where a particular language is spoken, even if it is not the primary language of the country. Uploading English metadata is a quick way to reach a larger share of the market in Google Play, because that store displays metadata based on a user’s language preferences on their device, and many devices will have English included in their list of language preferences.

NOTE: Google Play just announced that they will soon begin allowing apps to launch staged roll-outs that are explicitly targeted to a particular country, which will make testing and launching international app version updates much easier, and less potentially disruptive if the updates are buggy. There is likely an ASO benefit to managing these roll-outs to help make the most of the download volume and velocity.

Conversely, in iTunes, it is vital to know which languages iTunes associates with each country for both the app content and the app metadata (in the store), because they are not always the same. Occasionally, Apple will support one group of languages for an app’s content and another group of languages for the App Store metadata in that same country. (A good example is India, discussed in the paragraphs below.) What is interesting here is that Apple does not seem to use any editorial review to verify that the app or metadata is in the language it claims to be. There could be circumstances in which it makes sense to ‘fudge’ the language designations a bit, for instance, labeling US-English metadata as ‘UK-English’ until the UK-specific metadata is researched and ready to launch, or labeling metadata as ‘US-English’ even if it includes a significant amount of Spanish too.

These concepts are often easier to understand by way of example:

India has more people than all of the countries that speak German, Italian and French, combined, so if you are marketing an Apple app, it might be strategic to launch in India first. To launch there, you need to make sure that the app content itself is available in Hindi or UK-English. iTunes does not currently support Hindi metadata in the App Store, so the app metadata can only be in UK-English. (And yes, iTunes has been accused of being Anglo-centric before.)

Teams have to be organized enough to build the app in Hindi, but write and submit the metadata in UK-English. In that metadata, it will be important to explain (in English) that the app itself supports Hindi, especially if the app itself does not actually support UK-English. From a strategic perspective, you should always verify that the language you choose to launch with is widely spoken in the country you are launching in, but the good news here is that many people in India do speak some English.

Since Google Play supports both English and Hindi metadata in India, the app and the metadata can be written in either language. The ability to target both languages broadens the appeal of the app so writing the app content and metadata in both languages is the strongest option, but not required. Since Google Play does not require content in both languages, it is easier for companies without translators to launch apps in either language (one or the other), which helps reinforce the idea that Android is the best option for targeting “The Next Billion Users” (discussed in Article 1 of this series).

Regional & Default Languages

As mentioned earlier, both iTunes and Google Play allow you to deploy your app metadata in any language. If needed, both stores also seem to fall back to English metadata for many international localizations. This tends to happen when an app has not uploaded localized metadata for a territory, or if localized metadata is not yet supported for a territory. In iTunes, UK-English is the default version of English, so submitting UK-English metadata can improve your ASO options for many countries around the world. In Google Play, US-English is the default version of English, so submitting US-English metadata can help with the international optimization for your app.
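
To make this fallback behavior concrete, here is a minimal sketch of how we understand the display logic, as a simplified model rather than anything published by Apple or Google; the locale codes and the supported-localization lists are illustrative assumptions.

```python
# Simplified model (our interpretation, not an official algorithm) of how a
# storefront picks which metadata localization to display.
ITUNES_DEFAULT = "en-GB"  # iTunes tends to fall back to UK-English
PLAY_DEFAULT = "en-US"    # Google Play tends to fall back to US-English

def displayed_metadata(supported, uploaded, store_default):
    """Return the localization we would expect a storefront to display.

    supported: localizations the storefront supports, in priority order (assumed)
    uploaded:  localizations the developer actually uploaded
    store_default: the store-wide English fallback
    """
    for locale in supported:
        if locale in uploaded:
            return locale       # first supported localization that was uploaded
    return store_default        # otherwise fall back to the default English

# Example: assume the German App Store supports German and UK-English metadata.
germany_supported = ["de-DE", "en-GB"]
print(displayed_metadata(germany_supported, {"en-GB", "es-MX"}, ITUNES_DEFAULT))  # -> en-GB
print(displayed_metadata(germany_supported, {"fr-FR"}, ITUNES_DEFAULT))           # -> en-GB (fallback)
```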

User-Oriented Elements of International ASO

Apple goes a step beyond the normal store adaptations for language and country localization based on the user’s phone settings. It also seems to reference the user’s iCloud language settings – both for determining what should rank, and for determining what language apps should launch in. We believe this extra effort is in place in case a user prefers to access the App Store in a language that is different from the phone OS language. This could be especially important in countries like Switzerland, where there are multiple metadata localizations that can be shown to users. It could also be important in instances where the App Store language does not match the most-likely default device language, as in the Hindi/UK-English example. Many Indian users probably have their phone OS language set to Hindi, but need the App Store to use UK-English, since Hindi is not supported. In this case, the App Store experience will be in UK-English, but the app will launch in Hindi.

It is possible that Google also updates Google Play search results based on language settings in Gmail/Google accounts, but we have not seen evidence of this yet – remember, it is less needed, since Google Play is organized language-first (though it is definitely something to watch out for in the future). When a device’s OS language was set to a minor language (like Afrikaans or Icelandic) and nothing else, the Google Play store presented mostly US-English metadata, but also included UK-English metadata for some apps. It appears that Google does this only for apps that have not yet uploaded US-English metadata.

It is also important to know that sometimes, submission of metadata in one language will have an additive, beneficial impact on search results in another country where the app is also submitted – even if it is in a different language. Neither store does anything to make these relationships clear, but it can be very strategic. A great example of this additive impact is included below:

Mexican Spanish in the iTunes App Store: Companies that submit an app to the US App Store and assign it a ‘Spanish (Mexico)’ localization should expect to see the app ranking in the US store for keywords included within the “Spanish (Mexico)” localization. This is true even if the app’s content is not available in Spanish, and even if the Mexican Spanish metadata is actually written in English.

If the app’s content is not available in Spanish, ranking well for Spanish-language keywords may not be helpful for your business. Furthermore, if the app is also only available in the US, you will not be concerning yourself with any additional Latin American marketplaces. This means that you can safely use the Spanish (Mexico) localization like an extra keyword field for the US, which can be pretty handy.

Companies that are not actively monitoring ASO search results in different countries might never notice this, so most of the information about these relationships is based on our own experiences. To make the most of this relationship, companies should avoid replicating keywords in both the US and Mexican versions of metadata elements like the Title and the Subtitle. That being said, it is also important to make the metadata look real, and not overly spammy, to avoid generating undue attention from the Apple editorial teams.

It is a different story if you have an app that you actively want to market in both the US and Mexico. In that case, companies should do keyword research for both versions of the app store – iOS US and iOS Mexico, and write the Mexican metadata with the understanding that it will have an additive impact on the US rankings as well. The Mexican metadata can still include keywords that help the US app, especially in instances where the keyword research has overlapping positive results, but the main focus of the Mexican metadata should be ranking in the Mexican market.
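
Whichever approach you take, it is worth checking how much keyword overlap exists between the two localizations before you submit. This is a minimal sketch for flagging duplicated words across metadata fields; the field contents and the simple tokenization are illustrative assumptions.

```python
# Minimal sketch: flag keywords that appear in both the US and the Spanish (Mexico)
# metadata fields, so duplicated terms can be reconsidered.
def keyword_overlap(us_fields, mx_fields):
    """Return the set of words that appear in both localizations' fields."""
    def tokens(fields):
        return {word.strip(",.").lower() for text in fields.values() for word in text.split()}
    return tokens(us_fields) & tokens(mx_fields)

# Hypothetical metadata fields, for illustration only.
us = {"title": "Weather Radar & Forecast", "subtitle": "Live maps and storm alerts"}
mx = {"title": "Weather Radar & Forecast", "subtitle": "Hourly rain tracker"}

print(sorted(keyword_overlap(us, mx)))  # duplicated words to reconsider
```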

Below is a grid that should help clarify the major languages, dialects and localizations that are and are not supported in the two stores. Here’s how to interpret it: the “Language/Region” column on the far left shows the language and region of the metadata. The subsequent columns to the right provide more detail for each store, in the “Language Designation” and localization columns described below.

iTunes App Store columns:
  • iTunes Connect Language Designation: The metadata language in iTunes Connect.
  • Localizations Available/Impacted: Localizations where the “Language Designation” (column to the left) can appear in the store and rank in search results.
Google Play Store columns:
  • Google Play Console Language Designation: The metadata language uploaded in Google Play Console.
  • Top Localizations Impacted: The most common territories where the metadata from the “Language Designation” (column to the left) will be displayed to users and where keywords tend to rank the best (since translations can technically show anywhere).**

Language/Region | iTunes Connect Language Designation | iTunes Localizations Available/Impacted | Google Play Console Language Designation | Google Play Top Localizations Impacted
English - United Kingdom | UK - English (Most Store Listings Impacted World Wide) | United Kingdom, India, Republic of Ireland, United Arab Emirates, Singapore, Mexico, Germany, Belgium, Kenya, South Africa | UK - English | United Kingdom, United States**, Canada**
English - United States | US - English | United States | US - English (Most Impacted Store Listings World Wide) | United States, United Kingdom**, Canada**
English - Canada | Canada - English | Canada | Canada - English | Canada
Spanish - South America | Mexico - Spanish | Mexico, United States, Argentina, Chile, Colombia, Venezuela | LATAM - Spanish | Mexico, Argentina, Colombia, Chile, Peru
Spanish - Europe | Spanish | Spain | Spain - Spanish | Spain
French - Europe | French | France, Switzerland, Belgium, Luxembourg | France - French | France, Belgium, Switzerland, Luxembourg, Canada**
French - Canada | Canadian French | Canada | Canada - French | Canada
French - Africa | French | NONE - iTunes localizations in Africa seem to display English (United Kingdom) metadata | France - French | Algeria, Madagascar
Simplified Chinese - East Asia | Simplified Chinese | China (Mainland) | PRC - Chinese (Simplified Chinese is the official script of the People’s Republic of China [PRC]) | China (Mainland)
Traditional Chinese - East Asia | Traditional Chinese | Hong Kong, Taiwan, Macau | Hong Kong - Chinese; Taiwan - Chinese | Hong Kong; Taiwan
**Sometimes we observed that certain keywords in metadata for a particular language designation ranked in a country, and other keywords (in that same metadata) did not. Where we have seen mixed results like this, we added a double asterisk (**) to the grid. Since mixed keyword-ranking results are possible, we recommend doing your own research and monitoring for keyword-ranking impact.
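
If you plan to monitor keyword rankings programmatically, it can also help to encode the grid above as a simple lookup table. This is a minimal sketch covering only a few rows; the locale codes are shorthand we chose for readability, not official store identifiers, and the territories marked with a double asterisk above are kept in a separate “mixed” bucket.

```python
# Minimal sketch: a lookup of which storefronts each metadata localization is
# expected to impact, based on a few rows of the grid above.
ITUNES_IMPACT = {
    "en-GB": ["United Kingdom", "India", "Republic of Ireland", "United Arab Emirates",
              "Singapore", "Mexico", "Germany", "Belgium", "Kenya", "South Africa"],
    "es-MX": ["Mexico", "United States", "Argentina", "Chile", "Colombia", "Venezuela"],
    "fr-FR": ["France", "Switzerland", "Belgium", "Luxembourg"],
}

PLAY_IMPACT = {
    "en-GB":  {"core": ["United Kingdom"], "mixed": ["United States", "Canada"]},
    "es-419": {"core": ["Mexico", "Argentina", "Colombia", "Chile", "Peru"], "mixed": []},
    "fr-FR":  {"core": ["France", "Belgium", "Switzerland", "Luxembourg"], "mixed": ["Canada"]},
}

# Countries to monitor for keywords placed in the Spanish (Mexico) localization.
print(ITUNES_IMPACT["es-MX"])
```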

Below are two examples to help you interpret the information in this grid:

Spanish: If you have authored iOS metadata in Spanish (Mexico), you will be able to target all of the Spanish-speaking countries in Latin America very quickly, but not Spain. If you are working on an Android app, you can publish metadata in LATAM Spanish and it will cover all the Spanish-speaking countries in LATAM, Mexico, and any users in Spain with “Latin American Spanish” set on their mobile device (though most will have their phone default set to Spain-Spanish). It is possible, but less likely, that this metadata will also be displayed to users with “Spain Spanish” set on their device, because that preference is a pretty close language match. We believe that Google Play uses a competitiveness threshold to determine when this happens; for high-competition queries, with lots of potential apps to rank, the LATAM metadata is less likely to show to Spain-Spanish users, but for low-competition queries, where there may not be enough apps to generate a good search result, it becomes more likely. We recommend testing metadata on your devices to confirm this behavior, as this isn’t a scenario we’ve tested.

French: If you have French iOS metadata, you could also theoretically use ASO to reach France, Switzerland, Belgium and Luxembourg, but none of the French-speaking countries in Africa. If you have Android metadata written in France-French, you could reach France, Belgium, Switzerland, Luxembourg, Canada, and French-speaking countries in Africa. To target Canadian users more precisely, you can upload French Canadian metadata, but it is not required.

Update (1/24/18): There seems to be a new “App Stores and Localizations” grid that displays as a pop-up in iTunes Connect (screenshot below). You can access that grid by clicking the question mark next to the metadata-language drop-down and then clicking “App Stores and localizations.” (Screenshot of that navigation path below.)

To use the iTunes grid, select one of the five App Store regions from the drop-down at the top. The left column of the grid (“App Store”) lists each App Store country. The right column (“Language”) displays the metadata language localizations in iTunes Connect that impact each App Store country. The data in this iTunes grid matches our findings published in our localizations grid (above), including the observation that English (UK) metadata impacts the most App Store countries worldwide.

Other things that can impact ASO search results include the model of the phone (remember, model is different from the OS). This is much more common in Google Play, since compatibility issues associated with specific handsets are more common there. We have also seen that the time and/or time zone of a search may impact rankings. This could be related to an app’s download velocity at different times of day, but it could also just represent normal fluctuations, or simply be a random bug associated with the app store data centers or their APIs. (More research is in progress here!)

Conclusion:

The differences in how the two app stores handle country and language relationships can impact your app deployment strategy, and understanding these differences will go a long way toward maximizing any international ASO results. It could also save your teams from wasted time and effort caused by a misunderstanding of how the two stores operate. Just because a language, dialect or strategy works or is available in one of the app stores does not mean that it will work or be available in the other. This is critical to understand because it often adds to the overall cost of an international ASO strategy; work for one store may not carry over any shared benefit to the other. Also, in companies where the Android and iOS teams always work in parallel, these complexities could make that parallel work more challenging. Here is a quick recap of the differences between the stores:

iTunes App Store: For iTunes, metadata only helps with ASO rankings if it is in a supported country-language combination. For example, Spanish (Mexico) metadata helps with ASO rankings in Mexico and the United States (both supported country-language combinations), but does not help with ASO rankings in France (not a supported combination).

Google Play Store: For Google Play, any metadata language can rank in any country where the app has been launched. Keyword-ranking success varies based on language, territory, and the keyword itself. Common metadata and localization combinations (like French metadata in France) will rank better than less-common metadata and localization combinations (like French metadata in the United States or Spain).

This article compared and contrasted the international organization of the two app stores and how those differences impact an ASO strategy. The iTunes App Store is organized country-first and Google Play is organized language-first. This subtle difference can impact the details and approach you take with your ASO strategy, as well as the larger priorities and goals that a company can reasonably expect to achieve. This article also covered the user-focused elements of an international ASO strategy, how apps may support different languages than the stores that sell them (especially on iOS), and how that can impact users’ search queries. Finally, the article clarified how the country and language settings on a phone and in a user account control and change the search results that any particular user might see, and why this is important for your strategy.

The next article in this series will focus on the execution of an ASO strategy, including how to complete international keyword research, as well as how to upload and verify the metadata. The final article will focus on tracking success, and nuances in language that you might be able to use, when fine-tuning your metadata for long-term ASO success.

 

 

Understanding the Basics of International ASO (1/4)


By: Kathryn Hillis & Cindy Krum 

App Store Optimization, otherwise known as ASO, is a nuanced skill-set that has escaped the attention of most digital marketers. However, it can be a critical part of the marketing strategy for companies who want to drive app downloads from Google, Google Play and the iTunes App Store. While some aspects of ASO are considerably simpler than traditional SEO, international ASO can be surprisingly complex because of the level of detail and international knowledge necessary for success. Companies that use apps to reach an international audience face a particularly challenging task, because many of the nuances of international ASO are not well-documented anywhere online. This four-part article series is designed to fix that situation. These articles will help drive awareness of the complexity, share details about international ASO implementation strategies, and provide an instructional resource for companies that need to optimize apps that are deployed around the world.

This article will begin with a discussion of Google and Apple’s increased focus on ‘The Next Billion Users,’ and how devices and connectivity can impact strategic marketing decisions. It will then provide resources and guidance to help determine which countries, languages or regions a company should target, and how to manage the projects to meet important KPIs in those countries. The next article will focus on the fundamentals of international ASO, discussing the nuanced differences between internationalization in the iTunes App Store and Google Play, and how that impacts your deployment and localization strategy. The third article in the series will focus on how to implement and manage your ASO campaign, and how translation and localization can impact app development, launch and update strategy. The final article in the series will provide details about how you can maximize app rankings and downloads with country- or language-specific changes to your metadata submissions, and how to track and measure your success. If you are new to ASO, it might help to check out our ‘App Indexing & The New Frontier Of SEO: App Packs & App Store Search’ article, to establish a baseline understanding of ASO strategy before reading any further.

Understanding the Importance of “The Next Billion Users”

In the past couple of years, Google, Apple and even Facebook have been increasing their focus on what they describe as ‘The Next Billion Users.’ This is a demographic term that groups people who live in underdeveloped or developing regions, who have historically had minimal personal access to the internet and/or who have just recently built out the necessary infrastructure for widespread personal internet access. The demand and growth potential for internet content in these regions has become clear, and the international web behemoths want to capitalize on it before small regional operations can take hold. The focus on The Next Billion Users is not a call to ignore markets that are currently strong, but instead to recognize the limited growth potential in “Developed” nations that already have a lot of mobile internet users – the developed market is saturated or reaching full saturation, so there is far more growth potential elsewhere.

All projections show that the next few years’ growth of mobile internet in places like India, South America and the Middle East will be huge. According to Google, Brazil, India, and Indonesia are already in the top 10 countries with the highest Search Volume for Google Search. As an example, 100 million new users went online in India alone from 2015 to 2016, and India is projected to have around 1 billion unique mobile subscribers by 2020.

With ASO, it is especially relevant to note that the Next Billion Users generally skip wired-line internet and computers and go straight to mobile data and devices. They have fewer preconceived notions about software and computing, and may be more reliant on apps for all aspects of their digital life. In fact, according to the App Annie chart below, Brazil, India and China use more apps per day on average than any other country, including the US, Japan, UK and Germany. With this level of intense app engagement and rapid growth, the opportunity for a successful ASO project is potentially immense. Knowing local details about the use-case for your app will help ensure that your app secures market share quickly, before local or international competitors do.

MORE INFORMATION:
Google is deeply committed to reaching the ‘Next Billion’ users – so much so that they have begun catering some technology directly to people who have slow and/or limited internet connections, and/or limited storage space on their mobile device. For example:
  • AMP, PWA & Instant Apps – For the web, Google has been advocating a lot of web-based app alternatives that focus on speed, local caching and optimizing UX for inconsistent connections or offline use-cases.
  • Android Go – Google’s lightweight version of the upcoming Android O operating system, designed to run on smartphones that have 1 GB, or even 512 MB, of RAM. (Source: http://www.androidauthority.com/android-go-773037/)
  • YouTube Go – Targeted to Indian consumers, this app will let you download videos and control data usage while streaming video, which is perfect for people with slow/limited internet connections. (Note – the app is still in beta, but has a sign-up page in English and Hindi: https://play.google.com/store/apps/details?id=com.google.android.apps.youtube.mango&hl=en_US)
  • Chrome Lite & Chrome 64 – Chrome Lite is a lightweight browser that focuses on loading critical content first, to maximize the value of a mobile web experience with limited connectivity. Chrome 64 is the most recent Chrome update. Both allow users to save content for downloading later, which is especially good in cases where connectivity is limited. They both also include a ‘Low Data Mode’ which leans heavily on local browser caching, overriding explicit cache settings on a site to maximize the speed of data transfer.

(Source for image: http://grow.co/content/mau-vegas-2017-usage-is-the-new-currency/ [2:20 min mark].)

Determining Where You Should Target Your International ASO

At its most basic level, ASO is about user acquisition, and international ASO is no different. Starting with the right country or countries makes it much more likely that your internationalization efforts will be rewarded with new, engaged users. However, many businesses make the mistake of starting international ASO in countries without first considering the factors that might increase or reduce their potential user-base and ultimate success in those regions.

Before any company begins an ASO project, the pragmatic reality of the users that you would like to target should be understood and conveyed to developers as soon as possible. Stakeholders should know the most common mobile operating systems and handsets in your desired locations, and the average level of connectivity available to most users at different parts of the day – some regions are much more heavily populated by iOS devices, and others by Android.

In some cases, apps may need to work on completely different handsets that are only available regionally, which is also important to know and plan for. Some areas are more likely to have fast, new phones with loads of memory while others are more likely to have older phones, a model or two back. All of these things should be taken into consideration when deciding which operating systems and phones the development teams should prioritize or de-prioritize, how large the app download should be, how much the app can rely on robust data connectivity, what types of features will make app engagement easier for the user and also where to focus testing and quality control. The following sections include details about how you can learn more about these topics and research specific countries to make the best business case for your efforts.

Connectivity Research & Projections

If an app requires that users have a strong, consistent connection to the internet, you’ll want to know which international users have access to fast, reliable, and affordable internet, and to what degree that access is available throughout the day: at work, at home and while commuting. If countries that you wish to target do not have a robust cellular infrastructure, your app may need to work offline and/or gracefully handle inconsistent connectivity in order to appeal to that audience. It may also need to include different experiences for WiFi vs. cellular data connections, or the ability to monitor and self-regulate its own data consumption on different connections.

You can get a sense for this from Wikipedia, which has a very simple resource that outlines mobile device penetration by country, and also a basic outline of 4G penetration. The GSMA also has a great map utility to help visualize mobile device penetration. It includes heat maps and relative scoring on things like 4G penetration, infrastructure and affordability of mobile data.

Understanding OS – iOS vs. Android

A phone’s operating system determines which apps it can run in the first place. iOS apps that are accessed from the iTunes App Store can only work on iOS devices like iPhones and sometimes iPads; Android apps that are accessed from Google Play can only work on devices that run the Android OS. iOS devices are only manufactured by Apple, but many companies manufacture Android devices (the top ones are Samsung, Google and Huawei). Some countries have more iOS users, while others skew more towards Android. If you’re used to working in the US market, which tends to have a larger iOS population than many other countries, the OS market share in developing countries may surprise you. The US is basically split 50/50 between iOS and Android, but the affordability of low-end Android handsets generally makes them more popular in developing markets.

StatCounter is a great resource for generating customizable charts to help determine and compare the mobile OS market share in a variety of different countries and regions. As shown below, these charts can also provide data about the breakdown of mobile vs. tablet vs. desktop use, browser and browser-version market share as well as search engine and social media market shares. The best part is that all of the data can be exported into CSV for quick and easy comparison.
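
Once you have exported a StatCounter report to CSV, a few lines of Python can compare OS share across the countries you are considering. This is a minimal sketch; the file name and column names are hypothetical, so adjust them to match the actual export.

```python
# Minimal sketch: compare Android vs. iOS share per country from an exported
# StatCounter CSV. The file name and column names are hypothetical.
import csv

def os_share_by_country(path):
    """Read the exported CSV and return {country: {os_name: share}}."""
    shares = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            country = row["Country"]                 # hypothetical column name
            shares.setdefault(country, {})[row["OS"]] = float(row["Market Share"])
    return shares

for country, breakdown in sorted(os_share_by_country("statcounter_mobile_os.csv").items()):
    android = breakdown.get("Android", 0.0)
    ios = breakdown.get("iOS", 0.0)
    print(f"{country}: Android {android:.1f}% vs iOS {ios:.1f}%")
```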


 

Knowing Your Handsets

Even within one operating system (Android or iOS), mobile handsets can vary in terms of screen size, RAM, disc space, and even some of the functionality they can support. While there are only a limited number of iOS devices available, there are a much larger variety of Android handsets on the market to consider, so understanding which handsets are most popular in each region can illuminate those users’ pragmatic barriers to downloading an app. For example, if you find that the most popular handsets in a particular region are devices with low storage space, you shouldn’t expect your 128MB app to get a lot of downloads in that region. You can only expect success in regions like this if you build and market a lighter-weight app that requires minimal local storage on the phone.

Handset popularity can also help you prioritize the next countries or languages where you want to expand. If a country’s handset landscape closely resembles another region you target, you’ll be able to replicate much of your existing development work to target the new region. On the other hand, if you’ve decided to target a country with a whole new landscape of devices, handset research can help optimize your app’s testing process for the new device types and anticipate potential issues before you publish. China is a good example of a country with a unique mobile device landscape; Samsung and Apple face fierce competition from Chinese smartphone manufacturers, potentially making development for these regions more complicated. There are a few places you can research international device usage.

Here again, you can start with StatCounter which provides good information about handset groupings, based on manufacturers and screen sizes, as shown below:

To get more granular, you may want to look at AppBrain reports, which provide the Top-10 Android devices by country, both in terms of Absolute Popularity (the device usage in that country) and Relative Popularity (that country’s share of the global market of a given device). An example showing Mexico’s top Android devices is included below:
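
To illustrate the difference between the two measures with invented numbers (these figures are not from AppBrain), here is a quick worked example:

```python
# Tiny illustration of Absolute vs. Relative Popularity, using invented numbers
# for one hypothetical device model.
device_users_in_mexico = 120_000        # installs of the model in Mexico (invented)
all_device_users_in_mexico = 4_000_000  # installs of all models in Mexico (invented)
device_users_worldwide = 900_000        # installs of the model worldwide (invented)

absolute_popularity = device_users_in_mexico / all_device_users_in_mexico  # share within Mexico
relative_popularity = device_users_in_mexico / device_users_worldwide      # Mexico's share of the model's global base

print(f"Absolute popularity in Mexico: {absolute_popularity:.1%}")                       # 3.0%
print(f"Relative popularity (Mexico's share of the model): {relative_popularity:.1%}")   # 13.3%
```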

Getting information about top iOS devices is a bit harder, but also a bit less important, since their development and rendering is so tightly controlled to help manage interchangeability across the different devices. You can start with Device Atlas, which gives basic information about handsets and specifications for all different types of devices used around the world, including iOS. 

If you have a mobile website, you can also use your website reporting to see which devices most commonly access it by region, and this may help fill in any of the missing pieces. This data will be more tailored to the users in your particular market, and will even more closely match the potential devices that you’ll see using your native app once you launch in that region.

Summary:

Mobile devices have made the internet more accessible to audiences around the world, including a group of people widely referred to as The Next Billion Users, who are getting online for the first time. This group offers a new and substantial opportunity for international apps, but these regions pose unique challenges that marketers and app developers may not have needed to consider in the past. By acknowledging the pragmatic reality of users in each country and paying close attention to their handset, operating system, and mobile connectivity needs, you can develop a solid plan for international expansion. You can also make adjustments for any technical changes the app needs, to make it viable and compelling for users in your newly targeted regions.

Before moving on to the next step in international ASO, you should ideally be able to answer the following questions:

  1. Does my app require that users have regular and reliable access to the internet?
    • Will limited internet connectivity potentially make it difficult for people to download the app?
    • When will people want to use this app in their day, and will they have the necessary connectivity to use it properly at that time?
    • Are there elements of the app that should be partitioned for a WiFi/Data-rich experience, so that they don’t disrupt regular use of the app?
    • Are there any parts of the app that should run offline, to accommodate users with limited mobile connectivity?
  2. What mobile OS and screen sizes are most common in the areas we are targeting?
    • Does it make sense to target both iOS and Android? Which is the priority?
    • Will the app struggle to work on the area’s most-used screen sizes, or with less advanced hardware?
    • Is the app backwards-compatible with previous Operating Systems that are popular in this country?

Your answers to these questions will shape your app’s international strategy as you evaluate potential regions for expansion based on their connectivity, dominant operating system(s), and handset landscape. Once a company has sussed out the necessary details to build an international expansion strategy, the next step is to think about how localization works in the iTunes App Store and the Google Play Store, so that you know what changes to the metadata will be necessary to achieve top rankings. With that in mind, the next article in this series will cover the localization differences between the iTunes App Store and Google Play.

The ASO Impact of #DeleteUber

After accusations that Uber was attempting to profit from airport protests over the weekend, many users deleted the app. Deleting Uber also quickly became a trending hashtag on Twitter (#DeleteUber), where users shared screenshots of themselves removing their Uber accounts and deleting the app itself.

Many of those who deleted Uber subsequently downloaded Uber’s competitor, Lyft, which had declared its support for the ACLU with a sizable donation. The Wall Street Journal reports that Lyft received nearly 100,000 new downloads over the weekend, a 78% increase from the weekend before.

So how did this impact Uber and Lyft’s rankings in the iOS App Store and Google Play Store?

iOS App Store Category Rankings

Uber & Lyft iOS App Store rankings from #DeleteUber

Uber experienced a noticeable drop in category rankings, while Lyft experienced a boost. However, it’s hard to isolate exactly which ranking factor is most responsible for this shift. Without even considering Uber’s uninstalls, Lyft’s fast influx of new installs improved their download velocity, which could have led them to out-rank Uber.

In addition to installs and uninstalls, the Uber app also received a disproportionate surge of negative app reviews in the App Store, as shown in the graphs below. Many of Uber’s one-star reviews cited the #DeleteUber movement or a similar political reason. Since app reviews contribute to rankings, this surge also likely contributed to Uber’s ranking decline.

iOS App Store Reviews for Uber and Lyft after #DeleteUber

You may have noticed that Lyft also experienced an uptick in negative reviews, though to a lesser degree than Uber. Many of the new Lyft one-star reviews cited a disagreement with Lyft’s policy stance. However, they also received positive reviews for the same reason.

Google Play Store Category Rankings

Uber & Lyft Google Play rankings from #DeleteUber

There was less of a noticeable impact in the Google Play Store category rankings. While the Lyft app experienced some rankings growth, the Uber app didn’t seem to fall very much. This may indicate that more iOS users than Android users were deleting Uber and installing Lyft, or it may mean that the Google Play Store is less sensitive to those signals.

However, what I think really made the difference in the Google Play Store trends was ratings and reviews. The Android Uber app did not receive the same surge of negative reviews that the iOS Uber app received. In fact, in the Google Play Store, Lyft seemed to receive a larger distribution of negative reviews during the last week, as shown in the graph below.

Google Play Reviews for Uber and Lyft after #DeleteUber

Conclusions

The combination of uninstalls for Uber, new installs for Lyft, and a surge of negative reviews for the iOS Uber app created the perfect storm for an ASO rankings shift. It is unclear if Lyft will be able to hold on to their new and improved category rankings or if Uber will be able to recover.

The big lesson here is that a trending hashtag should not be underestimated. In this case, #DeleteUber had real-world impact on not only Uber’s install base but also its organic App Store rankings.

Understanding Mobile-First Indexing (2/3): The Long-Term Impact on SEO

By: Cindy Krum

Most of us in the digital space have heard the statistic that 90% of the world’s information was created in the past two years. It is a great illustration of the immense challenge that Google is facing to pursue their goal of cataloging all the world’s information. They have made significant progress, but the definition of ‘information’ is expanding. ‘Information’ according to Google, now includes: songs, movies, TV shows, apps, recipes, books and just about anything else you can think of. Google wants to know, not just that the information exists, but exactly what it is and how to access and present that information on a variety of devices and in a variety of formats, not just limited to visual presentation in a browser.

Suffice it to say, crawling the web to index and rank it is getting to be a much bigger task. When companies add new content, they don’t remove old content. Beyond that, Google has conditioned us to be wary of ever removing content from our websites, presumably because maintaining an archive is best for users, but also in case the old content has links, social shares or other signals that are helping it continue to drive traffic. To keep up with the ever-increasing amount of digital content that Google would like to organize, they will have to make their process more efficient, and limit their algorithmic evaluation-set more stringently. They have already indicated a strong preference for sorting signals like Schema and other micro-formats, because they simplify crawling, decrease the algorithmic effort and minimize the overhead that Google needs to continue indexing and ranking content. Now, with Mobile-First Indexing, factors like these will become even more important.

This article is the second part of a three-part series about Mobile-First Indexing. The first article focused on providing a simple and pragmatic interpretation of what Mobile-First Indexing will mean in the immediate term, and how webmasters and SEOs can update their existing websites to protect against any negative impacts of Mobile-First Indexing. This article will go deeper, and focus on more theoretical concerns. It will detail the reasons Mobile-First Indexing is necessary and valuable for Google, how Google continues to push companies towards a new rubric of ranking signals and, finally, the important role that the cloud will play in the future of SEO. The final article in this series will give specific use-cases and information about the various Mobile-First development options that Google has been advocating and favoring in its mobile search results. It will outline the pros and cons of each, and detail when and how they can be used to their maximum benefit.

 

Loss of the URL as The Foundation of Indexing

Historically, SEOs have talked about indexing in somewhat binary terms – content was in the index, or it was not (or, at one time, content was in the mobile index, the desktop index, or neither). In SEO audits, technical problems may arise when too many or too few pages are admitted to the index. Again, a question of what is or is not in the index. As SEOs, we have rarely had occasion to question how the index worked, until now. The change that Google is describing with the shift to Mobile-First Indexing is actually not about how things are admitted to or excluded from Google’s index, but instead about how things are organized within the index. Assuming this is the case, Google is using the word ‘index’ to mean ‘organize’ rather than simply ‘identify,’ so this change could be even more significant than most SEOs realize.

Mobile-First Indexing alludes to a future that is less dependent on URLs as the organizing mechanism for Google’s index. You must understand that indexes are essentially just databases. Before everything was digitized, the white pages of the phone book were an alphabetical index of people, and the yellow pages were an index of companies, pre-sorted by category. Similarly, the Dewey Decimal System was an index of books that were present in a library, ordered numerically. Books could be in the index, not in the index or indexed incorrectly. What is important is that indexes are not free-for-alls; they have a unifying organizational principle based on an element that is extracted from a larger set of data, like a name, title, category or a numeric representation.

Google has used URLs and URL structure, along with metadata and links, to organize content in their index, which is why SEOs have always operated under the maxim, “one URL for every piece of content.” Google has always been an ‘internet search engine,’ and the internet is mostly consumed through web browsers that rely on URLs, but this is all changing. The internet is actually much larger and contains much more information that cannot be presented in a browser. Huge amounts of data and information that is not HTML-formatted is processed in the background of the Internet. This type of information is becoming critical for the Internet of Things (IoT) and Big Data style calculations. It is accessible only through APIs and direct access to the databases, and Google wants to be able to leverage this information in their algorithm.

Beyond that, mobile operating systems (OS) and the browsers are getting less distinct. Both Spotlight Search and Google Now on Tap are aspects of the mobile OS that can search and surface content from the web and apps. In the case of Google Now on Tap, it appears that content is provided in feeds and APIs, without necessarily including a web page or URL. Once the URL requirement is removed, content from apps can compete with websites, which makes for a much better experience that has much more flexibility in terms of how and where information can be presented to a user. A Mobile-First Indexing mentality allows Google to further distance rankings from simple URLs and links, and focus more on things important to mobile experiences, like speed, rendering, and engagement.

Many of the newest mobile-oriented development techniques that Google has been advocating actually muddle or de-emphasize the importance of URLs, site structure, and links. Things like native apps, web apps, PWAs and AMP all obscure Google’s access to URLs and link data. AMP content does not have traditional URLs, but instead lives on a URL that Google generates and hosts, and Android Instant Apps, which just came out of beta, are expected to be the same. With in-app indexing, deep-linked URIs are basically just bookmarks in a user-flow of an app. Similarly, PWAs leverage ongoing communication with the server to deliver content when it is requested, and different URLs are not needed to trigger different content.

For years Google has been trying to distance rankings from the link economy that it created, and now they seem to be actively trying to stop SEOs and webmasters (or maybe themselves) from relying on URLs as the primary organizing principle in their index. Instead, Google has probably begun associating specific indexable pieces of content with signals like Schema, on-page structured markup and XML feeds.

 

Schema, Markup & XML Feeds

Schema and nested schema have been part of Google’s top SEO recommendations for a number of years because they help provide a concrete and easy-to-crawl entity-understanding of the content on the page. Since the launch of Google’s Hummingbird update, which focused on entities, voice search, and semantic understanding, Google has not communicated as actively about entity search, but with the connected home, it is going to become vital again. Schema is much easier for Google to crawl and understand than regular HTML, and with the transition to JSON-LD, it has become even faster because it is separate from the page code and available directly from the server. Google is so interested in Schema that they are now even requesting that certain kinds of Schema be added as in-app markup. This, on its own, is quite telling.

The shift from JSON to JSON-LD is important for Google’s larger understanding of the world. The ‘LD’ in ‘JSON-LD’ stands for ‘Linked Data’ because JSON-LD is not about individual pieces of metadata, but instead, it is about metadata in the context of other metadata. JSON-LD.org explains that: “Linked Data empowers people that publish and use information on the Web. It is a way to create a network of standards-based, machine-readable data across Web sites. It allows an application to start with one piece of Linked Data, and follow embedded links to other pieces of Linked Data that are hosted on different sites across the Web.” This is how Google plans on building deeper understanding of content and content relationships without relying on links and URLs.
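
To make “metadata in the context of other metadata” more tangible, here is a small example of a schema.org Organization expressed as JSON-LD, generated from a Python dictionary; the names and URLs are placeholders.

```python
# Small illustration of JSON-LD: a schema.org Organization whose "sameAs" links
# connect this entity to other machine-readable descriptions of it.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Company",            # placeholder
    "url": "https://www.example.com",     # placeholder
    "sameAs": [                           # the "linked" part of Linked Data
        "https://en.wikipedia.org/wiki/Example",
        "https://twitter.com/example",
    ],
}

# The output is the JSON-LD block that would normally be embedded in a page
# inside a <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
```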

Google has been directly asking for information feeds from a larger number of sources. Most companies are happy to comply, because getting their feed of information directly to Google has direct benefits, like AccuWeather’s visual weather information that shows up at the top of a weather-related search result, or the flights and hotels that Google easily surfaces in their aggregation engine, or any of the million products that Google includes in Google Shopping PLAs. These feeds give Google exactly the information they need, in exactly the format that they want it, so that it can be added directly to their database and surfaced appropriately. Results from these sources do seem to be favored in mobile search results – because they provide a good user experience, but also because they are easy for Google to parse and display quickly.

Many SEOs may not realize that Google also uses XML feeds and JSON-LD data to ingest the deep-link maps for app indexing, to understand Schema relationships on a large, detailed website, and to understand things like sports scores, news, recipes and movie times. Methods like this are superior to old-fashioned crawling because they are so much more efficient. In both cases, webmasters essentially connect their database to a Google API.

 

Mobile-First Really Means Cloud First

Google will also rely heavily on their own cloud hosting to facilitate Mobile-First Indexing. They are calling this change ‘Mobile-First’ Indexing, but that is probably a misnomer; it is more accurately described as ‘cross-device-first’ or even ‘cloud-first’ indexing in the long run. Nearly every new Google announcement seems to be another front or ploy to get webmasters to host their content on Google servers. Even Google Play is actively encouraging Android app developers to test hosting their apps on the Google Cloud Platform so that users can benefit from a speedier native app experience, as shown at the right. While this is optional now, you can expect cloud hosting with Google to be heavily incentivized or even required in the future.

Google will push more and more assets into their cloud because it allows them to decrease their reliance on crawling, and increase their understanding of how users engage with the content over time. In the cloud, without reliance on URLs for indexing, indexable content can be text, video, audio, images or anything else; a concept brilliantly illustrated by Emily’s tweet above. As the cost of cloud hosting continues to decrease, and the amount that can be stored continues to increase, the benefits Google can expect from pushing developers and webmasters to host content on their cloud become apparent. Cloud-hosted content will dramatically improve Google’s ability to manage their own effectiveness and minimize their business overhead, but it also feeds into the Big-Data mentality that they love and that they can profit from dramatically in the long run. Here are the concepts related to Google Cloud hosting to consider:

Less Crawling: Hosting is much more efficient and effective than crawling. Crawling might be the least effective and most complex aspect of ranking the web, and it is difficult to scale. If Google hosts your content, they don’t have to crawl it as aggressively, because they know immediately when you make updates or upload new content. They can also use the frequency of content requests on their system to know what is popular and what is not. This is a better measure of engagement than links.

More Efficient Data Collection & Presentation: When all your content lives in Google’s cloud, it can be super-fast all the time. They use their own compression and caching algorithms; they can also detect the speed of the device requesting content and then adapt what they send to suit the speed of the network connection and device, so content works across different devices and operating systems without hassle.

The speedier presentation that Google gets from web masters cloud-hosting with them will benefit Google users when they access the cloud-hosted content, but it will also benefit Google. As shown in the diagram below, Google prefers to include rich visuals when they can directly in the SERPs. They are especially important for improving the mobile search user experience and driving engagement in search results. When Google hosts content, it also makes it faster and easier for Google to present your content directly in SERPs like this. Google already incentivizes this kind of cooperation with top rankings, especially in mobile search results.

Flexibility of Content: Part of making content indexable in a cross-device or cloud-first world is a deeper separation of content from its ‘intended-device presentation layer’ making it device-agnostic and even potentially format-agnostic too. When clean, unencumbered content can be saved in a web-hosted database, developers can focus on creating the custom interfaces for a variety of potential presentation-devices and formats without having to constantly replicate the content for each device – It is a bit like taking the concept of separating content from design with CSS further, to include format and functionality instead of just the visual cues for presentation.

If Google had to crawl, index and rank content that was tied to specific devices over and over again for each device, it would be a nightmare. Cloud-hosting device-agnostic content is much more scalable for Google’s indexing resources, and also for development resources within companies. Google will still need to know what content to rank well for each device, but as the number of potential searching devices grows, so does the number of potential use cases.

Understanding the Indexing of the Future

The difficulty with disconnecting content from a corresponding URL is the loss of a unique identifier for the index. URLs on the web work like product SKUs in an inventory database – there is a one-to-one correlation. So, how will Google manage their index without URLs? The best answer is most likely a relational database with entity understanding and AI. Google will still have the URLs that it currently has in the index, and they have no doubt already used those to begin creating an entity understanding for all the companies and content in the current web index. They can merge that with Schema from the web and other information that they have from content they host on their cloud platform, and then they will have a lot to work with. From there, Google can structure its understanding based on what it knows about the world in general, by leveraging Freebase/WikiData, which they technically ‘retired’ a few years ago but may have actually just re-purposed. Maybe now, instead of being built out by human editors, it is being built out by RankBrain.

There may already be indications of this popping up in Google’s Search Suggest, as they try to disambiguate a broad query that could mean two different things. As you can see below, Google believes that the query for ‘Bread’ could be about two very different topics: a food or a band. You know this is their understanding of the word because you can see the disambiguation suggestions directly at the top of the Search Suggest options. What you have to understand is that each of these disambiguation options is there because both are attributed to very specific information in Google’s Knowledge Graph. Bread the food is associated with certain ingredients, recipes, images, calorie counts and the like, whereas Bread the band is associated with songs, tour dates, and very different-looking pictures. (Perhaps part of the purpose of Google’s image recognition engine is to help sort pictures for queries like this!)

Conclusion

The internet is changing. As technology expands and improves it is becoming more and more invisible. As more of our daily devices go online, they become less reliant on screens, browsers and direct-entry keyboards, and instead are all operated remotely from the cloud. Both Amazon Echo and Google Home let you control web-connected elements of your home and perform simple searches using just your voice, and they respond based on streams of data from the cloud. These devices can and do operate without the use of URLs, so Google must also begin to operate in a more presentation-agnostic way. Our devices are moving away from needing browsers, so Google’s index should not be organized based on URLs.

With structured data, especially in JSON-LD, XML feeds and APIs, Google is building a strong understanding of the world that is less reliant on URLs for the organization and evaluation of data. Beyond that, Google will continue pushing developers and SEOs to leverage its cloud-hosting services, often for free, because doing so offers such a significant benefit to Google’s ability to index the content. With more content in Google’s cloud, Google will always know when content is updated, because its server logs will show the change and can automatically trigger a re-crawl. Hosting the content also allows Google to understand more about user engagement; when Google hosts all of the app information, it knows exactly what content was requested from the server, even if no new page URL was requested, as might happen in a PWA. This is deeper engagement data than Google has ever had about web content before!
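To make that concrete, here is a minimal sketch of the kind of JSON-LD structured data being described – the organization name, URLs and profiles below are placeholders for illustration, not a prescribed implementation:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Co",
      "url": "https://www.example.com/",
      "logo": "https://www.example.com/logo.png",
      "sameAs": [
        "https://twitter.com/example",
        "https://www.facebook.com/example"
      ]
    }
    </script>

Markup like this describes an entity and its relationships directly, rather than relying on the page URL to carry that meaning.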

This article and the one before it have dissected what Google’s move towards Mobile-First Indexing will mean for the practice of SEO in the short and long term. The next article in the series will outline a number of new development options that are best suited to Mobile-First Indexing, and why. It will also describe how and when they should be used, and what steps to take to build them for the future evolution of Mobile-First Indexing.

Now that we have reached part 2 of 3 on Mobile-First Indexing, if you have questions or want to reach out, you can find Cindy and the MobileMoxie team on Twitter.

 

Understanding Mobile-First Indexing (1/3): The Immediate Impact on SEO

By: Cindy Krum

Google’s Mobile-First Indexing announcement has generated a lot of discussion, but no one seems too concerned just yet. This might be because Google has been touting various enterprise-wide ‘Mobile-First’ initiatives for over a year, but it could also be because Google appears to have changed its procedure for launching updates, or at least ‘mobile’ updates. If recent history is any indication of the future, Google’s new process looks something like this:

  • Publicly announce the changes ahead of time (sometimes even with dates and deadlines)
  • Roll ranking adjustments out slowly (rather than all at once)
  • Pre-announce adjustments and related updates as needed

This pattern began with the Mobile-Friendly Update (Mobilegeddon) and continued with modifications related to app interstitials, HTTPS and deep linking, as well as the various AMP announcements.

This new release strategy is likely due to the influence that machine learning now has on the algorithmic evaluations, but the key benefit to SEOs is that algorithmic changes no longer disrupt rankings immediately after going live. The slow roll-out has led to many of the recent updates being initially mocked for having minimal impact, but over time all of these changes will prove to be quite significant. The switch to Mobile-First Indexing will be no exception.

This article is the first in a three-part series to help SEOs understand the long- and short-term impact that Google’s change to Mobile-First Indexing will have on SEO strategies and Best Practices. It begins with a brief history of mobile indexing, then outlines how SEOs can make an existing site compliant with Mobile-First Indexing. In the second article, we will discuss the crucial distance that Mobile-First Indexing appears to allow between URLs and their corresponding content, the increasing push to host content in Google’s cloud, and how we can expect these concepts to impact the web as a whole in the future. The final article in the series will review the various development options that Google considers natively Mobile-First, when they should be used, and the larger themes that draw them all together.

 

A Brief Look Back on Mobile SEO & Mobile Indexing

Thanks to Peter Campbell for the awesome image. http://www.petecampbell.com/seo/mobile-seo-guide/

Historically, Google has struggled with different mobile indexes and different mobile crawlers. Briefly, during the days of mobile-only WAP sites, Google had a mobile-specific index, but as soon as color and images became common in mobile site designs, mobile search results began to include both mobile and desktop pages. When mobile ‘mDot’ subdomains were popular, mobile pages were indexed based on their relationship to a desktop page, which was generally determined by a type of server setting called ‘User-Agent Detection & Redirection.’ Mobile pages with a corresponding desktop page were difficult for Google to discover and index unless a sitemap had been submitted with the mobile URLs or someone had actively linked from a desktop page to the mobile-specific page. Even then, it was difficult for Google to determine when mobile pages should rank, because mobile content sometimes had to out-rank desktop counterparts that had more history and positive SEO ranking signals, despite algorithmically looking like duplicate content.

Google’s Diagram for Proper User Agent Detection & Redirection: https://developers.google.com/webmasters/mobile-sites/mobile-seo/common-mistakes

To address the problem, Google suggested that webmasters indicate when a desktop page had an alternate, mobile version by adding ‘rel=alternate’ and ‘rel=canonical’ links to the HTML of both pages. The ‘rel=alternate’ tag on the desktop page would be indexed and would trigger a mobile crawler to visit the mobile version of the page indicated in the tag, so that it could also be crawled and indexed. The mobile version of the page avoided being counted as duplicate content and passed its SEO value back to the desktop version of the page via the ‘rel=canonical’ tag, which linked to the desktop version.
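For reference, the bidirectional annotation described above looks roughly like this on the two pages (the www/m. example.com URLs are placeholders):

    <!-- On the desktop page, e.g. https://www.example.com/page -->
    <link rel="alternate" media="only screen and (max-width: 640px)"
          href="https://m.example.com/page">

    <!-- On the mobile page, e.g. https://m.example.com/page -->
    <link rel="canonical" href="https://www.example.com/page">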

That process was error-prone and resource-intensive, so to minimize the problem, Google began actively encouraging webmasters to update their websites to Responsive-Design. Since the guideline was to build one Responsive website that would work on both mobile and desktop devices, Google retired the mobile-specific crawler and started crawling everything (desktop and mobile content) with a smartphone user-agent. (At this point crawling was mobile, but indexing was still based on old desktop standards.) While Responsive-Design solved some of the complexity in crawling and indexing, it added new problems.
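As a quick illustration of what ‘Responsive’ means at the code level, a single page adapts itself with a viewport declaration and CSS media queries; a minimal sketch (the class name and 640px breakpoint are arbitrary choices for illustration) looks like this:

    <meta name="viewport" content="width=device-width, initial-scale=1">

    <style>
      .sidebar { float: right; width: 30%; }
      /* On screens narrower than 640px, stack the sidebar under the main content */
      @media only screen and (max-width: 640px) {
        .sidebar { float: none; width: 100%; }
      }
    </style>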

Using CSS media queries and JavaScript to adapt content from one URL to fit many different screen sizes tended to add dramatically to the latency of most websites – especially on mobile phones, where the number of round-trip requests to the server for the various pieces of content embedded in the HTML slowed the experience down considerably. Since then, Google has focused its mobile SEO and development guidelines on speed, attempting to teach and persuade webmasters to build clean and efficient sites with strong Critical Path Rendering in mind. (Critical Path Rendering is the practice of strategically ordering internal and external page elements to optimize load time.) The goal was to make sites faster and more usable, even on slow mobile networks. Eventually, though, Google recognized that consistently great Critical Path Rendering was beyond the reach of most webmasters and even skilled development teams, especially once Responsive-Design became part of the standard requirements; pages became larger and slower because of all the code needed to accommodate the extra variables that make a page Responsive.
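A common way to improve Critical Path Rendering is to inline only the CSS needed to paint the first screen and load everything else without blocking that first paint; a simplified sketch (the file names are placeholders, not a definitive implementation) might look like this:

    <head>
      <!-- Inline only the styles needed for above-the-fold content -->
      <style>/* critical styles here */</style>

      <!-- Fetch the full stylesheet early, then apply it without blocking the first paint -->
      <link rel="preload" href="/css/site.css" as="style"
            onload="this.onload=null;this.rel='stylesheet'">

      <!-- Defer non-essential JavaScript until after the HTML has been parsed -->
      <script defer src="/js/app.js"></script>
    </head>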

Thanks to Louis Holzman for the Illustration showing the difference between a web site and a web app: https://www.linkedin.com/pulse/web-application-vs-website-whats-difference-louis-holzman

Over time, Google also discovered that even when strong Critical Path Rendering was in play, mobile and desktop renderings of a Responsive-Design page still struggled with serious caching and compression problems, often caused by the CDN (Content Distribution Network). These caching and compression problems were often difficult for SEOs and development teams to address, and some were so significant that they made all of the Critical Path Rendering work nearly irrelevant anyway. While some smart webmasters began using advanced loading and rendering hints to help speed things up (pre-caching, pre-fetching and pre-rendering), most were simply creating slower websites.
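Pre-fetching and pre-rendering are generally implemented as simple hints in the page head (pre-caching is usually handled separately by a service worker); for example, with placeholder URLs:

    <!-- Resolve DNS and open a connection to a third-party host early -->
    <link rel="preconnect" href="https://cdn.example.com">

    <!-- Fetch a resource the user is likely to need next, at low priority -->
    <link rel="prefetch" href="/next-article.html">

    <!-- Fetch and render a likely next page in the background -->
    <link rel="prerender" href="https://www.example.com/next-article.html">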

That brings us closer to the present, when users are demanding increasingly fast websites that look and behave more like their speedier native-app cousins. They want ‘mobile web apps’ instead of ‘mobile web sites.’ These demands put a massive strain on developers, because all of the JavaScript required to make a website look and feel more like an app creates even more delay in the load time of the site. To off-load some of the code, webmasters are turning to selective-serving and progressive-rendering, which leave some page content on the server until it is requested by the user. JavaScript and HTML5 are used to request the content from the server, but it is only retrieved as needed. This helps improve the load time of the site, but since this framework does not require unique content to have a unique URL, mobile web apps that are built without care for SEO can be very difficult for Google to index. It is the interplay of all of these complex elements that has created the need for Mobile-First Indexing.
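To make the indexing challenge concrete, here is a heavily simplified sketch of the selective-serving pattern, assuming a hypothetical /api/section endpoint – additional content is pulled from the server and injected into the page without any new URL being requested:

    <div id="more-content"></div>
    <button id="load-more">Load more</button>

    <script>
      // The extra content stays on the server until the user asks for it; the page URL
      // never changes, so a crawler that only follows URLs may never see what is injected.
      document.getElementById('load-more').addEventListener('click', function () {
        fetch('/api/section?offset=10')  // hypothetical endpoint
          .then(function (response) { return response.text(); })
          .then(function (html) {
            document.getElementById('more-content').insertAdjacentHTML('beforeend', html);
          });
      });
    </script>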

Fundamentals of Mobile-First Indexing

Mobile-First Indexing is Google’s next attempt to improve and standardize web development by incentivizing the change with a potential increase in search rankings. It is significant because, historically, Google has based all of its indexing evaluations and most of its ranking evaluations on the desktop version of a page. While most algorithm changes relate to positive or negative ranking factors, Mobile-First Indexing is different because it alters the pre-algorithm evaluations of a page. Mobile-First Indexing will not generate a new, separate Mobile-First index; instead, it will change how content is added to the existing index. To put it simply: if your new web pages do not pass this initial evaluation, they may not be included in Google’s index at all, and a webpage that is not in Google’s index has no chance to rank.

Don’t fret, though. Google won’t overwrite well-indexed desktop content with mobile content that has fewer ranking signals and less information. That would be mobile-ONLY indexing, which is not what Google is announcing. If your website is indexed now, it will probably remain in the index, especially if it is part of the 80% of the web that met the basic requirements to receive the now-hidden Mobile-Friendly designation. As a reminder, to be Mobile-Friendly, your content must have crawlable JavaScript and CSS, and (though this is ironically emphasized less in Google’s Mobile-Friendly tool evaluation) the content should be fast, properly sized, usable and engaging when it is presented on a mobile phone. While the label is no longer presented in SERPs, the requirements remain and are likely quite important for Mobile-First Indexing.

Mobile-First Indexing may also be an indication that Google is becoming less dependent on traditional links and HTML URLs for ranking. Remember, mobile SEO strategies have never focused on links or link building, because rankings were driven primarily by the desktop site, where it was easier and more logical for developers to attract links. Mobile-First Indexing may allow the absolute location of content to be more nebulous and open to interpretation. This concept will be discussed more in the second and final articles of this series.

Mobile-First Updates for Existing Websites

For now, SEOs who want to update an existing website to ensure that it will remain indexed and relevant after the launch of Mobile-First Indexing face a straightforward challenge: whatever the historic mobile development option, the main thing SEOs must do is make sure that mobile devices receive all of the essential ranking and indexing instructions that have historically been included on the desktop pages. This especially means title tags, description tags, hreflang tags, canonical tags and Schema.org markup; it also includes OG tags and Twitter Cards, links to XML sitemaps and media sitemaps, robots instructions (on-page metas and robots.txt) and probably even links to a privacy policy page. The task may also involve verifying the website’s relationship with app-specific assets, such as its links to app association files, so that deep links can be indexed and executed correctly.
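As a checklist in markup form, the head of a mobile-served page would need to carry roughly the following – every URL and value below is a placeholder for illustration, not a recommendation for specific content:

    <head>
      <title>Example Product Page</title>
      <meta name="description" content="Short, accurate summary of the page.">
      <meta name="robots" content="index, follow">

      <link rel="canonical" href="https://www.example.com/product">
      <link rel="alternate" hreflang="en-us" href="https://www.example.com/product">
      <link rel="alternate" hreflang="de-de" href="https://www.example.com/de/product">

      <!-- Social sharing tags -->
      <meta property="og:title" content="Example Product">
      <meta name="twitter:card" content="summary_large_image">

      <!-- Structured data -->
      <script type="application/ld+json">
        {"@context": "https://schema.org", "@type": "Product", "name": "Example Product"}
      </script>
    </head>

(XML sitemap locations are typically referenced in robots.txt or submitted directly in Google Search Console rather than in the page head.)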

The amount of work required will vary based on how the site has historically handled mobile traffic. If the site is currently built in Responsive-Design, minimal changes, if any, may be required. However, if the site is still built with separate mobile URLs on an ‘m.’ subdomain, or if it uses selective, dynamic or adaptive serving to send content to mobile devices, there will be more work to do, especially in reconciling the different versions of the content. The best tools for verifying an individual page or template are Google’s Mobile-Friendly tool and Chrome Developer Tools – viewing the source code in the device simulators and using ‘Inspect Element’ gives a clear picture of what might be captured during a crawl. For a large-scale investigation, SEOs should lean heavily on information from Google Search Console and from tools like Screaming Frog, which let you crawl a site while emulating Google’s smartphone crawler and extract the tags that must be verified on each page.

Chrome Dev Tools, Evaluating a Responsive-Design Site: http://www.girliemac.com/blog/2016/08/16/developer-experience-matters/

Conclusion

Google has transitioned from launching updates that create significant, jarring changes to rolling out all changes gradually, over a period of time, without major disruption. The impact might not be as obvious right away, but Mobile-First Indexing is sure to have a profound impact on SEO and the rest of the digital landscape over time. This article is the first in a three-part series focused on Mobile-First Indexing, and it has focused on the immediate updates that SEOs must consider in their strategy to bolster an existing website. The next article in the series will focus on the larger impact that Mobile-First Indexing will have on SEO strategies and the palpable shift towards the dissolution of ties between URLs, content and the device on which the content is presented. It will also discuss the algorithmic shift towards Schema, feeds and cloud hosting. The final article in the series will focus on the new group of Mobile-First site options that Google is favoring, presumably to help SEOs and webmasters achieve strong, long-term Mobile-First Indexing. The world is going mobile, and these strategies will ensure that your SEO efforts can keep up!

If you’d like to talk more about Mobile-First indexing, reach out to Cindy or the MobileMoxie team.