
How Mobile-First Indexing Disrupts the Link Graph


We've all experienced it: you open a website on your smartphone, only to discover that a feature you rely on from the desktop version isn't available on mobile. Frustrating as that is, it has long been a challenge for web designers and developers to make their sites leaner and more compact for mobile devices without eliminating elements or content that would otherwise clutter a smaller display. The worst-case outcome of these trade-offs is that certain features are reserved for desktop users, or that a user must deliberately opt out of the mobile version. Here is an illustration of how my blog renders its mobile version using the popular ElegantThemes plugin known as HandHeld. As you can see, the page has been stripped of its bulk and is much easier to read… but at what price? What is the cost to the link graph?

My personal blog loses all 75 of its links, including every hyperlink to external sites, once it is accessed via the mobile version. What happens when mobile versions of websites become the primary way the web is accessed, at enormous scale, by the bots that power the major search engines?

Google's announcement of a mobile-first index raises new questions about how the link structure of the web as a whole might be affected once these truncated web experiences become the first (and at times the only) version of the web that Googlebot encounters.

What’s the significance?
The problem, which Google's engineers have no doubt studied in-house as well, is that mobile-friendly websites frequently remove links and content to improve the user experience on smaller screens. That abbreviated content substantially alters the link structure, which is one of the primary ingredients in Google's rankings. The goal of this study is to try to understand the impact this may have.

Before we begin, one huge unknown is worth flagging: we do not know what proportion of the web Google will crawl with its mobile bot versus its desktop bot. Perhaps Google will go "mobile-first" only on sites that have historically served the same codebase to both the desktop and mobile versions of Googlebot. For the purposes of this study, however, I want to present the worst-case scenario: what could happen if Google decided not merely to go "mobile-first," but literally to go "mobile-only"?

Methodology: Comparing mobile to desktop at scale
For this short study, I collected a random sample of web pages from the Quantcast top million. I then crawled each at least two levels deep, spoofing Googlebot with both its mobile and its desktop user agent. From this data we can examine how differently the link structure of the web might look to each bot.
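To make the spoofing step concrete, here is a minimal sketch of how one might fetch a page as each bot and compare the links found. It assumes the requests and beautifulsoup4 libraries are installed, and the user-agent strings are close approximations of Google's published crawler UAs rather than guaranteed-current values.

```python
import requests
from bs4 import BeautifulSoup

# Approximations of Google's published desktop and smartphone crawler UAs.
DESKTOP_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
              "+http://www.google.com/bot.html)")
MOBILE_UA = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
             "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")

def extract_links(url, user_agent):
    """Fetch `url` with the given user agent and return the set of hrefs."""
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    return {a["href"] for a in soup.find_all("a", href=True)}

desktop_links = extract_links("https://example.com", DESKTOP_UA)
mobile_links = extract_links("https://example.com", MOBILE_UA)
print(len(desktop_links), "desktop vs.", len(mobile_links), "mobile")
```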

Homepage metrics
Let's start with some basic statistics about the homepages of these randomly selected websites. Of the sites studied, 87.42 percent had the same number of hyperlinks on their homepage regardless of whether the bot was mobile or desktop. Of the remaining 12.58 percent, 9 percent had fewer links and 3.58 percent had more. That isn't too surprising at first glance.

Perhaps more importantly, just 79.87 percent had the same links on the homepage when visited by the mobile and desktop bots. Just because the same number of links was found doesn't mean they were the same links. This is crucial to take into account, since links are the paths bots follow to discover content on the web. Different paths mean a different index.
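Continuing the hypothetical snippet above, the distinction between "same number of links" and "same links" is just the difference between comparing set sizes and comparing the sets themselves:

```python
# Equal counts do not imply equal links: set equality checks membership.
same_count = len(desktop_links) == len(mobile_links)
same_links = desktop_links == mobile_links

# Paths a mobile-only crawler would never discover on this page:
lost_paths = desktop_links - mobile_links
```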

Across these sites, we observed a 7.4 percent decrease in the number of external links. This could signal a dramatic change in some of the most significant hyperlinks on the web, since homepage links often carry substantial link equity. Interestingly, the biggest "losers" by percentage were social sites. In hindsight that makes sense: one of the most common kinds of links a site removes from its mobile version is social share buttons, since they typically sit in the "chrome" of a page rather than in the content, and that chrome often changes for a mobile-friendly version.

The biggest losers, in percentage order, were:

linkedin.com
instagram.com
twitter.com
facebook.com
So what's the significance of a 5-15% difference in the links you find when crawling the web? Admittedly, these figures tend to be biased toward heavily linked sites that don't offer mobile versions, and the majority of the affected links are primary navigational links. Dig deeper and you will mostly discover identical links; however, the crawls that do deviate end up producing completely divergent second-level crawls.

Second-level metrics
This is where the data starts to get interesting. When we crawl the web using crawl sets seeded by what the mobile bot discovered versus what the desktop bot discovered, the results grow more and more dissimilar. How far apart do they get? Let's start with size. Although we crawled the same set of homepages, the second-tier results varied based on the links found on those homepages. The mobile crawl set contained 977,840 unique URLs, while the desktop crawl set contained 1,053,785. Already we can see a different index forming: the desktop index would be substantially larger. Let's dig deeper.

Take a few minutes to focus on this graph. There are three types of bars:

Mobile Unique: blue bars represent items discovered only by the mobile bot
Desktop Unique: orange bars represent items discovered only by the desktop bot
Shared: gray bars represent items discovered by both
Also note that four metrics are measured (a sketch of how to compute these categories follows the list):

The number of URLs found
The number of domains found
The number of links discovered
The number of root linking domains discovered
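For what it's worth, reproducing the three bar categories from two crawl result sets is a straightforward set computation. Here is a sketch, where mobile_urls and desktop_urls stand in for the URL sets each bot collected:

```python
from urllib.parse import urlparse

def compare_crawls(mobile_urls, desktop_urls):
    """Split two crawl result sets into the three bar categories."""
    shared = mobile_urls & desktop_urls
    mobile_unique = mobile_urls - desktop_urls
    desktop_unique = desktop_urls - mobile_urls
    return mobile_unique, desktop_unique, shared

def to_domains(urls):
    """Reduce URLs to hosts so the same comparison works for domains."""
    return {urlparse(u).netloc for u in urls}

# Same function, different granularity:
# compare_crawls(to_domains(mobile_urls), to_domains(desktop_urls))
```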
Here is the most important issue, and it's a big one: there are more URLs, domains, links, and root linking domains exclusive to the desktop crawl result than there are shared between the mobile and desktop crawls. The orange bars are larger than the gray bars. That means that by the second level of the crawl, the majority of links, URLs, and domains are different between the two indexes. This is a huge shift in the link graph we have grown accustomed to.

And the important question, the thing we really care about most: external links.

The majority of external links are exclusive to the desktop crawler. In a mobile-only crawling world, the number of external links discovered dropped by half.

What’s happening on the micro-level?
What's causing this massive crawl disparity? We know it has to do with a couple of common shortcuts for making a website "mobile-friendly," including:

Serving content on dedicated mobile subdomains, which are less well-known and have fewer links pointing at them
Removing features and links via user-agent detection plugins
Sure, these modifications may improve the experience for users, but they create a completely different experience for bots. Let's study one site to see how this plays out.
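To illustrate the second shortcut, here is a deliberately simplified sketch of the user-agent detection pattern such plugins follow. This is illustrative Python, not the actual HandHeld or WordPress plugin code:

```python
# Hypothetical mobile-detection render path; not any real plugin's code.
MOBILE_TOKENS = ("Mobile", "Android", "iPhone")

def render_page(headers, content_html, sidebar_html, footer_html):
    ua = headers.get("User-Agent", "")
    if any(token in ua for token in MOBILE_TOKENS):
        # Mobile template: content only. Every sidebar and footer link
        # vanishes for mobile visitors *and* for a mobile crawler.
        return content_html
    return content_html + sidebar_html + footer_html
```

Notice that Googlebot's smartphone user agent contains both "Android" and "Mobile," so it would receive the stripped template just like a human mobile visitor.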

The site has 10,000 pages according to Google, a Domain Authority of 72, and 22,670 referring domains according to the brand-new Moz Link Explorer. However, the site uses a well-known WordPress plugin that strips the mobile version down to just the pages and articles on the site, removing hyperlinks from article description pages and removing most or all links from the sidebar and footer. The plugin is in use on more than 200,000 sites. What happens when we start a deep, multi-level crawl using Screaming Frog? (It's ideal for this kind of analysis, since it's easy to change the user agent and restrict the settings to crawl HTML content only.)
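For readers who would rather script this than use a GUI crawler, here is a rough sketch of the same kind of depth-limited crawl, tallying how many new on-site pages each level uncovers. It reuses the hypothetical extract_links() helper from the earlier snippet and makes no attempt at politeness (rate limiting, robots.txt), which a real crawl would need:

```python
import requests
from urllib.parse import urljoin, urlparse

def crawl_by_level(start_url, user_agent, max_depth=4):
    """Breadth-first crawl, returning new on-site pages found per level."""
    host = urlparse(start_url).netloc
    seen = {start_url}
    frontier = [start_url]
    pages_per_level = []
    for depth in range(max_depth):
        next_frontier = []
        for url in frontier:
            try:
                hrefs = extract_links(url, user_agent)
            except requests.RequestException:
                continue  # skip pages that fail to fetch
            for href in hrefs:
                link = urljoin(url, href)
                if urlparse(link).netloc == host and link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        pages_per_level.append(len(next_frontier))
        frontier = next_frontier
    return pages_per_level
```

Running this once with the mobile UA and once with the desktop UA yields two growth curves like the ones described below.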

The contrast is striking. First, look at the mobile crawl on the left: there is clearly a very low number of links per page, and the number of links remains constant as you go deeper into the site, producing a steady, constrained growth curve. Also note that the crawl stopped abruptly at the fourth level: the site had no more pages to give the mobile crawler! Only 3,000 of the pages Google has reported were found.

Now look at the desktop crawl. It explodes with pages at the second level, nearly doubling the page count at that level alone. Remember the earlier graph showing more unique desktop pages than shared pages when we crawled 20,000 websites? This confirms the same pattern on a single site. In the end, 6x the content was made available to the desktop crawler at the same crawl depth.

What impact did this have on external links?

Wow: 75 percent of the outbound external links were eliminated in the mobile version. 4,905 external links were found on the desktop version, while only 1,162 were discovered on mobile. Bear in mind that this is a DA 72 site with over twenty thousand referring domains. Imagine losing a link from a site like this because the mobile index could no longer find the backlink. What do we do? Is the sky falling?
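Counting external links in a crawl like this is simple once you have the URL sets. A naive sketch follows; note that it compares bare hosts, so subdomains like www would need normalizing in a real analysis:

```python
from urllib.parse import urlparse

def count_external(urls_found, site_host):
    """Count links whose host differs from the site being crawled."""
    return sum(1 for u in urls_found
               if urlparse(u).netloc not in ("", site_host))
```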

Take a deep breath
Mobile-first doesn’t mean mobile-only.
The primary caveat to all of these findings is that Google is not abandoning the desktop; it is merely prioritizing the mobile crawl. That is logical, considering that the majority of search traffic is now mobile. If Google intends to ensure that good mobile content is served, it needs to shift its crawl priorities accordingly. But Google still wants to find content, and that requires a desktop crawler for as long as webmasters keep serving cut-down mobile versions of their sites.

None of this is hidden from Google. In the original official Google mobile-first announcement, they wrote…

Google went so far as to say that a desktop version could be preferable to an "incomplete mobile version." I won't read too much into that statement beyond this: Google wants a full mobile version, not a postcard.

The best link placements will prevail
One interesting outcome of my study was that the external links that survived the move to mobile versions were typically placed directly within the content. External links in sidebars, such as blog-rolls, were essentially wiped from the index, while in-content links remained. This may well be a signal Google detects: external links that survive on both desktop and mobile tend to be exactly the kind of links users might actually click.
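If you wanted to test this placement hypothesis on your own pages, a rough heuristic is to classify each link by its position in the DOM. The tag names below are an assumption about how themes commonly mark up page chrome; real sites vary widely:

```python
from bs4 import BeautifulSoup

# Heuristic: links inside these elements count as page "chrome".
CHROME_TAGS = {"aside", "footer", "nav", "header"}

def classify_links(html):
    """Map each href to 'chrome' or 'in-content' by its DOM ancestors."""
    soup = BeautifulSoup(html, "html.parser")
    placements = {}
    for a in soup.find_all("a", href=True):
        ancestors = {parent.name for parent in a.parents}
        placements[a["href"]] = ("chrome" if ancestors & CHROME_TAGS
                                 else "in-content")
    return placements
```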

Thus, even though there may be fewer links in the link graph overall (or at least in a specially recognized subset of it), if your links are high-quality, content-based ones, you stand a chance of seeing a performance boost.

I was able to verify this by examining a subset of excellent sites. Using Fresh Web Explorer, I looked up fresh links for toysrus.com, which is receiving a great deal of attention as its stores close. Most of these links are likely in-content, since the content itself is recent, relevant news about Toys R Us. After checking more than 300 mentions, we found the links to be identical in both the desktop and mobile crawls. They were good content links and, as a result, they were visible in both crawls.

Selection bias and convergence
It's probably true that high-traffic sites are more likely to offer a mobile version than less popular sites. However, they may be responsive, in which case they would produce no noticeable crawl differences; at the least, some proportion of them would likely use m.* domains or plugins like those listed above that strip content. On the lower rungs of the web, older and less professional content is more likely to have only a single version served identically to desktop and mobile devices. If that is the case, we may find that over time the indexes begin to converge rather than diverge, since my study only looked at sites in the top million and only crawled two levels deep.

Additionally (and this is pure speculation), I think there will be convergence between the desktop and mobile indexes over time. I don't believe the link graphs will drift apart exponentially, because the web is only so big. Rather, the routes by which certain pages are reached, and the frequency with which they're reached, will change quite a bit. So even though the link graphs will differ from one another, the collection of URLs making up those graphs will remain mostly similar. Of course, some portion of the mobile-friendly web will be completely different: sites that use dedicated mobile subdomains, or plugins that remove large sections of content, will become mobile islands within the linked web.

Impact on SERPs
It's difficult at this point to know what the impact on search results will be, but it's likely to change the SERPs. What would be the point of Google announcing an update to its indexing method if it didn't improve the SERPs?

This study wouldn't be complete without an impact assessment. Thanks go to JR Oakes for offering this critique; without him, I wouldn't have thought to look at it.

First, there are a couple of things that could blunt drastic changes to existing SERPs, regardless of the accuracy of this study:

A slow rollout means that changes to SERPs could be absorbed into the natural ranking fluctuations we already see.
Google could seed URLs found by mobile or desktop to the respective crawlers, limiting index divergence. (This is an important one!)
Google might decide to consider, for link-graph purposes, the union of the desktop and mobile crawls rather than one excluding the other.
Additionally, the relationships between domains may be less affected than the indexes themselves. What is the chance that the relationship between Domain X and Domain Y (one having more or fewer links than the other) is the same in both the desktop and mobile indexes? If those relationships tend to hold, the impact on SERPs would be minimal. We would describe this as being "directionally consistent."

To complete this part of the study, I gathered a random sample of domain pairs from the mobile index and compared their relationship (more or fewer links) with their relationship in the desktop index. Did the first domain have more links than the second in both the desktop and mobile indexes, or did the two indexes disagree?

It turned out that the indexes were quite similar in terms of directional consistency. This means that even though the link graphs as a whole differed greatly, any two random domains were likely to be directionally consistent across both data sets: about 88% of the domain pairs tested maintained the same direction across the indexes. Note that the test only compared mobile-index domains against desktop; future research might investigate the reverse relationship.
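For clarity, here is a small sketch of the directional-consistency measurement itself. The mobile_counts and desktop_counts mappings (domain to link count) are hypothetical stand-ins for the index data, not Moz's actual API:

```python
from itertools import combinations

def sign(a, b):
    """-1, 0, or 1 depending on whether a is less than, equal to, or greater than b."""
    return (a > b) - (a < b)

def directional_consistency(mobile_counts, desktop_counts, domains):
    """Share of domain pairs ordered the same way in both indexes."""
    agree = total = 0
    for x, y in combinations(domains, 2):
        total += 1
        if sign(mobile_counts[x], mobile_counts[y]) == \
           sign(desktop_counts[x], desktop_counts[y]):
            agree += 1
    return agree / total  # ~0.88 in the study described above
```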

What's next? Moz and the mobile-first index
Our aim for the Moz link index has always been to be as close to Google's as we can. With that in mind, we are exploring the mobile-first index as well. Our new link index and Link Explorer in beta aim to be not only one of the largest link indexes on the web, but also the most relevant and the most useful. We believe a large part of that is shaping our index with methods similar to Google's. We'll keep you posted!

 
