
How Mobile-First Indexing Disrupts the Link Graph


It's happened to every one of us: you open a web page on your mobile device, only to discover that a feature you're used to on desktop isn't available on mobile. Frustrating as that is, it has always been a challenge for web designers and developers to make their sites leaner and more compact for mobile devices without removing elements or content that would otherwise crowd a smaller screen. The usual compromise is that some features are reserved for the desktop environment, or users are given the option to switch off the mobile version. Here is an illustration of how my blog renders its mobile version using a popular ElegantThemes plugin named HandHeld. As you can see, the site is stripped of nearly everything non-essential and is much easier to take in... but at what cost? And at what cost to the link graph?

The mobile version of my personal blog drops all 75 of its links, the majority of which are external. So what happens when mobile versions of sites become the primary lens through which the web is seen at enormous scale, by the bots that power the major search engines?

Google's announcement that it will roll out a mobile-first index raises new questions about how the link structure of the web as a whole will be affected when truncated mobile versions of pages become the first (and often the only) version of the web Googlebot encounters.

What’s the significance?
The problem (one Google's engineers have no doubt studied internally as well) is that mobile-friendly sites frequently remove links and content to provide a better user experience on smaller screens. This trimmed-down content radically alters the link structure of the web, which is one of the primary inputs to Google's rankings. The goal of this study is to measure how large that effect might be.

Before we begin, there is one major unknown variable I want to flag: we don't know what percentage of the web Google will crawl with both its mobile and desktop bots. Perhaps Google will go "mobile-first" only on sites that have historically served the same codebase to both the desktop and mobile variants of Googlebot. For the purposes of this study, I want to illustrate the worst-case scenario: what would happen if Google chose not merely to be "mobile-first," but "mobile-only."

Methodology: Comparing mobile to desktop at scale
For this study, I crawled 20,000 pages randomly selected from the Quantcast Top Million, going two levels deep and spoofing both the mobile and desktop versions of Googlebot. With this data, we can begin to see how differently the link structure of the web might look to each.
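As a rough sketch of the comparison step, the snippet below fetches a single page as each bot and diffs the links it finds. It assumes the requests and beautifulsoup4 libraries; the two user-agent strings are the commonly published desktop and smartphone Googlebot strings from around that time, and the example URL is a placeholder, not one of the actual sampled sites.

```python
# A minimal sketch of the per-page comparison, not the full crawler.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

DESKTOP_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
MOBILE_UA = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile "
             "Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")

def extract_links(url, user_agent):
    """Fetch a page as the given bot and return the set of absolute link URLs."""
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    return {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)}

url = "https://example.com/"  # placeholder for one of the 20,000 sampled homepages
desktop_links = extract_links(url, DESKTOP_UA)
mobile_links = extract_links(url, MOBILE_UA)

print(len(desktop_links), len(mobile_links))  # link counts per bot
print(sorted(desktop_links - mobile_links))   # links only the desktop bot sees
```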

Homepage metrics
Let's start with some basic metrics about the homepages of these 20,000 randomly selected sites. Of the sites studied, 87.42% had the same number of links on their homepage regardless of whether the bot was desktop or mobile. Of the remaining 12.58%, 9% had fewer links on mobile and 3.58% had more. Not too surprising at first glance.


Perhaps more importantly, only 79.87% of sites served the same links on the homepage to the mobile and desktop bots. Finding the same number of links doesn't mean they were the identical links. This matters because links are the paths bots use to discover content: different paths mean a different index.
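To make that distinction concrete, here is a toy illustration (the paths are hypothetical) of two crawls agreeing on the count while disagreeing on the links themselves:

```python
# Hypothetical link sets: same size, different membership.
mobile_links  = {"/about", "/blog", "/contact"}
desktop_links = {"/about", "/blog", "/partners"}

print(len(mobile_links) == len(desktop_links))  # True: same number of links
print(mobile_links == desktop_links)            # False: not the same links
print(desktop_links - mobile_links)             # {'/partners'}: invisible to the mobile bot
```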

Among the pages we examined, we saw a 7.4% decrease in external links. This could represent a dramatic shift in the most important links on the web, since homepage links typically carry significant link equity. Interestingly, the biggest "losers" as a percentage tended to be social sites. It's plausible that one of the most common types of links a site removes from its mobile version is the social share button, since those buttons are typically built into the "chrome" of a page rather than its content, and that chrome is often reworked for mobile.

The biggest losers, in percentage order, were:

linkedin.com
instagram.com
twitter.com
facebook.com
So what's the big deal about a 5-15% difference in the links found when crawling the web? Admittedly, these numbers tend to be biased toward sites without mobile versions that carry large numbers of links, most of which are primary navigational links. Dig one level deeper and you'll often find the same links again; but the paths that do diverge lead to completely different second-level crawls.

Second-level metrics
This is where the data starts to get fascinating. As we crawl the web with crawlsets seeded by what the mobile bot found versus what the desktop bot found, the results diverge more and more. How far do they diverge? Let's start with size. While we crawled the same number of homepages, the second-tier results differed based on the links found on those homepages. The mobile crawlset contained 977,840 unique URLs; the desktop crawlset contained 1,053,785. Already we can see the outline of a different index forming: the desktop index appears to be substantially larger. Let's dig deeper.

Take a minute to focus on the graph below. You'll notice there are three types of bars:

Mobile Unique: Blue bars represent items found only by the mobile bot
Desktop Unique: Orange bars represent items found only by the desktop bot
Shared: Gray bars represent items found by both bots
Also note the four metrics measured (the set arithmetic behind these bar categories is sketched after this list):

Number of URLs discovered
Number of Domains discovered
Number of Links discovered
Number of Root Linking Domains discovered
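Each bar category falls out of plain set arithmetic over the two crawlsets. A minimal sketch, assuming each crawlset has been reduced to a Python set (the same partition applies whether the items are URLs, domains, links, or root linking domains):

```python
def partition(mobile, desktop):
    """Split two crawlsets into the three bar categories from the chart."""
    return {
        "mobile_unique":  len(mobile - desktop),  # blue bars
        "desktop_unique": len(desktop - mobile),  # orange bars
        "shared":         len(mobile & desktop),  # gray bars
    }

# Run once per metric: URLs, domains, links, root linking domains.
```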
Here's the main takeaway, and it's huge: there are more URLs, domains, links, and root linking domains unique to the desktop crawl result than there are shared between the desktop and mobile crawlers. The orange bar is taller than the gray bar in every test. This means that by the second level of the crawl, the majority of URLs, domains, and links were not shared across the two indexes. This is a major shift in the link graph as we know it.

Now for the most important question: what about what we care about most, external links?

An astounding 63% of external links were unique to the desktop crawl. In a mobile-only crawling world, the number of external links discovered was roughly cut in half.

What happens at the micro level?
So what is actually behind this massive crawl disparity? We know it comes down to a few typical shortcuts for making a site "mobile-friendly," which include:

Serving the content on dedicated subdomains with fewer features or less content
Removing links and features via user-agent detection plugins (a generic sketch follows this list)
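Here is a hedged sketch of what the second shortcut amounts to server-side. This illustrates the pattern only; it is not the actual code of any particular plugin, and the function and variable names are invented for the example:

```python
import re

MOBILE_UA_PATTERN = re.compile(r"Mobile|Android|iPhone", re.IGNORECASE)

def render_page(user_agent, content_html, sidebar_html, footer_html):
    """Illustrative user-agent detection: mobile visitors (and mobile bots)
    get the content only, so every link in the sidebar and footer simply
    never reaches a mobile crawler."""
    if MOBILE_UA_PATTERN.search(user_agent):
        return content_html                            # stripped mobile page
    return content_html + sidebar_html + footer_html   # full desktop page
```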
Sure, these changes may improve the experience for your visitors, but they present an entirely different picture to bots. Let's take a close look at one site to see how this plays out.

The site has 10,000 pages according to Google, a Domain Authority of 72, and 22,670 referring domains according to the brand-new Moz Link Explorer. However, the site uses a popular WordPress plugin that trims the mobile site down to just its posts and pages, removing links from category-page descriptions and stripping most of the links from the sidebar and footer. This plugin is in use on more than 200,000 sites. What happens when we run a four-level-deep crawl with Screaming Frog? (It's ideal for this kind of analysis, since we can easily change the user-agent and restrict the crawl to HTML pages.)

The contrast is striking. First, look at the mobile crawl on the left: there are clearly fewer links per page, and the number of links stays consistent as you move deeper into the site, producing a steady, gradual growth curve. Also notice that the crawl abruptly ended at level four: the site simply had no more pages to offer the mobile crawler. Only 3,000 of the 10,000 pages Google reports were found.

Now look at the desktop crawl. It explodes with pages at the second level, with nearly double the pages of the mobile crawl at that level alone. Recall the earlier graph showing more unique desktop pages than shared pages across the 20,000-site crawl; here is confirmation of how that happens. Ultimately, 6x more content was available to the desktop crawler at the same crawl depth.
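A depth-limited comparison like this one can be approximated with a simple breadth-first walk. The sketch below reuses the extract_links helper from the earlier snippet; it only approximates what Screaming Frog does, and the startswith same-site check is deliberately crude:

```python
def pages_per_level(start_url, user_agent, max_depth=4):
    """Breadth-first crawl counting newly discovered same-site pages per level."""
    seen = {start_url}
    frontier = [start_url]
    counts = []
    for _ in range(max_depth):
        next_frontier = []
        for url in frontier:
            for link in extract_links(url, user_agent):
                if link.startswith(start_url) and link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        counts.append(len(next_frontier))
        frontier = next_frontier
    return counts

# Comparing pages_per_level(site, MOBILE_UA) against pages_per_level(site, DESKTOP_UA)
# reproduces the two growth curves described above.
```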

What impact did this have on external outbound links?

Wow. 75% of the external outbound links were dropped from the mobile version: 4,905 external links were found on desktop, while only 1,162 were discovered on mobile. Remember, this is a DA 72 site with over twenty thousand referring domains. Imagine losing a link from a site like this simply because the mobile index no longer discovers the page carrying it. So what do we do? Is the sky falling?
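For the record, that percentage follows directly from the two link counts:

```python
desktop_external = 4905
mobile_external = 1162
drop = (desktop_external - mobile_external) / desktop_external
print(f"{drop:.1%}")  # 76.3%: roughly three quarters of outbound external links gone
```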

Take a deep breath
Mobile-first doesn’t mean mobile-only.
The most important caveat to these findings is that Google is not abandoning the desktop; it is merely prioritizing the mobile crawl. That makes sense, since the majority of search traffic is now mobile. If Google wants to ensure that high-quality mobile content is surfaced, it needs to shift its crawl priorities accordingly. But Google also wants to find content, and for now that still requires a desktop crawler, as long as webmasters keep trimming down the mobile versions of their sites.

Google is not blind to this reality. In the original official mobile-first indexing announcement, they wrote...

Google went out of its way to say that a desktop version can be better than an "incomplete mobile version." I won't read too much into this statement, other than to say that Google wants the full mobile version, not a postcard.

The best link placements will win
An anecdotal finding of my research was that the external links that tended to survive the cull of a mobile version were those placed directly within the content. External links in sidebars, such as blogrolls, were effectively wiped out, while in-content links survived. This may itself be a signal Google can detect: external links that persist across desktop and mobile are likely the kind of links users actually click.

So even though there may be fewer links in the link graph (or at least in a specially recognized subset of it), if your links are high-quality and content-based, you are likely to see improved performance.

I was able to test this by looking at a subset of high-quality sites. Using Fresh Web Explorer, I looked up fresh links pointing to toysrus.com, which has been receiving substantial attention because of its store closures. Most of these links should be in-content, since the articles concern current, newsworthy events around Toys R Us. Sure enough, after checking more than 300 mentions, the links appeared identically in both desktop and mobile crawls. These were good, content-based links, and as such they were visible in both versions.
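A spot-check like this is easy to script: for each mentioning page, fetch it as both bots and test whether the backlink survives. A sketch, reusing extract_links and the user-agent constants from the earlier snippet (the example URL is hypothetical; the real mention URLs came from Fresh Web Explorer):

```python
def link_survives(mention_url, target_domain):
    """Return whether a link to the target appears in the desktop and mobile
    renderings of a page that mentions it."""
    on_desktop = any(target_domain in l for l in extract_links(mention_url, DESKTOP_UA))
    on_mobile = any(target_domain in l for l in extract_links(mention_url, MOBILE_UA))
    return on_desktop, on_mobile

# e.g. link_survives("https://news.example.com/toys-r-us-closures", "toysrus.com")
```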

Selection bias and convergence
It is probably true that high-traffic sites are more likely to have mobile versions than less prominent sites. Many of those will be responsive, in which case they show no meaningful crawl differences, but at least some will use m.* subdomains or plugins like those mentioned above that trim content. Further down the web, older and less professionally maintained content is likely to serve a single version to desktop and mobile alike. If that is the case, we could expect the two indexes to converge over time rather than diverge, especially since my study only looked at sites in the top million and only crawled two levels deep.

Furthermore (and this is speculation), I believe the desktop and mobile indexes will converge over time. I don't expect the link graph to shrink dramatically, since the web is just as big either way. Instead, the routes by which certain pages are reached, and the frequency with which they are reached, will change, perhaps only modestly. So while the structure of the link graph will change, the set of URLs making up that graph should remain similar. Of course, some portion of the mobile-friendly web will stay completely different: sites that use dedicated mobile subdomains, or plugins that strip out large portions of content, will be mobile islands in the linked web.

Impact on SERPs
It is hard at this point to determine what the impact on search results will be. It certainly won't be nothing: what would be the point of Google announcing a change to its indexing methodology if it made no difference to the SERPs?

This study wouldn't be complete without some kind of impact assessment. Thanks to JR Oakes for this critique, or I would never have thought to check.

There are a few things that could prevent significant changes to the existing SERPs, regardless of what this study found:

A slow rollout would make changes to the SERPs indistinguishable from the natural ranking fluctuations we already see.
Google could seed URLs discovered by the mobile crawler into the desktop crawler, and vice versa, thereby reducing index divergence. (This is a big one!)
Google could base its link metrics on the union of the desktop and mobile crawls, rather than excluding either one.
Additionally, the relationships between domains may matter more than raw index metrics. What is the chance that the relationship between Domain X and Domain Y (one having more or fewer links than the other) is the same in both the desktop and mobile indexes? If those relationships hold, the impact on SERPs should be minimal. We'll call this property being "directionally consistent."

To complete this part of the study, I took a random sample of domain pairs from the mobile index and compared their relationships (more or fewer links) against the same pairs in the desktop index. Did the first domain have more links than the second in both the desktop and mobile indexes, or did the two indexes disagree?

It turned out that the indexes were quite similar in terms of directional consistency. Although the link graphs as a whole were quite different, randomly compared domain pairs tended to keep the same ordering: around 88% of the compared pairs were directionally consistent across the two indexes. (Note that this test only compared mobile-index domains against their desktop-index counterparts; future research might examine the reverse relationship.)
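Directional consistency is straightforward to state in code. A sketch, under the assumption that per-domain link counts from each index are available as plain dicts keyed by domain:

```python
import random

def directional_consistency(mobile_counts, desktop_counts, sample_pairs=10000):
    """Fraction of sampled domain pairs whose ordering (more vs. fewer links)
    agrees between the mobile and desktop indexes; ties are skipped."""
    domains = list(set(mobile_counts) & set(desktop_counts))
    agree = total = 0
    for _ in range(sample_pairs):
        x, y = random.sample(domains, 2)
        m = mobile_counts[x] - mobile_counts[y]
        d = desktop_counts[x] - desktop_counts[y]
        if m == 0 or d == 0:
            continue
        total += 1
        if (m > 0) == (d > 0):
            agree += 1
    return agree / total if total else 0.0
```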

What's next? Moz and the mobile-first index
The goal of Moz's link index has always been to mirror Google as closely as we can. With that in mind, we are exploring the mobile-first index as well. Our brand-new link index and Link Explorer (in beta) aims to be not just one of the largest link indexes on the web, but the most relevant and useful one, and we believe a big part of that is shaping our index with techniques similar to Google's. We'll keep you posted!
