D5 Creation Blog


HTML JavaScript SEO: Server-Side Rendering and Client-Side Rendering


Table of Contents

  1. Introduction
  2. What is JavaScript SEO?
  3. The Ultimate Guide to JavaScript SEO
          a. Client-Side Rendering (CSR)
          b. Server-Side Rendering (SSR)
  4. What is Prerendering?
  5. How does Google handle JavaScript rendering and indexing?
  6. Making your JavaScript site SEO friendly
  7. How do I conduct an SEO audit of my JavaScript website?
  8. The Complexity of JavaScript Crawling and Rendering
  9. How Google Crawls a Video
          a. In the case of traditional HTML, everything is simple and direct
          b. Things get complex when it comes to a JavaScript-based site
  10. Don’t block CSS, JavaScript, and image files
  11. Does Google “count” links in JavaScript (JS)?
  12. Do SEO auditing tools render JavaScript (JS) the same way as Google?
  13. Conclusion

Introduction (HTML JavaScript SEO)

JavaScript is used to build a lot of websites these days.

Even though there is nothing inherently wrong with JavaScript, how we use it can have a big effect on how our users feel and how we rank in search results. Developers need to think about how JavaScript affects SEO, and SEOs and other digital marketers need to learn more about the technology that powers their content and website experiences.

Remember when most web pages were static and not interactive? Back then, websites mostly existed to promote a company's products and generate sales leads. Server-side rendering was the only option: the server assembled the HTML and delivered finished, user-friendly documents to the browser.

Today, things have changed dramatically. Websites are no longer just static pages with content; they're web apps. You can send messages, shop, and much more. Server-side rendering is giving way to client-side rendering, which keeps growing.

Which rendering approach should you use? Like most software development decisions, it depends on your website's purpose. Before deciding, consider the advantages and disadvantages of each.

Websites are no longer static content pages; today's websites are robust and dynamic, and Progressive Web Apps (PWAs) make them feel like native mobile apps. With SEO's growing importance, fast, responsive web pages are essential. When creating a website, you must choose a rendering technique, and each technique has pros and cons. In this article, we'll discuss server-side rendering (SSR) and client-side rendering (CSR).

What is JavaScript SEO?

JavaScript SEO is the practice of ensuring that content on a web page (generated via JS) is rendered, indexed, and ultimately ranked in Google or other search engines.

This is especially significant given the growing popularity of client-side rendering and JavaScript-framework-based websites.

The Ultimate Guide to HTML JavaScript SEO

JavaScript is a popular topic right now, and more and more websites are using its frameworks and libraries, such as Angular.js, React.js, Backbone.js, Vue.js, Ember.js, Aurelia, and Polymer.

There are two main ways a site's JavaScript can be rendered, and each affects how Google crawls and runs it.

      a. Client-Side Rendering (CSR):

Client-side rendering indicates that a website's JavaScript is rendered in your browser, as opposed to on the server.

According to Google's Martin Splitt, client-side rendering "is the default when using a JavaScript framework. This means that you send HTML and JavaScript to the browser, and JavaScript retrieves and assembles the content."

Consider client-side rendering as ordering IKEA furniture. IKEA does not ship pre-assembled furniture to your home. Instead, they send you the components that you must assemble at your residence.
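To make the analogy concrete, here is a minimal sketch of client-side rendering (the element ID and data shape are hypothetical): the server ships an empty HTML shell, and the browser's JavaScript assembles the visible content after the bundle loads.

```javascript
// What the server actually sends: an empty shell with no content.
// This is all a crawler sees before any JavaScript executes.
const shell = '<div id="root"></div>';

// What the browser's JavaScript produces after fetching the data
// (hypothetical shape: { title, body }) and rendering it into the shell.
function renderClientSide(data) {
  return `<div id="root"><h1>${data.title}</h1><p>${data.body}</p></div>`;
}

console.log(renderClientSide({ title: 'Hello', body: 'Assembled in the browser.' }));
```

Until that function runs, there is nothing in the page for a crawler to index.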

What are the benefits of client-side rendering?

As the client (i.e., the person or bot attempting to view your page) is solely responsible for rendering content, client-side rendering is the less expensive option for website owners, as it reduces the load on their servers.

It is also the default state for JavaScript websites, which makes client-side rendering easier for website owners than server-side rendering.

What are the risks of client-side rendering?

Client-side rendering has two significant disadvantages.

First, it increases the likelihood of a poor user experience. JavaScript can add seconds to a page's load time, and when this burden is placed entirely on the client (the website visitor), they may become frustrated and leave your site.

Second, because search engines must render your JavaScript before they can see your content, indexing can be delayed or incomplete.

      b. Server-Side Rendering (SSR):

Using the conventional rendering method, all of the page's resources are stored on the server. The HTML is then delivered to the browser and rendered, JS and CSS are downloaded, and the final render is displayed to the user/bot.

Server-side rendering indicates that JavaScript is rendered on the server of a website. To use the furniture analogy once more, this would be equivalent to ordering fully assembled furniture.

What are the benefits of server-side rendering?

Because JavaScript is rendered on the server, both search engine bots and humans get faster page load times. This not only results in a better UX (a factor in Google's ranking algorithm) but also avoids crawl budget problems related to site speed.

Sending search engine bots fully rendered pages eliminates the risk of "partial indexing" that can occur with client-side rendered content. When Google and other search engine bots attempt to access your page, rather than having to wait for rendering resources to become available prior to viewing the entire page, they will receive the fully rendered page immediately.

What are the risks of server-side rendering?

Server-side rendering can be resource-intensive and costly. It can be costly because your servers are responsible for rendering your content for both bots and human website visitors. Since it is not the default for JavaScript websites, it can be resource-intensive to implement and will require work from your engineering team. Additionally, server-side rendering typically does not work with third-party JavaScript.

What is Prerendering? (HTML JavaScript SEO)

If you are using a Single Page Application (SPA) for a website that does not require a login, SEO should be a top priority. Google suggests relying on its built-in capability for interpreting JavaScript applications, but our recommendation is not to follow that advice: in our experience it is frequently insufficient, and prerendering is often still necessary.

JavaScript is becoming the dominant language for developing web applications. Displaying all of your content matters, because ranking considers not only the relevance and quality of content but also whether it is visible within a reasonable amount of time.

Prerendering is the process of preloading all page elements in anticipation of a web crawler's visit. A prerender administration service will intercept a page request to determine whether the user-agent (client) viewing your website is a bot or spider. If the user-agent (client) is a bot, the prerender middleware will send a cached version of your website with all JavaScript, Images, etc. rendered statically.

Everything is loaded normally if the user agent (client) is not a bot; prerendering is only used to optimize the experience for bots.
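The bot check at the heart of that middleware can be sketched as follows (the user-agent substrings and function names are illustrative; real prerender services maintain much longer, regularly updated lists):

```javascript
// Hypothetical (abbreviated) list of crawler user-agent substrings.
const BOT_AGENTS = [
  'googlebot', 'bingbot', 'yandex', 'duckduckbot',
  'facebookexternalhit', 'twitterbot', 'linkedinbot',
];

function isBot(userAgent) {
  const ua = (userAgent || '').toLowerCase();
  return BOT_AGENTS.some((bot) => ua.includes(bot));
}

// The middleware decision: bots get the cached, statically rendered
// page; everyone else gets the normal JavaScript application.
function handleRequest(userAgent, servePrerendered, serveApp) {
  return isBot(userAgent) ? servePrerendered() : serveApp();
}
```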

[Image: how a prerender service routes bots and users. Source: Netlify]

When social network bots such as Facebook, Twitter, and LinkedIn feature links to your website, they read the Open Graph information from the site's metadata rather than a pre-cached version from prerendering.

The Main Differences between Server-Side Rendering (SSR) and Client-Side Rendering (CSR)

Server-side rendering can be somewhat faster on the initial request, primarily because it does not require as many round trips to the server. But initial response time is not the whole story; performance also depends on additional factors, any of which can drastically alter the experience:

  • The network speed of the requesting client
  • The number of active users visiting the site at any given time
  • The physical location of the server
  • How well pages are optimized for speed

Client-side rendering, by contrast, is slower on the initial request because it makes multiple round trips to the server. Once those requests complete, however, the JS framework provides a lightning-fast experience for subsequent navigation.

How does Google handle JavaScript rendering and indexing?

At Google I/O, Google disclosed its current two-wave process for JS rendering and indexing.

In the first wave, HTML and CSS are crawled and indexed almost immediately, any existing links are added to the crawl queue, and HTTP response codes are downloaded.

Google returns a few hours to a week later to render and index JavaScript-generated content as part of the second wave.

In general, SSR is not affected by Google's rendering delay because all content is present in the source code and indexed during the first wave. With CSR, where indexable content is only revealed at render time, this substantial delay means content may not be indexed or appear in search results for days or even weeks.


Making your JavaScript site SEO friendly (HTML JavaScript SEO)

It's possible that some processes will be familiar to SEOs, but there may be some minor differences.

On-page SEO

Content, title tags, meta descriptions, alt attributes, meta robots tags, and the other on-page SEO rules still apply.

A problem I've found when working with JavaScript websites is that titles and descriptions are frequently reused across pages, and image alt attributes are rarely set.

Allow crawling

Don’t block access to resources. Google needs to be able to access and download resources so that it can render the pages properly. The simplest way to allow crawling of the necessary resources is to add the following to your robots.txt file:
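A minimal version looks like this (a commonly recommended pattern; adapt the rules to your own site's structure):

```
User-Agent: Googlebot
Allow: .js
Allow: .css
```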


URLs

Change URLs when updating content. JavaScript frameworks come with a router that lets you map clean URLs via the History API. Using hashes (#) as a routing mechanism is not recommended: servers typically ignore anything after the #, so if you type abc.com/#something into a browser, the server only sees abc.com. Vue and some older versions of Angular are particularly prone to this issue. You can work with your developer to make the following change in Vue:

Vue router:

Instead of the default 'hash' mode, use 'history' mode.
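A configuration sketch for Vue Router (the routes array is hypothetical, and the server must be set up to fall back to index.html for these paths):

```javascript
import Vue from 'vue';
import VueRouter from 'vue-router';

Vue.use(VueRouter);

const router = new VueRouter({
  // 'history' mode uses the History API for clean URLs like /about
  // instead of the default 'hash' mode's /#/about.
  mode: 'history',
  routes: [{ path: '/', component: () => import('./Home.vue') }],
});

export default router;
```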

Duplicate content

If you use JavaScript, you may end up with multiple URLs for the same content. Capitalization differences, fragment IDs, and parameter IDs can all create duplicate versions of a page.

The solution is simple. Set canonical tags for the version you want to appear in search results.
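The normalization behind that decision can be sketched as a small helper (hypothetical rules; which variants you collapse depends on your site):

```javascript
// Collapse common URL variants (capitalization, parameter IDs,
// '#' fragments) onto the single form you put in the canonical tag.
function canonicalUrl(rawUrl) {
  const url = new URL(rawUrl);
  url.hash = '';                             // drop '#' fragment routing
  url.search = '';                           // drop parameter IDs
  url.pathname = url.pathname.toLowerCase(); // normalize capitalization
  return url.toString();
}
```

With these rules, abc.com/Page, abc.com/page?id=123, and abc.com/page#something all collapse to abc.com/page, the URL you would reference in the canonical tag.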

SEO “plugin” type options

These are typically referred to as modules in JavaScript frameworks. You can find the versions for popular frameworks like React, Vue, and Angular by searching for the framework plus the module name, e.g., "React Helmet". Meta tags, Helmet, and Head are popular modules that let you set many of the tags needed for SEO.
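Whatever the framework, these modules ultimately emit head tags during rendering; here is a framework-agnostic sketch of that output (the helper name and fields are hypothetical):

```javascript
// Produce the SEO-relevant head tags as an HTML string, the way
// Helmet/Head-style modules do inside a framework's render cycle.
function buildHeadTags({ title, description, canonical }) {
  return [
    `<title>${title}</title>`,
    `<meta name="description" content="${description}">`,
    `<link rel="canonical" href="${canonical}">`,
  ].join('\n');
}
```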

Error pages

JavaScript frameworks can't return a server error like a 404 because they aren't server-side. You have a couple of options for error pages:

Use a JavaScript redirect to send users to a URL that does respond with a 404 status code.

Add a noindex tag to the failing page along with an error message like "404 Page Not Found". Note that this will be treated as a "soft 404" because the actual response code is 200 OK.
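The first option can be sketched as a route check in the client-side router (the route list is hypothetical, and '/404' is assumed to be a URL your server answers with a real 404 status code):

```javascript
// Routes the single-page app actually knows about (hypothetical).
const knownRoutes = ['/', '/about', '/products'];

function resolveRoute(path) {
  // Unknown paths resolve to '/404', a URL the server responds to
  // with a genuine 404 status code rather than a soft 404.
  return knownRoutes.includes(path) ? path : '/404';
}

// In the browser, the router would then redirect:
// if (resolveRoute(location.pathname) === '/404') window.location.href = '/404';
```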


Sitemaps

Routing in JavaScript frameworks is generally handled by a router, and these routers usually have an additional module for creating sitemaps. Find it by searching for your system plus "router sitemap", e.g., "Vue router sitemap". Many of the rendering solutions also have sitemap options; again, just search for the system you use plus "sitemap", e.g., "Gatsby sitemap", and you'll find an existing solution.


Redirects

SEOs are used to 301/302 redirects, which are server-side, but JavaScript redirects typically run client-side. This isn't a problem: Google processes the page as the page being redirected to, and signals such as PageRank still pass through the redirect. You can usually find these redirects in the code by looking for "window.location.href".


Internationalization

For most frameworks, a few modules support internationalization features such as hreflang. They have typically been ported under names like i18n or intl, and in many cases the same modules used for header tags, such as Helmet, can add these tags as well.
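Whichever module you choose, the output is a set of alternate link tags like these (example.com and the language set are placeholders):

```html
<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<link rel="alternate" hreflang="de" href="https://example.com/de/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```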

Lazy loading

There are usually modules for handling lazy loading; Lazy and Suspense are the most popular. Lazy-loading images is desirable, but content should not be lazy-loaded: it can be done with JavaScript, but search engines may not be able to find content loaded that way.
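For images specifically, modern browsers also support native lazy loading with a single attribute, which stays crawlable because the src is present in the markup (the image path is a placeholder):

```html
<!-- Loading is deferred until the image nears the viewport. -->
<img src="/images/product.jpg" alt="Product photo" loading="lazy" width="600" height="400" />
```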

How do I conduct an SEO audit of my JavaScript website?

Before deciding on a solution, it is advisable to identify any SEO issues that may exist on your JavaScript website.

There are several ways to accomplish this:

Using a "disable JavaScript" extension — You can use a number of browser extensions to disable JavaScript on the page you're viewing. This is a simple method for locating JavaScript elements on your page. If content or links disappear when you disable JavaScript, you may have an SEO issue with JavaScript.

Conducting a Google search for JS-loaded content - Once you have identified JS-loaded content, try copying some of that text and pasting it into a Google search. If your page does not appear in the results, you may have a JavaScript SEO issue.

URL Inspection Tool for Google Search Console — Click "View Crawled Page" after running a page through this tool to see what Google has rendered. If portions of your page's content are missing, you may have an issue with JavaScript SEO. The same is true for Google's Rich Results Test and Mobile-Friendly Test.

Comparing an HTML-only crawl to a JS-enabled crawl — If you have a JavaScript-capable crawler such as SiteCrawler, you can crawl your website with and without JavaScript enabled. While other tools allow you to test a single page, this is a great way to get an overview of JavaScript issues across your entire website.

The Complexity of JavaScript Crawling and Rendering

The entire process of JavaScript crawling and rendering is more complex than HTML crawling. Parsing, compiling, rendering, and executing JS files takes considerable time. For JavaScript-heavy websites, Google must wait until all of these steps are complete before it can index the content.

Rendering time isn't the only cost; link discovery suffers too. On JavaScript-heavy websites, Google typically cannot find new URLs without waiting until the page has been rendered.

Google's Caffeine indexing system, launched in 2010, enabled Google to crawl, render, and store information efficiently, and it made the entire process extremely fast.

[ Source: Official Google Blog ]

Google converts PDFs, DOCs, and XLS documents to HTML for indexing purposes.

Google will crawl, follow, and transmit a link's value within an iframe web page.

Google is able to render JavaScript and index JavaScript (JS) pages and links. JS links are handled similarly to links in a plain HTML document.

Google cannot interpret text embedded in an image (.jpg, .png, etc.)

Matt Cutts also confirmed that Googlebot is capable of processing AJAX POST requests and crawling AJAX to retrieve Facebook comments. Crawling and rendering such content is a difficult task, but Googlebot can accomplish it.

Google has enhanced Flash indexing capabilities and now supports an AJAX crawling strategy.

How Google Crawls a Video:

  1. Google can extract a thumbnail and preview from the video. It can also extract limited meaning from the video's audio and visual content.
  2. Google can extract the text and metadata from the page hosting the video.
  3. Google can utilize structured data (VideoObject) or a video sitemap associated with the video.

a. In the case of traditional HTML, everything is simple and direct:

  • Google's web crawler downloads the HTML file.
  • Google's web crawler downloads the CSS files.
  • Google's web crawler extracts the links from the source code and can visit them simultaneously.
  • Google's web crawler sends all downloaded resources to the search index (Caffeine).
  • The search index (Caffeine) indexes the web page.

b. Things get complex when it comes to a JavaScript-based site:

  • Google's web crawler downloads the HTML file.
  • Google's web crawler downloads the CSS and JS files.
  • Google's web crawler must then use the Google Web Rendering Service (WRS) to parse, compile, and execute the JS code.
  • The Web Rendering Service fetches the data from external APIs, the database, etc.
  • Finally, the content can be indexed by the search index (Caffeine).
  • Only now can Google discover new links and add them to Googlebot's queue for crawling and indexing.


Don’t block CSS, JavaScript, and image files

If you block CSS, JavaScript, and image files, you prevent Google and other search engines from determining whether your website functions properly. If you block CSS, JavaScript, and image files in your robots.txt file or X-Robots-Tag, Google will be unable to render and understand your site, which may cause your search engine rankings to drop.

Moz, Semrush, Ahrefs, and other SEO auditing tools have also begun rendering web pages, executing CSS, JavaScript, and image resources, and displaying the results. So don't block CSS, JavaScript, and image files, both for SEO and so that your preferred auditing tools can work properly.

Where can you check for blocked CSS, JavaScript, and image resources?

There is no site-wide view of blocked assets or resources, but the URL Inspection tool provides a list of blocked assets for individual URLs.

Does Google “count” links in JavaScript (JS)?

Mariya Moeva, a member of Google's Search Quality Team, says that Google treats links in JavaScript the same way it treats links in plain HTML.

Apart from Google and Ask, none of the other big search engines can reliably read JavaScript, so your content may not show up in them if it's not in HTML format.

JavaScript content is not properly indexed by Bing, Yahoo, AOL, DuckDuckGo, and Yandex.

Do SEO auditing Tools render JavaScript (JS) the same way as Google? (HTML JavaScript SEO)

No. Google doesn't tell us much about how it processes and renders JavaScript (JS), so auditing tools can only approximate how Google handles the different JS frameworks.

Does Ahrefs run JavaScript (JS) on every website?

Yes, they render JavaScript (JS) on all web pages. The Ahrefs JS Crawler renders JS pages and JS links. Ahrefs crawls about 6 billion web pages per day, executes JS on about 30 million of those pages, and discovers about 250 million JS links per day.

Conclusion (HTML JavaScript SEO)

Google and other search engines will keep getting better at rendering JS pages and links at scale. JavaScript is being used more and more to make pages interactive, improve the user experience, and make content easier to consume.

All things considered, it's up to us as SEOs to talk to our JavaScript developers about this analysis before starting new projects.

It will undoubtedly surface new problems to solve, too. The development techniques and ideas in this article are meant to give you a high-level overview of JavaScript SEO and its effects. As search engines improve their JS rendering at scale and development techniques advance, new obstacles will keep appearing.

Author : Archit WPWeb

Archit WPWeb works with JavaScript developers.
