Why Turbo is SEO friendly

I always hear that a single-page application is not recommended for a marketplace because it’s not SEO-friendly. I want to build a marketplace website using Turbo, so that my website will be a single-page application without using a JavaScript framework such as React, Vue, etc. But I’m worried about SEO. Is Turbo SEO-friendly? If it is, can you explain why Turbo is SEO-friendly when other SPA frameworks are not?

When people say that one platform or methodology or another is SEO-friendly, they usually mean that the search engine “spider” doesn’t have to work very hard to derive a meaningful content graph from any given URL. There are at least two ways a given URL can be unfriendly.

  1. URLs aren’t actually URLs. If your SPA uses something other than the actual URL to decide what content shows on the screen at any given time, which is to say you replace page content, and more importantly change page meaning, without simultaneously changing the uniform resource locator (URL), then you have broken the basic contract of the Web.

  2. There’s nothing in the DOM. If your HTML looks like `<div id="application-goes-here"></div>` and everything is filled in by JavaScript after page load, then the spider has to do far more work to figure out what the page should actually contain. It has to run a full JavaScript interpreter, and often execute a complex set of instructions, just to determine what, if anything, is actually meant to be on the page.
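To make the second point concrete, here is a minimal sketch (the product, prices, and markup are invented for illustration) of the same page as an empty SPA shell versus server-rendered HTML. Only the second version gives the spider anything to index without running JavaScript:

```ruby
require "erb"

# What the spider sees from a client-rendered SPA: nothing useful.
spa_shell = '<div id="application-goes-here"></div>'

# What the spider sees from a server-rendered page: the content itself.
template = ERB.new(<<~HTML)
  <main>
    <h1><%= name %></h1>
    <p>Price: $<%= "%.2f" % price %></p>
  </main>
HTML

name  = "Hand-thrown ceramic mug"
price = 24.0
rendered = template.result(binding)
puts rendered
```

Everything a robot needs is in `rendered` at page load; the shell requires executing the whole application first.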

When you say “marketplace”, I hear “catalog of things for sale”. While it is very important for user satisfaction for that catalog to have a robust search engine, and recommendations, and other marketing tools, I would argue that it is even more important for that content to be indexed in the search engines, and available instantly at page load for easy inspection by the robots that rule those indexes.

It is actually quite rare (unless you are Amazon, and trust me, you are not) for people to type in (or bookmark) your home page address and start the process at your carefully curated “SPA” front door. More likely, they will land on some deep-deep corner of your content hierarchy from a Google search, and start at the end. If they don’t find that deep-end page in the search engine, the visit never happens at all. Conversely, if you have managed to feed the search engine a specially crafted set of non-visible content (which you don’t want to be caught doing, by the way) and the visitor arrives at your SPA front door instead of a deeply specific product page, or worse, lands on an empty page that takes a while to hydrate from an empty shell, then you are probably done for, commercially speaking.

So, bringing it back to Turbo and the rest of the Hotwire stack: why is that different? Because Hotwire proceeds from the basic premise that the server, not the browser, is the natural place for HTML to be made. Instead of an empty shell at first load, you see a complete “flat” HTML page. Everything that should be on it is in it. And the URL does exactly what it promises: one address, one complete resource.

When you click a button to do something whizzy to the page, instead of a heavy JavaScript application running in the browser interpreting that click and manipulating the DOM or a shadow DOM, you actually send a request to the server, which returns only the parts of the page that have changed, and the replacement is made by a really tiny JavaScript library that loads only once, along with the rest of the page “chrome”. The replacement feels as fast as doing it in the browser, because the amount of data being exchanged is tiny, and because the HTML arrives fully formed, so no translation is needed.
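A sketch of what that exchange looks like with a Turbo Frame (the frame id, cart markup, and counts here are hypothetical, not from any real app). The page embeds a `<turbo-frame>`; when a link or form inside it fires, Turbo fetches the URL and swaps in the frame with the matching id from the server’s response:

```ruby
require "erb"

# The server renders the same fragment both times; only the data changes.
frame_template = ERB.new(<<~HTML)
  <turbo-frame id="cart">
    <p><%= count %> item<%= "s" unless count == 1 %> in your cart</p>
  </turbo-frame>
HTML

# First render, embedded inside the full page:
count = 1
first_render = frame_template.result(binding)

# After "Add to cart" hits the server, the response body is just this
# fragment; Turbo replaces the existing <turbo-frame id="cart"> with it.
count = 2
frame_response = frame_template.result(binding)
puts frame_response
```

Note that the browser never computes anything about the cart; it only displays HTML the server already decided on.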

Now, let’s think about security and state, two things that React and other JS-first paradigms get truly wrong, in my opinion.

Unless you really like to lose money, it is a Very Bad Idea to let the browser do anything related to security. Period. Full stop. The browser does not belong to you, and anything it does is immediately suspect. It should always be relegated to asking the server for anything it wants, and never trusted further. So if you let people log into your application, you have to establish sessions with those people’s browsers. If the application running on the browser is responsible for attesting that the person making a purchase is still the person who signed in, then you lose. If you allow the application running in the browser to calculate the discount from a coupon code, then you lose. If you want to keep a user’s cart around from one visit to the next, but they use a different device the next time they sign in, then you lose. Do you see where this train is going?
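The coupon case above can be sketched in a few lines; the `COUPONS` table and the cents arithmetic are made up for illustration, but the shape is the point: the browser submits only the code, and the server decides what it is worth.

```ruby
# Hypothetical server-side lookup table; never shipped to the client.
COUPONS = { "SAVE10" => 0.10, "FRIENDS" => 0.25 }.freeze

# Prices in integer cents to avoid floating-point money bugs.
def discounted_total(subtotal_cents, coupon_code)
  # An unknown (or forged) code simply earns no discount.
  rate = COUPONS.fetch(coupon_code, 0.0)
  (subtotal_cents * (1.0 - rate)).round
end

discounted_total(10_000, "SAVE10")      # => 9000  ($90.00)
discounted_total(10_000, "TOTALLYFREE") # => 10000 (nice try)
```

If the browser instead sent a computed price or discount rate, any visitor with the developer console open could set it to whatever they liked.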

Now, if you want (for reasons of speed, or making the app feel “responsive”) to let the browser do those things, but then re-do everything once the actual purchase request arrives at your server (where you can usually trust the world not to have been replaced by an attacker), then you’ve just signed up for double work. Because front-end and back-end programming are wildly different disciplines these days, it is usual and customary to have two separate teams doing those things, communicating with each other, coordinating their updates, and so on. This forces a certain team scale on your project, raising your costs and therefore your prices, and making you less competitive and less responsive to change.

Hotwire starts from the premise that a single programmer and designer, or other small team, can build “the whole widget” and understand what it does at every level at once. This is not to say that Hotwire applications are kiddy toys or in any way less ambitious or fully-featured than React or other heavy front-end systems. They just have their telescope turned in a different direction, and their priorities ordered differently. By concentrating decision-making and content formatting in the server, rather than distributing it between the browser and the server (with huge areas of overlap and competition for resources of all kinds) they can manage the trick of appearing responsive and “reactive” in page, while avoiding the trap of building everything twice.

Wow. I realize I have a lot of strong feelings about this, probably the result of having built serious commercial Web applications since 1997. I’m not telling you to get off my lawn or anything. I’m definitely telling you that we learned all these lessons a very long time ago, back when there were fewer choices available. Like everything else that has to do with computers, fashions come and go. Mainframes and terminals, servers and PCs, cloud and mobile devices, it all washes back and forth from shore to shore. Each new paradigm has to explain why it is necessary, and why the old way is wrong.

I recently worked on a project to build a “reading experience” inside a Web page. The client wanted a page-turn effect, so that as you read the book, you would feel like the pages were real. I’m not here to explain that thought, but the amount of money they spent with a big agency (who sold them a React application to do it), compared with the really lightweight system that preceded it (which had the content actually in the page, where it could be indexed), was truly astounding. The lengths they had to go to in order to make that effect work included translating the original HTML content into JSON, serving that to the React app, and then having the React app generate new HTML out of it. It made me chuckle at the time: “Congratulations, you’ve built a Web browser in JavaScript to run inside another Web page.”

Hope this helps you make a decision,



Thank you for the great explanation! It makes more sense to me now :slight_smile: