Any experience with the cost of scaling Stimulus/Turbolinks?

Heavier front-end JavaScript frameworks (React, Angular, Vue, etc.) offload some of the work servers might normally do when rendering a UI. They connect to an API, download JSON, and then perform the business logic of rendering the UI, handling interactions, and so on.

While you could do this with Stimulus/Turbolinks, it seems this framework prefers to push the business logic to the backend.

Has anyone seen or had issues with scaling large applications using this? Is the difference negligible, or is it more expensive because you have to put more server power into scaling? Do you need to be more aggressive with backend caching (which can cause other issues)?


Servers (and frameworks like Rails, with its refined caching strategies) are highly optimized for building HTML and serving it to browsers. If you’re not sending a megabyte of graphics on your sub-page update, the trade-off between hosting an entire JSON->HTML converter in the browser and just letting the browser do what it was born to do (along with only sending the changed parts of the page) can make this a win for Turbolinks. This will always depend on the exact architecture of your application. But for me, for the kind of work I do, I would never want to build the same app twice, maintain state in two places, and perform authorization in two places if I could avoid it.


I appreciate the explanation of the perspective behind the Turbolinks/Stimulus approach. I’m hoping to find more info on developers’ experience of the server impact.

Our clients noticed a significant improvement once we removed the whole JSON-from-the-server, HTML-on-the-client setup. For clients, “significant” means that some of them wrote us an email saying they really liked how well it worked, but I don’t have much data here that I could easily share.

On the server side, I noticed it cost about the same to render JSON as to render HTML, so we just stopped rendering JSON and render HTML instead. We removed something like half of the code base (the part responsible for rendering things on the client from JSON) and we are quite happy.

Are we a large-scale app? I would not say so, but we run on minimal resources, handling 2,000-3,000 clients per single Heroku 2X dyno.

Did you replace JSON calls with Ajax calls 1:1, or did you add/remove any? I can imagine the Stimulus architecture might tend towards more Ajax calls vs. a heavier JavaScript framework with fewer JSON calls. This is my primary assumption.

Thanks for sharing your experience.

I would like to give more context here so that you can make a good decision for your case.

First, we made the decision not to have a JS framework on the client and to drop that idea as a whole. For our case it was just adding complexity and one more framework. This does not happen overnight, but it can happen. So we decided that JS would not be required and the whole platform should work even with JS disabled in the browser (those Bootstrap navigation menus are a pain in the a…). It should be a progressive web application (PWA).

After these decisions we did not replace JSON with Ajax calls; we skipped most of them entirely. Some JSON requests could not be skipped, so we converted them to Ajax - for example, “generating a username”. When users register they can choose a username, but to make it easier for them we generate one by default. When generating it we must make sure it is a username that does not exist in the DB. For this we need to make a request to the server, and this is one place where we use Stimulus to submit the username.
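As a rough illustration of that uniqueness check, here is a minimal sketch in plain Ruby. The name `generate_username` and the in-memory `taken` set are hypothetical; the real app would query the database and submit the result via a Stimulus-driven request.

```ruby
require 'set'

# Hypothetical sketch: pick a default username that is not already taken.
# A Set stands in for the database lookup the real server would perform.
def generate_username(base, taken)
  return base unless taken.include?(base)

  # Append an incrementing suffix until a free username is found.
  n = 1
  n += 1 while taken.include?("#{base}#{n}")
  "#{base}#{n}"
end

taken = Set.new(%w[anna anna1])
generate_username('bob', taken)  # => "bob"
generate_username('anna', taken) # => "anna2"
```

The server can render the chosen username straight into an HTML fragment, so no JSON parsing is needed on the client.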

One place we still use JSON is DataTables - it is just so convenient. There are also a few progress bars that make some legacy JSON requests.

Overall we have Ajax here and there, and a few JSON requests, but that’s it. Something like 90-95% of the workflow works with JS disabled.

We even took this to the extreme: we test with browsers with JS and browsers without JS. So a delete button in a browser without JS does not open a confirmation, but with JS enabled the delete opens a confirmation. I was afraid this would introduce a lot of logic into the specs, but I am still surprised it did not. We have one method, “js_agnostic_delete”, with an if statement that checks whether JS is enabled and decides what to do.
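For flavor, a minimal sketch of what such a helper could look like. All names are hypothetical; the real method presumably drives Capybara, so a tiny stub page stands in here to make the branch visible.

```ruby
# Stub standing in for a browser session, just to show the two paths.
class StubPage
  attr_reader :actions

  def initialize(js:)
    @js = js
    @actions = []
  end

  def js?
    @js
  end

  # With JS, deletes go through a confirmation dialog first.
  def accept_confirm
    @actions << :confirm
    yield
  end

  def click_delete
    @actions << :delete
  end
end

# One helper, one if statement: the spec itself stays the same
# whether the browser under test has JS enabled or not.
def js_agnostic_delete(page)
  if page.js?
    page.accept_confirm { page.click_delete }
  else
    page.click_delete
  end
end
```

The specs then call `js_agnostic_delete` and never branch on JS themselves, which is why the extra test matrix adds so little logic.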

My point is that moving JSON to Ajax 1:1 was not for us. It would not pay off, as we would basically be doing the same thing in another format. What really paid off, and allowed us to reduce the code base by something like 30-40%, increase the speed, and make the specs less fragile, was to say: “let’s deliver our platform to a JS-disabled browser, and if it has JS, then great.”

To give you even more context, this was a set of decisions we made in April 2020 after years of getting tired of JS on the client. We are also quite experienced with JS (we’ve built a pretty large 3D framework that runs entirely in the browser), so it was not a lack of knowledge or experience with JS on our side that brought us to these decisions. I think the whole team grew up enough to finally do without JS.


Great explanation of your journey. Thank you for sharing.

Hey @tleish! A little late to the game here, but I figured I’d chip in.

The only accurate answer here is, “It depends”. It depends on your application’s flow and whether using a frontend framework requires more or fewer calls to the backend (I’ve seen many instances where it requires more - these days the solution usually ends up being sticking GraphQL in there to help, which can open up a whole other can of worms). Oftentimes the API calls returning JSON end up fetching and returning more data from the database and/or remote services than is needed (especially for lists), which can slow down backend response times, or a single page ends up making multiple calls to the backend vs. the single call that might be made in a Stimulus/Turbolinks-type world. Of course, there are ways around this (the backend-for-frontend pattern or the more general GraphQL approach), but that assumes the time and knowledge are there to implement them. Oftentimes on a tight timeline, the simplest approach is taken (at least initially). My gut feeling is that you’re more likely to run into issues scaling the backend with an API + JS approach, based on my experiences. But, as I said, it really does depend.

Personally, scalability (over an API + JS approach) isn’t something I’m concerned with when starting a new project with Stimulus and Turbolinks. In my most recent experience, we had a higher traffic site that used Stimulus and Turbolinks that was much easier on the backend than another team’s lower traffic Rails API + React (and in one instance + GraphQL) app, but I don’t have data to quantify it.

Prior to that, I worked for years on extremely high traffic Rails apps that served global audiences at Disney. Rendering HTML on the backend vs. rendering JSON was never a decision of scale in those cases (load had to be estimated and stress tested before launch; it was taken very seriously); it was just about the business case and what made sense. Again, no quantifiable data.

In regards to caching, it’s unlikely that you’ll need it due to performance differences in rendering HTML vs. JSON. Rather, you’re more likely to need it due to data access (database or remote services). With a Stimulus app, it’s easier to optimize for this because each controller action can query for and render only the data it needs. With a typical REST API, you’re often fetching and returning more data than you need, without the opportunity to optimize for each page (again, without relying on other patterns/technologies), and thus caching might become more of a requirement than a nice-to-have for acceptable performance.
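To make the over-fetching point concrete, here is a hypothetical plain-Ruby sketch (record and field names invented): a generic API endpoint serializes the whole record for every consumer, while a page-specific action can return exactly the fields one page renders.

```ruby
# Invented example record; a real app would load this from the database.
RECORD = {
  id: 1, name: 'Widget', description: '...', price_cents: 999,
  vendor_id: 7, created_at: '2020-04-01', updated_at: '2020-05-01'
}.freeze

# Generic REST-style endpoint: every attribute, for every consumer.
def api_show(record)
  record
end

# Page-specific action: only the fields the list page displays.
def list_row(record)
  record.slice(:id, :name, :price_cents)
end

api_show(RECORD).size # 7 attributes serialized
list_row(RECORD).size # 3 attributes serialized
```

Multiplied across list endpoints and remote-service calls, that gap is where the caching pressure tends to come from.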

This isn’t all to say that you can’t have blazing fast APIs. You certainly can. But blazing fast never comes free, regardless of the approach.

Anyhow, these days I prefer to stick with the simpler approach (Rails/Stimulus/Turbolinks) unless there’s a business reason not to (e.g. the need for a public-facing API - in which case, I’d still likely have my apps use Rails/Stimulus/Turbolinks and just consume the API in Rails). So far, so good. Your mileage may vary.



@welearnednothing, you make some great points here, specifically about the common pattern of an API + JS approach querying more data than it might need.

Thanks for the thoughts.

The more deadlines a company has, the simpler the architecture should be.