I’m looking for advice on the topic of broadcasting model changes to a large number of streams. Imagine I want to broadcast a new message to a large group of users (hundreds or thousands).
I need to broadcast this change to each user independently because the message includes user-specific behaviour and content.
We could imagine something like this:
class Message < ApplicationRecord
  belongs_to :group
  has_many :users, through: :group

  after_create_commit :broadcast_to_users

  private

  def broadcast_to_users
    users.each do |user|
      broadcast_prepend_later_to "messages:#{user.id}", target: "messages"
    end
  end
end
This code is not exactly ideal, for a couple of reasons:
Rendering a lot of views from the background like this is slow
We could potentially be rendering and broadcasting view templates to many users that are not listening, making it a waste of resources
What I would like to do instead is something like:
def broadcast_to_users
  users.each do |user|
    ActionCable.server.broadcast "messages:#{user.id}", action: "new_message", message_id: id
  end
end
Whilst this still requires looping through all of the users and broadcasting a change (potentially to offline users), we avoid the rendering overhead. This way we could have a listener on the frontend that fetches /messages/{id} and calls Turbo.renderStreamMessage, allowing us to only generate these templates for the users who need them.
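To make the event side of that concrete, here is a minimal sketch. The stream naming ("messages:<user id>") and the "new_message" action mirror the snippet above; the MessageEvents module itself is my own hypothetical name, not anything from the thread.

```ruby
# Sketch: build the lightweight per-user event instead of a rendered view.
# The MessageEvents name is hypothetical; the stream/action shapes match
# the broadcast call shown above.
module MessageEvents
  def self.stream_for(user_id)
    "messages:#{user_id}"
  end

  # The payload deliberately carries only an action and an id; the client
  # is expected to fetch /messages/:id and render the response itself.
  def self.payload(message_id)
    { action: "new_message", message_id: message_id }
  end
end

# In the model callback this would become (requires Rails/ActionCable):
#   users.pluck(:id).each do |uid|
#     ActionCable.server.broadcast MessageEvents.stream_for(uid),
#                                  MessageEvents.payload(id)
#   end
```

Keeping the payload construction in plain Ruby like this also makes it trivial to unit-test without booting ActionCable.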
This isn’t too difficult to implement, but I am curious about building framework-level support for this. I haven’t really seen other applications do it this way, so I’m curious if anyone else has tried this or has suggestions for libraries or example applications that work similarly?
EDIT: I suppose I could also track users who have actively subscribed to websockets in Redis or something and then only broadcast the views to those users, but that doesn’t feel great either.
The current implementation has been in production for around 18 months now. It’s not awful, but we are seeing significant spikes when the streams are broadcasting to many users. Optimising this to avoid broadcasting to offline users would be a sensible improvement for us.
I’ve made a change to our code that stores online user IDs in Redis via a new ActionCable Channel and then only broadcasts to those users and the difference is substantial.
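For anyone curious what that presence tracking might look like: below is a rough, framework-free sketch. It uses an in-process Set as a stand-in for the Redis set, so it stays self-contained; in production you would back it with Redis (SADD/SREM/SMEMBERS) so presence is shared across processes. The class and channel names are assumptions, not the actual code described above.

```ruby
require "set"

# Sketch: an in-process stand-in for a Redis set of online user IDs.
# In a real app the Set operations below would map to Redis SADD/SREM/
# SMEMBERS so that presence survives across server processes.
class OnlineUsers
  def initialize
    @ids = Set.new
  end

  # Called from the ActionCable channel's #subscribed hook.
  def add(user_id)
    @ids << user_id
  end

  # Called from the channel's #unsubscribed hook.
  def remove(user_id)
    @ids.delete(user_id)
  end

  def include?(user_id)
    @ids.include?(user_id)
  end

  def to_a
    @ids.to_a
  end
end

# Wired into a channel it might look like (requires Rails/ActionCable):
#   class PresenceChannel < ApplicationCable::Channel
#     def subscribed   = ONLINE_USERS.add(current_user.id)
#     def unsubscribed = ONLINE_USERS.remove(current_user.id)
#   end
```

The broadcast loop can then filter its user list against `OnlineUsers` before doing any rendering.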
I think my interest in some sort of general approach that we can use everywhere is probably a premature optimisation, and maybe we just need to handle this particular example differently. I’m not much of a fan of the default approach of attaching broadcasting code to model callbacks* and figured/hoped others would see the redundant broadcasting as a problem.
* To be clear, I’m aware this is optional and we have decided against using it
@leejarvis Nothing helpful from my end, but I just saw your topic and wanted to say that my team and I are having exactly the same concern - having code responsible for rendering and broadcasting views inside a model callback sounds like a horrible idea. Not to mention the number of users that should receive it, and not to mention a future situation where the app grows and we have dozens of such partials being broadcast to hundreds of thousands of users.
I tried asking about it on Discord but only got one interesting reply, plus my own solution:
What we are currently doing is: we broadcast a WS event on each update/create/delete etc. of a record to all the listening users. If we need to reload a component based on WS events, we have a special Stimulus controller created for each component that listens to these WS events and reloads the appropriate component using stimulus_reflex, turbo-frame, or whatever else is being used in the project.
I never got any feedback on whether my solution (2) is good, but some Discord users suggested that it’s not a good idea to use stimulus_reflex without any user interaction (no clicks, no input changes, etc.), though I’m not sure why.
Hope that helps.
PS. It’s comforting knowing there are more of us thinking broadcasting views from model sounds a bit weird
Oh, broadcasting from the model is such a strange concept! It ignores the idea that a model could have more than one view representation. I ended up doing it from the controller.
Figuring that out took a lot of time as it seemed to run against the grain.
My only other thought, @leejarvis, was that perhaps if the user-customised part of the rendered partial was slight, you could render a generic version once with interpolation sections, then either transform it just prior to broadcasting it to each connected user via a supplied hash of customisation values, or have it transformed on the browser side just before it is displayed. I think there is now a hook that allows you to intercept the mechanism that updates the page.
@mxb I checked out CableReady but didn’t see Updatable — looks interesting. I’ll dig a bit more into that.
I have created a prototype app that basically does what you’re doing, too. It broadcasts tiny event payloads on model changes and then expects the client to request the actual data, by reloading or fetching the resource. I’m just using ActionCable and listening to a channel that broadcasts the event along with a URL for requesting the resource. I could see this getting a little complicated in more complex applications, but it feels better than the broadcasting suggested by default.
@brendon Yeah, most of our broadcasting code is happening from the controller as well actually. I have an app/broadcasters directory and it allows doing things like MessageBroadcaster.new("messages").broadcast_create.
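In case the shape of that broadcaster object is useful to anyone, here is a rough sketch. Only `MessageBroadcaster` and `broadcast_create` come from the post above; the constructor arguments and the injected transport are my assumptions (a real implementation would presumably call `ActionCable.server.broadcast` or `Turbo::StreamsChannel` directly).

```ruby
# Sketch of an app/broadcasters object. The transport is injected as a
# callable so the class stays framework-free and testable; in a real app
# the default would be something like:
#   ->(stream, payload) { ActionCable.server.broadcast(stream, payload) }
class MessageBroadcaster
  def initialize(stream, transport:)
    @stream = stream
    @transport = transport
  end

  # Announce a new message on the stream; the payload shape is assumed,
  # matching the lightweight-event idea earlier in the thread.
  def broadcast_create(message_id)
    @transport.call(@stream, { action: "new_message", message_id: message_id })
  end
end
```

Keeping the broadcast call in one object like this also gives a single seam for the "skip offline users" filtering discussed above.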
if the user-customised part of the rendered partial was slight, you could render a generic version once with interpolation sections, then either transform it just prior to broadcasting it to each connected user via a supplied hash of customisation values, or have it transformed on the browser side just before it is displayed. I think there is now a hook that allows you to intercept the mechanism that updates the page.
Yeah, I considered this because in our particular example the differences are relatively small. It just felt like this might have been a solved problem without having to do that. I think I still prefer websocket clients fetching updates when the server publishes a change event, rather than sending all of these rendered views over the wire - at least for this particular example with a lot of similar broadcasts.
I had a similar use-case (broadcasting user-specific views to a potentially large collection of users) recently, and solved it using a combination of turbo-streams and turbo-frames.
Rather than sending a notification event to notify the client that there are changes available, I instead send a turbo-frame with a src attribute to serve the same purpose. This turbo-frame is not user-dependent, so I can just broadcast it to all users (in my case, I actually broadcast it to an organization channel) like normal, and each client that is connected will then request an update from my server because of the src attribute on the turbo frame.
For the turbo-frame itself, I had it render a loading state, which then gets replaced once the turbo frame request returns.
This approach is slightly heavier than sending an “updates-available” notification event, but the nice thing about it is that you don’t have to write any custom JS - you can just lean on the existing Turbo components.
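A minimal sketch of that idea, for anyone who wants to picture it: the broadcast payload is just a turbo-frame whose `src` points at a user-independent endpoint, and each connected client fetches its own user-specific rendering when the frame loads. The frame id, the `/messages/latest` path, and the helper name are all assumptions of mine, not the poster’s code.

```ruby
# Sketch: the HTML broadcast to everyone is the same user-independent
# turbo-frame; the per-user work happens only when a connected client's
# browser follows the frame's src attribute. The "Loading…" text is the
# placeholder state mentioned above, replaced once the frame request returns.
def refresh_frame_html(frame_id, src)
  %(<turbo-frame id="#{frame_id}" src="#{src}">Loading…</turbo-frame>)
end

# Broadcasting it to an organization-wide stream might look like
# (requires turbo-rails; stream and target names are hypothetical):
#   Turbo::StreamsChannel.broadcast_replace_to(
#     "org_#{org.id}",
#     target: "latest-messages",
#     html: refresh_frame_html("latest-messages", "/messages/latest")
#   )
```

Because the broadcast HTML is identical for every user, it only has to be rendered once, however many clients are listening.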
Sounds good. I’ve always had that uncomfortable thought that streams wouldn’t scale well in that regard. I suppose it could be possible to defer the rendering of a user’s template until a live connection from that user is verified, but I think the current stack doesn’t make that easy/possible. I’d assume one would pass the broadcaster a block to execute conditionally?
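One way the “pass the broadcaster a block” idea could look, as a rough sketch: the block that renders the user-specific template is only invoked for users whose IDs appear in an online set (e.g. the Redis presence tracking mentioned earlier in the thread). Every name here is hypothetical.

```ruby
# Sketch: per-user rendering is deferred into a block that only runs for
# verified-online users, so offline users cost nothing. The transport is
# injected as a callable (in a real app, ActionCable or Turbo::StreamsChannel).
class ConditionalBroadcaster
  def initialize(online_ids, transport:)
    @online_ids = online_ids
    @transport = transport
  end

  # Yields each *online* user ID to the block; the block returns that
  # user's rendered payload, which is then broadcast to their stream.
  def broadcast_each(user_ids)
    (user_ids & @online_ids).each do |uid|
      @transport.call("messages:#{uid}", yield(uid))
    end
  end
end
```

The rendering block only ever executes for the intersection of recipients and online users, which is exactly the deferral described above.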