How should we handle entrypoints into an app for deep linking? In a traditional web request, a URL identifies a unique resource and returns a complete HTML response rendering that resource. Every route is an entrypoint, because that is how the web works: unique URIs provide stateless interaction.
If you are building a SPA, a typical entrypoint is something like / or /app. The response returns an entire document, including a JS payload that contains your SPA. HTML rendering beyond the entrypoint is performed client-side via requests to data endpoints (typically JSON). To deep link to a location inside an app such as
/app/entitya/1/entityb/2, you typically have to build a fairly involved router. One such implementation: the server recognizes a GET to the URL and uses some heuristic to detect that the request didn't come from your SPA but directly from a web client (a request header is a good heuristic, for example). Instead of returning a complete HTML document for
/entitya/1/entityb/2, the server returns the root entrypoint response to load up the SPA. The client SPA then navigates to the correct page to render, which probably triggers XHR fetches for the data payloads entitya-1 and entityb-2 (as well as many other handshakes like /user, /permissions, etc.), which are rendered as HTML.
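To make that router idea concrete, here's a minimal sketch in TypeScript. Everything here is my own invention for illustration: the function names are placeholders, and using the Accept header to distinguish a direct browser navigation (which asks for text/html) from the SPA's own data fetches (which typically ask for application/json) is just one possible heuristic.

```typescript
// Hypothetical heuristic: a browser address-bar navigation sends an Accept
// header preferring text/html, while the SPA's own XHR/fetch calls usually
// request application/json.
function isDirectNavigation(headers: Record<string, string>): boolean {
  const accept = headers["accept"] ?? "";
  return accept.includes("text/html");
}

// Routing a deep-link GET: direct navigations get the SPA shell (the root
// entrypoint document, containing the JS payload); in-app fetches get the
// JSON data payload for the requested resource.
function respond(path: string, headers: Record<string, string>): string {
  if (isDirectNavigation(headers)) {
    // The shell loads the SPA, which then reads the URL and navigates
    // client-side to the deep-linked location.
    return "<!doctype html><html><!-- SPA shell + JS payload --></html>";
  }
  return JSON.stringify({ resource: path });
}
```

So one URL serves two roles depending on who asks, which is exactly the complexity I'm describing: the server has to guess intent rather than just serving the resource.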
I've been thinking about this for a bit, and I think any Turbo application might have to do something similar to support deep linking, right? The consequence of turning some server endpoints into HTML-fragments-over-the-wire (via frames or streams) is that you no longer get to leverage the power of the web and unique URLs for purely stateless interaction. A GET to
/entitya/1/entityb/2 very likely returns an HTML fragment for entityb-2, not the entire document, which contains important things like the turbo.js payload. I see a strategy where you write endpoints to return entire HTML documents on every request and, thanks to the uniqueness properties of frames/streams, lean on Turbo to perform more-optimized rendering. This strategy would immediately defeat the caching advantages of frames/streams, however, and we're perhaps better off not using Turbo at all. Maybe the argument goes: any sufficiently large/complex application eventually needs to partition resource-heavy documents into fragments and leverage better caching semantics; Turbo provides semantics to make this easier, and as a tradeoff other things get complicated.
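Here's a rough sketch of how I imagine the full-document-per-request strategy could look. I'm assuming Turbo's behavior of sending a Turbo-Frame request header on frame-originated requests; the function names and markup here are placeholders, not anything from the Turbo source.

```typescript
// Render just the fragment, wrapped in its <turbo-frame> so Turbo can swap it in.
function renderFragment(frameId: string, path: string): string {
  return `<turbo-frame id="${frameId}"><!-- content for ${path} --></turbo-frame>`;
}

// Render the full document: the turbo.js payload plus the fragment inside its
// frame, so a direct deep-link GET gets a complete, working page.
function renderFull(path: string): string {
  return `<!doctype html><html><head><script src="/turbo.js"></script></head>` +
         `<body>${renderFragment("entityb_2", path)}</body></html>`;
}

// Deep-link-aware handler: a frame-driven GET (Turbo-Frame header present)
// gets just the fragment; a direct GET gets the whole document.
function handleGet(path: string, headers: Record<string, string>): string {
  const frameId = headers["turbo-frame"];
  return frameId ? renderFragment(frameId, path) : renderFull(path);
}
```

The branch on the header is what worries me: it means every frame endpoint that should also be deep-linkable has two render paths, and the full-document path can't share the fragment's cache entry.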
Please let me know if I'm confused about this; I would love to see if there is a simpler way to achieve deep linking.