My experience with SSE so far (take these as recommendations if you wish)
You need to implement a server-side heartbeat feature.
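A minimal sketch of such a heartbeat in Node, assuming a plain `http` response object; `startHeartbeat` and the 15-second interval are my own choices. SSE comment frames (lines starting with `:`) are ignored by EventSource but keep idle connections from being reaped by proxies and load balancers:

```javascript
// Server-side SSE heartbeat sketch (Node.js, no frameworks).
// Writes an SSE comment frame on a fixed interval; EventSource
// silently discards comments, so clients are unaffected.

function startHeartbeat(res, intervalMs = 15000) {
  const tick = () => res.write(': heartbeat\n\n'); // SSE comment frame
  tick(); // send one immediately
  const timer = setInterval(tick, intervalMs);
  return () => clearInterval(timer); // call this when the request closes
}

// Typical use inside an http request handler:
// res.writeHead(200, {
//   'Content-Type': 'text/event-stream',
//   'Cache-Control': 'no-cache',
// });
// const stop = startHeartbeat(res);
// req.on('close', stop);
```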
You need to handle the close event from EventSource and be able to reconnect.
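For example, a small wrapper along these lines. EventSource retries transient drops on its own, but it gives up for good (readyState CLOSED) after e.g. a non-2xx response, so the wrapper rebuilds the connection with exponential backoff. `backoffDelay`, `connectWithRetry`, and the injectable `makeSource` factory are all my own names, not part of any API:

```javascript
// Capped exponential backoff: 1s, 2s, 4s, ... up to 30s by default.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

function connectWithRetry(url, onMessage, makeSource = (u) => new EventSource(u)) {
  let attempt = 0;
  const open = () => {
    const es = makeSource(url);
    es.onopen = () => { attempt = 0; }; // reset backoff on success
    es.onmessage = onMessage;
    es.onerror = () => {
      if (es.readyState === 2 /* EventSource.CLOSED */) {
        // Browser has given up; rebuild the connection ourselves.
        es.close();
        setTimeout(open, backoffDelay(attempt++));
      }
      // Otherwise EventSource is already reconnecting on its own.
    };
  };
  open();
}
```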
Tabs can be problematic. When you subscribe, you use a URL with a nominal ID to identify the client. For example, on a chat app, you would use /api/sse/userA/subscribe
Problem is, if userA starts opening tabs, each tab creates a new subscription for userA, so you need to randomize each connection ID (userA-UUID).
If you don't use a nominal ID, the server won't know which subscriber to send the data to, and you don't want to broadcast all your chats.
I've used the BroadcastChannel API in conjunction with SSE to have only one tab handle the SSE connection and broadcast incoming events to the other tabs, which also reduces the number of connections to the server to one.
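A sketch of that pattern, split into the leader (the one tab holding the SSE connection) and the followers. In a browser the leader would typically be elected with the Web Locks API; only the relay over BroadcastChannel is shown as runnable code here, and `wrapEvent`, `startLeader`, and `startFollower` are hypothetical names:

```javascript
// Package an SSE event so it survives structured cloning across the channel.
function wrapEvent(name, data) {
  return { name, data };
}

// Leader tab: forward every SSE message onto the shared channel.
function startLeader(channelName, eventSource) {
  const channel = new BroadcastChannel(channelName);
  eventSource.onmessage = (e) => channel.postMessage(wrapEvent('message', e.data));
  return channel;
}

// Follower tabs: no SSE connection, just listen on the channel.
function startFollower(channelName, handler) {
  const channel = new BroadcastChannel(channelName);
  channel.onmessage = (e) => handler(e.data.name, e.data.data);
  return channel;
}

// Browser-only leader election sketch (Web Locks API): the first tab to
// win the lock opens the SSE connection and holds the lock until it closes.
// navigator.locks.request('sse-leader', () => new Promise(() => {
//   startLeader('sse', new EventSource('/api/sse/userA-UUID/subscribe'));
// }));
```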
On the server it's also a PITA because not all instances/pods have the subscribers list. The way I've found to solve this is by clustering the instances with Hazelcast, Redis, or a message queue.
But once you figure out all this, SSE works quite well.
Sounds like a service worker (only one is active at a time for all tabs, and it can communicate with them) could help with your client-side issues.
This chimes a lot with my experience. Although I think SSEs are brilliant, if the stack I'm using supports WebSockets I would probably default to them even for a simple event stream now.
To add to your list of problems, I have had memory leaks with SSE responses stuck open on the server even after the client disconnects. I resorted to killing the response on the server every couple of minutes and relying on the client to reconnect.
Also, when using an nginx reverse proxy, you may need to include 'X-Accel-Buffering: no' in the server's response headers to keep events from being buffered.
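The same fix can also be made on the nginx side instead of in the app. This is an illustrative location block (`http://app` is a placeholder upstream); `proxy_buffering off` has the same effect as sending `X-Accel-Buffering: no` per response:

```nginx
location /api/sse/ {
    proxy_pass http://app;
    proxy_buffering off;         # same effect as X-Accel-Buffering: no
    proxy_read_timeout 1h;       # don't kill long-lived idle streams
    proxy_http_version 1.1;      # keep the upstream connection open
    proxy_set_header Connection '';
}
```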
Yeah, my personal experience is that if an SSE server has clients subscribed and it goes down, the clients reconnect automatically once the server comes back up.
Only thing is, the message IDs reset when this happens.
I think this depends on the client/implementation. An SSE call is just an HTTP request with a continuous body stream. If it gets terminated, clients may or may not try to make another of those requests. That said, the EventSource client in browsers - which most people are likely using - will automatically reconnect.
The SSE spec contains the `retry` and `Last-Event-ID` mechanisms. Of course, if you are making your own implementation, nothing is provided, but I'm saying that the spec has such mechanisms built in.
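On the wire, those mechanisms look like this: the server tags each event with an `id:` field and can suggest a reconnect delay with `retry:`; on reconnect, the browser sends the last id back in the `Last-Event-ID` request header so the server can replay what was missed. `sseFrame` and `missedSince` are hypothetical names:

```javascript
// Serialize one SSE event in the spec's wire format. Field order within
// a frame doesn't matter to parsers; a blank line terminates the event.
function sseFrame({ id, event, data, retry }) {
  let frame = '';
  if (retry !== undefined) frame += `retry: ${retry}\n`;
  if (id !== undefined) frame += `id: ${id}\n`;
  if (event !== undefined) frame += `event: ${event}\n`;
  frame += `data: ${data}\n\n`;
  return frame;
}

// In the request handler, resume from the client's last seen id
// (missedSince is a stand-in for whatever event store you keep):
// const lastSeen = req.headers['last-event-id'];
// for (const msg of missedSince(lastSeen)) res.write(sseFrame(msg));
```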
These are all potential issues with plain SSE, but they are pretty easily solved in a few lines of code.
Granted, SSE has its benefits: no additional port to open, no dependencies - just slip your logic into your routing at /events. Of course this may not be important for some. WebSockets are a good choice as well. I tend to ID clients by IP address rather than by connection, though; it depends on the specific app.