Discussion:
Streaming large response bodies to slow clients
v***@ibiblio.org
2017-02-22 00:07:42 UTC
Permalink
Hello everybody,

I'm playing with various frameworks and techniques to stream large,
dynamically generated response bodies to (slow) clients, on a server with
limited memory resources. In other words, I have to handle back pressure.
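To make the back-pressure requirement concrete: the producer must block (or otherwise stop generating) when the consumer is slow, so memory stays bounded. This is not EventMachine or Thin code, just a plain-Ruby illustration of the concept using the standard library's SizedQueue, whose #push blocks once the bounded buffer is full:

```ruby
# Back pressure via a bounded queue: the producer blocks on #push
# whenever the slow consumer has not yet drained the buffer, so at
# most BUFFER chunks are ever held in memory.
BUFFER = 2
queue = SizedQueue.new(BUFFER)

producer = Thread.new do
  5.times { |i| queue.push("chunk-#{i}") } # blocks when queue is full
  queue.push(nil)                          # end-of-stream marker
end

received = []
while (chunk = queue.pop) # pop blocks until a chunk is available
  received << chunk
end
producer.join
```

An evented server would achieve the same bound with callbacks instead of blocking threads, but the invariant is the same: generation is paced by consumption.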

While it has been said that EventMachine lacks "a proper API" for this, I
managed to hook into the event loop with Thin, and the results are not bad
at all (half the CPU time and half the memory usage of the Reel equivalent).
But I don't much like that I'm actively generating content, then
actively watching Thin's queue and "actively" waiting (my code is still
called on each EM tick).

The ideal scenario would be to provide an Enumerator ("generator function")
as the response and have the web server pull from it at the same speed as
the HTTP client consumes the data. But am I right that this is not how Rack
works? When I give it an object responding to #each, it seems that Rack
calls it all at once, regardless of how fast the client consumes the data.
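For reference, here is a minimal sketch (plain Ruby, no server running) of the kind of lazy body object I mean. The Rack spec only requires that the body respond to #each; each chunk is generated on demand inside #each, so generation itself is cheap — the question is whether the server paces those yields:

```ruby
# A Rack-compatible body that generates chunks lazily. Only one chunk
# exists at a time on the producer side; whether the *server* applies
# back pressure between yields is outside Rack's contract.
class LazyBody
  def initialize(chunk_count, chunk_size)
    @chunk_count = chunk_count
    @chunk_size  = chunk_size
  end

  # Rack calls #each and writes each yielded string to the client.
  def each
    @chunk_count.times do |i|
      yield format("%08d", i).ljust(@chunk_size, "x")
    end
  end
end

# Standing in for the server's consumption loop:
body = LazyBody.new(3, 16)
chunks = []
body.each { |c| chunks << c }
```

A server that drives #each to completion before (or faster than) the socket drains must buffer everything it has pulled, which is exactly the memory problem on a slow client.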

I had a look at Sinatra::Streaming::FileStreamer, which uses this
technique. Its doc says "Sends the file by streaming it 8192 bytes at a
time. This way the whole file doesn’t need to be read into memory at once.
This makes it feasible to send even large files." But that only bounds
memory on the *generation* side; if the server buffers the chunks while a
slow client drains them, the 8192-byte pieces pile up in memory anyway.
Am I missing something?
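To illustrate the technique the Sinatra docs describe (this is my own sketch of the 8192-byte chunked read, not Sinatra::Streaming's actual source): reading the file piecewise keeps only one chunk in memory at a time, independent of file size:

```ruby
require "tempfile"

CHUNK_SIZE = 8192

# Yields the file's contents in CHUNK_SIZE pieces; returns an
# Enumerator when no block is given, so chunks can be pulled lazily.
def stream_file(path)
  return enum_for(:stream_file, path) unless block_given?
  File.open(path, "rb") do |io|
    while (chunk = io.read(CHUNK_SIZE))
      yield chunk
    end
  end
end

# Demo with a 20,000-byte temp file:
file = Tempfile.new("demo")
file.write("a" * 20_000)
file.flush
chunks = stream_file(file.path).to_a
```

Per-chunk memory is bounded here, but as noted above, if the consumer of this enumerator outpaces the client socket, the chunks accumulate in the server's write buffer instead.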

Thank you for reading,
Viktor.
--
You received this message because you are subscribed to the Google Groups "sinatrarb" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sinatrarb+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.