|
In our race to have more concurrency options than real sites
using this server, we've added two new and fully supported
concurrency models: WriterThreadSpawn and WriterThreadPool.
They're both designed for serving large static files and work
best with IO.copy_stream (sendfile!) under Ruby 1.9. They may
also be used to dynamically generate long-running, streaming
responses after headers are sent (use "proxy_buffering off" with
nginx).
Unlike most concurrency options in Rainbows!, these are designed
to run behind nginx (or haproxy, if your app doesn't accept
POST/PUT requests) and are vulnerable to slow-client
denial-of-service attacks.
I floated the idea of doing something along these lines back in
the early days of Unicorn, but deemed it too dangerous for some
applications. But nothing is too dangerous for Rainbows! So
here they are now for your experimentation.
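The IO.copy_stream path these models rely on can be seen in
miniature below. This is a hedged sketch, not Rainbows! code: the
loopback server, ephemeral port, and choice of file are all
illustrative.

```ruby
# A minimal sketch (not Rainbows! internals) of why IO.copy_stream is
# attractive for large static files: it can copy a file straight to a
# socket without staging the data in Ruby strings, and on Ruby 1.9+
# it may use sendfile(2) under the hood.
require 'socket'

path = __FILE__ # any static file; this script stands in for one

server = TCPServer.new('127.0.0.1', 0) # ephemeral port for the demo
reader = Thread.new do
  sock = TCPSocket.new('127.0.0.1', server.addr[1])
  sock.read # read until the server closes the connection
end

conn = server.accept
IO.copy_stream(path, conn) # stream the whole file to the client
conn.close

received = reader.value
puts received.bytesize == File.size(path)
```

The writer-thread models pair this with dedicated threads so a slow
socket write doesn't stall request processing.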
|
|
|
|
I still have a hard time keeping track of what's capable of
what.
|
|
it's hard to make this test reliable, so add a small fudge
factor based on MRI's default GC malloc limit.
|
|
|
|
This should be logical, since we keep the connection alive
when writing in our writer threads.
|
|
|
|
|
|
|
|
no major internal changes until 2.0.0+
|
|
It's not worth the trouble or the loss of testability, since a
single thread tends to become a bottleneck when there's a bad
client.
|
|
This allows it to be a symlink to /dev/shm or similar
|
|
curl < 7.18.0 did not check for errors when doing chunked
uploads. Unfortunately some distros are slow-moving and
bundle ancient versions of curl.
|
|
|
|
since we don't set maximum time boundaries, just rely on
the logs to record the dead processes.
|
|
since we've already waited for time to elapse, there's no
point in watching the upper limit here
|
|
This test can cause a lot of I/O, especially when
run in parallel. Just rely on the fixed rsha1 code
to compute the SHA1 of the response.
|
|
non-random_blob arguments weren't being taken into account
correctly :x
|
|
slow test runners can buffer us and bloat memory usage
unpredictably when tests are run under load
|
|
On busy systems, this timing-sensitive test is likely to fail,
so give it some extra slack
|
|
|
|
Idle threads are cheap enough and having responses
queued up with a single slow client on a large response
is bad.
|
|
This is based on an idea I originally had for Unicorn but never
implemented there, since the benefits were unproven and the
risks were too high.
|
|
It'll be useful later on for a variety of things!
|
|
|
|
|
|
|
|
Mostly internal cleanups and small improvements.
The only backwards incompatible change was the addition of the
"client_max_body_size" parameter to limit upload sizes to
prevent DoS. This defaults to one megabyte (same as nginx), so
any apps relying on the limit-less behavior of previous releases
have to configure this in the Unicorn/Rainbows! config file:
    Rainbows! do
      # nil for unlimited, or any number in bytes
      client_max_body_size nil
    end
The ThreadSpawn and ThreadPool models are now optimized for serving
large static files under Ruby 1.9 using IO.copy_stream[1].
The EventMachine model has always had optimized static file
serving (using EM::Connection#stream_file_data[2]).
The EventMachine model (finally) gets conditionally deferred app
dispatch in a separate thread, as described by Ezra Zygmuntowicz
for Merb, Ebb and Thin[3].
[1] - http://euruko2008.csrug.cz/system/assets/documents/0000/0007/tanaka-IOcopy_stream-euruko2008.pdf
[2] - http://eventmachine.rubyforge.org/EventMachine/Connection.html#M000312
[3] - http://brainspl.at/articles/2008/04/18/deferred-requests-with-merb-ebb-and-thin
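The conditionally deferred dispatch described above can be sketched
without EventMachine at all. The `deferred` predicate and `dispatch`
helper below are illustrative names only, not Rainbows! or EM API;
plain threads stand in for EM's thread pool.

```ruby
# A hedged, framework-free sketch of "conditionally deferred app
# dispatch": if a request is flagged as slow, run the app call on a
# background thread so the main loop stays responsive; otherwise
# dispatch inline on the fast path.
app = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' }, [env['PATH_INFO']]]
end

# illustrative predicate: which requests should be deferred?
deferred = ->(env) { env['PATH_INFO'].start_with?('/slow') }

def dispatch(app, deferred, env)
  if deferred.call(env)
    Thread.new { app.call(env) }.value # deferred: off the main loop
  else
    app.call(env)                      # fast path: inline
  end
end

status, _headers, body = dispatch(app, deferred, 'PATH_INFO' => '/slow')
puts status == 200 && body == ['/slow']
```

In the real EventMachine model the deferred branch goes through EM's
thread pool rather than a freshly spawned thread.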
|
|
IO#readpartial on zero bytes will always return an empty
string, so ensure the emulator for Revactor does that as
well.
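The behavior being emulated is easy to check against a plain pipe:

```ruby
# IO#readpartial with a zero-byte maxlen always returns an empty
# string immediately, even when data is waiting; the Revactor
# emulation has to preserve exactly this edge case.
r, w = IO.pipe
w.write('data')

zero = r.readpartial(0) # "" without blocking or consuming input
rest = r.readpartial(4) # "data"; the buffered bytes are intact
w.close

puts zero.empty? && rest == 'data'
```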
|
|
|
|
Even if it's just an empty file for now, it's critical in
case we ever add any code that returns user-visible strings
since Rack::Lint (and mere sanity) require binary encoding
for "rack.input".
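The encoding requirement is simple to demonstrate; the string
literal below is illustrative:

```ruby
# Rack::Lint requires "rack.input" data to be binary; forcing
# ASCII-8BIT is how a string satisfies that check regardless of the
# source file's own encoding.
binary = 'response'.dup.force_encoding(Encoding::ASCII_8BIT)

puts binary.encoding.name
```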
|
|
We expect no API changes in Unicorn for a while
|
|
|
|
Paragraph ordering matters psychologically.
|
|
Since deferred requests run in a separate thread, this affects
the root (non-deferred) thread as well since it may share
data with other threads.
|
|
Since we have conditional deferred execution in the regular
EventMachine concurrency model, we can drop this one.
This concurrency model never fully worked due to the lack of
graceful shutdowns, and was never promoted or supported, either.
|
|
There doesn't appear to be a good/easy way to do this with
the built-in EventMachine thread pool :/
|
|
|
|
Merb (and possibly other) frameworks that support conditionally
deferred app dispatch can now use it just like Ebb and Thin.
http://brainspl.at/articles/2008/04/18/deferred-requests-with-merb-ebb-and-thin
|
|
|
|
* avoid needless links to /Rainbows.html
* keepalive_timeout has been 5 seconds by default for a while
* update "Gemcutter" references to "RubyGems.org"
|
|
|
|
Avoid mucking with Unicorn::TeeInput, since other apps may
depend on that class, so we subclass it as Rainbows::TeeInput
and modify as necessary in worker processes.
For Revactor, remove the special-cased
Rainbows::Revactor::TeeInput class and emulate readpartial
for Revactor sockets instead.
|
|
|
|
It turns out we were painfully lacking in tests for HTTP
requests where the Content-Length header _is_ set.
|
|
Since Rainbows! is supported when exposed directly to the
Internet, administrators may want to limit the amount of data a
user may upload in a single request body to prevent a
denial-of-service via disk space exhaustion.
This amount may be specified in bytes, the default limit being
1024*1024 bytes (1 megabyte). To override this default, a user
may specify `client_max_body_size' in the Rainbows! block
of their server config file:
    Rainbows! do
      client_max_body_size 10 * 1024 * 1024
    end
Clients that exceed the limit will get a "413 Request Entity Too
Large" response and the connection will be closed.
For chunked requests, we have no choice but to interrupt during
the client upload since we have no prior knowledge of the
request body size.
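The enforcement idea can be sketched as below. This is a hedged
illustration, not Rainbows! internals: the `read_body` helper, the
tiny limit, and the chunk size are all made up for the demo
(Rainbows! defaults to 1 megabyte).

```ruby
# A minimal sketch of enforcing an upload limit while streaming a
# request body: count bytes as they arrive and abort with 413 once
# the limit is crossed -- the only option for chunked uploads, where
# the total size is unknown up front.
require 'stringio'

MAX_BODY = 8 # tiny limit for the demo only

def read_body(io, limit)
  bytes = 0
  body = +''
  while (chunk = io.read(4)) # small chunks stand in for network reads
    bytes += chunk.bytesize
    raise '413 Request Entity Too Large' if bytes > limit
    body << chunk
  end
  body
end

ok  = read_body(StringIO.new('tiny'), MAX_BODY)
err = begin
  read_body(StringIO.new('way too much data'), MAX_BODY)
rescue RuntimeError => e
  e.message
end

puts ok == 'tiny' && err.start_with?('413')
```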
|
|
Since Rainbows! allows for graceful termination, let
EM kill and reap the tail(1) processes it spawned.
|
|
|
|
|
|
Rack allows anything as the status, as long as status.to_i
returns a valid status integer.
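A quick illustration of that contract (the FancyStatus class is
made up for the example):

```ruby
# Rack only requires that the status respond to #to_i with a valid
# integer, so a custom object works as well as a plain Integer or a
# numeric String.
class FancyStatus
  def to_i
    200
  end
end

statuses = [200, '404', FancyStatus.new]
puts statuses.map(&:to_i).inspect
```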
|