|
it's hard to make this test reliable, but try to
add a small fudge factor based on the MRI default
GC malloc limit.
|
|
|
|
This allows it to be a symlink to /dev/shm or similar
|
|
curl < 7.18.0 did not check for errors when doing chunked
uploads. Unfortunately some distros are slow moving and
bundle ancient versions of curl.
|
|
since we don't set maximum time boundaries, just rely on
the logs to properly log the dead processes.
|
|
since we've already waited for time to elapse, there's no
point in watching the upper limit here
|
|
This test can cause a lot of I/O, especially when
run in parallel. Just rely on the fixed rsha1 code
to compute the SHA1 of the response.
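Computing the SHA1 incrementally, as a fixed checksumming helper would, avoids buffering the whole response. A minimal sketch (the chunked response body here is illustrative, not the test's actual data):

```ruby
require 'digest/sha1'

# hash a streamed response body chunk-by-chunk instead of
# writing it out and generating extra I/O
digest = Digest::SHA1.new
chunks = ["hello ", "world"]   # stand-in for streamed response chunks
chunks.each { |chunk| digest.update(chunk) }
puts digest.hexdigest  # => "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed"
```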
|
|
non-random_blob arguments weren't being taken into account
correctly :x
|
|
slow test runners can buffer us and bloat memory usage
unpredictably when tests are run under load
|
|
On busy systems, this timing-sensitive test is likely to fail,
so give it some extra slack
|
|
This is based on an idea I originally had for Unicorn but never
implemented in Unicorn since the benefits were unproven and the
risks were too high.
|
|
|
|
Since deferred requests run in a separate thread, this affects
the root (non-deferred) thread as well since it may share
data with other threads.
|
|
Since we have conditional deferred execution in the regular
EventMachine concurrency model, we can drop this one.
This concurrency model never fully worked due to lack of
graceful shutdowns, and was never promoted or supported, either.
|
|
Merb (and possibly other) frameworks that support conditionally
deferred app dispatch can now use it just like Ebb and Thin.
http://brainspl.at/articles/2008/04/18/deferred-requests-with-merb-ebb-and-thin
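The Ebb/Thin convention is for the app object itself to answer a per-request `deferred?(env)` query. A minimal sketch of such an app (`SlowReport` is a hypothetical name, not from this repo):

```ruby
# sketch of the Ebb/Thin-style conditionally deferred dispatch
# convention; SlowReport is a hypothetical Rack app
class SlowReport
  # the server asks this per request; a truthy return means
  # "dispatch this request in a separate (deferred) thread"
  def deferred?(env)
    env['PATH_INFO'] == '/slow'
  end

  def call(env)
    [200, { 'Content-Type' => 'text/plain' }, ["done\n"]]
  end
end
```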
|
|
It turns out we were painfully lacking in tests for HTTP
requests where the Content-Length header _is_ set.
|
|
Since Rainbows! is supported when exposed directly to the
Internet, administrators may want to limit the amount of data a
user may upload in a single request body to prevent a
denial-of-service via disk space exhaustion.
This amount may be specified in bytes, the default limit being
1024*1024 bytes (1 megabyte). To override this default, a user
may specify `client_max_body_size' in the Rainbows! block
of their server config file:
Rainbows! do
  client_max_body_size 10 * 1024 * 1024
end
Clients that exceed the limit will get a "413 Request Entity Too
Large" response and the connection will be closed.
For chunked requests, we have no choice but to interrupt during
the client upload since we have no prior knowledge of the
request body size.
|
|
Since Rainbows! allows for graceful termination, let
EM kill and reap the tail(1) processes it spawned.
|
|
Although advertised as being Thin-only, the rack-fiber_pool gem
works with our EventMachine concurrency model as well.
Note that it's impossible to expose the streaming "rack.input"
behavior of the native FiberSpawn/FiberPool models via
middleware, but most people don't need a streaming "rack.input".
See http://github.com/mperham/rack-fiber_pool for more details
on the rack-fiber_pool gem.
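Usage would look something like this in a config.ru (a sketch assuming the rack-fiber_pool gem is installed and exposes the `Rack::FiberPool` middleware as its README describes):

```ruby
# config.ru -- sketch; requires the rack-fiber_pool gem
require 'rack/fiber_pool'
use Rack::FiberPool   # run each request inside a pooled Fiber
run lambda { |env|
  [200, { 'Content-Type' => 'text/plain' }, ["hi\n"]]
}
```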
|
|
Unicorn stopped reading all config.ru files as binary
starting with 0.97.0 for compatibility with rackup(1),
so systems that defaulted to US-ASCII encoding would
have trouble running this.
|
|
The Unicorn.builder helper will help us avoid namespace
conflicts inside config.ru, allowing us to pass tests.
While we're at it, port some tests over from the latest
unicorn.git for dealing with bad configs.
|
|
enabling ready_pipe in Unicorn 0.96.0 breaks this.
|
|
too dangerous with the ready_pipe feature in Unicorn 0.96+
|
|
Ruby 1.9 will complain otherwise
|
|
Tested with cramp-0.7 and eventmachine 0.12.10
|
|
Rev 0.3.2 makes performance with Threads* under Ruby 1.8
tolerable.
|
|
Some async apps rely on more than just "async.callback" and
make full use of Deferrables provided by the EM::Deferrable
module. Thanks to James Tucker for bringing this to our
attention.
|
|
Under all MRI 1.8 versions, a blocking Socket#accept Ruby method
needs to[1] translate to a non-blocking accept(2) system call,
which may wake up threads/processes unnecessarily. Unfortunately,
we failed to trap and ignore EAGAIN in those cases.
This issue did not affect Ruby 1.9 running under modern Linux
kernels where a _blocking_ accept(2) system call is not (easily,
at least) susceptible to spurious wakeups. Non-Linux systems
running Ruby 1.9 may be affected.
[1] - using a blocking accept(2) on a shared socket with
green threads is dangerous, as noted in
commit ee7fe220ccbc991e1e7cbe982caf48e3303274c7
(and commit 451ca6997b4f298b436605b7f0af75f369320425)
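The fix described above amounts to the standard non-blocking accept pattern: trap EAGAIN instead of letting a spurious wakeup raise out of the accept loop. A self-contained sketch:

```ruby
require 'socket'

# non-blocking accept with EAGAIN trapped; a wakeup with no
# pending client (spurious or raced-away) is simply ignored
server = TCPServer.new('127.0.0.1', 0)
client = begin
  server.accept_nonblock
rescue Errno::EAGAIN, Errno::EWOULDBLOCK, Errno::ECONNABORTED
  nil  # nobody was actually waiting; go back to the main loop
end
client.close if client
server.close
```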
|
|
We'll export this across the board to all Rack applications.
This provides the optimal way of sleeping regardless of the
concurrency model you choose. This method is still strongly
discouraged for pure event-driven models like Rev or
EventMachine (but the threaded/fiber/actor-based variants
are fine).
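In a Rack endpoint, usage would look roughly like this (a sketch assuming the exported method is reachable as `Rainbows.sleep`, per the commit above):

```ruby
# config.ru sketch -- under threaded/fiber/actor models this
# suspends only the current client, not the whole process
run lambda { |env|
  Rainbows.sleep(1.0)  # concurrency-model-aware sleep
  [200, { 'Content-Type' => 'text/plain' }, ["slept\n"]]
}
```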
|
|
|
|
This is like the traditional FiberSpawn, but more scalable (but
not necessarily faster) as it can use epoll or kqueue.
|
|
There's a good chunk of tests that fail with this, still.
Worse, I haven't been able to figure out what's wrong since
it looks like it would involve looking at C++ code...
|
|
This should be like RevThreadSpawn except with more predictable
performance (but higher memory usage under low load).
|
|
|
|
If we logged "ERROR", we should know about it.
|
|
|
|
Some people fork processes, so this avoids leaving a connection
hanging open because of that...
|
|
Fix one bad side effect of defaulting to ksh93 for my tests
during development; small cleanups and extra checks while
we're at it, too.
|
|
The test itself already exits immediately if it's
running an incompatible concurrency model, so avoid
having redundant logic in the GNUmakefile.
|
|
|
|
This lets us make further tests for compatibility without
dirtying our working tree.
|
|
Make sure app errors get logged correctly, and we no longer
return a 500 response when a client EOFs the write end (but not
the read end) of a connection.
|
|
Both FiberSpawn and FiberPool share similar main loops, the
only difference being the handling of connection acceptance.
So move the scheduler into its own function for consistency.
We'll also correctly implement keepalive timeout so clients
get disconnected at the right time.
|
|
This enables the safe use of Rainbows::AppPool with all
concurrency models, not just threaded ones. AppPool is now
effective with *all* Fiber-based concurrency models including
Revactor (and of course the new Fiber{Pool,Spawn} ones).
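Usage would look something like this in a config.ru (a sketch assuming AppPool is usable as Rack middleware with a `:size` option; `MyApp` is a hypothetical application):

```ruby
# config.ru sketch -- cap the number of app instances that may
# dispatch concurrently; the :size value is illustrative
require 'rainbows/app_pool'
use Rainbows::AppPool, :size => 30
run MyApp.new
```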
|
|
It works exactly like Actor.sleep and similar to Kernel.sleep
(no way to sleep indefinitely), but is compatible with the
IO.select-based Fiber scheduler we run. This method only works
within the context of a Rainbows! application dispatch.
|
|
This is another Fiber-based concurrency model that can exploit
a streaming "rack.input" for clients. Spawning Fibers seems
pretty fast, but maybe there are apps that will benefit from
this.
|
|
This one seems easy to get working and supports everything we
need to support from the server perspective. Apps will need
modified drivers, but it doesn't seem too hard to add
more/better support for wrapping IO objects with Fiber::IO.
|
|
Exposing a synchronous interface is too complicated for too
little gain. Given the following factors:
* basic ThreadSpawn performs admirably under REE 1.8
* both ThreadSpawn and Revactor work well under 1.9
* few applications/requests actually need a streaming "rack.input"
We've decided it's not worth the effort to attempt to support
a streaming rack.input at the moment. Instead, the new
RevThreadSpawn model performs much better for most applications
under Ruby 1.9.
|
|
And change the default to 2 seconds; most clients can
render the page and load all URLs within 2 seconds.
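In a Rainbows! config file, the knob would be set like this (a fragment in the same style as the `client_max_body_size` example above; 2 is now the default, shown explicitly only for illustration):

```ruby
Rainbows! do
  keepalive_timeout 2  # seconds; the new default
end
```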
|
|
Make sure any aborted/broken clients don't screw up
our connection accounting.
|