Date | Commit message |
|
For HTTP clients living on the edge and pipelining uploads, we
now fully support pipelined requests (as long as the application
consumes each request in its entirety).
|
|
All synchronous models have this fixed in unicorn 3.0.1,
so only Rev and EventMachine-based concurrency models
require code changes.
|
|
Hopefully it makes more sense now and is easier to
digest for new hackers.
|
|
We may have other uses for this in the future...
|
|
This release is targeted at the minority of web applications
that deal heavily with uploads.
Thanks to Unicorn 3.x, we now support HTTP keepalive for
requests with bodies as long as the application consumes them.
Unicorn 3.x also allows disabling the rewindability requirement
of "rack.input" (in violation of the Rack 1.x spec).
The global client_max_body_size may also be applied per-endpoint
using the Rainbows::MaxBody middleware described in:
http://rainbows.rubyforge.org/Rainbows/MaxBody.html
|
|
Oops, last commit was rushed
|
|
Unicorn 3.x includes HttpParser#next? which will reset the
parser for keepalive requests without extra steps.
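The consumer loop this enables might look like the following sketch. ToyParser is a hypothetical, line-based stand-in for Unicorn's HttpParser; only the shape of the parse/next? loop is the point here.

```ruby
# Stand-in parser (NOT Unicorn's C parser) sketching the keepalive loop
# that HttpParser#next? enables: parse one request, handle it, then let
# next? reset state and report whether pipelined bytes are already buffered.
class ToyParser
  def initialize
    @buf = +""
  end

  def <<(data)
    @buf << data
  end

  # returns one complete "request" (a line here), or nil if more is needed
  def parse
    line = @buf.slice!(/\A[^\n]*\n/)
    line && line.chomp
  end

  # like HttpParser#next?: reset for the following request and report
  # whether buffered bytes already contain the start of one
  def next?
    !@buf.empty?
  end
end

parser = ToyParser.new
parser << "GET /a\nGET /b\n"   # two pipelined "requests" in one read
requests = []
while req = parser.parse
  requests << req
  break unless parser.next?    # no extra reset step between requests
end
requests  # => ["GET /a", "GET /b"]
```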
|
|
To avoid denial-of-service attacks, the wrappers need to
intercept requests *before* they hit the memory allocator, so we
need to reimplement the read(all) and gets cases to use
smaller buffers whenever the application does not specify one.
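A minimal sketch of the idea, assuming hypothetical CHUNK and MAX_BODY values; capped_read_all is not Rainbows!' actual code, just an illustration of rejecting oversized bodies before any large allocation happens.

```ruby
require 'stringio'

CHUNK = 16 * 1024       # read in small pieces instead of one big allocation
MAX_BODY = 1024 * 1024  # hypothetical cap, like client_max_body_size

# read-everything reimplemented as CHUNK-sized reads, so an oversized
# body trips the limit before large buffers are ever allocated
def capped_read_all(input, max = MAX_BODY)
  out = +""
  while chunk = input.read(CHUNK)
    out << chunk
    raise "body too large" if out.bytesize > max
  end
  out
end

capped_read_all(StringIO.new("x" * 100)).bytesize  # => 100
```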
|
|
Those already use CapInput, just like the rest of the evented
Rainbows! world.
|
|
Kgio 2.0.0 has a superior API and less likely to conflict or
blow up with other applications. Unicorn 3.x requires Kgio 2.x,
too.
|
|
This allows the client_max_body_size implementation to not rely
on Unicorn::TeeInput internals, allowing it to be used with
Unicorn::StreamInput (or any other (nearly)
Rack::Lint-compatible input object).
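A hypothetical CapInput-style wrapper (not Rainbows!' actual implementation) shows why this works with any Rack-ish input object: it only needs the read/gets/each API, never TeeInput internals.

```ruby
require 'stringio'

# Hypothetical cap wrapper: works with any object responding to the
# Rack input API (read/gets/each), not just Unicorn::TeeInput
class CapInput
  def initialize(input, limit)
    @input, @left = input, limit
  end

  def read(*args)
    count(@input.read(*args))
  end

  def gets
    count(@input.gets)
  end

  def each
    @input.each { |chunk| yield count(chunk) }
  end

  private

  # track cumulative bytes and abort once the cap is exceeded
  def count(chunk)
    if chunk
      @left -= chunk.bytesize
      raise "request body too large" if @left < 0
    end
    chunk
  end
end

CapInput.new(StringIO.new("hello world"), 1024).read  # => "hello world"
```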
|
|
Errno::EAGAIN is still a problem under Ruby 1.9.2, so try harder
to avoid it and use kgio methods. Even when 1.9.3 is available,
kgio will still be faster as exceptions are slower than normal
return values.
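For illustration only: later core Rubies (2.1+) adopted the same return-value idea via `exception: false`, which behaves much like kgio's kgio_tryread returning :wait_readable instead of raising.

```ruby
r, w = IO.pipe

# try-read behavior: a symbol return instead of an Errno::EAGAIN
# exception when no data is available yet
res = r.read_nonblock(512, exception: false)
res  # => :wait_readable (pipe is empty, but nothing was raised)

w.write("hi")
res2 = r.read_nonblock(512, exception: false)
res2  # => "hi"
```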
|
|
The underlying symbolic names are easier to type and
are recommended.
|
|
The long-term goal is to make the Unicorn API more terse when
handling keepalive.
|
|
This release is merely a milestone in our evolving internal API.
Use of kgio may result in performance improvements under Ruby
1.9.2 with non-blocking I/O-intensive workloads.
The only bugfix is that SIGHUP reloads now restore defaults
for unset settings. A similar fix is included in Unicorn 2.0.0
as well.
|
|
These allow for small reductions in the number of variables
we have to manage; more changes are coming with later Unicorns.
|
|
For consistency, changed settings are reset back to
their default values if they are removed or commented
out from the config file.
|
|
Mostly internal changes for kgio (and Unicorn) integration.
There should be no (supported) user-visible changes from
Rainbows! 0.97.0. kgio should improve performance for
concurrency models that use non-blocking I/O internally,
especially under Ruby 1.9.2.
|
|
Once again we avoid documenting internals on the public
website and use code comments for other developers.
|
|
kgio_trywrite is superior if it is available.
|
|
It does not appear to be needed, for now, since the
parser and Unicorn::HttpRequest are one and the same.
|
|
This simplifies and disambiguates most constant resolution
issues as well as lowering our indentation level. Hopefully
this makes the code easier to understand.
|
|
Applications may use wait_readable-aware methods directly
to work with Rainbows!
|
|
Reduces confusion for constant resolution/scoping rules
and lowers LoC.
|
|
Rainbows::Client takes care of the I/O wait/read-ability
for us already.
|
|
Despite the large number of changes, most of it is code
movement here.
|
|
We get basic internal API changes from Unicorn,
code simplifications coming next.
|
|
Sometimes we have stupid syntax or constant resolution
errors in our code.
|
|
It removes the burden of byte slicing and setting file
descriptor flags. In some cases, we can remove unnecessary
peeraddr calls, too.
|
|
Noise is bad.
|
|
We now depend on Unicorn 1.1.3 to avoid race conditions during
log cycling. This bug mainly affected folks using Rainbows! as
a multithreaded static file server.
"keepalive_timeout 0" now works as documented for all backends
to completely disable keepalive. This was previously broken
under EventMachine, Rev, and Revactor.
There is a new Rainbows::ThreadTimeout Rack middleware which
gives soft timeouts to apps running on multithreaded backends.
There are several bugfixes for proxying IO objects and the usual
round of small code cleanups and documentation updates.
See the commits in git for all the details.
|
|
Although this behavior is mentioned in the documentation,
it was broken under EventMachine, Rev*, and Revactor.
Furthermore, we set the "Connection: close" header to allow the
client to optimize its handling of non-keepalive connections.
|
|
Proxying IO objects with threaded Rev concurrency models
occasionally failed with pipelined requests (t0034). By
deferring the on_write_complete callback until the next
"tick" (similar to what we do in Rev::Client#write),
we prevent clobbering responses during pipelining.
|
|
Remove an unused constant.
|
|
No constant resolution changes, avoid redefining
modules needlessly since this is not meant to be
used standalone.
|
|
We try to avoid adding singleton methods since they are too
easily accessible to the public and not needed by general users.
This also allows us (or just Zbatery) to more easily add support
for systems without FD_CLOEXEC or fcntl, and also to optimize
away an fcntl call on systems that inherit FD_CLOEXEC.
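A sketch of that optimization using Ruby's IO close-on-exec accessors; the helper name is made up, and real code would issue the fcntl(2) calls directly.

```ruby
r, w = IO.pipe

# only issue the fcntl(F_SETFD) syscall when the flag isn't already
# set: a no-op on platforms/Rubies that inherit FD_CLOEXEC on new FDs
def ensure_cloexec(io)
  io.close_on_exec = true unless io.close_on_exec?
  io
end

ensure_cloexec(r).close_on_exec?  # => true
```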
|
|
This allows for per-dispatch timeouts similar to (but not exactly)
the way Mongrel (1.1.x) implemented them with threads.
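A rough sketch of the idea, assuming a watchdog thread that raises in the dispatching thread; the real Rainbows::ThreadTimeout middleware is more careful than this, and all names here are hypothetical.

```ruby
class SoftTimeout < StandardError; end

# Run a block with a soft timeout: a watchdog thread raises SoftTimeout
# in the calling thread if the block runs too long. Sketch only; raising
# into another thread is unsafe around arbitrary code (ensure blocks,
# I/O), which is why real middleware must be more careful.
def with_soft_timeout(seconds)
  worker = Thread.current
  watchdog = Thread.new do
    sleep(seconds)
    worker.raise(SoftTimeout, "request took longer than #{seconds}s")
  end
  yield
ensure
  watchdog.kill
end

with_soft_timeout(5) { 1 + 1 }  # => 2, finishes well under the limit
```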
|
|
First off we use an FD_MAP to avoid creating redundant IO
objects which map to the same FD. When that doesn't work, we'll
fall back to trapping Errno::EBADF and IOError where
appropriate.
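A minimal sketch of the FD-map idea with hypothetical names; `autoclose: false` stands in for whatever ownership rules the real code uses.

```ruby
# Cache IO objects by descriptor number so the same FD never gets two
# Ruby IO wrappers (two wrappers invite double-close bugs); fall back
# to rescuing Errno::EBADF / IOError when the FD is already gone
FD_MAP = {}

def io_for(fd)
  FD_MAP[fd] ||= IO.for_fd(fd, autoclose: false)
rescue Errno::EBADF, IOError
  FD_MAP.delete(fd)
  nil
end

r, w = IO.pipe
io_for(r.fileno).equal?(io_for(r.fileno))  # => true, same object each time
```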
|
|
Our keep-alive timeout mechanism does not need to kick in and
redundantly close a connection when a client disconnects.
Fortunately there is no danger of redundantly closing the same
numeric file descriptors (and perhaps causing
difficult-to-track-down errors).
|
|
/dev/fd/0 may not be stat()-able on some systems after dropping
permissions from root to a regular user, so just check for
"/dev/fd", which seems to work on RHEL 2.6.18 kernels. This also
allows us to be used independently of Unicorn in case somebody
ever feels the compelling need to /close/ stdin.
|
|
That is the official name of the project and we will not lead
people to believe differently.
|
|
For concurrency models that use sendfile or IO.copy_stream, HTTP
Range requests are honored when serving static files. Due to
the lack of known use cases, multipart range responses are not
supported.
When serving static files with sendfile and proxying
pipe/socket bodies, response bodies are always properly closed
and we have more test cases for dealing with prematurely
disconnecting clients.
Concurrency model specific changes:
EventMachine, NeverBlock -
* keepalive is now supported when proxying pipes/sockets
* pipelining works properly when using EM::FileStreamer
* these remain the only concurrency models _without_
Range support (EM::FileStreamer doesn't support ranges)
Rev, RevThreadSpawn, RevThreadPool -
* keepalive is now supported when proxying pipes/sockets
* pipelining works properly when using sendfile
RevThreadPool -
* no longer supported under 1.8, since it pegs the CPU at 100%.
Use RevThreadSpawn (or any other concurrency model) if
you're on 1.8, or better yet, switch to 1.9.
Revactor -
* proxying pipes/sockets with DevFdResponse is much faster
thanks to a new Actor-aware IO wrapper (used transparently
with DevFdResponse)
* sendfile support added, along with Range responses
FiberSpawn, FiberPool, RevFiberSpawn -
* Range responses supported when using sendfile
ThreadPool, ThreadSpawn, WriterThreadPool, WriterThreadSpawn -
* Range responses supported when using sendfile or
IO.copy_stream.
See the full git logs for a list of all changes.
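Honoring a single-range request means translating a "bytes=" Range header into an (offset, length) pair for sendfile. The following is a sketch under assumed names and edge-case handling, not Rainbows!' actual parser; multipart ranges return nil as unsupported, matching the note above.

```ruby
# Translate a single "bytes=" Range header into [offset, length] for
# sendfile; returns nil for absent, multipart, or unsatisfiable ranges
def byte_range(header, size)
  return nil unless header =~ /\Abytes=(\d*)-(\d*)\z/
  first, last = $1, $2
  if first.empty?                 # suffix form: "bytes=-500" = last 500 bytes
    return nil if last.empty?
    len = [last.to_i, size].min
    [size - len, len]
  else
    a = first.to_i
    b = last.empty? ? size - 1 : [last.to_i, size - 1].min
    return nil if a > b || a >= size
    [a, b - a + 1]
  end
end

byte_range("bytes=0-499", 1000)  # => [0, 500]
byte_range("bytes=-500", 1000)   # => [500, 500]
byte_range("bytes=0-0,-1", 1000) # => nil (multipart not supported)
```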
|
|
It's an internal implementation detail and not for
user consumption.
|
|
EventMachine may close the underlying file descriptor on us if
there are unrecoverable errors during a write, so checking
IO#closed? is pointless: EM does not invalidate the Ruby IO
object when it closes the descriptor.
|
|
Due to the synchronous nature of Revactor, we can
be certain sendfile won't overstep the userspace
output buffering done by Rev.
|
|
This makes life easier for the lazy GC when proxying
large responses (and also improves memory locality).
|
|
Proxying regular Ruby IO objects while Revactor is in use is
highly suboptimal, so proxy it with an Actor-aware wrapper for
better scheduling.
|
|
Since TCP sockets stream, HTTP requests do not come in at
well-defined boundaries and pipelined requests may arrive in
staggered form. We need to ensure our receive_data callback
doesn't fire any actions at all while responding with a
deferrable @body.
We still need to be careful about buffering, since EM does not
appear to allow temporarily disabling read events (without
pausing writes), so we shut down the read end of the socket
if the buffer reaches a maximum header size limit.
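The buffering rule can be sketched as follows; `buffer_headers` and `MAX_HEADER` are hypothetical, and the read-side shutdown is represented by a return value instead of EM socket calls.

```ruby
MAX_HEADER = 16 * 1024  # hypothetical cap on buffered header bytes

# Simplified receive_data logic: append bytes until the header
# terminator arrives; if the buffer grows past the cap first, tell the
# caller to shut down the read side of the socket
def buffer_headers(buf, data, max = MAX_HEADER)
  buf << data
  return :headers_done if buf.include?("\r\n\r\n")
  buf.bytesize > max ? :shutdown_reads : :need_more
end

buf = +""
buffer_headers(buf, "GET / HTTP/1.1\r\n")        # => :need_more
buffer_headers(buf, "Host: example.com\r\n\r\n") # => :headers_done
```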
|
|
Not sure where this is happening, but this can trigger
Errno::EBADF under heavy load.
|
|
When proxying pipes/sockets, it's possible for the Rev::IO#write
to fail and close our connection. In that case we do not want
our client to continue with the on_write_complete callback.
|