|
We'll export this across the board so all Rack applications
can sleep with it. This provides the optimal method of sleeping
regardless of the concurrency model you choose. This method
is still strongly discouraged for purely event-driven models
like Rev or EventMachine (but the threaded/fiber/actor-based
variants are fine).
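A minimal sketch of what such an exported sleep could look like; the module and accessor names here are assumptions, not the real Rainbows! API. The idea is that the server-level sleep delegates to whatever the active concurrency model registered, defaulting to plain Kernel#sleep for threaded models:

```ruby
# Sketch only: MiniServer and its sleeper hook are illustrative names.
module MiniServer
  @sleeper = lambda { |s| Kernel.sleep(s) } # safe default for threaded models

  class << self
    attr_writer :sleeper

    # what applications would call instead of Kernel#sleep
    def sleep(seconds)
      @sleeper.call(seconds)
    end
  end
end

# a fiber/actor-based model could register a cooperative variant, e.g.:
#   MiniServer.sleeper = lambda { |s| SomeFiberScheduler.sleep(s) }
MiniServer.sleep(0.01)
```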
|
|
|
|
Thanks to Ben Sandofsky for the extra set of eyes
|
|
This is like the traditional FiberSpawn, but more scalable
(though not necessarily faster), as it can use epoll or kqueue.
|
|
Under MRI 1.8, listen sockets do not appear to have the
nonblocking I/O flag set by default, nor does MRI set the
nonblocking I/O flag when calling #accept (though it does
when using #accept_nonblock, of course).
Normally this is not a problem even when using green threads
since MRI will internally select(2) on the file descriptor
before attempting a blocking (and immediately successful)
accept(2).
However, when sharing a listen descriptor across multiple
processes, spurious wakeups are likely to occur: multiple
processes may be woken up when a single client connects.
This is a problem because accept(2)-ing in multiple
threads/processes for a single connection leaves all but one
of them blocked in accept(2), stalling their green threads.
This is not an issue under 1.9 where a blocking accept() call
unlocks the GVL to let other threads run.
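The safe pattern the commit describes can be sketched as follows: select(2) on the listen socket, then use a nonblocking accept, so a spurious wakeup (another process won the race for the connection) raises an error to retry on instead of leaving the process stuck in a blocking accept(2). This is a standalone illustration, not Rainbows! code:

```ruby
require "socket"

srv = TCPServer.new("127.0.0.1", 0)
client = TCPSocket.new("127.0.0.1", srv.addr[1]) # simulate an incoming connection

begin
  # raises instead of blocking when no connection is actually queued
  conn = srv.accept_nonblock
rescue Errno::EAGAIN, Errno::EWOULDBLOCK, Errno::ECONNABORTED
  IO.select([srv]) # wait until the descriptor is genuinely readable
  retry
end
conn.close
client.close
srv.close
```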
|
|
TeeInput may explicitly close on client disconnects to
avoid error messages being written to the socket, likewise
with "hack.io" users.
|
|
|
|
|
|
|
|
This makes them easier to override in subclasses.
|
|
It gets in the way of Rev/EM-based models that won't use EvCore,
and it doesn't actually do anything useful other than adding an
extra layer of indirection to follow.
|
|
Make RACK_DEFAULTS == Unicorn::HttpRequest::DEFAULTS
and LOCALHOST == Unicorn::HttpRequest::LOCALHOST
No point in having duplicate objects, and it also makes it
easier to share runtime constant modifications between servers.
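The effect of aliasing a constant rather than duplicating it can be shown in isolation (module names here are stand-ins, not the real Unicorn/Rainbows! constants): both names refer to the same object, so runtime modifications are visible through either one.

```ruby
# Illustrative modules, not the real ones named in the commit above.
module Upstream
  DEFAULTS = { "SERVER_NAME" => "localhost" }
end

module Server
  RACK_DEFAULTS = Upstream::DEFAULTS # an alias, not a duplicate hash
end

# a runtime modification through one constant...
Server::RACK_DEFAULTS["rack.multithread"] = true
# ...is visible through the other, since they are the same object
Upstream::DEFAULTS["rack.multithread"] # => true
```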
|
|
This release introduces compatibility with Sunshowers, a library
for Web Sockets; see http://rainbows.rubyforge.org/sunshowers
for more information. It also includes several small cleanups
and fixes.
Eric Wong (20):
add RevThreadPool to README
rev: do not initialize a Rev::Loop in master process
rainbows.1: update headers
do not log IOError raised during app processing
move "async.callback" constant to EvCore
larger thread pool default sizes ({Rev,}ThreadPool)
ev_core: no need to explicitly close TmpIOs
EventMachine: allow usage as a base class
NeverBlock: resync with our recent EM-related expansion
RevThread*: move warning message to a saner place
EventMachineDefer: preliminary (and broken) version
TODO: add EM Deferrables
RevThread*: remove needless nil assignment
README: HTML5 Web Sockets may not be supported, yet...
env["hack.io"] for Fiber*, Revactor, Thread* models
EventMachineDefer is experimental
README: add Sunshowers reference
Rakefile: resync with Unicorn
doc/comparison: add Web Sockets to comparison
README updates
|
|
Not enough time or interest at the moment to get this
fully working...
|
|
This exposes a client IO object directly to the underlying
application.
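A hedged sketch of an application consuming such an env key; only the "hack.io" key name comes from this log, and the response bodies here are purely illustrative. An app can write straight to the exposed IO object, falling back to a normal Rack response when the server does not provide it:

```ruby
require "stringio"

# illustrative app; the "hack.io" key is from the commit above,
# everything else is made up for demonstration
app = lambda do |env|
  if io = env["hack.io"]
    io.write("hello, written straight to the client socket")
    [200, { "Content-Type" => "text/plain" }, []]
  else
    [200, { "Content-Type" => "text/plain" }, ["no raw IO available"]]
  end
end

# exercising both paths with a StringIO standing in for the socket
io = StringIO.new
app.call({ "hack.io" => io })
io.string # => "hello, written straight to the client socket"
```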
|
|
We no longer explicitly close @input
|
|
A good chunk of tests still fail with this.
Worse, I haven't been able to figure out what's wrong, since
it looks like fixing it would involve looking at C++ code...
|
|
Don't expect RevThreadPool to work with Rev <= 0.3.1, either.
|
|
The last change to our EventMachine support code broke
our (lightly-tested) NeverBlock support badly.
|
|
We'll be adding EventMachine-based concurrency models.
|
|
Just let the GC deal with it
|
|
This matches what EM sets for its built-in thread pool
|
|
Rev/Packet-based models may support it in the future
|
|
A client disconnect can trigger IOError on close in cases
where EOFError is not raised.
|
|
It may make it harder to switch between concurrency models with
SIGHUP this way...
|
|
This release fixes a memory leak in our existing Revactor
concurrency model. A new RevThreadPool concurrency model has
been added, along with small cleanups to exit handling in workers.
|
|
This should be like RevThreadSpawn except with more predictable
performance (but higher memory usage under low load).
|
|
We now correctly exit!(2) if our master can't kill us.
|
|
This model has basically been rewritten to avoid unbounded
memory growth (slow without keepalive) due to listeners
not properly handling :*_closed messages.
Performance is much more stable as a result, too.
|
|
Just die naturally here if threads don't die on
their own.
|
|
keepalive_timeout (default: 2 seconds) is now supported to
disconnect idle connections. Several new concurrency models have
been added: NeverBlock, FiberSpawn, and FiberPool, all of which
have only been lightly tested. RevThreadSpawn loses
streaming input support to become simpler and faster for the
general cases. AppPool middleware is now compatible with all
Fiber-based models, including Revactor and NeverBlock.
A new document gives a summary of all the options we give you:
http://rainbows.rubyforge.org/Summary.html
If you're using any of the Rev-based concurrency models, the
latest iobuffer (0.1.3) gem will improve performance. Also,
RevThreadSpawn should become usable under MRI 1.8 with the next
release of Rev (0.3.2).
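One way to enforce a keepalive timeout like the one described above is to wait for readability with a deadline and disconnect the client when it expires. This is a sketch under that assumption, not Rainbows! internals:

```ruby
require "socket"

KEEPALIVE_TIMEOUT = 2 # seconds, matching the default mentioned above

# returns request data, or nil (closing the socket) if the client
# stayed idle past the keepalive window
def read_request_with_timeout(client, timeout = KEEPALIVE_TIMEOUT)
  if IO.select([client], nil, nil, timeout)
    client.readpartial(16_384)
  else
    client.close # idle too long: disconnect
    nil
  end
end

# demonstration with a local socketpair standing in for a client
a, b = UNIXSocket.pair
b.write("GET / HTTP/1.1\r\n\r\n")
read_request_with_timeout(a, 1)
```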
|
|
iobuffer 0.1.3 already sets this.
|
|
|
|
|
|
Eventually we hope to be able to accept arguments like
the way Rack handlers do it:
use :Foo, :bool1, :bool2, :option => value
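A hypothetical sketch of what such an argument-accepting `use` could look like; the real Rainbows! configurator may look nothing like this. Bare symbols become boolean flags and a trailing hash supplies keyed options, mirroring the example line above:

```ruby
# MiniConf and its options store are invented for illustration.
module MiniConf
  def self.options
    @options ||= {}
  end

  def self.use(model, *args)
    opts = args.last.is_a?(Hash) ? args.pop : {}
    args.each { |flag| opts[flag] = true } # bare symbols become booleans
    options[:model] = model
    options.merge!(opts)
  end
end

MiniConf.use :Foo, :bool1, :bool2, :option => :value
```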
|
|
It's a tad faster for non-keepalive connections and should do
better on large SMP machines with many workers AND threads.
That means the ActorSpawn model in Rubinius is nothing more than
ThreadSpawn underneath (for now).
|
|
I so wish it used Fibers/green-threads underneath instead.
|
|
Some people fork processes, so this avoids holding a connection
open in that case...
|
|
|
|
Broken in 145185b76dafebe5574e6a3eefd3276555c72016
|
|
Rubinius Actor specs seem a bit lacking at the moment.
If we find time, we'll fix them, otherwise we'll let
somebody else do it.
|
|
It seems to basically work; this is based heavily on the
Revactor one...
|
|
Not sure what drugs the person that wrote it was on at the
time.
|
|
It can noticeably improve performance if available.
ref: http://rubyforge.org/pipermail/rev-talk/2009-November/000116.html
|
|
|
|
Patches submitted to rev-talk, awaiting feedback and
hopefully a new release.
|
|
While we're at it, ensure our encoding is sane
|
|
While Revactor uses Fiber::Queue in AppPool, we don't want/need
to expose the rest of our Fiber stuff to it since it can lead to
lost Fibers if misused. This includes the Rainbows::Fiber.sleep
method which only works inside Fiber{Spawn,Pool} models and
the Rainbows::Fiber::IO wrapper class.
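A toy scheduler (not Rainbows! code) illustrating why such a Fiber-based sleep only works inside models that resume sleeping fibers: the "sleep" merely records a deadline and yields, so a call outside a cooperating scheduler would suspend the fiber with nothing to ever wake it — a lost Fiber.

```ruby
begin
  require "fiber" # Fiber.current needs this on Rubies before 3.1
rescue LoadError
end

sleepers = {} # fiber => wake-up time

# the cooperative "sleep": record a deadline, then yield control
fiber_sleep = lambda do |seconds|
  sleepers[Fiber.current] = Time.now + seconds
  Fiber.yield # suspended until some scheduler resumes us
end

f = Fiber.new do
  fiber_sleep.call(0.01)
  :woke
end
f.resume # runs until the cooperative sleep yields

# the scheduler loop: resume fibers whose deadlines have passed
result = nil
until sleepers.empty?
  sleepers.dup.each do |fib, deadline|
    next if Time.now < deadline
    sleepers.delete(fib)
    result = fib.resume # the fiber finishes, returning its last value
  end
end
```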
|
|
Make sure app errors get logged correctly, and we no longer
return a 500 response when a client EOFs the write end (but not
the read end) of a connection.
|
|
Both FiberSpawn and FiberPool share similar main loops, the
only difference being the handling of connection acceptance.
So move the scheduler into its own function for consistency.
We'll also correctly implement keepalive timeout so clients
get disconnected at the right time.
|