|
Covering my ass from draconian legislation.
|
|
Rack 2.x is less of a jump than initially expected,
and we've already supported it for a few releases.
|
|
We can do it!
|
|
Disabling TeeInput is possible now, so the filesystem
is no longer a bottleneck :>
|
|
This allows users to override the current Rack spec and disable
the rewindable input requirement, letting applications use less
I/O to minimize the performance impact of processing uploads.
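As a sketch of what this commit enables: the directive Unicorn's configurator exposes for this is `rewindable_input` (the file path below is an assumption, not part of this commit):

```ruby
# config/unicorn.rb -- hypothetical config file path.
# Disabling rewindability skips buffering request bodies to the
# filesystem (TeeInput), trading strict Rack::Lint compliance for
# less I/O when processing uploads.
rewindable_input false
```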
|
|
This gives us some things to think about.
|
|
* Bourne shell - TAP test suite stolen from Rainbows!
* tests currently pass under FreeBSD 7.2
|
|
Not fun, but maybe this can help us spot _real_ problems
more easily in the future.
|
|
|
|
|
|
|
|
|
|
I'm still having a hard time justifying this...
|
|
Rainbows! is more ambitious and a separate project now.
|
|
|
|
Note that Rubinius itself is still under heavy development, so
things we fix may break again. The pure-Ruby parts of Unicorn
don't even work properly on Rubinius.
|
|
The code I was _about_ to commit to support them was too ugly
and the performance benefit in real applications/clients is
unproven.
Support for these things also had the inevitable side effect of
adding overhead to non-persistent requests. Worst of all,
interaction with people tweaking TCP_CORK/TCP_NOPUSH suddenly
becomes an order of magnitude more complex because of the
_need_ to flush the socket out in between requests (for
most apps).
Aggressive pipelining can actually help a lot with a direct
connection to Unicorn, not via proxy. But the applications (and
clients) that would benefit from it are very few and moving
those applications away from HTTP entirely would be an even
bigger benefit.
The C/Ragel parser will maintain keepalive support for now since
I've always intended for that to be used in more general-purpose
HTTP servers than Unicorn.
For documentation purposes, the keep-alive/pipelining patch is
included below in its entirety. I don't have the
TCP_CORK/TCP_NOPUSH parts in there because that's when I finally
gave up and decided it would not be worth supporting.
diff --git a/lib/unicorn.rb b/lib/unicorn.rb
index b185b25..cc58997 100644
--- a/lib/unicorn.rb
+++ b/lib/unicorn.rb
@@ -439,22 +439,24 @@ module Unicorn
# once a client is accepted, it is processed in its entirety here
# in 3 easy steps: read request, call app, write app response
def process_client(client)
+ response = nil
client.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC)
- response = app.call(env = REQUEST.read(client))
-
- if 100 == response.first.to_i
- client.write(Const::EXPECT_100_RESPONSE)
- env.delete(Const::HTTP_EXPECT)
- response = app.call(env)
- end
+ begin
+ response = app.call(env = REQUEST.read(client))
- HttpResponse.write(client, response)
+ if 100 == response.first.to_i
+ client.write(Const::EXPECT_100_RESPONSE)
+ env.delete(Const::HTTP_EXPECT)
+ response = app.call(env)
+ end
+ end while HttpResponse.write(client, response, HttpRequest::PARSER.keepalive?)
+ client.close
# if we get any error, try to write something back to the client
# assuming we haven't closed the socket, but don't get hung up
# if the socket is already closed or broken. We'll always ensure
# the socket is closed at the end of this function
- rescue EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL,Errno::EBADF
- client.write_nonblock(Const::ERROR_500_RESPONSE) rescue nil
+ rescue EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL,
+ Errno::EBADF,Errno::ENOTCONN
client.close rescue nil
rescue HttpParserError # try to tell the client they're bad
client.write_nonblock(Const::ERROR_400_RESPONSE) rescue nil
diff --git a/lib/unicorn/http_request.rb b/lib/unicorn/http_request.rb
index 1358ccc..f73bdd0 100644
--- a/lib/unicorn/http_request.rb
+++ b/lib/unicorn/http_request.rb
@@ -42,6 +42,7 @@ module Unicorn
# to handle any socket errors (e.g. user aborted upload).
def read(socket)
REQ.clear
+ pipelined = PARSER.keepalive? && 0 < BUF.size
PARSER.reset
# From http://www.ietf.org/rfc/rfc3875:
@@ -55,7 +56,9 @@ module Unicorn
TCPSocket === socket ? socket.peeraddr.last : LOCALHOST
# short circuit the common case with small GET requests first
- PARSER.headers(REQ, socket.readpartial(Const::CHUNK_SIZE, BUF)) and
+ PARSER.headers(REQ,
+ pipelined ? BUF :
+ socket.readpartial(Const::CHUNK_SIZE, BUF)) and
return handle_body(socket)
data = BUF.dup # socket.readpartial will clobber data
diff --git a/lib/unicorn/http_response.rb b/lib/unicorn/http_response.rb
index 5602a43..e22b5f9 100644
--- a/lib/unicorn/http_response.rb
+++ b/lib/unicorn/http_response.rb
@@ -20,12 +20,15 @@ module Unicorn
# to most clients since they can close their connection immediately.
class HttpResponse
+ include Unicorn::SocketHelper
# Every standard HTTP code mapped to the appropriate message.
CODES = Rack::Utils::HTTP_STATUS_CODES.inject({}) { |hash,(code,msg)|
hash[code] = "#{code} #{msg}"
hash
}
+ CLOSE = "close".freeze # :nodoc
+ KEEPALIVE = "keep-alive".freeze # :nodoc
# Rack does not set/require a Date: header. We always override the
# Connection: and Date: headers no matter what (if anything) our
@@ -34,7 +37,7 @@ module Unicorn
OUT = [] # :nodoc
# writes the rack_response to socket as an HTTP response
- def self.write(socket, rack_response)
+ def self.write(socket, rack_response, keepalive = false)
status, headers, body = rack_response
status = CODES[status.to_i] || status
OUT.clear
@@ -50,6 +53,10 @@ module Unicorn
end
end
+ if keepalive && (0 == HttpRequest::BUF.size)
+ keepalive = IO.select([socket], nil, nil, 0.0)
+ end
+
# Rack should enforce Content-Length or chunked transfer encoding,
# so don't worry or care about them.
# Date is required by HTTP/1.1 as long as our clock can be trusted.
@@ -57,10 +64,10 @@ module Unicorn
socket.write("HTTP/1.1 #{status}\r\n" \
"Date: #{Time.now.httpdate}\r\n" \
"Status: #{status}\r\n" \
- "Connection: close\r\n" \
+ "Connection: #{keepalive ? KEEPALIVE : CLOSE}\r\n" \
"#{OUT.join(Z)}\r\n")
body.each { |chunk| socket.write(chunk) }
- socket.close # flushes and uncorks the socket immediately
+ keepalive
ensure
body.respond_to?(:close) and body.close rescue nil
end
|
|
|
|
|
|
This change gives applications full control to deny clients
from uploading unwanted message bodies. This also paves the
way for doing things like upload progress notification within
applications in a Rack::Lint-compatible manner.
Since we don't support HTTP keepalive, we have more freedom
here: we can close TCP connections and deny clients the ability
to write to us (and thus waste our bandwidth).
While I could've left this feature off by default indefinitely
for maximum backwards compatibility (for arguably broken
applications), Unicorn is not and has never been about
supporting the lowest common denominator.
|
|
Support for the "Trailer:" header and associated trailer
lines should be reasonably well supported now.
|
|
We respond with an "HTTP/1.1 100 Continue" message to
encourage a client to send the rest of the body.
This is part of the HTTP/1.1 standard but not often implemented
by servers:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3
This will speed up curl uploads since curl sleeps up to 1 second if
no response is received:
http://curl.haxx.se/docs/faq.html#My_HTTP_POST_or_PUT_requests_are
|
|
|
|
Timeouts of less than 2 seconds are unsafe due to the lack of
subsecond resolution in most POSIX filesystems. This is the
trade-off for using a low-complexity solution for timeouts.
Since this type of timeout is a last resort, 2 seconds is not
entirely unreasonable IMNSHO. Additionally, timing out too
aggressively can put us in a fork loop and slow down the system.
Of course, the default is 60 seconds and most people do not
bother to change it.
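A configuration sketch reflecting the constraint above (`timeout` is the Unicorn configurator directive; the file path is an assumption):

```ruby
# config/unicorn.rb -- hypothetical path.
# timeout is a last-resort worker kill switch, not a request
# deadline; values below 2 seconds are unsafe because most POSIX
# filesystems lack subsecond timestamp resolution.
timeout 60  # the default; most deployments leave it alone
```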
|
|
This allows dynamic tuning of the worker_processes count without
having to restart existing ones. This also allows
worker_processes to be set to a low initial amount in the config
file for low-traffic deployments/upgrades and then scaled up as
the old processes are killed off.
Remove the proposed reexec_worker_processes from TODO since this
is far more flexible and powerful.
This will allow not-yet-existent third-party monitoring tools to
dynamically change and scale worker processes according to site
load without increasing the complexity of Unicorn itself.
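Scaling is driven by signals to the master process: TTIN adds one worker, TTOU removes one. A minimal sketch of how a monitoring tool might compute the signals to send (`scale_signals` is an illustrative helper, not Unicorn API):

```ruby
# Returns the list of signals that bring the worker count from
# `current` to `desired`: TTIN increments, TTOU decrements.
def scale_signals(current, desired)
  sig = desired > current ? :TTIN : :TTOU
  Array.new((desired - current).abs, sig)
end

# usage sketch, assuming master_pid holds the Unicorn master's pid:
#   scale_signals(2, 4).each { |s| Process.kill(s, master_pid) }
```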
|
|
|
|
|
|
Instead of having global options for all listeners,
make all socket options per-listener. This allows
reverse-proxies to pick different listeners to get
different options on different sockets.
Given a cluster of machines (10.0.0.1, 10.0.0.2, 10.0.0.3)
running Unicorn with the following config:
------------------ 8< ----------------
listen "/tmp/local.sock", :backlog => 1
listen "*:8080" # use the backlog=1024 default
------------------ 8< ----------------
It is possible to configure a reverse proxy to try to use
"/tmp/local.sock" first and then fall back to using the
TCP listener on port 8080 in a failover configuration.
Thus the nginx upstream configuration on 10.0.0.1 to
complement this would be:
------------------ 8< ----------------
upstream unicorn_cluster {
  # reject connections ASAP if we are overloaded
  server unix:/tmp/local.sock;

  # fall back to other machines in the cluster via "backup"
  # listeners which have a large backlog queue.
  server 10.0.0.2:8080 backup;
  server 10.0.0.3:8080 backup;
}
------------------ 8< ----------------
This removes the global "backlog" config option which
was inflexible with multiple machines in a cluster
and exposes the ability to change SO_SNDBUF/SO_RCVBUF
via setsockopt(2) for the first time.
|
|
|
|
|
|
|
|
|
|
Note: since we've stripped everything down to hell, we're Ruby
1.9 compatible at the moment. Also remove references to that
new school stuff like JRuby and threads.
|
|
|
|
|
|
* 'master' of git@github.com:fauna/mongrel:
Merge pivotal code.
Moving toward using a logger instead of dumping to STDERR all over the place.
TODO been did.
No commands.
|
|
|
|
|
|
* 'master' of git@github.com:fauna/mongrel:
Did that.
|
|
|
|
|
|
|
|
git-svn-id: svn+ssh://rubyforge.org/var/svn/mongrel/branches/stable_1-1@969 19e92222-5c0b-0410-8929-a290d50e31e9
|
|
git-svn-id: svn+ssh://rubyforge.org/var/svn/mongrel/trunk@865 19e92222-5c0b-0410-8929-a290d50e31e9
|
|
git-svn-id: svn+ssh://rubyforge.org/var/svn/mongrel/trunk@854 19e92222-5c0b-0410-8929-a290d50e31e9
|