Show request details on timeout killing
- by Bráulio Bhavamitra @ 12/04 21:15 UTC - next

Hello all,

Currently, unicorn kills a worker that reached timeout with the
following message:
E, [2014-12-04T19:12:23.646053 #32612] ERROR -- : worker=4 PID:11911
timeout (61s > 60s), killing

How can I see which URL was being processed by that worker?

I need to identify the problematic request.

cheers,
bráulio


  Re: Show request details on timeout killing
  - by Eric Wong @ 12/05 01:06 UTC - next/prev

  Bráulio Bhavamitra <braulio@eita.org.br> wrote:
  > Hello all,
  > 
  > Currently, unicorn kills a worker that reached timeout with the
  > following message:
  > E, [2014-12-04T19:12:23.646053 #32612] ERROR -- : worker=4 PID:11911
  > timeout (61s > 60s), killing
  > 
  > How can I see which URL was being processed by that worker?
  
  I suggest adding better logging to your app, perhaps to log when
  requests start.
  
  > I need to identify the problematic request.
  
  And to use application-level timeouts, see:
  
  	http://unicorn.bogomips.org/Application_Timeouts.html
  
  The SIGKILL timeout in unicorn is only a last resort when the Ruby VM
  is broken beyond repair and cannot respond to any signals[1].
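  One possible shape for such an application-level timeout is a tiny Rack
  middleware (a sketch only; the class name and 55s default are made up for
  illustration, and the linked doc generally favors narrower timeouts on
  individual DB/network calls over Timeout.timeout, which can interrupt
  code at unsafe points):

```ruby
require 'timeout'

# Hypothetical sketch (not from the linked doc): fail a request at the
# application level before unicorn's last-resort SIGKILL fires.
# Timeout.timeout can interrupt code at unsafe points, so per-call
# DB/network timeouts are generally preferable where available.
class AppTimeout
  def initialize(app, seconds = 55) # keep below unicorn's 60s default
    @app = app
    @seconds = seconds
  end

  def call(env)
    Timeout.timeout(@seconds) { @app.call(env) }
  end
end
```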
  
  [1] SIGKILL and SIGSTOP are special signals which the kernel
      enforces, Ruby has no chance to block/catch/ignore them.


    Re: Show request details on timeout killing
    - by Bráulio Bhavamitra @ 12/05 21:58 UTC - next/prev

    Ok Eric, the problem is that some requests are in an infinite loop,
    consuming a lot of resources.
    
    Rails/nginx already log many requests to production.log/access.log, but
    I can't tell which of them timed out.
    
    cheers,
    bráulio
    
    On Thu, Dec 4, 2014 at 10:06 PM, Eric Wong <e@80x24.org> wrote:


      Re: Show request details on timeout killing
      - by Eric Wong @ 12/05 22:16 UTC - next/prev

      Bráulio Bhavamitra <braulio@eita.org.br> wrote:
      > Rails/nginx already log many requests to production.log/access.log, but
      > I can't tell which of them timed out.
      
      Can't Rails log when a request starts and finishes?
      (I haven't touched Rails in years)
      
      Look for starts on PIDs without finishes.
      
      Something along the lines of the following middleware (totally untested):
      
          class LogBeforeAfter
            def initialize(app)
              @app = app
            end
      
            def call(env)
              env["rack.logger"].info("#$$ start #{env['PATH_INFO']}")
              response = @app.call(env)
              env["rack.logger"].info("#$$   end #{env['PATH_INFO']}")
              response
            end
          end
      
          -------------- config.ru ---------------
          use LogBeforeAfter
          # other middlewares...
          run YourApp.new
      
      See Rack::CommonLogger source for more examples/detail.


No, passenger 5.0 is not faster than unicorn :)
- by Bráulio Bhavamitra @ 12/03 09:50 UTC - next/prev

Hello all,

I've just tested one instance of each (unicorn with one worker, and
passenger 5 with --max-pool-size 1) on the Rails app I work on.

And the results are just as I expected, no miracle at all: Unicorn is
still the fastest!
(the difference is only a few milliseconds less per request)

The blocking design of unicorn is proving itself very efficient.

cheers!
bráulio


  Re: No, passenger 5.0 is not faster than unicorn :)
  - by Sam Saffron @ 12/03 09:56 UTC - next/prev

  I covered this here:
  http://discuss.topazlabs.com/t/amidst-blizzards-they-rest/1147
  
  It seems like an odd marketing move to me ... optimising a bit that
  needs very little help. Heck, ripping out hashie and the 50 frames
  omniauth injects would have a significantly bigger impact on rails
  apps out there than optimising the 0.5% that needs little optimising.
  
  On Wed, Dec 3, 2014 at 8:50 PM, Bráulio Bhavamitra <braulio@eita.org.br> wrote:


    Re: No, passenger 5.0 is not faster than unicorn :)
    - by Sam Saffron @ 12/03 09:57 UTC - next/prev

    Oops, sent the wrong link; I meant to send this:
    
    https://meta.discourse.org/t/raptor-web-server/21304/6
    
    On Wed, Dec 3, 2014 at 8:56 PM, Sam Saffron <sam.saffron@gmail.com> wrote:


  Re: No, passenger 5.0 is not faster than unicorn :)
  - by Hongli Lai @ 12/03 11:00 UTC - next/prev

  Unicorn *is* in general very good and very efficient, no doubt about that.
  Eric Wong has made great design choices and is an excellent programmer.
  
  Having said that, in certain specific cases there's still room for
  improvement. That's why we focused so much on microoptimizations and
  specific optimizations like turbocaching. Have you followed Phusion
  Passenger's Server Optimization Guide?
  https://www.phusionpassenger.com/documentation/ServerOptimizationGuide.html
  
  Also, you have to ensure that your Rails app sets the correct caching
  headers. By default, Rails sets "Cache-Control: private, no-store" so that
  the turbocache cannot kick in. You should see very different results if you
  add "headers['Cache-Control'] = 'public'" to your Rails app. If you need
  any help with this, please feel free to contact me off-list. I'd be happy
  to help. We have also a benchmarking kit so that you can double check the
  results; email me if you're interested in this.
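  As a sketch of what "setting the correct caching headers" means at the
  Rack level (header values here are illustrative; in a Rails controller
  the equivalent is the headers['Cache-Control'] assignment quoted above):

```ruby
# Sketch: a plain Rack endpoint marking its response publicly cacheable,
# so a cache in front of it (turbocache, varnish, etc.) can serve it.
# Only do this for responses that are truly safe to cache publicly.
app = lambda do |env|
  [200,
   { 'Content-Type'  => 'text/plain',
     'Cache-Control' => 'public, max-age=60' },
   ['cacheable page']]
end
```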
  
  As Sam said, most of the time will be spent in the Rails app. But
  turbocaching is one notable exception: it's the one feature that can speed
  things up even if your app is slow - provided that you set HTTP caching
  headers correctly.
  
  Unicorn is excellent at what it does: it's a minimal server with a specific
  I/O model that is supposed to be used behind a buffering reverse proxy.
  There is nothing wrong with that, and for the workloads that it's designed
  for, it's great. Phusion Passenger has merely chosen a non-generalist
  approach that aims to squeeze additional performance from specific cases.
  Of course, nothing's a silver bullet. Like any tool, it only works if you
  use it correctly.
  
  On Wed, Dec 3, 2014 at 10:50 AM, Bráulio Bhavamitra <braulio@eita.org.br>
  wrote:
  


    Re: No, passenger 5.0 is not faster than unicorn :)
    - by Sam Saffron @ 12/03 11:05 UTC - next/prev

    Yeah, anonymous caching is super critical; we monkey-patch it in here:
    https://github.com/discourse/discourse/blob/master/lib/middleware/anonymous_cache.rb
    To be honest, this really should be part of Rails.
    
    On Wed, Dec 3, 2014 at 10:00 PM, Hongli Lai <hongli@phusion.nl> wrote:


      Re: No, passenger 5.0 is not faster than unicorn :)
      - by Xavier Noria @ 12/03 12:42 UTC - next/prev

      I've often found Cache-Control: public to be of limited use in practice
      because you cannot invalidate that cache by hand. (Sometimes that's fine of
      course.)
      
      While the reverse proxy cache could provide a mechanism for explicit
      expiration, there may be caches between your servers and the client, a
      corporate cache for example. Those other ones are out of your control.
      
      I suggested an extension for Rack::Cache called r-maxage that got merged:
      
          https://github.com/rtomayko/rack-cache/pull/55
      
      With that directive you trade the benefits of caching in intermediate
      proxies for more expiration control locally.
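      Assuming the merged r-maxage directive, a response opting into
      reverse-proxy caching only might look like this (values are
      illustrative; intermediate caches that don't understand r-maxage
      simply ignore the unknown directive and fall back to max-age):

```ruby
# Sketch: r-maxage lets your own Rack::Cache instance hold the response
# for 5 minutes, while max-age=0 keeps caches you don't control
# (corporate proxies, etc.) from retaining a copy.
app = lambda do |env|
  [200,
   { 'Content-Type'  => 'text/html',
     'Cache-Control' => 'r-maxage=300, max-age=0' },
   ['<h1>hi</h1>']]
end
```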


    Re: No, passenger 5.0 is not faster than unicorn :)
    - by Bráulio Bhavamitra @ 12/03 14:10 UTC - next/prev

    Hello Hongli,
    
    Thank you for the guide, I've already learned a bit from it.
    
    We already use nginx for static files and ssl and varnish for caching
    public pages, so maybe turbocaching won't help too much.
    
    In this test I ran passenger in standalone mode (--max-pool-size 1)
    and unicorn with one worker. On a slow page, the variation was
    minimal (~8.26 req/s with unicorn and ~8.11 with passenger). I haven't
    tested a fast, cacheable page.
    
    Also, I've used ab for benchmarking. Next time will try wrk.
    
    cheers,
    bráulio
    
    On Wed, Dec 3, 2014 at 8:00 AM, Hongli Lai <hongli@phusion.nl> wrote:


worker freeze and strace interpretation
- by Jérémy Lecour @ 12/02 11:21 UTC - next/prev

Hi,

I've been trying to debug a weird situation on a Rails app (4.1) using
Unicorn (4.8.3).
Sometimes, some requests are hanging and I can't find why.


I've hooked "strace" to the workers and I have a lot of lines with
"Resource temporarily unavailable" and I wonder if it's normal.

Here is a snippet :

11587 write(7, "\27\3\3\1\253\36\274d\263\340\375\250d\374~X\364\306^\227{F\357~\223\347\245M\351-\360\301"...,
432 <unfinished ...>
11581 read(3,  <unfinished ...>
11587 <... write resumed> )             = 432 <0.000016>
11581 <... read resumed> 0x7f5b04878cc0, 1024) = -1 EAGAIN (Resource
temporarily unavailable) <0.000012>
11581 read(5,  <unfinished ...>
11587 write(7, "\27\3\3\t`+\22zIY\242\252L\346?n\245!\347c\251\341\fo\202Is\346\23\10\320\34"...,
2405 <unfinished ...>
11581 <... read resumed> "!", 1024)     = 1 <0.000016>
11587 <... write resumed> )             = 2405 <0.000015>
11581 read(5, 0x7f5b04878cc0, 1024)     = -1 EAGAIN (Resource
temporarily unavailable) <0.000014>
11587 read(7,  <unfinished ...>
11581 poll([{fd=3, events=POLLIN}], 1, 100 <unfinished ...>
11587 <... read resumed> 0x7f5af47c0123, 5) = -1 EAGAIN (Resource
temporarily unavailable) <0.000009>
11587 futex(0x7f5b04885054, FUTEX_WAKE_OP_PRIVATE, 1, 1,
0x7f5b04885050, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 <0.000005>
1184  <... futex resumed> )             = 0 <0.000226>
11587 futex(0x7f5b0488508c, FUTEX_WAIT_PRIVATE, 231, NULL <unfinished ...>
1184  futex(0x7f5b0488508c, FUTEX_WAKE_OP_PRIVATE, 1, 1,
0x7f5b04885088, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1} <unfinished ...>
11587 <... futex resumed> )             = -1 EAGAIN (Resource
temporarily unavailable) <0.000015>
1184  <... futex resumed> )             = 0 <0.000009>
11587 futex(0x7f5b04885020, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
1184  futex(0x7f5b04885020, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
11587 <... futex resumed> )             = -1 EAGAIN (Resource
temporarily unavailable) <0.000007>
1184  <... futex resumed> )             = 0 <0.000008>



Is there a better way to hook into a worker process when the request
is hanging, to see what it is doing?



Also, I didn't find a way to instrument how a worker handles a request.
I was looking for a debug message when a new request is taken and when
it is returned.


Thanks for any help.


  Re: worker freeze and strace interpretation
  - by Eric Wong @ 12/02 18:58 UTC - next/prev

  Jérémy Lecour <jeremy.lecour@gmail.com> wrote:
  > Hi,
  > 
  > I've been trying to debug a weird situation on a Rails app (4.1) using
  > Unicorn (4.8.3).
  > Sometimes, some requests are hanging and I can't find why.
  > 
  > 
  > I've hooked "strace" to the workers and I have a lot of lines with
  > "Resource temporarily unavailable" and I wonder if it's normal.
  
  Depends on the FD, but from yours, yes, EAGAIN is common for
  non-blocking IO.
  
  > Here is a snippet :
  > 
  > 11587 write(7, "\27\3\3\1\253\36\274d\263\340\375\250d\374~X\364\306^\227{F\357~\223\347\245M\351-\360\301"...,
  > 432 <unfinished ...>
  > 11581 read(3,  <unfinished ...>
  
  lsof -p $PID will tell you what FD=3 is, but it looks to me like 11581
  is the timer thread.
  
  That means 11587 could be the main thread which handles your app.
  I'm more curious to know what FD=7 is (some encrypted service your
  app connects to?)
  
  > 11587 <... write resumed> )             = 432 <0.000016>
  > 11581 <... read resumed> 0x7f5b04878cc0, 1024) = -1 EAGAIN (Resource
  > temporarily unavailable) <0.000012>
  
  1024 matches CCP_READ_BUFF_SIZE in the MRI sources (thread_pthread.c)
  in recent-ish versions of Ruby, so yes, that looks like the timer
  thread.
  
  > 11581 read(5,  <unfinished ...>
  > 11587 write(7, "\27\3\3\t`+\22zIY\242\252L\346?n\245!\347c\251\341\fo\202Is\346\23\10\320\34"...,
  > 2405 <unfinished ...>
  > 11581 <... read resumed> "!", 1024)     = 1 <0.000016>
  
  Yep, "!" is the character the MRI timer thread uses to communicate.
  
  > 11587 <... write resumed> )             = 2405 <0.000015>
  > 11581 read(5, 0x7f5b04878cc0, 1024)     = -1 EAGAIN (Resource
  > temporarily unavailable) <0.000014>
  
  Hmm, FD=5...  I suspect that's the other timer thread pipe.
  Which version of Ruby are you using?  Newer ones have two
  timer thread pipes.
  
  > 11587 read(7,  <unfinished ...>
  > 11581 poll([{fd=3, events=POLLIN}], 1, 100 <unfinished ...>
  > 11587 <... read resumed> 0x7f5af47c0123, 5) = -1 EAGAIN (Resource
  > temporarily unavailable) <0.000009>
  
  OK, now FD=7 is probably the most interesting to your process.
  
  > 11587 futex(0x7f5b04885054, FUTEX_WAKE_OP_PRIVATE, 1, 1,
  > 0x7f5b04885050, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 <0.000005>
  > 1184  <... futex resumed> )             = 0 <0.000226>
  
  However, 1184 is a thread we haven't seen before.  The TID
  is far off from 11581 and 11587, so it could've been spawned after TID
  wraparound.
  
  Just to confirm, is 11587 or 1184 the PID you saw in "ps" output?
  (and unicorn logs, in the time period you were stracing)
  
  (Of course, PIDs and TIDs get recycled over time, so they're only valid
   in the context while these straced processes were running)
  
  > 11587 futex(0x7f5b0488508c, FUTEX_WAIT_PRIVATE, 231, NULL <unfinished ...>
  > 1184  futex(0x7f5b0488508c, FUTEX_WAKE_OP_PRIVATE, 1, 1,
  > 0x7f5b04885088, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1} <unfinished ...>
  > 11587 <... futex resumed> )             = -1 EAGAIN (Resource
  > temporarily unavailable) <0.000015>
  > 1184  <... futex resumed> )             = 0 <0.000009>
  > 11587 futex(0x7f5b04885020, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
  > 1184  futex(0x7f5b04885020, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
  > 11587 <... futex resumed> )             = -1 EAGAIN (Resource
  > temporarily unavailable) <0.000007>
  > 1184  <... futex resumed> )             = 0 <0.000008>
  
  Anything more after this?
  
  So I'm curious, for your process, what FD=7 is and what TID=1184 is
  doing.  The futex calls above look like normal thread switching
  behavior in MRI, with no interesting syscalls going on for network I/O.
  
  There could be some CPU-intensive stuff going on but no interesting
  syscalls, so maybe we need to check CPU usage for this process in "top",
  too.
  
  > Is there a better way to hook into a worker process when the request
  > is hanging, to see what it is doing ?
  
  Maybe, but I usually don't need more than strace + lsof
  (use lsof to figure out which FD is which).
  
  If I need more, I prefer to sprinkle warn() calls in the application
  to figure out what is going on and where.  This can aid in
  the strace, as you'll see your warn() calls become:
  
  	write(2, "warning...", ...)
  
  calls in strace.
  
  unicorn sets "$stderr.sync = true" at startup to ensure any
  warnings are flushed to the OS, so they're visible in strace immediately.
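  For example, a pair of warn() calls bracketing the suspect code (the
  method name here is made up; any message works, the point is that each
  one surfaces in strace as a write to FD 2):

```ruby
# warn writes to $stderr; with $stderr unbuffered, each message shows
# up immediately as write(2, ...) in strace output, so a "start" line
# with no matching "end" line points at the hanging code.
def slow_report
  warn "#{Process.pid} start slow_report"
  # ... the code under suspicion ...
  warn "#{Process.pid} end slow_report"
end
```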
  
  In C programs and C extensions, I may also use write() on an invalid
  (out-of-bounds: too large, or negative) FD to view the location in strace.
  
  	int preserve_errno = errno; /* avoid errno-side effects */
  	write(-1, "message", msg_len);
  	errno = preserve_errno;
  
  > Also, I didn't find a way to instrument how a worker handles a request.
  > I was looking for a debug message when a new request is taken and when
  > it is returned.
  
  You can strace earlier and look for the following syscalls, in order:
  
  - accept() (or accept4() on recent Linux) succeeding and
    returning the client FD
  
  - read() succeeding on the client FD returned by accept()
    You'll see something like "GET /... HTTP/1.0" from nginx; you may
    want to use "-s 16384" or some bigger number for big requests
  
    There's usually only one read() for small requests when talking
    to nginx.
  
  - possibly lots of application processing here
    You might see more read() calls to the client FD if you're
    handling uploads
  
  - write()s on the client FD you saw your HTTP requests with
    earlier.  You should see your HTTP response here, something
    like: write(FD, "HTTP/1.1 200 OK\r\n ...", ...)
  
  - shutdown() + close() on the client FD.
  
  Hope that helps.


    Re: worker freeze and strace interpretation
    - by Jérémy Lecour @ 12/02 21:04 UTC - next/prev

    On Tue, Dec 2, 2014 at 7:58 PM, Eric Wong <e@80x24.org> wrote:
    > Hope that helps.
    
    Man, your answer is detailed. Thanks.
    
    The PID 11587 was indeed for the worker process.
    
    FD 7 might be a connection to NewRelic over HTTPS.
    
    This app is doing a lot : connections to Redis, MySQL, Memcached, …
    It uses Ruby 2.1.5.
    
    
    If those EAGAINs are expected, I won't dig deeper.
    I'm definitely not in my comfort zone here and wanted to be sure that
    there wasn't an elephant in the room that I couldn't see.
    
    I'll try to put some warn() in the code at the beginning and end of
    the request cycle to see if something shows up.
    
    
    Thank you very much for your time and help.


[PATCH] http_server: save 450+ bytes of memory on x86-64
- by Eric Wong @ 11/27 22:46 UTC - next/prev

Replacing the Regexp argument to a rarely-called String#split with a
literal String can save a little memory.  The removed Regexp memsize
is 469 bytes on Ruby 2.1:

	ObjectSpace.memsize_of(/,/) => 469

It is slightly smaller, at 453 bytes, on 2.2.0dev (r48474).
These numbers do not include the 40-byte object overhead.

Nevertheless, this is a waste for non-performance-critical code
during the socket inheritance phase.  A literal string has less
overhead at 88 bytes:

	* 48 bytes for table entry in the frozen string table
	* 40 bytes for the object itself

The downside of using a literal string for the String#split argument
is a 40-byte string object gets allocated on every call, but this
piece of code is only called once in a process lifetime.
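The numbers are easy to reproduce with ObjectSpace (exact sizes vary
across Ruby versions):

```ruby
require 'objspace'

# Retained size of the two separator objects; the patch above measured
# 469 bytes for /,/ on Ruby 2.1 vs. far less for the literal string.
puts ObjectSpace.memsize_of(/,/)
puts ObjectSpace.memsize_of(",")

# Behavior is identical for a plain one-character separator:
p "127.0.0.1:8080,127.0.0.1:8081".split(",")
```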


Unicorn configuration to increase max header size
- by Jim Zhan @ 11/20 21:54 UTC - next/prev

Hi,

We are using Unicorn as the HTTP server for one of our Ruby applications,
and we recently encountered an issue: some browsers don't limit the
cookie size, so we get requests with HTTP headers greater than 8k and
users receive "400 Bad Request". Is there a way to increase the
maximum allowed header size? I searched online but didn't find much
useful information on it.

Thanks,
Jim Zhan


  Re: Unicorn configuration to increase max header size
  - by Eric Wong @ 11/20 21:59 UTC - next/prev

  Jim Zhan <cjzhan2000@gmail.com> wrote:
  > We are using Unicorn as the HTTP server for one of our Ruby applications,
  > and we recently encountered an issue: some browsers don't limit the
  > cookie size, so we get requests with HTTP headers greater than 8k and
  > users receive "400 Bad Request". Is there a way to increase the
  > maximum allowed header size? I searched online but didn't find much
  > useful information on it.
  
  This is subject to change in the next major release, but you can
  change it in unicorn 4.x using:
  
    Unicorn::HttpRequest.max_header_len = <number>
  
  However, the default is already 112K, so I'm wondering if the 8K is
  the result of your nginx configuration or similar.


    Re: Unicorn configuration to increase max header size
    - by Jim Zhan @ 11/20 23:35 UTC - next/prev

    Hi Eric,
    
    Thank you for the quick reply. I checked our hosts and we are using Unicorn
    3.4.1. Unfortunately there is only one setting, for client_body_buffer_size.
    How does this parameter work? Does it only put a limit on the body
    itself, or is it applied proportionally to header and body (e.g., header 8k,
    body 104k, etc.)?
    
    We did experiments using curl, manually sending a header exceeding 8k, and I
    am getting 404. So it's unicorn itself, not nginx, that has the 8k header
    size limitation.
    
    The command we used for the experiment:
    curl -v -H "$(./http-header-pumper.bat 8000)" <service_url>
    
    The script we used to generate header:
    #!/bin/bash
    
    printf "x-header-pump: "
    for ((i=0; i<$1; i++))
    do
       let "n = $i % 10"
       if [ $n = 0 ]; then
          printf "_"
       else
          printf "%d" $n
       fi
    done
    
    Thank you and I am looking forward to hearing from you soon on the issue!
    
    Rgds,
    Jim Zhan
    
    
    
    On Thu, Nov 20, 2014 at 1:59 PM, Eric Wong <e@80x24.org> wrote:
    


      Re: Unicorn configuration to increase max header size
      - by Eric Wong @ 11/21 00:31 UTC - next/prev

      Jim Zhan <cjzhan2000@gmail.com> wrote:
      > Hi Eric,
      > 
      > Thank you for the quick reply. I checked our hosts and we are using Unicorn
      > 3.4.1. Unfortunately there is only one setting, for client_body_buffer_size.
      > How does this parameter work? Does it only put a limit on the body
      > itself, or is it applied proportionally to header and body (e.g., header 8k,
      > body 104k, etc.)?
      
      client_body_buffer_size in unicorn is only for request bodies (uploads),
      and not relevant to header sizes.
      
      > We did experiments using curl by sending header exceeding 8k manually and I
      > am getting 404. So it's unicorn itself, not nginx that has the 8k header
      > size limitation.
      
      I suspect you're hitting the nginx large_client_header_buffers default
      limit of 8K:
      
        http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers
      
      I checked the unicorn source, and ext/unicorn_http/global_variables.h
      defines the maximum field value as 80K, 10 times more than what you're
      seeing:
      
      	DEF_MAX_LENGTH(FIELD_VALUE, 80 * 1024);
      
      This value was inherited from Mongrel many years ago and never changed.
      
      > The command we used for the experiment:
      > curl -v -H "$(./http-header-pumper.bat 8000)" <service_url>
      
      I just tried your script with the following config.ru to hit unicorn
      directly (no nginx), and I got the expected lobster response.
      
      $ unicorn -E none config.ru
      ----------- config.ru -----------
      require 'rack/lobster'
      use Rack::ContentLength
      run Rack::Lobster.new


        Re: Unicorn configuration to increase max header size
        - by Jim Zhan @ 11/21 01:33 UTC - next/prev

        Thank you. Will check out tomorrow on our production hosts based on your
        comments and directions.
        
        
        On Thu, Nov 20, 2014 at 4:31 PM, Eric Wong <e@80x24.org> wrote:
        


          Re: Unicorn configuration to increase max header size
          - by Jim Zhan @ 11/22 03:15 UTC - next/prev

          Tried it out, and indeed it is the header size limitation of the load
          balancer, not the rails app itself. Thank you for your answer!
          
          
          On Thu, Nov 20, 2014 at 5:33 PM, Jim Zhan <cjzhan2000@gmail.com> wrote:
          


[RFC] http: TypedData C-API conversion
- by Eric Wong @ 11/16 08:32 UTC - next/prev

This provides some extra type safety if combined with other C
extensions, as well as allowing us to account for memory usage of
the HTTP parser in ObjectSpace.

Note: this means we are finally dropping Ruby 1.8 support, as
TypedData requires Ruby 1.9 and later.  Future changes will
require Ruby 1.9.2 and later (which is already EOL, but still
in use in some places).  This compiles with warnings on Ruby 1.9.2,
but is warning-free on modern Ruby versions.

This also currently leaks memory under 1.9.2-p330 x86_64-linux if
test_memory_leak is enabled in test/unit/test_http_parser.rb
Since 1.9.2 is EOL and 1.9.3+ all work fine (including trunk),
I'm not going to spend more time with this problem.

Also, keep in mind this type of memory leak wouldn't affect unicorn
as we only ever allocate a single parser.  This leak would only
affect other (concurrent) HTTP servers using this parser, and only
under Ruby 1.9.2.


RE: Issue with Unicorn: Big latency when getting a request
- by Roberto Cordoba del Moral @ 11/14 15:01 UTC - next/prev

I have installed Puma and it's working.
It's the combination of Unicorn and Chrome that is not working. I don't know why.



  Re: Issue with Unicorn: Big latency when getting a request
  - by Eric Wong @ 11/14 18:34 UTC - next/prev

  Roberto Cordoba del Moral <roberto.chingon@hotmail.com> wrote:
  > I have installed Puma and it's working.
  > It's the combination of Unicorn and Chrome that is not working. I
  > don't know why.
  
  Just curious, can you try configuring "worker_processes 4" in unicorn
  (or maybe increase 4 to 8 or any higher number if you have enough
  memory)?  If extra workers work, it's definitely the lack of baked-in
  concurrency in unicorn (this is by design).
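  For reference, a minimal unicorn config along those lines
  (worker_processes and timeout are standard unicorn configurator
  directives; the numbers are illustrative):

```ruby
# unicorn.conf.rb: each unicorn worker serves exactly one request at a
# time, so handling concurrent slow requests means more worker processes.
worker_processes 4   # raise to 8+ if memory allows
timeout 60           # last-resort SIGKILL timeout, in seconds
```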
  
  So I suspect puma (or any other server) is better for your use case
  anyways.


    RE: Issue with Unicorn: Big latency when getting a request
    - by Roberto Cordoba del Moral @ 11/14 22:36 UTC - next/prev

    I have tested with 5 workers and still see the same issue.
    It's strange; if this were happening to everyone, it would be a
    better-known problem. I have searched everywhere on the Internet and
    couldn't find anything related to it.
    My guess is that I probably have something in my Rails code related to
    sessions or cookies that is handled differently by the browsers, but is
    also tied to the application server.
    I don't know. Although I'm really curious about the issue and would love
    to figure it out, I think I will try Puma.
    Thank you so much for your time.
    



- unicorn Rack HTTP server user/dev discussion
A public-inbox, anybody may post in plain-text (not HTML):
unicorn-public@bogomips.org
git URL for ssoma: git://bogomips.org/unicorn-public.git
homepage: http://unicorn.bogomips.org/
subscription optional: unicorn-public+subscribe@bogomips.org