From: Eric Wong <e@80x24.org>
To: Aaron Price <aprice@flipgive.com>
Cc: unicorn-public@bogomips.org
Subject: Re: Unicorn workers timing out intermittently
Date: Tue, 30 Dec 2014 18:12:03 +0000
Message-ID: <20141230181203.GA5686@dcvr.yhbt.net>
In-Reply-To: <CA+sHtzrdVatZz2UDSdHNNrsdR+PaFqC8BA0V3swrC3gengVu=g@mail.gmail.com>

Aaron Price <aprice@flipgive.com> wrote:
> Hey guys,
> 
> I'm having the following issue and I was hoping you might be able to
> shed some light on it, and maybe give some suggestions on how to debug
> the actual problem.
> 
> http://serverfault.com/questions/655430/unicorn-workers-timing-out-intermittently

(quoting the problem in full to save folks from having to load the page)

>    I'm getting intermittent timeouts from unicorn workers for seemingly no
>    reason, and I'd like some help to debug the actual problem. It's worse
>    because it works for about 10-20 requests, then one will time out, then
>    another 10-20 requests and the same thing happens again.
> 
>    I've created a dev environment to illustrate this particular problem,
>    so there is NO traffic except mine.
> 
>    The stack is Ubuntu 14.04, Rails 3.2.21, PostgreSQL 9.3.4, Unicorn
>    4.8.3, Nginx 1.6.2.
> 
>    The Problem
> 
>    I'll describe in detail the time that it doesn't work.
> 
>    I request a url through the browser.
> Started GET "/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z" for 127.0.0.1 at 2014-12-30 15:58:59 +0000
> Completed 200 OK in 10.3ms (Views: 0.0ms | ActiveRecord: 2.1ms)
> 
>    As you can see, the request completed successfully with a 200 response
>    status in just 10.3ms.
> 
>    However, the browser hangs for about 30 seconds and Unicorn kills the
>    worker:
> E, [2014-12-30T15:59:30.267605 #13678] ERROR -- : worker=0 PID:14594 timeout (31s > 30s), killing
> E, [2014-12-30T15:59:30.279000 #13678] ERROR -- : reaped #<Process::Status: pid 14594 SIGKILL (signal 9)> worker=0
> I, [2014-12-30T15:59:30.355085 #23533]  INFO -- : worker=0 ready
> 
>    And the following error in the Nginx logs:
> 2014/12/30 15:59:30 [error] 23463#0: *27 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z HTTP/1.1", upstream: "http://unix:/app/shared/tmp/sockets/unicorn.sock:/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z", host: "localhost", referrer: "http://localhost/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z"

OK, the problem should be reproducible by hitting unicorn directly
(it's easy to configure a TCP listener), which eliminates nginx as a
possible point of failure.  Also, I suggest using curl so you can
script the requests and see everything going on more easily at the
protocol/syscall level.
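A minimal sketch of what that could look like (the 8080 port is just an
example; the socket path is taken from your nginx log):

  # unicorn.conf.rb
  worker_processes 1       # a single worker is easier to trace
  listen "127.0.0.1:8080"  # extra TCP listener for direct testing
  listen "/app/shared/tmp/sockets/unicorn.sock"
  timeout 30

Then hit the worker directly, bypassing nginx:

  curl -sv 'http://127.0.0.1:8080/offers.xml?...' >/dev/null
  # (use the same query string as in your logs)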

>    Again, there's no load on the server at all. The only requests going
>    through are my own, and roughly every 10-20 requests one at random has
>    this same problem.

OK, the main culprit is usually external connections acting up.  Postgres,
external monitoring services, network failures, etc...

>    It doesn't look like unicorn is eating memory at all. I know this
>    because I'm using watch -n 0.5 free -m and this is the result.

What does the CPU usage look like?
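If you don't already have a preferred tool, something like this is
enough to tell whether a worker is spinning on the CPU or the box is
I/O-bound (the PID variables are just placeholders):

  vmstat 1                                  # system-wide: watch us/sy/wa/id
  top -c -p $PID_OF_MASTER,$PID_OF_WORKER   # per-process CPU for unicorn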

>              total       used       free     shared    buffers     cached
> Mem:          1995        765       1229          0         94        405
> -/+ buffers/cache:        264       1730
> Swap:          511          0        511
> 
>    So the server isn't running out of memory.
> 
>    Is there anything further I can do to debug this issue, or any insight
>    into what's happening?

Basically, minimize and isolate things.  Use a single worker, strace
that worker ("strace -f -p $PID_OF_WORKER") and send a request to it
using curl.
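Roughly, the whole sequence would be something like this (the port and
output path are just examples):

  # after restarting with worker_processes 1, find the worker PID
  pgrep -lf 'unicorn worker'

  # in one terminal, record syscalls with timestamps and durations
  strace -f -tt -T -o /tmp/worker.strace -p $PID_OF_WORKER

  # in another terminal, repeat the request until one stalls
  curl -sv 'http://127.0.0.1:8080/offers.xml?...' >/dev/null

Then look at the tail of the strace output for whichever syscall the
worker was stuck in when the timeout hit.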

Is Postgres doing OK?  How is the network connection to it?
What other external resources (monitoring services, memcached, redis,
etc...) might you be hitting?  You'll see blocking (on things like
read*/write*/select/poll/recvmsg/sendmsg) in the strace output.
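A quick sanity check on the Postgres round-trip from the app host
(hypothetical host/user/db names, substitute your own):

  time psql -h your-db-host -U your-db-user -d your_db -c 'SELECT 1;'

If that intermittently takes seconds instead of milliseconds, the
problem is on the network/DB side rather than in unicorn itself.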

You can map numeric file descriptors to ports via:
 lsof -p $PID_OF_WORKER
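To narrow that down to just the network connections:

  lsof -nP -a -p $PID_OF_WORKER -i    # TCP/UDP sockets only
  ls -l /proc/$PID_OF_WORKER/fd       # every fd, including UNIX sockets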

Also, is your server running out of entropy?  You'll see blocking on
reads of /dev/random via strace, and you may also see low values in
/proc/sys/kernel/random/entropy_avail.
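For example:

  cat /proc/sys/kernel/random/entropy_avail

Values that stay down in the low hundreds (or lower) suggest the
entropy pool is being drained.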
