unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
* Request Queueing after deploy + USR2 restart
@ 2015-03-03 22:24 Sarkis Varozian
  2015-03-03 22:32 ` Michael Fischer
                   ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Sarkis Varozian @ 2015-03-03 22:24 UTC (permalink / raw)
  To: unicorn-public

We have a rails application with the following unicorn.rb:
http://goo.gl/qZ5NLn

When we deploy the application, a USR2 signal is sent to the unicorn
master, which spins up a new master; we use the before_fork hook in the
unicorn.rb config above to send signals to the old master as the new
workers come online.
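(The actual unicorn.rb is behind the shortened link above; in all
likelihood the signaling it does is the stock pattern from unicorn's
shipped example config, sketched here for reference:)

```ruby
before_fork do |server, worker|
  # On USR2, the old master renames its pid file to "<pid>.oldbin".
  # As each new worker comes up, ask the old master to shed one worker
  # (TTOU); the last new worker asks it to quit entirely (QUIT).
  old_pid = "#{server.config[:pid]}.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # old master already exited; nothing to signal
    end
  end
end
```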

I've been trying to debug a weird issue that manifests as "Request
Queueing" in our New Relic APM. The graph shows what happens after a
deployment (deployments are represented by the vertical lines):
http://goo.gl/iFZPMv . As you can see, the behavior is inconsistent -
there is always a latency spike, but at times Request Queueing is
higher than on previous deploys.

Any ideas on what exactly is going on here? Any suggestions on
tools/profilers to use to get to the bottom of this? Should we expect this
to happen on each deploy?

Thanks,

-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-03-03 22:24 Request Queueing after deploy + USR2 restart Sarkis Varozian
@ 2015-03-03 22:32 ` Michael Fischer
  2015-03-04 19:48   ` Sarkis Varozian
  2015-03-03 22:47 ` Bráulio Bhavamitra
       [not found] ` <CAJri6_vidE15Xor4THzQB3uxyqPdApxHoyWp47NAG8m8TQuw0Q@mail.gmail.com>
  2 siblings, 1 reply; 20+ messages in thread
From: Michael Fischer @ 2015-03-03 22:32 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

If the response times are falling a minute or so after the reload, I'd
chalk it up to a cold CPU cache.  You will probably want to stagger your
reloads across backends to minimize the impact.

--Michael

On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> We have a rails application with the following unicorn.rb:
> http://goo.gl/qZ5NLn
>
> When we deploy to the application, a USR2 signal is sent to the unicorn
> master which spins up a new master and we use the before_fork in the
> unicorn.rb config above to send signals to the old master as the new
> workers come online.
>
> I've been trying to debug a weird issue that manifests as "Request
> Queueing" in our Newrelic APM. The graph shows what happens after a
> deployment (represented by the vertical lines). Here is the graph:
> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
> there is always a latency spike - however, at times Request Queueing is
> higher than previous deploys.
>
> Any ideas on what exactly is going on here? Any suggestions on
> tools/profilers to use to get to the bottom of this? Should we expect this
> to happen on each deploy?
>
> Thanks,
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>
>
>





* Re: Request Queueing after deploy + USR2 restart
  2015-03-03 22:24 Request Queueing after deploy + USR2 restart Sarkis Varozian
  2015-03-03 22:32 ` Michael Fischer
@ 2015-03-03 22:47 ` Bráulio Bhavamitra
  2015-03-04 19:50   ` Sarkis Varozian
       [not found] ` <CAJri6_vidE15Xor4THzQB3uxyqPdApxHoyWp47NAG8m8TQuw0Q@mail.gmail.com>
  2 siblings, 1 reply; 20+ messages in thread
From: Bráulio Bhavamitra @ 2015-03-03 22:47 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

Maybe a warm-up could help the new servers?
https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77

How is the CPU usage during USR2?
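(A Rack application is just an object responding to call(env), so a
warmup can drive it in-process before the worker accepts real traffic.
The following is a minimal, self-contained sketch in plain Ruby with a
stand-in lambda app and a hand-built env hash; a real config would pass
the actual application object, e.g. via Rack::MockRequest as in the
gist above:)

```ruby
require "stringio"

# Stand-in Rack application: any object responding to #call(env) and
# returning [status, headers, body] works here, including a Rails app.
app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, ["warmed #{env['PATH_INFO']}"]]
end

# Hand-built request env (a small subset of the keys a full Rack env
# carries; enough for a GET warmup hit).
env = {
  "REQUEST_METHOD" => "GET",
  "PATH_INFO"      => "/",
  "QUERY_STRING"   => "",
  "rack.input"     => StringIO.new(""),
}

status, _headers, body = app.call(env)
# A warmup normally discards the response; checking the status is a
# cheap sanity bonus.
```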

On Tue, Mar 3, 2015 at 7:24 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> We have a rails application with the following unicorn.rb:
> http://goo.gl/qZ5NLn
>
> When we deploy to the application, a USR2 signal is sent to the unicorn
> master which spins up a new master and we use the before_fork in the
> unicorn.rb config above to send signals to the old master as the new
> workers come online.
>
> I've been trying to debug a weird issue that manifests as "Request
> Queueing" in our Newrelic APM. The graph shows what happens after a
> deployment (represented by the vertical lines). Here is the graph:
> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
> there is always a latency spike - however, at times Request Queueing is
> higher than previous deploys.
>
> Any ideas on what exactly is going on here? Any suggestions on
> tools/profilers to use to get to the bottom of this? Should we expect this
> to happen on each deploy?
>
> Thanks,
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>
>
>


-- 
"Fight for your ideology. Be one with your ideology. Live for your
ideology. Die for your ideology." P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha is my father and Parama Prakriti is my mother. The
universe is my home and all of us are citizens of this cosmos. This
universe is the imagination of the Macrocosmic Mind, and all entities
are being created, preserved and destroyed in the phases of
extroversion and introversion of the cosmic imaginative flow. On the
personal level, when a person imagines something in his mind, at that
moment only that person owns what he imagines, and nobody else. When a
mentally created human being walks through an equally imagined
cornfield, the imagined person does not own that cornfield; it belongs
to the individual who is imagining it. This universe was created in
the imagination of Brahma, the Supreme Entity, so the ownership of
this universe lies with Brahma, and not with the microcosms that were
also created by Brahma's imagination. No property of this world,
mutable or immutable, belongs to any particular individual; everything
is the common patrimony of all."
Rest of the text at
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia



* Re: Request Queueing after deploy + USR2 restart
  2015-03-03 22:32 ` Michael Fischer
@ 2015-03-04 19:48   ` Sarkis Varozian
  2015-03-04 19:51     ` Michael Fischer
  0 siblings, 1 reply; 20+ messages in thread
From: Sarkis Varozian @ 2015-03-04 19:48 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Michael,

Thanks for this - I have since changed the way we restart the unicorn
servers after a deploy by changing the Capistrano task to use:

in :sequence, wait: 30

We have 4 backends and the above restarts them sequentially, waiting
30s between hosts (which I think should be more than enough time).
However, I still see the following latency spikes after a deploy:
http://goo.gl/tYnLUJ

This is what the individual servers look like for the same time interval:
http://goo.gl/x7KcKq
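(For reference, the rolling-restart change reads roughly like the
following Capistrano 3 task; the task name, role, and pid-file path
here are illustrative, not taken from the real deploy scripts:)

```ruby
# Hypothetical Capistrano 3 task: signal each backend in turn,
# waiting 30 seconds between hosts so only one restarts at a time.
namespace :unicorn do
  desc "Rolling USR2 restart of unicorn masters"
  task :rolling_restart do
    on roles(:app), in: :sequence, wait: 30 do
      execute :kill, "-USR2", "$(cat #{shared_path}/tmp/pids/unicorn.pid)"
    end
  end
end
```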



On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
wrote:

> If the response times are falling a minute or so after the reload, I'd
> chalk it up to a cold CPU cache.  You will probably want to stagger your
> reloads across backends to minimize the impact.
>
> --Michael
>
> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> We have a rails application with the following unicorn.rb:
>> http://goo.gl/qZ5NLn
>>
>> When we deploy to the application, a USR2 signal is sent to the unicorn
>> master which spins up a new master and we use the before_fork in the
>> unicorn.rb config above to send signals to the old master as the new
>> workers come online.
>>
>> I've been trying to debug a weird issue that manifests as "Request
>> Queueing" in our Newrelic APM. The graph shows what happens after a
>> deployment (represented by the vertical lines). Here is the graph:
>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>> there is always a latency spike - however, at times Request Queueing is
>> higher than previous deploys.
>>
>> Any ideas on what exactly is going on here? Any suggestions on
>> tools/profilers to use to get to the bottom of this? Should we expect this
>> to happen on each deploy?
>>
>> Thanks,
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>>
>>
>


-- 
*Sarkis Varozian*
svarozian@gmail.com



* Re: Request Queueing after deploy + USR2 restart
  2015-03-03 22:47 ` Bráulio Bhavamitra
@ 2015-03-04 19:50   ` Sarkis Varozian
  0 siblings, 0 replies; 20+ messages in thread
From: Sarkis Varozian @ 2015-03-04 19:50 UTC (permalink / raw)
  To: Bráulio Bhavamitra; +Cc: unicorn-public

I tried this out by just using '/' as the warmup URL. It does not seem
to have helped much. I suspect that is because our homepage isn't doing
a "heavy" enough call to get the server warmed up. Looking into the
gist a bit further, I noticed that you are disconnecting the db
connection after warming up:
https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L85-L86

Does that matter? Should I be making a more db-intensive request the
"warmup" request?

2015-03-03 14:47 GMT-08:00 Bráulio Bhavamitra <braulio@eita.org.br>:

> Maybe a warm up could help the new servers?
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>
> How is the CPU usage during USR2?
>
> On Tue, Mar 3, 2015 at 7:24 PM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> We have a rails application with the following unicorn.rb:
>> http://goo.gl/qZ5NLn
>>
>> When we deploy to the application, a USR2 signal is sent to the unicorn
>> master which spins up a new master and we use the before_fork in the
>> unicorn.rb config above to send signals to the old master as the new
>> workers come online.
>>
>> I've been trying to debug a weird issue that manifests as "Request
>> Queueing" in our Newrelic APM. The graph shows what happens after a
>> deployment (represented by the vertical lines). Here is the graph:
>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>> there is always a latency spike - however, at times Request Queueing is
>> higher than previous deploys.
>>
>> Any ideas on what exactly is going on here? Any suggestions on
>> tools/profilers to use to get to the bottom of this? Should we expect this
>> to happen on each deploy?
>>
>> Thanks,
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>>
>>
>
>



-- 
*Sarkis Varozian*
svarozian@gmail.com



* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 19:48   ` Sarkis Varozian
@ 2015-03-04 19:51     ` Michael Fischer
  2015-03-04 19:58       ` Sarkis Varozian
  0 siblings, 1 reply; 20+ messages in thread
From: Michael Fischer @ 2015-03-04 19:51 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

What does your I/O latency look like during this interval?  (iostat -xk 10,
look at the busy %).  I'm willing to bet the request queueing is strongly
correlated with I/O load.

Also is preload_app set to true?  This should help.

--Michael

On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> Michael,
>
> Thanks for this - I have since changed the way we are restarting the
> unicorn servers after a deploy by changing capistrano task to do:
>
> in :sequence, wait: 30
>
> We have 4 backends and the above will restart them sequentially, waiting
> 30s (which I think should be more than enough time), however, I still get
> the following latency spikes after a deploy: http://goo.gl/tYnLUJ
>
> This is what the individual servers look like for the same time interval:
> http://goo.gl/x7KcKq
>
>
>
> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
> wrote:
>
>> If the response times are falling a minute or so after the reload, I'd
>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>> reloads across backends to minimize the impact.
>>
>> --Michael
>>
>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> We have a rails application with the following unicorn.rb:
>>> http://goo.gl/qZ5NLn
>>>
>>> When we deploy to the application, a USR2 signal is sent to the unicorn
>>> master which spins up a new master and we use the before_fork in the
>>> unicorn.rb config above to send signals to the old master as the new
>>> workers come online.
>>>
>>> I've been trying to debug a weird issue that manifests as "Request
>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>> deployment (represented by the vertical lines). Here is the graph:
>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>>> there is always a latency spike - however, at times Request Queueing is
>>> higher than previous deploys.
>>>
>>> Any ideas on what exactly is going on here? Any suggestions on
>>> tools/profilers to use to get to the bottom of this? Should we expect
>>> this
>>> to happen on each deploy?
>>>
>>> Thanks,
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>>
>>>
>>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>





* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 19:51     ` Michael Fischer
@ 2015-03-04 19:58       ` Sarkis Varozian
  2015-03-04 20:17         ` Michael Fischer
  0 siblings, 1 reply; 20+ messages in thread
From: Sarkis Varozian @ 2015-03-04 19:58 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

Yes, preload_app is set to true; I have not made any changes to the
unicorn.rb from the OP: http://goo.gl/qZ5NLn

Hmmmm, you may be onto something - here are the I/O metrics from the
server with the highest response times: http://goo.gl/0HyUYt (in this
graph: http://goo.gl/x7KcKq)

It does look I/O-related, as you suspect - is there much I can do to
alleviate that?

On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
wrote:

> What does your I/O latency look like during this interval?  (iostat -xk
> 10, look at the busy %).  I'm willing to bet the request queueing is
> strongly correlated with I/O load.
>
> Also is preload_app set to true?  This should help.
>
> --Michael
>
> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Michael,
>>
>> Thanks for this - I have since changed the way we are restarting the
>> unicorn servers after a deploy by changing capistrano task to do:
>>
>> in :sequence, wait: 30
>>
>> We have 4 backends and the above will restart them sequentially, waiting
>> 30s (which I think should be more than enough time), however, I still get
>> the following latency spikes after a deploy: http://goo.gl/tYnLUJ
>>
>> This is what the individual servers look like for the same time interval:
>> http://goo.gl/x7KcKq
>>
>>
>>
>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>> wrote:
>>
>>> If the response times are falling a minute or so after the reload, I'd
>>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>>> reloads across backends to minimize the impact.
>>>
>>> --Michael
>>>
>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> We have a rails application with the following unicorn.rb:
>>>> http://goo.gl/qZ5NLn
>>>>
>>>> When we deploy to the application, a USR2 signal is sent to the unicorn
>>>> master which spins up a new master and we use the before_fork in the
>>>> unicorn.rb config above to send signals to the old master as the new
>>>> workers come online.
>>>>
>>>> I've been trying to debug a weird issue that manifests as "Request
>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>> deployment (represented by the vertical lines). Here is the graph:
>>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>>>> there is always a latency spike - however, at times Request Queueing is
>>>> higher than previous deploys.
>>>>
>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>> tools/profilers to use to get to the bottom of this? Should we expect
>>>> this
>>>> to happen on each deploy?
>>>>
>>>> Thanks,
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>


-- 
*Sarkis Varozian*
svarozian@gmail.com



* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 19:58       ` Sarkis Varozian
@ 2015-03-04 20:17         ` Michael Fischer
  2015-03-04 20:24           ` Sarkis Varozian
  0 siblings, 1 reply; 20+ messages in thread
From: Michael Fischer @ 2015-03-04 20:17 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

I'm not exactly sure how preload_app works, but I suspect your app is
lazy-loading, while handling the first few requests, a number of Ruby
libraries that weren't automatically loaded during the preload process.

Eric, your thoughts?

--Michael

On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> Yes, preload_app is set to true, I have not made any changes to the
> unicorn.rb from OP: http://goo.gl/qZ5NLn
>
> Hmmmm, you may be onto something - Here is the i/o metrics from the server
> with the highest response times: http://goo.gl/0HyUYt (in this graph:
> http://goo.gl/x7KcKq)
>
> Looks like it may be i/o related as you suspect - is there much I can do
> to alleviate that?
>
> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
> wrote:
>
>> What does your I/O latency look like during this interval?  (iostat -xk
>> 10, look at the busy %).  I'm willing to bet the request queueing is
>> strongly correlated with I/O load.
>>
>> Also is preload_app set to true?  This should help.
>>
>> --Michael
>>
>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Michael,
>>>
>>> Thanks for this - I have since changed the way we are restarting the
>>> unicorn servers after a deploy by changing capistrano task to do:
>>>
>>> in :sequence, wait: 30
>>>
>>> We have 4 backends and the above will restart them sequentially, waiting
>>> 30s (which I think should be more than enough time), however, I still get
>>> the following latency spikes after a deploy: http://goo.gl/tYnLUJ
>>>
>>> This is what the individual servers look like for the same time
>>> interval: http://goo.gl/x7KcKq
>>>
>>>
>>>
>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>> wrote:
>>>
>>>> If the response times are falling a minute or so after the reload, I'd
>>>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>>>> reloads across backends to minimize the impact.
>>>>
>>>> --Michael
>>>>
>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>> wrote:
>>>>
>>>>> We have a rails application with the following unicorn.rb:
>>>>> http://goo.gl/qZ5NLn
>>>>>
>>>>> When we deploy to the application, a USR2 signal is sent to the unicorn
>>>>> master which spins up a new master and we use the before_fork in the
>>>>> unicorn.rb config above to send signals to the old master as the new
>>>>> workers come online.
>>>>>
>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>>> deployment (represented by the vertical lines). Here is the graph:
>>>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>>>>> there is always a latency spike - however, at times Request Queueing is
>>>>> higher than previous deploys.
>>>>>
>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>> tools/profilers to use to get to the bottom of this? Should we expect
>>>>> this
>>>>> to happen on each deploy?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> --
>>>>> *Sarkis Varozian*
>>>>> svarozian@gmail.com
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>





* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:17         ` Michael Fischer
@ 2015-03-04 20:24           ` Sarkis Varozian
  2015-03-04 20:27             ` Michael Fischer
  2015-03-04 20:35             ` Eric Wong
  0 siblings, 2 replies; 20+ messages in thread
From: Sarkis Varozian @ 2015-03-04 20:24 UTC (permalink / raw)
  To: Michael Fischer; +Cc: unicorn-public

That does make sense - I was looking at another suggestion from a user
here (Braulio) of running a "warmup" using Rack::MockRequest:
https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77

The only issue I am having with the above solution is that it happens
in the before_fork block - shouldn't I warm up the connection in
after_fork? If I follow the gist correctly, it warms up the server with
the old ActiveRecord::Base connection, which is then disconnected and
reconnected in after_fork. I think I am not understanding the sequence
of events there... If this is the case, I should warm up and also
check/kill the old master in the after_fork block, after the new db,
redis, and neo4j connections are all created. Thoughts?

On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
wrote:

> I'm not exactly sure how preload_app works, but I suspect your app is
> lazy-loading a number of Ruby libraries while handling the first few
> requests that weren't automatically loaded during the preload process.
>
> Eric, your thoughts?
>
> --Michael
>
> On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Yes, preload_app is set to true, I have not made any changes to the
>> unicorn.rb from OP: http://goo.gl/qZ5NLn
>>
>> Hmmmm, you may be onto something - Here is the i/o metrics from the
>> server with the highest response times: http://goo.gl/0HyUYt (in this
>> graph: http://goo.gl/x7KcKq)
>>
>> Looks like it may be i/o related as you suspect - is there much I can do
>> to alleviate that?
>>
>> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
>> wrote:
>>
>>> What does your I/O latency look like during this interval?  (iostat -xk
>>> 10, look at the busy %).  I'm willing to bet the request queueing is
>>> strongly correlated with I/O load.
>>>
>>> Also is preload_app set to true?  This should help.
>>>
>>> --Michael
>>>
>>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Michael,
>>>>
>>>> Thanks for this - I have since changed the way we are restarting the
>>>> unicorn servers after a deploy by changing capistrano task to do:
>>>>
>>>> in :sequence, wait: 30
>>>>
>>>> We have 4 backends and the above will restart them sequentially,
>>>> waiting 30s (which I think should be more than enough time), however, I
>>>> still get the following latency spikes after a deploy:
>>>> http://goo.gl/tYnLUJ
>>>>
>>>> This is what the individual servers look like for the same time
>>>> interval: http://goo.gl/x7KcKq
>>>>
>>>>
>>>>
>>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>>> wrote:
>>>>
>>>>> If the response times are falling a minute or so after the reload, I'd
>>>>> chalk it up to a cold CPU cache.  You will probably want to stagger your
>>>>> reloads across backends to minimize the impact.
>>>>>
>>>>> --Michael
>>>>>
>>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> We have a rails application with the following unicorn.rb:
>>>>>> http://goo.gl/qZ5NLn
>>>>>>
>>>>>> When we deploy to the application, a USR2 signal is sent to the
>>>>>> unicorn
>>>>>> master which spins up a new master and we use the before_fork in the
>>>>>> unicorn.rb config above to send signals to the old master as the new
>>>>>> workers come online.
>>>>>>
>>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>>>> deployment (represented by the vertical lines). Here is the graph:
>>>>>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent
>>>>>> -
>>>>>> there is always a latency spike - however, at times Request Queueing
>>>>>> is
>>>>>> higher than previous deploys.
>>>>>>
>>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>>> tools/profilers to use to get to the bottom of this? Should we expect
>>>>>> this
>>>>>> to happen on each deploy?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> --
>>>>>> *Sarkis Varozian*
>>>>>> svarozian@gmail.com
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>


-- 
*Sarkis Varozian*
svarozian@gmail.com



* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:24           ` Sarkis Varozian
@ 2015-03-04 20:27             ` Michael Fischer
  2015-03-04 20:35             ` Eric Wong
  1 sibling, 0 replies; 20+ messages in thread
From: Michael Fischer @ 2015-03-04 20:27 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: unicorn-public

before_fork should work fine.  The children, which will actually handle
the requests, will inherit everything from the parent, including any
libraries that were loaded by the master process as a result of
handling the mock requests.  It will also conserve memory, which is a
nice benefit.

On Wed, Mar 4, 2015 at 12:24 PM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> That does make sense - I was looking at another suggestion from a user
> here (Braulio) of running a "warmup" using rack MockRequest:
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>
> The only issue I am having with the above solution is it is happening in
> the before_fork block - shouldn't I warmup the connection in after_fork? If
> I follow the above gist properly it warms up the server with the old
> activerecord base connection and then its turned off, then turned back on
> in after_fork. I think I am not understanding the sequence of events
> there... If this is the case, I should warmup and also check/kill the old
> master in the after_fork block after the new db, redis, neo4j connections
> are all created. Thoughts?
>
> On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
> wrote:
>
>> I'm not exactly sure how preload_app works, but I suspect your app is
>> lazy-loading a number of Ruby libraries while handling the first few
>> requests that weren't automatically loaded during the preload process.
>>
>> Eric, your thoughts?
>>
>> --Michael
>>
>> On Wed, Mar 4, 2015 at 11:58 AM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Yes, preload_app is set to true, I have not made any changes to the
>>> unicorn.rb from OP: http://goo.gl/qZ5NLn
>>>
>>> Hmmmm, you may be onto something - Here is the i/o metrics from the
>>> server with the highest response times: http://goo.gl/0HyUYt (in this
>>> graph: http://goo.gl/x7KcKq)
>>>
>>> Looks like it may be i/o related as you suspect - is there much I can do
>>> to alleviate that?
>>>
>>> On Wed, Mar 4, 2015 at 11:51 AM, Michael Fischer <mfischer@zendesk.com>
>>> wrote:
>>>
>>>> What does your I/O latency look like during this interval?  (iostat -xk
>>>> 10, look at the busy %).  I'm willing to bet the request queueing is
>>>> strongly correlated with I/O load.
>>>>
>>>> Also is preload_app set to true?  This should help.
>>>>
>>>> --Michael
>>>>
>>>> On Wed, Mar 4, 2015 at 11:48 AM, Sarkis Varozian <svarozian@gmail.com>
>>>> wrote:
>>>>
>>>>> Michael,
>>>>>
>>>>> Thanks for this - I have since changed the way we are restarting the
>>>>> unicorn servers after a deploy by changing capistrano task to do:
>>>>>
>>>>> in :sequence, wait: 30
>>>>>
>>>>> We have 4 backends and the above will restart them sequentially,
>>>>> waiting 30s (which I think should be more than enough time), however, I
>>>>> still get the following latency spikes after a deploy:
>>>>> http://goo.gl/tYnLUJ
>>>>>
>>>>> This is what the individual servers look like for the same time
>>>>> interval: http://goo.gl/x7KcKq
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Mar 3, 2015 at 2:32 PM, Michael Fischer <mfischer@zendesk.com>
>>>>> wrote:
>>>>>
>>>>>> If the response times are falling a minute or so after the reload,
>>>>>> I'd chalk it up to a cold CPU cache.  You will probably want to stagger
>>>>>> your reloads across backends to minimize the impact.
>>>>>>
>>>>>> --Michael
>>>>>>
>>>>>> On Tue, Mar 3, 2015 at 2:24 PM, Sarkis Varozian <svarozian@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> We have a rails application with the following unicorn.rb:
>>>>>>> http://goo.gl/qZ5NLn
>>>>>>>
>>>>>>> When we deploy to the application, a USR2 signal is sent to the
>>>>>>> unicorn
>>>>>>> master which spins up a new master and we use the before_fork in the
>>>>>>> unicorn.rb config above to send signals to the old master as the new
>>>>>>> workers come online.
>>>>>>>
>>>>>>> I've been trying to debug a weird issue that manifests as "Request
>>>>>>> Queueing" in our Newrelic APM. The graph shows what happens after a
>>>>>>> deployment (represented by the vertical lines). Here is the graph:
>>>>>>> http://goo.gl/iFZPMv . As you see from the graph, it is
>>>>>>> inconsistent -
>>>>>>> there is always a latency spike - however, at times Request Queueing
>>>>>>> is
>>>>>>> higher than previous deploys.
>>>>>>>
>>>>>>> Any ideas on what exactly is going on here? Any suggestions on
>>>>>>> tools/profilers to use to get to the bottom of this? Should we
>>>>>>> expect this
>>>>>>> to happen on each deploy?
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> --
>>>>>>> *Sarkis Varozian*
>>>>>>> svarozian@gmail.com
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Sarkis Varozian*
>>>>> svarozian@gmail.com
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>





* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:24           ` Sarkis Varozian
  2015-03-04 20:27             ` Michael Fischer
@ 2015-03-04 20:35             ` Eric Wong
  2015-03-04 20:40               ` Sarkis Varozian
  1 sibling, 1 reply; 20+ messages in thread
From: Eric Wong @ 2015-03-04 20:35 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Michael Fischer, unicorn-public

Sarkis Varozian <svarozian@gmail.com> wrote:
> On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
> wrote:
> 
> > I'm not exactly sure how preload_app works, but I suspect your app is
> > lazy-loading a number of Ruby libraries while handling the first few
> > requests that weren't automatically loaded during the preload process.
> >
> > Eric, your thoughts?

(top-posting corrected)

Yeah, preload_app won't help startup speed if much of the app is
autoloaded.

Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
startup performance compared to 1.9.3 and later in case you're stuck on
1.9.2

> That does make sense - I was looking at another suggestion from a user here
> (Braulio) of running a "warmup" using rack MockRequest:
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
> 
> The only issue I am having with the above solution is it is happening in
> the before_fork block - shouldn't I warmup the connection in after_fork?

If preload_app is true, you can warmup in before_fork; otherwise it
needs to be after_fork.

> If
> I follow the above gist properly it warms up the server with the old
> activerecord base connection and then its turned off, then turned back on
> in after_fork. I think I am not understanding the sequence of events
> there...

With preload_app and warmup, you need to ensure any stream connections
(DB, memcached, redis, etc..) do not get shared between processes, so
it's standard practice to disconnect in the parent and reconnect in the
child.
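Putting those two points together, a minimal unicorn.rb sketch might look like the following (assumptions: a Rails app, preload_app true, ActiveRecord as the only stream connection, and '/' as the warmup path; this is an illustration, not the poster's actual config):

```ruby
# unicorn.rb sketch: warm up in the master, keep sockets out of the fork.
preload_app true

before_fork do |server, worker|
  # With preload_app, warm up the app once in the master; forked workers
  # then inherit hot code paths instead of lazy-loading on live traffic.
  $warmed_up ||= begin
    require 'rack/mock'
    Rack::MockRequest.new(Rails.application).get('/')
    true
  end

  # Stream connections must not be shared across the fork:
  # disconnect in the parent...
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # ...and reconnect in each child.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```

The warmup only helps to the extent the mock request exercises the code paths real traffic will hit.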

> If this is the case, I should warmup and also check/kill the old
> master in the after_fork block after the new db, redis, neo4j connections
> are all created. Thoughts?

I've been leaving killing the master outside of the unicorn hooks
and doing it as a separate step; seemed too fragile to do it in
hooks from my perspective.
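As a separate step, that can be as small as a script that reads the .oldbin pid file unicorn leaves behind after the USR2 re-exec and sends it QUIT, unicorn's graceful-stop signal. The sketch below is self-contained for illustration: a temp directory and a throwaway `sleep` process stand in for the real pid path and the old master, both of which are assumptions.

```ruby
# stop_old_unicorn.rb sketch: run as an explicit deploy step after the
# new master is up, instead of killing the old master inside hooks.
require 'tmpdir'

pid_dir = Dir.mktmpdir                    # stands in for shared/tmp/pids
old_pid = spawn('sleep', '60')            # stands in for the old master
Process.detach(old_pid)
# On USR2 re-exec, unicorn renames its pid file to unicorn.pid.oldbin.
File.write(File.join(pid_dir, 'unicorn.pid.oldbin'), old_pid.to_s)

pid_file = File.join(pid_dir, 'unicorn.pid.oldbin')
if File.size?(pid_file)
  pid = File.read(pid_file).to_i
  Process.kill(:QUIT, pid)                # graceful stop of the old master
  puts "sent QUIT to old master #{pid}"
end
```

Keeping this outside before_fork/after_fork means a failed deploy leaves the old master running and still serving traffic.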

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:35             ` Eric Wong
@ 2015-03-04 20:40               ` Sarkis Varozian
  2015-03-05 17:07                 ` Sarkis Varozian
  0 siblings, 1 reply; 20+ messages in thread
From: Sarkis Varozian @ 2015-03-04 20:40 UTC (permalink / raw)
  To: Eric Wong; +Cc: Michael Fischer, unicorn-public

Eric,

Thanks for the quick reply.

We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
lazy loading - at least that's what all signs point to. I am going to try
mock-requesting some URL endpoints. Currently, I can only think of '/', as
most other parts of the app require a session and auth. I'll report back
with results.



On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:

> Sarkis Varozian <svarozian@gmail.com> wrote:
> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
> > wrote:
> >
> > > I'm not exactly sure how preload_app works, but I suspect your app is
> > > lazy-loading a number of Ruby libraries while handling the first few
> > > requests that weren't automatically loaded during the preload process.
> > >
> > > Eric, your thoughts?
>
> (top-posting corrected)
>
> Yeah, preload_app won't help startup speed if much of the app is
> autoloaded.
>
> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
> startup performance compared to 1.9.3 and later in case you're stuck on
> 1.9.2
>
> > That does make sense - I was looking at another suggestion from a user
> here
> > (Braulio) of running a "warmup" using rack MockRequest:
> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
> >
> > The only issue I am having with the above solution is it is happening in
> > the before_fork block - shouldn't I warmup the connection in after_fork?
>
> If preload_app is true, you can warmup in before_fork; otherwise it
> needs to be after_fork.
>
> > If
> > I follow the above gist properly it warms up the server with the old
> > activerecord base connection and then its turned off, then turned back on
> > in after_fork. I think I am not understanding the sequence of events
> > there...
>
> With preload_app and warmup, you need to ensure any stream connections
> (DB, memcached, redis, etc..) do not get shared between processes, so
> it's standard practice to disconnect in the parent and reconnect in the
> child.
>
> > If this is the case, I should warmup and also check/kill the old
> > master in the after_fork block after the new db, redis, neo4j connections
> > are all created. Thoughts?
>
> I've been leaving killing the master outside of the unicorn hooks
> and doing it as a separate step; seemed too fragile to do it in
> hooks from my perspective.
>



-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-03-04 20:40               ` Sarkis Varozian
@ 2015-03-05 17:07                 ` Sarkis Varozian
  2015-03-05 17:13                   ` Bráulio Bhavamitra
  0 siblings, 1 reply; 20+ messages in thread
From: Sarkis Varozian @ 2015-03-05 17:07 UTC (permalink / raw)
  To: Eric Wong; +Cc: Michael Fischer, unicorn-public, Bráulio Bhavamitra

Hey All,

So I changed up my unicorn.rb a bit from my original post:
https://gist.github.com/sarkis/1aa296044b1dfd3695ab

I'm also still sending the USR2 signals on deploy, staggered with a
30-second delay via Capistrano:

on roles(:web), in: :sequence, wait: 30
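(For readers following along: that stanza lives in a Capistrano 3 task roughly like the sketch below; the task name and pid-file path are assumptions for illustration, not taken from the gist.)

```ruby
# Hypothetical Capistrano 3 task: reload unicorn host by host, waiting
# 30 seconds between hosts so only one backend restarts at a time.
namespace :unicorn do
  desc 'Reload unicorn via USR2, one web host at a time'
  task :restart do
    on roles(:web), in: :sequence, wait: 30 do
      execute :kill, '-USR2', "$(cat #{shared_path}/tmp/pids/unicorn.pid)"
    end
  end
end
```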

As you can see, I am now doing a warmup via Rack::MockRequest (I hoped this
would warm up the master). However, this is what a deploy looks like on
New Relic:

https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0

https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0

I'm running out of ideas to get rid of these latency spikes. Would you guys
recommend I try anything else at this point?



On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
wrote:

> Eric,
>
> Thanks for the quick reply.
>
> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
> lazy loading - at least that's what all signs point to. I am going to try
> and mock request some url endpoints. Currently, I can only think of '/', as
> most other parts of the app require a session and auth. I'll report back
> with results.
>
>
>
> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>
>> Sarkis Varozian <svarozian@gmail.com> wrote:
>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com>
>> > wrote:
>> >
>> > > I'm not exactly sure how preload_app works, but I suspect your app is
>> > > lazy-loading a number of Ruby libraries while handling the first few
>> > > requests that weren't automatically loaded during the preload process.
>> > >
>> > > Eric, your thoughts?
>>
>> (top-posting corrected)
>>
>> Yeah, preload_app won't help startup speed if much of the app is
>> autoloaded.
>>
>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>> startup performance compared to 1.9.3 and later in case you're stuck on
>> 1.9.2
>>
>> > That does make sense - I was looking at another suggestion from a user
>> here
>> > (Braulio) of running a "warmup" using rack MockRequest:
>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>> >
>> > The only issue I am having with the above solution is it is happening in
>> > the before_fork block - shouldn't I warmup the connection in after_fork?
>>
>> If preload_app is true, you can warmup in before_fork; otherwise it
>> needs to be after_fork.
>>
>> > If
>> > I follow the above gist properly it warms up the server with the old
>> > activerecord base connection and then its turned off, then turned back
>> on
>> > in after_fork. I think I am not understanding the sequence of events
>> > there...
>>
>> With preload_app and warmup, you need to ensure any stream connections
>> (DB, memcached, redis, etc..) do not get shared between processes, so
>> it's standard practice to disconnect in the parent and reconnect in the
>> child.
>>
>> > If this is the case, I should warmup and also check/kill the old
>> > master in the after_fork block after the new db, redis, neo4j
>> connections
>> > are all created. Thoughts?
>>
>> I've been leaving killing the master outside of the unicorn hooks
>> and doing it as a separate step; seemed too fragile to do it in
>> hooks from my perspective.
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>



-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:07                 ` Sarkis Varozian
@ 2015-03-05 17:13                   ` Bráulio Bhavamitra
  2015-03-05 17:28                     ` Sarkis Varozian
  0 siblings, 1 reply; 20+ messages in thread
From: Bráulio Bhavamitra @ 2015-03-05 17:13 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Eric Wong, Michael Fischer, unicorn-public

In the graphs you posted, what is the grey part? It is not described in the
legend and it seems the problem is entirely there. What reverse proxy are
you using?

Can you reproduce this with a single master instance?

Could you try this sleep:
https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91


On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> Hey All,
>
> So I changed up my unicorn.rb a bit from my original post:
> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>
> I'm also still sending the USR2 signals on deploy staggered with 30 second
> delay via capistrano:
>
> on roles(:web), in: :sequence, wait: 30
>
> As you can see I am now doing a warmup via Rack::MockRequest (I hoped this
> would warmup the master). However, this is what a deploy looks like on
> newrelic:
>
>
> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>
>
> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>
> I'm running out of ideas to get rid of these latency spikes. Would you guys
> recommend I try anything else at this point?
>
>
>
> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Eric,
>>
>> Thanks for the quick reply.
>>
>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
>> lazy loading - at least that's what all signs point to. I am going to try
>> and mock request some url endpoints. Currently, I can only think of '/', as
>> most other parts of the app require a session and auth. I'll report back
>> with results.
>>
>>
>>
>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>
>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <mfischer@zendesk.com
>>> >
>>> > wrote:
>>> >
>>> > > I'm not exactly sure how preload_app works, but I suspect your app is
>>> > > lazy-loading a number of Ruby libraries while handling the first few
>>> > > requests that weren't automatically loaded during the preload
>>> process.
>>> > >
>>> > > Eric, your thoughts?
>>>
>>> (top-posting corrected)
>>>
>>> Yeah, preload_app won't help startup speed if much of the app is
>>> autoloaded.
>>>
>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>> 1.9.2
>>>
>>> > That does make sense - I was looking at another suggestion from a user
>>> here
>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>> >
>>> > The only issue I am having with the above solution is it is happening
>>> in
>>> > the before_fork block - shouldn't I warmup the connection in
>>> after_fork?
>>>
>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>> needs to be after_fork.
>>>
>>> > If
>>> > I follow the above gist properly it warms up the server with the old
>>> > activerecord base connection and then its turned off, then turned back
>>> on
>>> > in after_fork. I think I am not understanding the sequence of events
>>> > there...
>>>
>>> With preload_app and warmup, you need to ensure any stream connections
>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>> it's standard practice to disconnect in the parent and reconnect in the
>>> child.
>>>
>>> > If this is the case, I should warmup and also check/kill the old
>>> > master in the after_fork block after the new db, redis, neo4j
>>> connections
>>> > are all created. Thoughts?
>>>
>>> I've been leaving killing the master outside of the unicorn hooks
>>> and doing it as a separate step; seemed too fragile to do it in
>>> hooks from my perspective.
>>>
>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>



-- 
"Fight for your ideology. Be one with your ideology. Live for your
ideology. Die for your ideology" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha is my father and Parama Prakriti is my mother. The universe
is my home and all of us are citizens of this cosmos. This universe is the
imagination of the Macrocosmic Mind, and all entities are being created,
preserved and destroyed in the phases of extroversion and introversion of
the cosmic imaginative flow. On the personal level, when a person imagines
something in their mind, at that moment that person is the sole owner of
what they imagine, and no one else. When a mentally created human being
walks through an equally imagined cornfield, the imagined person is not the
owner of that cornfield, for it belongs to the individual who is imagining
it. This universe was created in the imagination of Brahma, the Supreme
Entity, so the ownership of this universe belongs to Brahma, and not to the
microcosms that were also created by Brahma's imagination. No property of
this world, mutable or immutable, belongs to any particular individual;
everything is the common patrimony of all."
Rest of the text at
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:13                   ` Bráulio Bhavamitra
@ 2015-03-05 17:28                     ` Sarkis Varozian
  2015-03-05 17:31                       ` Bráulio Bhavamitra
                                         ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Sarkis Varozian @ 2015-03-05 17:28 UTC (permalink / raw)
  To: Bráulio Bhavamitra; +Cc: Eric Wong, Michael Fischer, unicorn-public

Braulio,

Are you referring to the vertical grey line? That is the deployment event.
The part that spikes in the first graph is request queueing, which New
Relic measures a bit differently:
http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/

We are using HAProxy to load balance (round robin) to 4 physical hosts
running unicorn with 6 workers.

I have not tried to reproduce this on 1 master - I assume this would be the
same.

I do in fact do the sleep now:
https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
the deployment results above had the 1 second sleep in there.

On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
wrote:

> In the graphs you posted, what is the grey part? It is not described in
> the legend and it seems the problem is entirely there. What reverse proxy
> are you using?
>
> Can you reproduce this with a single master instance?
>
> Could you try this sleep:
> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>
>
> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
> wrote:
>
>> Hey All,
>>
>> So I changed up my unicorn.rb a bit from my original post:
>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>
>> I'm also still sending the USR2 signals on deploy staggered with 30
>> second delay via capistrano:
>>
>> on roles(:web), in: :sequence, wait: 30
>>
>> As you can see I am now doing a warmup via Rack::MockRequest (I hoped this
>> would warmup the master). However, this is what a deploy looks like on
>> newrelic:
>>
>>
>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>
>>
>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>
>> I'm running out of ideas to get rid of these latency spikes. Would you
>> guys recommend I try anything else at this point?
>>
>>
>>
>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Eric,
>>>
>>> Thanks for the quick reply.
>>>
>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is the
>>> lazy loading - at least that's what all signs point to. I am going to try
>>> and mock request some url endpoints. Currently, I can only think of '/', as
>>> most other parts of the app require a session and auth. I'll report back
>>> with results.
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>
>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>> mfischer@zendesk.com>
>>>> > wrote:
>>>> >
>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>> is
>>>> > > lazy-loading a number of Ruby libraries while handling the first few
>>>> > > requests that weren't automatically loaded during the preload
>>>> process.
>>>> > >
>>>> > > Eric, your thoughts?
>>>>
>>>> (top-posting corrected)
>>>>
>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>> autoloaded.
>>>>
>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>> 1.9.2
>>>>
>>>> > That does make sense - I was looking at another suggestion from a
>>>> user here
>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>> >
>>>> > The only issue I am having with the above solution is it is happening
>>>> in
>>>> > the before_fork block - shouldn't I warmup the connection in
>>>> after_fork?
>>>>
>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>> needs to be after_fork.
>>>>
>>>> > If
>>>> > I follow the above gist properly it warms up the server with the old
>>>> > activerecord base connection and then its turned off, then turned
>>>> back on
>>>> > in after_fork. I think I am not understanding the sequence of events
>>>> > there...
>>>>
>>>> With preload_app and warmup, you need to ensure any stream connections
>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>> child.
>>>>
>>>> > If this is the case, I should warmup and also check/kill the old
>>>> > master in the after_fork block after the new db, redis, neo4j
>>>> connections
>>>> > are all created. Thoughts?
>>>>
>>>> I've been leaving killing the master outside of the unicorn hooks
>>>> and doing it as a separate step; seemed too fragile to do it in
>>>> hooks from my perspective.
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>
>
>



-- 
*Sarkis Varozian*
svarozian@gmail.com


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:28                     ` Sarkis Varozian
@ 2015-03-05 17:31                       ` Bráulio Bhavamitra
  2015-03-05 17:32                       ` Bráulio Bhavamitra
  2015-03-05 21:12                       ` Eric Wong
  2 siblings, 0 replies; 20+ messages in thread
From: Bráulio Bhavamitra @ 2015-03-05 17:31 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Eric Wong, Michael Fischer, unicorn-public

I would try to reproduce this locally with a production env and `ab -n 200
-c 2 http://localhost:3000/`



On Thu, Mar 5, 2015 at 2:28 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> Braulio,
>
> Are you referring to the vertical grey line? That is the deployment event.
> The part that spikes in the first graph is request queue which is a bit
> different on newrelic:
> http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/
>
> We are using HAProxy to load balance (round robin) to 4 physical hosts
> running unicorn with 6 workers.
>
> I have not tried to reproduce this on 1 master - I assume this would be
> the same.
>
> I do in fact do the sleep now:
> https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
> the deployment results above had the 1 second sleep in there.
>
> On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
> wrote:
>
>> In the graphs you posted, what is the grey part? It is not described in
>> the legend and it seems the problem is entirely there. What reverse proxy
>> are you using?
>>
>> Can you reproduce this with a single master instance?
>>
>> Could you try this sleep:
>> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>>
>>
>> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Hey All,
>>>
>>> So I changed up my unicorn.rb a bit from my original post:
>>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>>
>>> I'm also still sending the USR2 signals on deploy staggered with 30
>>> second delay via capistrano:
>>>
>>> on roles(:web), in: :sequence, wait: 30
>>>
>>> As you can see I am now doing a warmup via Rack::MockRequest (I hoped this
>>> would warmup the master). However, this is what a deploy looks like on
>>> newrelic:
>>>
>>>
>>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>>
>>>
>>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>>
>>> I'm running out of ideas to get rid of these latency spikes. Would you
>>> guys recommend I try anything else at this point?
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Eric,
>>>>
>>>> Thanks for the quick reply.
>>>>
>>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is
>>>> the lazy loading - at least that's what all signs point to. I am going to
>>>> try and mock request some url endpoints. Currently, I can only think of
>>>> '/', as most other parts of the app require a session and auth. I'll report
>>>> back with results.
>>>>
>>>>
>>>>
>>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>>
>>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>>> mfischer@zendesk.com>
>>>>> > wrote:
>>>>> >
>>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>>> is
>>>>> > > lazy-loading a number of Ruby libraries while handling the first
>>>>> few
>>>>> > > requests that weren't automatically loaded during the preload
>>>>> process.
>>>>> > >
>>>>> > > Eric, your thoughts?
>>>>>
>>>>> (top-posting corrected)
>>>>>
>>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>>> autoloaded.
>>>>>
>>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>>> 1.9.2
>>>>>
>>>>> > That does make sense - I was looking at another suggestion from a
>>>>> user here
>>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>>> >
>>>>> > The only issue I am having with the above solution is it is
>>>>> happening in
>>>>> > the before_fork block - shouldn't I warmup the connection in
>>>>> after_fork?
>>>>>
>>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>>> needs to be after_fork.
>>>>>
>>>>> > If
>>>>> > I follow the above gist properly it warms up the server with the old
>>>>> > activerecord base connection and then its turned off, then turned
>>>>> back on
>>>>> > in after_fork. I think I am not understanding the sequence of events
>>>>> > there...
>>>>>
>>>>> With preload_app and warmup, you need to ensure any stream connections
>>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>>> child.
>>>>>
>>>>> > If this is the case, I should warmup and also check/kill the old
>>>>> > master in the after_fork block after the new db, redis, neo4j
>>>>> connections
>>>>> > are all created. Thoughts?
>>>>>
>>>>> I've been leaving killing the master outside of the unicorn hooks
>>>>> and doing it as a separate step; seemed too fragile to do it in
>>>>> hooks from my perspective.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>





^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:28                     ` Sarkis Varozian
  2015-03-05 17:31                       ` Bráulio Bhavamitra
@ 2015-03-05 17:32                       ` Bráulio Bhavamitra
  2015-03-05 21:12                       ` Eric Wong
  2 siblings, 0 replies; 20+ messages in thread
From: Bráulio Bhavamitra @ 2015-03-05 17:32 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Eric Wong, Michael Fischer, unicorn-public

I also use New Relic and have never seen this grey part...

On Thu, Mar 5, 2015 at 2:28 PM, Sarkis Varozian <svarozian@gmail.com> wrote:

> Braulio,
>
> Are you referring to the vertical grey line? That is the deployment event.
> The part that spikes in the first graph is request queue which is a bit
> different on newrelic:
> http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/
>
> We are using HAProxy to load balance (round robin) to 4 physical hosts
> running unicorn with 6 workers.
>
> I have not tried to reproduce this on 1 master - I assume this would be
> the same.
>
> I do in fact do the sleep now:
> https://gist.github.com/sarkis/1aa296044b1dfd3695ab#file-unicorn-rb-L37 -
> the deployment results above had the 1 second sleep in there.
>
> On Thu, Mar 5, 2015 at 9:13 AM, Bráulio Bhavamitra <braulio@eita.org.br>
> wrote:
>
>> In the graphs you posted, what is the grey part? It is not described in
>> the legend and it seems the problem is entirely there. What reverse proxy
>> are you using?
>>
>> Can you reproduce this with a single master instance?
>>
>> Could you try this sleep:
>> https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L91
>>
>>
>> On Thu, Mar 5, 2015 at 2:07 PM, Sarkis Varozian <svarozian@gmail.com>
>> wrote:
>>
>>> Hey All,
>>>
>>> So I changed up my unicorn.rb a bit from my original post:
>>> https://gist.github.com/sarkis/1aa296044b1dfd3695ab
>>>
>>> I'm also still sending the USR2 signals on deploy staggered with 30
>>> second delay via capistrano:
>>>
>>> on roles(:web), in: :sequence, wait: 30
>>>
>>> As you can see I am now doing a warmup via Rack::MockRequest (I hoped this
>>> would warmup the master). However, this is what a deploy looks like on
>>> newrelic:
>>>
>>>
>>> https://www.dropbox.com/s/beh7nc8npdfijqp/Screenshot%202015-03-05%2009.05.15.png?dl=0
>>>
>>>
>>> https://www.dropbox.com/s/w08gpvp7mpik3vs/Screenshot%202015-03-05%2009.06.51.png?dl=0
>>>
>>> I'm running out of ideas to get rid of these latency spikes. Would you
>>> guys recommend I try anything else at this point?
>>>
>>>
>>>
>>> On Wed, Mar 4, 2015 at 12:40 PM, Sarkis Varozian <svarozian@gmail.com>
>>> wrote:
>>>
>>>> Eric,
>>>>
>>>> Thanks for the quick reply.
>>>>
>>>> We are on Ruby 2.1.5p273 and unicorn 4.8.3. I believe our problem is
>>>> the lazy loading - at least that's what all signs point to. I am going to
>>>> try and mock request some url endpoints. Currently, I can only think of
>>>> '/', as most other parts of the app require a session and auth. I'll report
>>>> back with results.
>>>>
>>>>
>>>>
>>>> On Wed, Mar 4, 2015 at 12:35 PM, Eric Wong <e@80x24.org> wrote:
>>>>
>>>>> Sarkis Varozian <svarozian@gmail.com> wrote:
>>>>> > On Wed, Mar 4, 2015 at 12:17 PM, Michael Fischer <
>>>>> mfischer@zendesk.com>
>>>>> > wrote:
>>>>> >
>>>>> > > I'm not exactly sure how preload_app works, but I suspect your app
>>>>> is
>>>>> > > lazy-loading a number of Ruby libraries while handling the first
>>>>> few
>>>>> > > requests that weren't automatically loaded during the preload
>>>>> process.
>>>>> > >
>>>>> > > Eric, your thoughts?
>>>>>
>>>>> (top-posting corrected)
>>>>>
>>>>> Yeah, preload_app won't help startup speed if much of the app is
>>>>> autoloaded.
>>>>>
>>>>> Sarkis: which Ruby version are you running?  IIRC, 1.9.2 had terrible
>>>>> startup performance compared to 1.9.3 and later in case you're stuck on
>>>>> 1.9.2
>>>>>
>>>>> > That does make sense - I was looking at another suggestion from a
>>>>> user here
>>>>> > (Braulio) of running a "warmup" using rack MockRequest:
>>>>> > https://gist.github.com/brauliobo/11298486#file-unicorn-conf-rb-L77
>>>>> >
>>>>> > The only issue I am having with the above solution is it is
>>>>> happening in
>>>>> > the before_fork block - shouldn't I warmup the connection in
>>>>> after_fork?
>>>>>
>>>>> If preload_app is true, you can warmup in before_fork; otherwise it
>>>>> needs to be after_fork.
>>>>>
>>>>> > If
>>>>> > I follow the above gist properly it warms up the server with the old
>>>>> > activerecord base connection and then its turned off, then turned
>>>>> back on
>>>>> > in after_fork. I think I am not understanding the sequence of events
>>>>> > there...
>>>>>
>>>>> With preload_app and warmup, you need to ensure any stream connections
>>>>> (DB, memcached, redis, etc..) do not get shared between processes, so
>>>>> it's standard practice to disconnect in the parent and reconnect in the
>>>>> child.
>>>>>
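
In config-file form, that disconnect/reconnect pattern looks roughly like
this (a sketch assuming ActiveRecord and a global $redis client on
redis-rb 3.x; adjust for your actual stores):

```ruby
# unicorn.rb sketch: never share the master's stream connections
# with forked workers.
preload_app true

before_fork do |server, worker|
  # Close the master's connections so children don't inherit them.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
  defined?($redis) && $redis.quit
end

after_fork do |server, worker|
  # Each worker opens its own connections after the fork.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
  defined?($redis) && $redis.client.reconnect
end
```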
>>>>> > If this is the case, I should warmup and also check/kill the old
>>>>> > master in the after_fork block after the new db, redis, neo4j
>>>>> connections
>>>>> > are all created. Thoughts?
>>>>>
>>>>> I've been leaving killing the master outside of the unicorn hooks
>>>>> and doing it as a separate step; seemed too fragile to do it in
>>>>> hooks from my perspective.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Sarkis Varozian*
>>>> svarozian@gmail.com
>>>>
>>>
>>>
>>>
>>> --
>>> *Sarkis Varozian*
>>> svarozian@gmail.com
>>>
>>
>>
>>
>> --
>> "Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
>> ideologia. Morra por sua ideologia" P.R. Sarkar
>>
>> EITA - Educação, Informação e Tecnologias para Autogestão
>> http://cirandas.net/brauliobo
>> http://eita.org.br
>>
>> "Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu
>> lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação da
>> Mente Macrocósmica, e todas as entidades estão sendo criadas, preservadas e
>> destruídas nas fases de extroversão e introversão do fluxo imaginativo
>> cósmico. No âmbito pessoal, quando uma pessoa imagina algo em sua mente,
>> naquele momento, essa pessoa é a única proprietária daquilo que ela
>> imagina, e ninguém mais. Quando um ser humano criado mentalmente caminha
>> por um milharal também imaginado, a pessoa imaginada não é a propriedade
>> desse milharal, pois ele pertence ao indivíduo que o está imaginando. Este
>> universo foi criado na imaginação de Brahma, a Entidade Suprema, por isso
>> a propriedade deste universo é de Brahma, e não dos microcosmos que também
>> foram criados pela imaginação de Brahma. Nenhuma propriedade deste mundo,
>> mutável ou imutável, pertence a um indivíduo em particular; tudo é o
>> patrimônio comum de todos."
>> Restante do texto em
>> http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia
>>
>
>
>
> --
> *Sarkis Varozian*
> svarozian@gmail.com
>





^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-03-05 17:28                     ` Sarkis Varozian
  2015-03-05 17:31                       ` Bráulio Bhavamitra
  2015-03-05 17:32                       ` Bráulio Bhavamitra
@ 2015-03-05 21:12                       ` Eric Wong
  2 siblings, 0 replies; 20+ messages in thread
From: Eric Wong @ 2015-03-05 21:12 UTC (permalink / raw)
  To: Sarkis Varozian; +Cc: Bráulio Bhavamitra, Michael Fischer, unicorn-public

Sarkis Varozian <svarozian@gmail.com> wrote:
> Braulio,
> 
> Are you referring to the vertical grey line? That is the deployment event.
> The part that spikes in the first graph is request queue which is a bit
> different on newrelic:
> http://blog.newrelic.com/2013/01/22/understanding-new-relic-queuing/

I'm not about to open images/graphs, but managed to read that.

Now I'm still unsure if they are actually using raindrops or not to
measure your stats, but at least they mention it in that post.

Setting the timestamp header in nginx is a good idea, but you need to be
completely certain clocks are synchronized between machines for accuracy
(a monotonic clock can't be compared across hosts, either; it must be
wall-clock time).
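
For reference, the usual way to set that header in nginx (per New Relic's
request-queueing docs at the time) uses the $msec variable, assuming
nginx proxies to a unicorn upstream named "app":

```nginx
# Sketch: $msec is nginx's request-start time in seconds with
# millisecond resolution; New Relic subtracts it from the time the
# app begins processing to compute queue time.
location / {
    proxy_set_header X-Request-Start "t=${msec}";
    proxy_pass http://app;
}
```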

Have you tried using raindrops standalone to confirm queueing in the
kernel?

raindrops inspects the listen queue in the kernel directly, so it's
as accurate as possible as far as the local machine is concerned.
(it will not measure internal network latency).

I recommend checking raindrops (or inspecting /proc/net/{unix,tcp} or
running "ss -lx" / "ss -lt" to check listen queues).

You can also simulate TCP socket queueing in a standalone Ruby
script by doing something like:

    -----------------------------8<---------------------------
    require 'socket'
    host = '127.0.0.1'
    port = 1234
    re = Regexp.escape("#{host}:#{port}")
    check = lambda do |desc|
      puts desc
      # use "ss -lx" instead for UNIXServer/UNIXSocket
      puts `ss -lt`.split(/\n/).grep(/LISTEN\s.*\b#{re}\b/io)
      puts
    end

    puts "Creating new server"
    s = TCPServer.new(host, port)

    check.call "2nd column should initially be zero:"

    puts "Queueing up one client:"
    c1 = TCPSocket.new(host, port)
    check.call "2nd column should be one, since accept is not yet called:"

    puts "Accepting one client to clear the queue"
    a1 = s.accept
    check.call "2nd column should be back to zero after calling accept:"

    puts "Queueing up two clients:"
    c2 = TCPSocket.new(host, port)
    c3 = TCPSocket.new(host, port)
    check.call "2nd column should show two queued clients"

    a2 = s.accept
    check.call "2nd column should be down to one after calling accept:"
    -----------------------------8<---------------------------

Disclaimer: I'm a Free Software extremist and would not touch
New Relic with a ten-foot pole...

> We are using HAProxy to load balance (round robin) to 4 physical hosts
> running unicorn with 6 workers.

I assume there's nginx somewhere?  Where is it?

If not, you're not protected from slow uploads with giant request
bodies.  I'm not up-to-date about current haproxy versions, but AFAIK
only nginx buffers request bodies in full.

With nginx, I'm not sure what the point of haproxy is if you're just
going to do round-robin; nginx already does round-robin.  I'd only
use haproxy for a "smarter" load balancing scheme.
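
For comparison, plain round-robin to the four hosts needs nothing more
than an nginx upstream block (addresses below are placeholders):

```nginx
# Sketch: nginx round-robin (the default) across 4 unicorn hosts,
# making a separate haproxy tier unnecessary for this scheme.
upstream app {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
    server 10.0.0.4:8080;
}
```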

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
       [not found] ` <CAJri6_vidE15Xor4THzQB3uxyqPdApxHoyWp47NAG8m8TQuw0Q@mail.gmail.com>
@ 2015-09-13 15:12   ` Bráulio Bhavamitra
  2015-09-14  2:14     ` Eric Wong
  0 siblings, 1 reply; 20+ messages in thread
From: Bráulio Bhavamitra @ 2015-09-13 15:12 UTC (permalink / raw)
  To: Sarkis Varozian, unicorn-public

Sarkis, resurrecting this.

I'm seeing this after the upgrade from Rails 3.2 to Rails 4.2. Maybe it is
because of Adequate Record and other "cached on first time used" stuff.

cheers,
bráulio


> On Tue, Mar 3, 2015 at 7:26 PM Sarkis Varozian <svarozian@gmail.com> wrote:
>>
>> We have a rails application with the following unicorn.rb:
>> http://goo.gl/qZ5NLn
>>
>> When we deploy to the application, a USR2 signal is sent to the unicorn
>> master which spins up a new master and we use the before_fork in the
>> unicorn.rb config above to send signals to the old master as the new
>> workers come online.
>>
>> I've been trying to debug a weird issue that manifests as "Request
>> Queueing" in our Newrelic APM. The graph shows what happens after a
>> deployment (represented by the vertical lines). Here is the graph:
>> http://goo.gl/iFZPMv . As you see from the graph, it is inconsistent -
>> there is always a latency spike - however, at times Request Queueing is
>> higher than previous deploys.
>>
>> Any ideas on what exactly is going on here? Any suggestions on
>> tools/profilers to use to get to the bottom of this? Should we expect this
>> to happen on each deploy?
>>
>> Thanks,
>>
>> --
>> *Sarkis Varozian*
>> svarozian@gmail.com
>>
>>
>




^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: Request Queueing after deploy + USR2 restart
  2015-09-13 15:12   ` Bráulio Bhavamitra
@ 2015-09-14  2:14     ` Eric Wong
  0 siblings, 0 replies; 20+ messages in thread
From: Eric Wong @ 2015-09-14  2:14 UTC (permalink / raw)
  To: Bráulio Bhavamitra; +Cc: Sarkis Varozian, unicorn-public

Bráulio Bhavamitra <braulio@eita.org.br> wrote:
> Sarkis, resurrecting this.
> 
> I'm seeing this after the upgrade from Rails 3.2 to Rails 4.2. Maybe it is
> because of Adequate Record and other "cached on first time used" stuff.

I'm not knowledgeable enough to comment on Rails, but
"cache on first time used" includes the caches in the YARV RubyVM
itself:

1) inline method cache
2) inline constant cache
3) global method cache

The only way to warm up the inline caches is to actually run your
code paths (perhaps in warmup code).  After warmup, you need to
avoid defining new classes/constants or doing includes/extends/etc
because those things invalidate caches.
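
A toy illustration of that principle, not tied to any framework: the
first trip through a code path pays the method-lookup cost, and later
calls hit the already-populated caches, so a warmup is just running the
hot paths once before serving traffic.

```ruby
# Toy illustration of warming code paths before real work arrives.
# The first call through Greeter#greet resolves the method and fills
# YARV's inline method cache; subsequent calls reuse it.
class Greeter
  def greet(name)
    "hello, #{name}"
  end
end

def warmup(obj, n = 100)
  # Exercise the path a few times so caches are populated.
  n.times { obj.greet("warmup") }
end

g = Greeter.new
warmup(g)
g.greet("world")
```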

ruby-core has been working to reduce the size of internal data
structures in C to improve CPU cache utilization (which might make
the explicit RubyVM-level caches less necessary for small apps).

Ruby 2.3 should include some more speedups in method lookup;
but having the smallest possible code base always helps.

Regardless of language or VM implementation, big code and data
structures invalidate caches on the CPU faster than smaller
code and data structures.

So reduce your application code size, kill unnecessary features,
use a smaller framework (perhaps Sinatra, or even just Rack),
load fewer libraries, etc.


And yes, this mindset to making things smaller extends to mail:
stop bloating them with top-posts and insanely long signatures.

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, back to index

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-03 22:24 Request Queueing after deploy + USR2 restart Sarkis Varozian
2015-03-03 22:32 ` Michael Fischer
2015-03-04 19:48   ` Sarkis Varozian
2015-03-04 19:51     ` Michael Fischer
2015-03-04 19:58       ` Sarkis Varozian
2015-03-04 20:17         ` Michael Fischer
2015-03-04 20:24           ` Sarkis Varozian
2015-03-04 20:27             ` Michael Fischer
2015-03-04 20:35             ` Eric Wong
2015-03-04 20:40               ` Sarkis Varozian
2015-03-05 17:07                 ` Sarkis Varozian
2015-03-05 17:13                   ` Bráulio Bhavamitra
2015-03-05 17:28                     ` Sarkis Varozian
2015-03-05 17:31                       ` Bráulio Bhavamitra
2015-03-05 17:32                       ` Bráulio Bhavamitra
2015-03-05 21:12                       ` Eric Wong
2015-03-03 22:47 ` Bráulio Bhavamitra
2015-03-04 19:50   ` Sarkis Varozian
     [not found] ` <CAJri6_vidE15Xor4THzQB3uxyqPdApxHoyWp47NAG8m8TQuw0Q@mail.gmail.com>
2015-09-13 15:12   ` Bráulio Bhavamitra
2015-09-14  2:14     ` Eric Wong

unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help

Archives are clonable:
	git clone --mirror https://bogomips.org/unicorn-public
	git clone --mirror http://ou63pmih66umazou.onion/unicorn-public

Example config snippet for mirrors

Newsgroups are available over NNTP:
	nntp://news.public-inbox.org/inbox.comp.lang.ruby.unicorn
	nntp://ou63pmih66umazou.onion/inbox.comp.lang.ruby.unicorn

 note: .onion URLs require Tor: https://www.torproject.org/

AGPL code for this site: git clone https://public-inbox.org/ public-inbox