unicorn Ruby/Rack server user+dev discussion/patches/pulls/bugs/help
* Auto scaling workers with unicorn
@ 2017-12-04 23:42 Sam Saffron
  2017-12-05  1:51 ` Eric Wong
  0 siblings, 1 reply; 3+ messages in thread
From: Sam Saffron @ 2017-12-04 23:42 UTC (permalink / raw)
  To: unicorn-public

I would like to amend Discourse so we "automatically" absorb certain
traffic spikes. As it stands we can only configure unicorn with
worker_processes and then use TTIN and TTOU to tune the worker count
on the fly.
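
For reference, manual tuning today is just a signal to the unicorn
master; a minimal sketch, assuming the master PID file lives at
/var/run/unicorn.pid (the path is an assumption):

  master_pid = Integer(File.read("/var/run/unicorn.pid"))
  Process.kill(:TTIN, master_pid)  # master forks one more worker
  # ... later ...
  Process.kill(:TTOU, master_pid)  # master gracefully drops one worker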

I was wondering if you would be open to patching unicorn to allow it
to perform auto-tuning based on raindrops info.

How it could work

1. Configure unicorn with min_workers, max_workers, wince_delay, and scale_up_delay.

2. If queued requests stay above 0 for N samples over scale_up_delay,
add a worker, up to max_workers.

3. If queued requests stay at 0 for N samples over wince_delay, remove
a worker until you reach min_workers.

Having this system in place can significantly reduce memory use in
large deployments and simplify provisioning logic quite a lot.
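
A rough external sketch of the sampling loop (a sidecar process, not
the proposed unicorn patch), assuming a TCP listener on 0.0.0.0:8080,
raindrops' Raindrops::Linux.tcp_listener_stats, and made-up PID file
path and thresholds:

  require "raindrops"

  MASTER_PID = Integer(File.read("/var/run/unicorn.pid"))
  ADDR = "0.0.0.0:8080"
  MIN_WORKERS = 4
  MAX_WORKERS = 16
  workers = MIN_WORKERS
  busy = idle = 0

  loop do
    stats = Raindrops::Linux.tcp_listener_stats([ADDR])[ADDR]
    if stats.queued > 0
      busy += 1
      idle = 0
    else
      idle += 1
      busy = 0
    end

    if busy >= 5 && workers < MAX_WORKERS      # rough scale_up_delay
      Process.kill(:TTIN, MASTER_PID)          # add a worker
      workers += 1
      busy = 0
    elsif idle >= 60 && workers > MIN_WORKERS  # rough wince_delay
      Process.kill(:TTOU, MASTER_PID)          # drop a worker
      workers -= 1
      idle = 0
    end

    sleep 1
  end

The local worker count and sample counters are naive (they drift if
anything else signals the master), but it shows the shape of the loop.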

Wondering what you think about this, and whether unicorn should
provide this option?


* Re: Auto scaling workers with unicorn
  2017-12-04 23:42 Auto scaling workers with unicorn Sam Saffron
@ 2017-12-05  1:51 ` Eric Wong
  2017-12-05  2:33   ` Ben Somers
  0 siblings, 1 reply; 3+ messages in thread
From: Eric Wong @ 2017-12-05  1:51 UTC (permalink / raw)
  To: Sam Saffron; +Cc: unicorn-public

Sam Saffron <sam.saffron@gmail.com> wrote:
> I would like to amend Discourse so we "automatically" absorb certain
> traffic spikes. As it stands we can only configure unicorn with
> worker_processes and then use TTIN and TTOU to tune the worker count
> on the fly.
> 
> I was wondering if you would be open to patching unicorn to allow it
> to perform auto-tuning based on raindrops info.

I'm no fan of this or auto-tuning systems in general.
More explanation below.

> How it could work
> 
> 1. Configure unicorn with min_workers, max_workers, wince_delay, and scale_up_delay.
> 
> 2. If queued requests stay above 0 for N samples over scale_up_delay,
> add a worker, up to max_workers.
> 
> 3. If queued requests stay at 0 for N samples over wince_delay, remove
> a worker until you reach min_workers.

This adds more complexity to configuration, increasing the
likelihood of getting these numbers completely wrong.  GC and
malloc tuning is tricky and error-prone enough already.

Mainly, this tends to hide problems until later, instead of
forcing you to deal with your resource limitations up front.

It becomes more difficult to foresee resource limitations down
the line.  Before I worked on unicorn, I saw auto-scaling
Apache workers mistuned far too often, running out of DB
connections or memory; and that happens at the worst time:
when your site is under heavy load and you have the most to
lose (or gain).

> Having this system in place can significantly reduce memory use in
> large deployments and simplify provisioning logic quite a lot.

My philosophy remains to tune for the worst case possible.

If you really need to do something like run an expensive
off-peak cronjob, maybe have it TTIN at the beginning and TTOU
again at the end.
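
A tiny wrapper can do that; a sketch, assuming the job drives traffic
through the app, with the PID file path and rake task as placeholders:

  master_pid = Integer(File.read("/var/run/unicorn.pid"))
  Process.kill(:TTIN, master_pid)  # one extra worker for the duration
  begin
    system("bundle", "exec", "rake", "offpeak:expensive_job")
  ensure
    Process.kill(:TTOU, master_pid)  # back to the usual worker count
  end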

Fwiw, the most useful thing I've found TTIN/TTOU for is cutting
down to one worker so I know which one to strace when tracking
down a problem; not auto-scaling.

> Wondering what you think about this, and whether unicorn should
> provide this option?

Fwiw, my position has been consistent on this throughout the years.

Also, digging through the archives, Ben Somers came up with
alicorn a while back and it might be up your alley:

https://bogomips.org/unicorn-public/CAO1NZApo0TLJY2KgSg+Fjt1jEcuPfq=UCC0SCvvnuGDnr39w8w@mail.gmail.com/


* Re: Auto scaling workers with unicorn
  2017-12-05  1:51 ` Eric Wong
@ 2017-12-05  2:33   ` Ben Somers
  0 siblings, 0 replies; 3+ messages in thread
From: Ben Somers @ 2017-12-05  2:33 UTC (permalink / raw)
  To: Eric Wong; +Cc: Sam Saffron, unicorn-public

On Mon, Dec 4, 2017 at 5:51 PM, Eric Wong <e@80x24.org> wrote:
> Also, digging through the archives, Ben Somers came up with
> alicorn a while back and it might be up your alley:
>
> https://bogomips.org/unicorn-public/CAO1NZApo0TLJY2KgSg+Fjt1jEcuPfq=UCC0SCvvnuGDnr39w8w@mail.gmail.com/

To my knowledge, alicorn stopped seeing production use a couple of
years ago (the team I wrote it for shut down, and I never heard of
anyone else running it in production). But it did the job with no
incidents for several years, and the API it relies on is managed
carefully enough that it probably still works. At a minimum, it might
be a helpful starting point if you want to write your own tool.


Code repositories for project(s) associated with this public inbox

	https://yhbt.net/unicorn.git/
