From: Nate Clark
Newsgroups: gmane.comp.lang.ruby.unicorn.general
Subject: Re: workers not utilizing multiple CPUs
Date: Wed, 1 Jun 2011 14:51:38 +0800
To: unicorn list

Thanks for the responses, all.

Eric, you were right, our load was not enough. We had just started load
testing our app, and I think we started with too many app servers and not
enough load. Once we cranked up the load and used fewer instances, we're
now definitely seeing all CPU cores being utilized. I was not aware that
the kernel would optimize like you described.

Once we did start seeing heavier load, our collectd data and htop were
reporting usage on the virtual cores correctly.

Thanks again, very happy with the results so far,
Nate
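(For context, a minimal unicorn.rb for this kind of multi-core setup is
sketched below. The worker count, socket path, and timeout are illustrative
guesses, not the actual settings discussed in this thread.)

  # unicorn.rb -- illustrative sketch only, not the config from this thread
  worker_processes 8                          # e.g. one worker per CPU core
  listen "/tmp/unicorn.sock", :backlog => 64  # all workers share this socket
  timeout 30
  preload_app true

  # The master forks the workers and they all accept() on the shared listen
  # socket, so the kernel decides which idle worker handles each connection;
  # under light load it tends to keep reusing the most recently used worker.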
> On Tue, May 31, 2011 at 11:55 PM, Clifton King wrote:
>>
>> Thanks Eric, I had expected that to be the case (we are under light
>> load as of now).
>>
>> On Tue, May 31, 2011 at 10:48 AM, Eric Wong wrote:
>> > Clifton King wrote:
>> >> We experience the same problem. I believe the problem has more to do
>> >> with the kernel CPU scheduler than anything else. If you figure out a
>> >> reliable way to spread the load, I'd like to hear it.
>> >
>> > Load not being spread is /not/ a problem unless there are requests that
>> > get stuck in the listen queue.
>> >
>> > If no requests are actually stuck in the queue (light load), the kernel
>> > is right to put requests into the most recently used worker since it can
>> > get better CPU cache behavior this way.
>> >
>> > == The real problem
>> >
>> > Under high loads (many cores, fast responses), Unicorn currently uses
>> > more resources because of non-blocking accept() + select().  This isn't
>> > a noticeable problem for most machines (1-16 cores).
>> >
>> > Future versions of Unicorn may take advantage of /blocking/ accept()
>> > optimizations under Linux.  Rainbows! already lets you take advantage
>> > of this behavior if you meet the following requirements:
>> >
>> > * Ruby 1.9.x under Linux
>> > * only one listen socket (if worker_connections == 1 under Rainbows!)
>> > * use ThreadPool|XEpollThreadPool|XEpollThreadSpawn|XEpoll
>> >
>> > I haven't had a chance to benchmark any of this on very big machines so
>> > I have no idea how well it actually works compared to Unicorn, only how
>> > well it works in theory :)
>> >
>> > Blocking accept() under Ruby 1.9.x + Linux should distribute load evenly
>> > across workers in all situations, even in the non-busy cases where load
>> > distribution doesn't matter (your case :).
>> >
>> > [1] - http://rainbows.rubyforge.org/Rainbows/XEpollThreadPool.html
>> >
>> > --
>> > Eric Wong
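(For anyone finding this thread later: a sketch of what a Rainbows! config
meeting the requirements Eric lists above could look like. The concurrency
model, connection count, worker count, and port are illustrative, not
settings tested or recommended in this thread.)

  # rainbows.conf.rb -- untested, illustrative sketch
  Rainbows! do
    use :XEpollThreadPool      # or ThreadPool / XEpollThreadSpawn / XEpoll
    worker_connections 50      # client connections handled per worker process
  end

  worker_processes 4           # forked workers, e.g. one per CPU core
  listen "0.0.0.0:8080"        # a single listen socket, per the requirements

  # Started with something along the lines of:
  #   rainbows -c rainbows.conf.rb config.ru
  # on Ruby 1.9.x under Linux.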