= Tuning Unicorn

Unicorn performance is generally as good as a (mostly) Ruby web server
can provide.  Most often the performance bottleneck is in the web
application running on Unicorn rather than Unicorn itself.

== Unicorn Configuration

See Unicorn::Configurator for details on the config file format.
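
As a rough sketch of the format, the config file is plain Ruby
calling Configurator directives (the path and values here are
illustrative, not recommendations):

  # config/unicorn.rb
  worker_processes 4                           # match backend capacity
  listen "/path/to/app.sock", :backlog => 64   # UNIX domain socket
  preload_app true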

* Setting a very low value for the :backlog parameter in "listen"
  directives can allow failover to happen more quickly if your
  cluster is configured for it (see the listen sketch after this
  list).

* If you're doing extremely simple benchmarks and getting connection
  errors under high request rates, increasing your :backlog parameter
  above the already-generous default of 1024 can help avoid connection
  errors.  Keep in mind this is not recommended for real traffic if
  you have another machine to failover to (see above).

* :rcvbuf and :sndbuf parameters generally do not need to be set for
  TCP listeners under Linux 2.6 because auto-tuning is enabled.
  UNIX domain sockets do not have auto-tuned buffer sizes, so
  increasing the buffers there saves syscalls and task switches on
  larger requests and responses (see the listen sketch after this
  list).  If your app only generates small responses or expects
  small requests, you may shrink the buffer sizes to save memory,
  too.

* Having socket buffers too large can also be detrimental or have
  little effect.  Huge buffers can put more pressure on the allocator
  and may also thrash CPU caches, cancelling out performance gains
  one would normally expect.

* Setting "preload_app true" can allow copy-on-write-friendly GC to
  be used to save memory.  It will probably not work out of the box
  with applications that open sockets or perform random I/O on
  files; those resources should be re-opened in each worker (see the
  preload_app sketch after this list).  Databases like TokyoCabinet
  use concurrency-safe pread()/pwrite() functions for safe sharing
  of database file descriptors across processes.

* On POSIX-compliant filesystems, it is safe for multiple threads or
  processes to append to one log file as long as all of them have
  the file unbuffered (File#sync = true) or record(line)-buffered in
  userspace (see the logging sketch after this list).

* worker_processes should be scaled to the number of processes your
  backend system(s) can support.  DO NOT scale it to the number of
  external network clients your application expects to be serving.
  Unicorn is NOT for serving slow clients; that is the job of nginx.
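
A hedged sketch of the listen options mentioned above (the port,
socket path, and buffer sizes are made up for illustration):

  # tiny backlog: the kernel turns away excess connections quickly,
  # so a failover-configured load balancer can try another machine
  listen 8080, :backlog => 5

  # UNIX domain sockets get no kernel auto-tuning, so size the
  # buffers by hand if requests or responses are large
  listen "/path/to/app.sock", :rcvbuf => 131072, :sndbuf => 131072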
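
For "preload_app true", anything holding a socket must be re-opened
in each worker after forking.  A sketch assuming ActiveRecord is
loaded (any other per-process connection needs the same treatment):

  preload_app true

  before_fork do |server, worker|
    # the master never serves requests, so it may drop its connection
    defined?(ActiveRecord::Base) and
      ActiveRecord::Base.connection.disconnect!
  end

  after_fork do |server, worker|
    # give each worker its own connection; sharing one socket across
    # forked processes is the failure mode described above
    defined?(ActiveRecord::Base) and
      ActiveRecord::Base.establish_connection
  end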
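
For shared log files, an unbuffered append handle might look like
this (the path is illustrative):

  log = File.open("/path/to/shared.log", "ab")   # "a" => O_APPEND
  log.sync = true    # File#sync = true: no userspace buffering
  log.write("one complete record per write call\n")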

== Kernel Parameters (Linux sysctl)

WARNING: Do not change system parameters unless you know what you're doing!

* net.core.rmem_max and net.core.wmem_max can increase the allowed
  size of :rcvbuf and :sndbuf respectively.  This is mostly only
  useful for UNIX domain sockets, which do not have auto-tuning
  buffer sizes (a combined sysctl example follows this list).

* For load testing/benchmarking with UNIX domain sockets, you should
  consider increasing net.core.somaxconn or else nginx will start
  failing to connect under heavy load.  You may also consider setting
  a higher :backlog to listen on as noted earlier.

* If you're running out of local ports, consider lowering
  net.ipv4.tcp_fin_timeout to 20-30 (default: 60 seconds).  Also
  consider widening the usable port range by changing
  net.ipv4.ip_local_port_range.

* Setting net.ipv4.tcp_timestamps=1 will also allow setting
  net.ipv4.tcp_tw_reuse=1 and net.ipv4.tcp_tw_recycle=1, which along
  with the above settings can help stave off port exhaustion.  Not
  all networks are compatible with these settings; check with your
  friendly network administrator before changing them.

* Increasing the MTU size can reduce framing overhead for larger
  transfers.  One often-overlooked detail is that the loopback
  device (usually "lo") can have its MTU increased, too.
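
Taken together, a hedged /etc/sysctl.conf fragment for the settings
above (values are illustrative starting points, not recommendations;
clear the tcp_tw_* changes with your network administrator first):

  net.core.somaxconn = 2048
  net.core.rmem_max = 4194304
  net.core.wmem_max = 4194304
  net.ipv4.tcp_fin_timeout = 25
  net.ipv4.ip_local_port_range = 10000 65535
  net.ipv4.tcp_timestamps = 1
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_tw_recycle = 1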
