cmogstored design notes

object relationships

There is a 1:1 relationship between mog_cfg and mog_svc; we will
support multiple mog_svc instances as long as they do not conflict.

There is only one mog_queue instance for now, shared by any number of
mog_svc instances.  Theoretically, there can be any number of mog_queue
objects, but having one gives the fairest distribution in worst-case
scenarios (at the cost of optimal performance in the best case).

mog_cfg[0] -- mog_svc[0] --- mog_mgmt[N]
                 |       \-- mog_http[N]
               /         ___ mog_accept[N]
    mog_queue[0]--------<___ mog_accept[N]
mog_cfg[1] -- mog_svc[1] --- mog_mgmt[M]
                         \-- mog_http[M]
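
The fairness property of a single shared queue can be illustrated with
a minimal sketch.  This is only an illustration of the idea: the real
mog_queue is built around kernel event notification, not a plain linked
list, and the names below (queue_push, queue_pop, struct job) are
hypothetical.  Because all services feed one queue, any idle worker
always takes the oldest pending job, regardless of which service
produced it:

```c
#include <pthread.h>
#include <stddef.h>

/* hypothetical sketch of a single FIFO shared by all services */
struct job { struct job *next; };

static struct job *head, *tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* append a job; workers from any service may pick it up */
static void queue_push(struct job *j)
{
	j->next = NULL;
	pthread_mutex_lock(&lock);
	if (tail)
		tail->next = j;
	else
		head = j;
	tail = j;
	pthread_mutex_unlock(&lock);
}

/* pop the oldest job (FIFO), or NULL if the queue is empty */
static struct job *queue_pop(void)
{
	struct job *j;

	pthread_mutex_lock(&lock);
	j = head;
	if (j) {
		head = j->next;
		if (!head)
			tail = NULL;
	}
	pthread_mutex_unlock(&lock);
	return j;
}
```

With multiple queues, a job could sit in one queue while workers
assigned to another queue idle; one queue avoids that by construction.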

memory management

cmogstored avoids dynamic memory allocation in common cases.
Each file descriptor is mapped to a 128-byte (on 64-bit systems)
slot of memory which holds all the critical data needed for most
connections.  See fdmap.c for details.
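
The idea can be sketched as follows.  This is not the real fdmap.c
layout; the struct fields, sizes, and names (fd_slot, fd_slot_get) are
hypothetical.  The point is that looking up per-connection state is an
array index on the fd itself, with no allocation on the accept path:

```c
#include <assert.h>

/* hypothetical sketch: one fixed-size slot per file descriptor,
 * so accepting a connection needs no malloc in the common case */
#define SLOT_SIZE 128
#define MAX_FDS 1024	/* the real fdmap grows on demand */

struct fd_slot {
	int fd_type;	/* e.g. HTTP client, management client, listener */
	unsigned char data[SLOT_SIZE - sizeof(int)]; /* per-type state */
};

static struct fd_slot slots[MAX_FDS];

/* O(1) lookup: the fd is the index, no hashing or allocation */
static struct fd_slot *fd_slot_get(int fd)
{
	assert(fd >= 0 && fd < MAX_FDS);
	return &slots[fd];
}
```

Since the kernel always assigns the lowest available fd number, the
slot array stays dense and cache-friendly under load.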

Userspace socket read buffers are per-thread in the common case (rather
than per-client), as most request buffers do not need to live longer
than a single event loop step (iteration).
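
A per-thread buffer of this sort can be sketched with compiler-level
thread-local storage.  The names here (rbuf_get, RBUF_SIZE) are
hypothetical and the real implementation differs, but the shape is the
same: one buffer per worker thread, reused across all clients that
thread services within an event loop step:

```c
#include <stdlib.h>

/* hypothetical sketch: each worker thread reuses one read buffer
 * instead of allocating a buffer per client connection */
#define RBUF_SIZE 8192

/* GCC/Clang thread-local storage; one pointer per thread */
static __thread char *thread_rbuf;

static char *rbuf_get(void)
{
	/* lazily allocate once per thread; lives for the thread's
	 * lifetime, so it is never freed here */
	if (!thread_rbuf)
		thread_rbuf = malloc(RBUF_SIZE);
	return thread_rbuf;
}
```

Only clients whose request data must outlive the current event loop
step need a longer-lived, per-client copy.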

performance compromises

We still use snprintf to generate HTTP response headers, and we use
Ragel-generated code for the HTTP parser.  These choices were made for
maintainability and readability rather than performance.
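
For illustration, header generation via snprintf looks roughly like the
following.  The function name and exact header set are hypothetical,
not copied from the cmogstored source; the point is that one readable
format string replaces hand-rolled integer and string serialization:

```c
#include <stdio.h>

/* hypothetical sketch: format a minimal HTTP response header with
 * snprintf; returns the number of bytes written (or needed) */
static int http_response_header(char *buf, size_t len, long content_length)
{
	return snprintf(buf, len,
			"HTTP/1.1 200 OK\r\n"
			"Content-Length: %ld\r\n"
			"Connection: keep-alive\r\n"
			"\r\n", content_length);
}
```

snprintf costs more than writing bytes directly, but the format string
documents the response layout at a glance.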

Even with these performance compromises, cmogstored performance is
expected to be competitive in the best-case (hot cache[1]) scenarios
with other HTTP servers.

Where cmogstored (and the original Perl mogstored) shines is in avoiding
pathological head-of-line blocking when serving files from multiple
mountpoints.  See queues.txt for more on the unique queue design, which
takes advantage of multiple cores and disks.

[1] cmogstored does not do any caching on its own; it relies on the
    operating system kernel.