author	Eric Wong <normalperson@yhbt.net>	2013-08-31 00:58:47 +0000
committer	Eric Wong <normalperson@yhbt.net>	2013-08-31 01:24:32 +0000
commit	3d55af133e1da342a7eb52c3dc099daf4ed6acf6 (patch)
tree	61cbf89738c5fb6b19a2fe4e60e57a38cc762d8b
parent	2b7a572ddd9bcce063e3cd10851fd953f525fe24 (diff)
download	cmogstored-3d55af133e1da342a7eb52c3dc099daf4ed6acf6.tar.gz
We do not need to set the contended flag again until we are certain there are no free slots in the ioq, rather than whenever a client takes the last free slot. Access to the ioq is itself serialized, so the client taking the last slot can produce a false positive while another thread is merely waiting on ioq->mtx before releasing a slot back to the ioq.

This prevents throughput loss while recovering from a situation where an ioq is oversubscribed. The problem is reproducible under heavy load by temporarily switching to "SERVER aio_threads = 1" and then raising aio_threads back to a high value.
-rw-r--r--	ioq.c	15
1 file changed, 2 insertions(+), 13 deletions(-)
@@ -91,13 +91,7 @@ bool mog_ioq_ready(struct mog_ioq *ioq, struct mog_fd *mfd)
 	good = ioq->cur > 0;
 	if (good) {
-		/*
-		 * assume the worst when we are the last one to
-		 * acquire a free slot
-		 */
-		if (--ioq->cur == 0)
-			ioq_set_contended(ioq);
-
+		--ioq->cur;
 		mog_ioq_current = ioq;
 	} else {
 		TRACE(CMOGSTORED_IOQ_BLOCKED(mfd->fd));
@@ -131,13 +125,8 @@ void mog_ioq_next(struct mog_ioq *check_ioq)
 	if (mog_ioq_current->cur <= mog_ioq_current->max) {
 		/* wake up any waiters */
 		mfd = SIMPLEQ_FIRST(&mog_ioq_current->ioq_head);
-		if (mfd) {
+		if (mfd)
 			SIMPLEQ_REMOVE_HEAD(&mog_ioq_current->ioq_head, ioqent);
-
-			/* if there's another head, we're still contended */
-			if (SIMPLEQ_FIRST(&mog_ioq_current->ioq_head))
-				ioq_set_contended(mog_ioq_current);
-		}
 	} else {
 		/* mog_ioq_adjust was called and lowered our capacity */
 		mog_ioq_current->cur--;