Thread pools tend to only offer a sparse interface: pass a closure or a function and its arguments to the pool, and that function will be called, eventually.^{1} Functions can do anything, so this interface should offer all the expressive power one could need. Experience tells me otherwise.
The standard pool interface is so impoverished that it is nearly impossible to use correctly in complex programs, and it leads us down design dead ends. I would actually argue it’s better to work with raw threads than with generic amorphous thread pools: the former force us to stop and think about resource requirements (and let the OS’s real scheduler help us along), instead of making us pretend we only care about CPU usage. I claim thread pools aren’t scalable because, with the exception of CPU time, they actively hinder the development of programs that achieve high resource utilisation.
This post comes in two parts. First, the story of a simple program that’s parallelised with a thread pool, then hits a wall as a wider set of resources becomes scarce. Second, a solution I like for that kind of program: an explicit state machine, where each state gets a dedicated queue that is aware of the state’s resource requirements.
We start with a simple program that processes independent work units: a serial loop that pulls in work (e.g., files in a directory) or waits for requests on a socket, one work unit at a time.
At some point, there’s enough work to think about parallelisation, and we choose threads.^{2} To keep things simple, we spawn one thread per work unit. Load increases further, and we observe that we spend more time switching between threads or contending on shared data than doing actual work. We could use a semaphore to limit the number of work units we process concurrently, but we might as well push work units to a thread pool and recycle threads instead of wasting resources on a thread-per-request model. We can even start thinking about queueing disciplines, admission control, backpressure, etc. Experienced developers will often jump directly to this stage, right after the serial loop.
The 80s saw a lot of research on generalising this “flat” parallelism model to nested parallelism, where work units can spawn additional requests and wait for the results (e.g., to recursively explore sub-branches of a search tree). Nested parallelism seems like a good fit for contemporary network services: we often respond to a request by sending simpler requests downstream, before merging and munging the responses and sending the result back to the original requestor. That may be why futures and promises are so popular these days.
I believe that, for most programs, the futures model is an excellent answer to the wrong question. The moment we perform I/O (be it network, disk, or even with hardware accelerators) in order to generate a result, running at scale will have to mean controlling more resources than just CPU, and both the futures and the generic thread pool models fall short.
The issue is that futures only work well when a waiter can help along the value it needs, with task stealing, while thread pools implement a trivial scheduler (dedicate a thread to a function until that function returns) that must be oblivious to resource requirements, since it handles opaque functions.
Once we have futures that might be blocked on I/O, we can’t guarantee a waiter will achieve anything by lending CPU time to its children. We could help sibling tasks, but that way stack overflows lie.
The deficiency of flat generic thread pools is more subtle. Obviously, one doesn’t want to take a tight thread pool, with one thread per core, and waste it on synchronous I/O. We’ll simply kick off I/O asynchronously, and re-enqueue the continuation on the pool upon completion!
Instead of doing
A, I/O, B
in one function, we’ll split the work in two functions and a callback
A, initiate asynchronous I/O
On I/O completion: enqueue B in thread pool
B
The problem here is that it’s easy to create too many asynchronous requests, and run out of memory, DOS the target, or delay the rest of the computation for too long. As soon as the I/O request has been initiated in A, the function returns to the thread pool, which will just execute more instances of A and initiate even more I/O.
At first, when the program doesn’t heavily utilise any resource in particular, there’s an easy solution: limit the total number of in-flight work units with a semaphore. Note that I wrote work units, not function calls. We want to track logical requests that we started processing, but for which there is still work to do (e.g., the response hasn’t been sent back yet).
I’ve seen two ways to cap in-flight work units. One’s buggy, the other doesn’t generalise.
The buggy implementation acquires a semaphore in the first stage of request handling (A) and releases it in the last stage (B). The bug is that, by the time we’re executing A, we’re already using up a slot in the thread pool, so we might be preventing Bs from executing. We have a lock ordering problem: A acquires a thread pool slot before acquiring the in-flight semaphore, but B needs to acquire a slot before releasing the same semaphore. If you’ve seen code that deadlocks when the thread pool is too small, this was probably part of the problem.
The correct implementation acquires the semaphore before enqueueing a new work unit, before shipping a call to A to the thread pool (and releases it at the end of processing, in B). This only works because the semaphore is the first resource each work unit acquires. As our code becomes more efficient, we’ll want to track the utilisation of multiple resources more finely, and pre-acquisition won’t suffice. For example, we might want to limit network requests going to individual hosts, independently from disk reads or writes, or from database transactions.
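The pre-acquisition pattern is short enough to sketch in C. The pool_submit and initiate_async_io helpers below are hypothetical stand-ins for whatever thread pool and asynchronous I/O interface the program actually uses; only the placement of sem_wait and sem_post matters.

#include <semaphore.h>

#define MAX_IN_FLIGHT 128

static sem_t in_flight;                    /* initialised to MAX_IN_FLIGHT */

struct work_unit {
        int fd;                            /* request state lives here */
};

extern void pool_submit(void (*fn)(void *), void *arg);  /* hypothetical */
extern void initiate_async_io(struct work_unit *);       /* hypothetical */

static void stage_b(void *arg)             /* last stage: send the response */
{
        struct work_unit *unit = arg;

        /* ... write the response back ... */
        (void)unit;
        sem_post(&in_flight);              /* the work unit is fully retired */
}

static void stage_a(void *arg)             /* first stage, runs in the pool */
{
        struct work_unit *unit = arg;

        /* ... CPU-bound prologue ... */
        initiate_async_io(unit);           /* completion enqueues stage_b */
}

void accept_work_unit(struct work_unit *unit)
{
        /*
         * Acquire the in-flight slot before the work unit ever touches
         * the thread pool: a full pool can then never block releases in B.
         */
        sem_wait(&in_flight);
        pool_submit(stage_a, unit);
}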
The core issue with thread pools is that the only thing they can do is run opaque functions in a dedicated thread, so the only way to reserve resources is to already be running in a dedicated thread. However, the one resource that every function needs is a thread on which to run, thus any correct lock order must acquire the thread last.
We care about reserving resources because, as our code becomes more efficient and scales up, it will start saturating resources that used to be virtually infinite. Unfortunately, classical thread pools can only control CPU usage, and actively hinder correct resource throttling. If we can’t guarantee we won’t overwhelm the supply of a given resource (e.g., read IOPS), we must accept wasteful overprovisioning.
Once the problem has been identified, the solution becomes obvious: make sure the work we push to thread pools describes the resources to acquire before running the code in a dedicated thread.
My favourite approach assigns one global thread pool (queue) to each function or processing step. The arguments to the functions will change, but the code is always the same, so the resource requirements are also well understood. This does mean that we incur complexity to decide how many threads or cores each pool is allowed to use. However, I find that the resulting programs are easier to understand at a high level: it’s much easier to write code that traverses and describes the work waiting at different stages when each stage has a dedicated thread pool queue. They’re also easier to model as queueing systems, which helps answer “what if?” questions without actually implementing the hypothesis.
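Concretely, the topology can be as simple as a table of stage descriptors. The sketch below is purely illustrative, with an assumed work_queue type and made-up stage names; the point is that each stage carries its own queue, resource handle, and CPU budget, so monitoring code can simply walk the table.

#include <semaphore.h>

struct work_queue;                         /* assumed bounded MPMC queue */

enum stage { STAGE_PARSE, STAGE_FETCH, STAGE_RESPOND, STAGE_COUNT };

struct stage_desc {
        const char *name;
        struct work_queue *pending;        /* work waiting at this stage */
        sem_t *resource;                   /* e.g., per-host request budget */
        unsigned max_workers;              /* CPU share for this stage */
};

/* A monitoring or load-shedding thread can walk this table and report
 * queue depths per stage. */
extern struct stage_desc stages[STAGE_COUNT];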
In increasing order of annoyingness, I’d divide resources to acquire in four classes.
We don’t really have to think about the first class of resources, at least when it comes to correctness. However, repeatedly running the same code on a given core tends to improve performance, compared to running all sorts of code on all cores.
The second class of resources may be acquired once our code is running in a thread pool, so one could pretend it doesn’t exist. However, it is more efficient to batch acquisition, and execute a bunch of calls that all need a given resource (e.g., a DB connection from a connection pool) before releasing it, instead of repetitively acquiring and releasing the same resource in back-to-back function calls, or blocking multiple workers on the same bottleneck.^{4} More importantly, the property of always being acquired and released in the same function invocation is a global one: as soon as even one piece of code acquires a given resource and releases it in another thread pool call (e.g., acquires a DB connection, initiates an asynchronous network call, writes the result of the call to the DB, and releases the connection), we must always treat that resource as being in the third, more annoying, class. Having explicit stages with fixed resource requirements helps us confirm resources are classified correctly.
The third class of resources must be acquired in a way that preserves forward progress in the rest of the system. In particular, we must never have all workers waiting for resources of this third class. In most cases, it suffices to make sure there are at least as many workers as there are queues or stages, and to only let each stage run the initial resource acquisition code in one worker at a time. However, it can pay off to be smart when different queued items require different resources, instead of always trying to satisfy resource requirements in FIFO order.
The fourth class of resources is essentially heap memory. Memory is special because the only way to release it is often to complete the computation. However, moving the computation forward will use even more heap. In general, my only solution is to impose a hard cap on the total number of in-flight work units, and to make sure it’s easy to tweak that limit at runtime, in disaster scenarios. If we still run close to the memory capacity with that limit, the code can either crash (and perhaps restart with a lower in-flight cap), or try to cancel work that’s already in progress. Neither option is very appealing.
There are some easier cases. For example, I find that temporary bumps in heap usage can be caused by parsing large responses to idempotent (GET) requests. It would be nice if networking subsystems tracked memory usage to dynamically throttle requests, or even cancel and retry idempotent ones.
Once we’ve done the work of explicitly writing out the processing steps in our program as well as their individual resource requirements, it makes sense to let that topology drive the structure of the code.
Over time, we’ll gain more confidence in that topology and bake it in our program to improve performance. For example, rather than limiting the number of in-flight requests with a semaphore, we can have a fixed-size allocation pool of request objects. We can also selectively use bounded ring buffers once we know we wish to impose a limit on queue size. Similarly, when a sequence (or subgraph) of processing steps is fully synchronous or retires in order, we can control both the queue size and the number of in-flight work units with a disruptor, which should also improve locality and throughput under load. These transformations are easy to apply once we know what the movement of data and resources looks like. However, they also ossify the structure of the program, so I only think about such improvements if they provide a system property I know I need (e.g., a limit on the number of in-flight requests), or once the code is functional and we have load-testing data.
Complex programs are often best understood as state machines. These state machines can be implicit, or explicit. I prefer the latter. I claim that it’s also preferable to have one thread pool^{5} per explicit state than to dump all sorts of state transition logic in a shared pool. If writing functions that process flat tables is data-oriented programming, I suppose I’m arguing for data-oriented state machines.
Convenience wrappers, like parallel map, or “run after this time,” still rely on the flexibility of opaque functions.↩
Maybe we decided to use threads because there’s a lot of shared, read-mostly, data on the heap. It doesn’t really matter, process pools have similar problems.↩
Up to a point, of course. No model is perfect, etc. etc.↩
Explicit resource requirements combined with one queue per stage lets us steal ideas from SEDA.↩
One thread pool per state in the sense that no state can fully starve out another of CPU time. The concrete implementation may definitely let a shared set of workers pull from all the queues.↩
This abuse of interrupts is complementary to Bounded TSO. Bounded TSO measures the hardware limit on the number of store instructions that may concurrently be in flight (and combines that with the knowledge that instructions are retired in order) to guarantee liveness without explicit barriers, with no overhead, and usually marginal latency. However, without worst-case execution time information, it’s hard to map instruction counts to real time. Tracking interrupts lets us determine when enough real time has elapsed that earlier writes have definitely retired, albeit after a more conservative delay than Bounded TSO’s typical case.
I reached this position after working on two lock-free synchronisation primitives—event counts, and asymmetric flag flips as used in hazard pointers and epoch reclamation—that are similar in that a slow path waits for a sign of life from a fast path, but differ in the way they handle “stuck” fast paths. I’ll cover the event count and flag flip implementations that I came to on Linux/x86[64], which both rely on interrupts for ordering. Hopefully that will convince you too that preemption is a useful source of prepaid barriers for lock-free code in userspace.
I’m writing this for readers who are already familiar with lock-free programming, safe memory reclamation techniques in particular, and have some experience reasoning with formal memory models. For more references, Samy’s overview in the ACM Queue is a good resource. I already committed the code for event counts in Concurrency Kit, and for interrupt-based reverse barriers in my barrierd project.
An event count is essentially a version counter that lets threads wait until the current version differs from an arbitrary prior version. A trivial “wait” implementation could spin on the version counter. However, the value of event counts is that they let lock-free code integrate with OS-level blocking: waiters can grab the event count’s current version v0, do what they want with the versioned data, and wait for new data by sleeping, rather than burning cycles, until the event count’s version differs from v0. The event count is a common synchronisation primitive that is often reinvented and goes by many names (e.g., blockpoints); what matters is that writers can update the version counter, and waiters can read the version, run arbitrary code, then efficiently wait while the version counter is still equal to that previous version.
The explicit version counter solves the lost wakeup issue associated with misused condition variables, as in the pseudocode below.
bad condition waiter:
  while True:
    atomically read data
    if need to wait:
      WaitOnConditionVariable(cv)
    else:
      break
In order to work correctly, condition variables require waiters to acquire a mutex that protects both data and the condition variable, before checking that the wait condition still holds and then waiting on the condition variable.
good condition waiter:
  while True:
    with(mutex):
      read data
      if need to wait:
        WaitOnConditionVariable(cv, mutex)
      else:
        break
Waiters must prevent writers from making changes to the data, otherwise the data change (and associated condition variable wakeup) could occur between checking the wait condition, and starting to wait on the condition variable. The waiter would then have missed a wakeup and could end up sleeping forever, waiting for something that has already happened.
good condition waker:
  with(mutex):
    update data
    SignalConditionVariable(cv)
The six diagrams below show the possible interleavings between the signaler (writer) making changes to the data and waking waiters, and a waiter observing the data and entering the queue to wait for changes. The two leftmost diagrams don’t interleave anything; these are the only scenarios allowed by correct locking. The remaining four actually interleave the waiter and signaler, and show that, while three are accidentally correct (lucky), there is one case, WSSW, where the waiter misses its wakeup.
If any waiter can prevent writers from making progress, we don’t have a lock-free protocol. Event counts let waiters detect when they would have been woken up (the event count’s version counter has changed), and thus patch up this window where waiters can miss wakeups for data changes they have yet to observe. Crucially, waiters detect lost wakeups, rather than preventing them by locking writers out. Event counts thus preserve lock-freedom (and even wait-freedom!).
We could, for example, use an event count in a lock-free ring buffer: rather than making consumers spin on the write pointer, the write pointer could be encoded in an event count, and consumers would then efficiently block on that, without burning CPU cycles to wait for new messages.
The challenging part about implementing event counts isn’t making sure to wake up sleepers, but to only do so when there are sleepers to wake. For some use cases, we don’t need to do any active wakeup, because exponential backoff is good enough: if version updates signal the arrival of a response in a request/response communication pattern, exponential backoff, e.g., with a 1.1x backoff factor, could bound the increase in response latency caused by the blind sleep during backoff, e.g., to 10%.
Unfortunately, that’s not always applicable. In general, we can’t assume that signals correspond to responses for prior requests, and we must support the case where progress is usually fast enough that waiters only spin for a short while before grabbing more work. The latter expectation means we can’t “just” unconditionally execute a syscall to wake up sleepers whenever we increment the version counter: that would be too slow. This problem isn’t new, and has a solution similar to the one deployed in adaptive spin locks.
The solution pattern for adaptive locks relies on tight integration with an OS primitive, e.g., futexes. The control word, the machine word on which waiters spin, encodes its usual data (in our case, a version counter), as well as a new flag to denote that there are sleepers waiting to be woken up with an OS syscall. Every write to the control word uses atomic read-modify-write instructions, and before sleeping, waiters ensure the “sleepers are present” flag is set, then make a syscall to sleep only if the control word is still what they expect, with the sleepers flag set.
OpenBSD’s compatibility shim for Linux’s futexes is about as simple an implementation of the futex calls as it gets. The OS code for futex wake and wait is identical to what userspace would do with mutexes and condition variables (waitqueues). Waiters lock out wakers for the futex word or a coarser superset, check that the futex word’s value is as expected, and enter the futex’s waitqueue. Wakers acquire the futex word for writes, and wake up the waitqueue. The difference is that all of this happens in the kernel, which, unlike userspace, can force the scheduler to be helpful. Futex code can run in the kernel because, unlike arbitrary mutex/condition variable pairs, the protected data is always a single machine integer, and the wait condition an equality test. This setup is simple enough to fully implement in the kernel, yet general enough to be useful.
OS-assisted conditional blocking is straightforward enough to adapt to event counts. The control word is the event count’s version counter, with one bit stolen for the “sleepers are present” flag (sleepers flag).
Incrementing the version counter can use a regular atomic increment; we only need to make sure we can tell whether the sleepers flag might have been set before the increment. If the sleepers flag was set, we clear it (with an atomic bit reset), and wake up any OS thread blocked on the control word.
increment event count:
  old <- fetch_and_add(event_count.counter, 2)  # flag is in the low bit
  if (old & 1):
    atomic_and(event_count.counter, -2)
    signal waiters on event_count.counter
Waiters can spin for a while, waiting for the version counter to change. At some point, a waiter determines that it’s time to stop wasting CPU time. The waiter then sets the sleepers flag with a compare-and-swap: the CAS (compare-and-swap) can only fail because the counter’s value has changed or because the flag is already set. In the former failure case, it’s finally time to stop waiting. In the latter failure case, or if the CAS succeeded, the flag is now set. The waiter can then make a syscall to block on the control word, but only if the control word still has the sleepers flag set and contains the same expected (old) version counter.
wait until event count differs from prev:
  repeat k times:
    if (event_count.counter / 2) != prev:  # flag is in low bit.
      return
  compare_and_swap(event_count.counter, prev * 2, prev * 2 + 1)
  if cas_failed and cas_old_value != (prev * 2 + 1):
    return
  repeat k times:
    if (event_count.counter / 2) != prev:
      return
  sleep_if(event_count.counter == prev * 2 + 1)
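For readers who prefer real syntax, here is a minimal C11 sketch of the same adaptive scheme over a Linux futex word. The ec_read/ec_inc/ec_wait names and the spin count are mine, not the Concurrency Kit interface.

#include <limits.h>
#include <linux/futex.h>
#include <stdatomic.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

struct event_count {
        _Atomic uint32_t counter;  /* (version << 1) | sleepers flag */
};

static void futex_wait(_Atomic uint32_t *word, uint32_t expected)
{
        syscall(SYS_futex, word, FUTEX_WAIT_PRIVATE, expected, NULL, NULL, 0);
}

static void futex_wake_all(_Atomic uint32_t *word)
{
        syscall(SYS_futex, word, FUTEX_WAKE_PRIVATE, INT_MAX, NULL, NULL, 0);
}

/* Read the current version; pass the result to ec_wait later. */
static uint32_t ec_read(struct event_count *ec)
{
        return atomic_load(&ec->counter) >> 1;
}

static void ec_inc(struct event_count *ec)
{
        uint32_t old = atomic_fetch_add(&ec->counter, 2);

        if (old & 1) {  /* sleepers present: clear the flag, wake them all */
                atomic_fetch_and(&ec->counter, ~(uint32_t)1);
                futex_wake_all(&ec->counter);
        }
}

static void ec_wait(struct event_count *ec, uint32_t prev)
{
        for (int i = 0; i < 128; i++)  /* spin phase */
                if ((atomic_load(&ec->counter) >> 1) != prev)
                        return;

        uint32_t expected = prev << 1;

        /* Set the sleepers flag; a CAS failure with any other value means
         * the version has already moved on. */
        if (!atomic_compare_exchange_strong(&ec->counter, &expected,
                                            (prev << 1) | 1) &&
            expected != ((prev << 1) | 1))
                return;

        while ((atomic_load(&ec->counter) >> 1) == prev)
                futex_wait(&ec->counter, (prev << 1) | 1);
}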
This scheme works, and offers decent performance. In fact, it’s good enough for Facebook’s Folly. I certainly don’t see how we can improve on that if there are concurrent writers (incrementing threads). However, if we go back to the ring buffer example, there is often only one writer per ring. Enqueueing an item in a single-producer ring buffer incurs no atomic, only a release store: the write pointer increment only has to be visible after the data write, which is always the case under the TSO memory model (including x86). Replacing the write pointer in a single-producer ring buffer with an event count where each increment incurs an atomic operation is far from a no-brainer. Can we do better, when there is only one incrementer?
On x86 (or any of the zero other architectures with non-atomic read-modify-write instructions and TSO), we can… but we must accept some weirdness.
The operation that must really be fast is incrementing the event counter, especially when the sleepers flag is not set. Setting the sleepers flag, on the other hand, may be slower and use atomic instructions, since it only happens when the executing thread is waiting for fresh data.
I suggest that we perform the former, the increment on the fast path, with a non-atomic read-modify-write instruction, either inc mem or xadd mem, reg. If the sleepers flag is in the sign bit, we can detect it (modulo a false positive on wrap-around) in the condition codes computed by inc; otherwise, we must use xadd (fetch-and-add) and look at the flag bit in the fetched value.
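As a concrete illustration, the un-LOCKed fast-path increment might look like the following GCC-style inline assembly sketch (the function name is mine); without the LOCK prefix, xadd is a plain read-modify-write, which is only safe because there is a single incrementer.

#include <stdint.h>

static uint64_t relaxed_fetch_and_add(uint64_t *counter, uint64_t delta)
{
        uint64_t old = delta;

        /* xadd without LOCK: loads *counter, stores the sum back, and
         * leaves the previous value in the register operand. */
        __asm__ volatile("xaddq %0, %1"
                         : "+r"(old), "+m"(*counter)
                         :
                         : "cc");
        return old;
}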
The usual ordering-based arguments are no help in this kind of asymmetric synchronisation pattern. Instead, we must go directly to the x86-TSO memory model. All atomic (LOCK-prefixed) instructions conceptually flush the executing core’s store buffer, grab an exclusive lock on memory, and perform the read-modify-write operation with that lock held. Thus, manipulating the sleepers flag can’t lose updates that are already visible in memory, or on their way from the store buffer. The RMW increment will also always see the latest version update (either in global memory, or in the only incrementer’s store buffer), so it won’t lose version updates either. Finally, scheduling and thread migration must always guarantee that the incrementer thread sees its own writes, so that won’t lose version updates.
increment event count without atomics in the common case:
  old <- non_atomic_fetch_and_add(event_count.counter, 2)
  if (old & 1):
    atomic_and(event_count.counter, -2)
    signal waiters on event_count.counter
The only thing that might be silently overwritten is the sleepers flag: a waiter might set that flag in memory just after the increment’s load from memory, or while the increment reads a value with the flag unset from the local store buffer. The question is then how long waiters must spin before either observing an increment, or knowing that the flag flip will be observed by the next increment. That question can’t be answered with the memory model, and worst-case execution time bounds are a joke on contemporary x86.
I found an answer by remembering that IRET, the instruction used to return from interrupt handlers, is a full barrier.^{1} We also know that interrupts happen at frequent and regular intervals, if only for the preemption timer (every 4-10 ms on stock Linux/x86oid).
Regardless of the bound on store visibility, a waiter can flip the sleepers-are-present flag, spin on the control word for a while, and then start sleeping for short amounts of time (e.g., a millisecond or two at first, then 10 ms, etc.): the spin time is long enough in the vast majority of cases, but could still, very rarely, be too short.
At some point, we’d like to know for sure that, since we have yet to observe a silent overwrite of the sleepers flag or any activity on the counter, the flag will always be observed and it is now safe to sleep forever. Again, I don’t think x86 offers any strict bound on this sort of thing. However, one second seems reasonable. Even if a core could stall for that long, interrupts fire on every core several times a second, and returning from interrupt handlers acts as a full barrier. No write can remain in the store buffer across interrupts, interrupts that occur at least once per second. It seems safe to assume that, once no activity has been observed on the event count for one second, the sleepers flag will be visible to the next increment.
That assumption is only safe if interrupts do fire at regular intervals. Some latency sensitive systems dedicate cores to specific userspace threads, and move all interrupt processing and preemption away from those cores. A correctly isolated core running Linux in tickless mode, with a single runnable process, might not process interrupts frequently enough. However, this kind of configuration does not happen by accident. I expect that even a halfsecond stall in such a system would be treated as a system error, and hopefully trigger a watchdog. When we can’t count on interrupts to get us barriers for free, we can instead rely on practical performance requirements to enforce a hard bound on execution time.
Either way, waiters set the sleepers flag, but can’t rely on it being observed until, very conservatively, one second later. Until that time has passed, waiters spin on the control word, then block for short, but growing, amounts of time. Finally, if the control word (event count version and sleepers flag) has not changed in one second, we assume the incrementer has no write in flight, and will observe the sleepers flag; it is safe to block on the control word forever.
wait until event count differs from prev:
  repeat k times:
    if (event_count.counter / 2) != prev:
      return
  compare_and_swap(event_count.counter, 2 * prev, 2 * prev + 1)
  if cas_failed and cas_old_value != 2 * prev + 1:
    return
  repeat k times:
    if event_count.counter != 2 * prev + 1:
      return
  repeat for 1 second:
    sleep_if_until(event_count.counter == 2 * prev + 1,
                   $exponential_backoff)
    if event_count.counter != 2 * prev + 1:
      return
  sleep_if(event_count.counter == 2 * prev + 1)
That’s the solution I implemented in this pull request for SPMC and MPMC event counts in Concurrency Kit. The MP (multiple producer) implementation is the regular adaptive logic, and matches Folly’s strategy. It needs about 30 cycles for an uncontended increment with no waiter, and waking up sleepers adds another 700 cycles on my E5-46xx (Linux 4.16). The single-producer implementation is identical for the slow path, but only takes ~8 cycles per increment with no waiter, and, eschewing atomic instructions, does not flush the pipeline (i.e., the out-of-order execution engine is free to maximise throughput). The additional overhead for an increment without a waiter, compared to a regular ring buffer pointer update, is 3-4 cycles for a single predictable conditional branch or fused test-and-branch, and the RMW’s load instead of a regular add/store. That’s closer to zero overhead, which makes it much easier for coders to offer OS-assisted blocking in their lock-free algorithms, without agonising over the penalty when no one needs to block.
Hazard pointers and epoch reclamation are two different memory reclamation techniques in which the fundamental complexity stems from nearly identical synchronisation requirements: rarely, a cold code path (which is allowed to be very slow) writes to memory, and must know when another, much hotter, code path is guaranteed to observe the slow path’s last write.
For hazard pointers, the cold code path waits until, having overwritten an object’s last persistent reference in memory, it is safe to destroy the pointee. The hot path is the reader:
1. read pointer value *(T **)x.
2. write pointer value to hazard pointer table
3. check that pointer value *(T **)x has not changed
Similarly, for epoch reclamation, a readside section will grab the current epoch value, mark itself as reading in that epoch, then confirm that the epoch hasn’t become stale.
1. $epoch <- current epoch
2. publish self as entering a readside section under $epoch
3. check that $epoch is still current, otherwise retry
Under a sequentially consistent (SC) memory model, the two sequences are valid with regular (atomic) loads and stores. The slow path can always make its write, then scan every other thread’s singlewriter data to see if any thread has published something that proves it executed step 2 before the slow path’s store (i.e., by publishing the old pointer or epoch value).
The diagrams below show all possible interleavings. In all cases, once there is no evidence that a thread has failed to observe the slow path’s new write, we can correctly assume that all threads will observe the write. I simplified the diagrams by not interleaving the first read in step 1: its role is to provide a guess for the value that will be re-read in step 3, so, at least with respect to correctness, that initial read might as well be generating random values. I also kept the second “scan” step in the slow path abstract. In practice, it’s a non-snapshot read of all the epoch or hazard pointer tables for threads that execute the fast path: the slow path can assume an epoch or pointer will not be resurrected once the epoch or pointer is absent from the scan.
No one implements SC in hardware. X86 and SPARC offer the strongest practical memory model, Total Store Ordering, and that’s still not enough to correctly execute the read-side critical sections above without special annotations. Under TSO, reads (e.g., step 3) are allowed to execute before earlier writes (e.g., step 2). X86-TSO models that as a buffer in which stores may be delayed, and that’s what the scenarios below show, with steps 2 and 3 of the fast path reversed (the slow path can always be instrumented to recover sequential order; it’s meant to be slow). The TSO interleavings only differ from the SC ones when the fast path’s steps 2 and 3 are separated by the slow path’s steps: when the two steps are adjacent, their order relative to the slow path’s steps is unaffected by TSO’s delayed stores. TSO is so strong that we only have to fix one case, FSSF, where the slow path executes in the middle of the fast path, with the reversal of store and load order allowed by TSO.
Simple implementations plug this hole with a store-load barrier between the second and third steps, or implement the store with an atomic read-modify-write instruction that doubles as a barrier. Both modifications are safe and recover SC semantics, but incur a non-negligible overhead (the barrier forces the out-of-order execution engine to flush before accepting more work) which is only necessary a minority of the time.
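Here is what that simple, always-fenced fast path looks like in C11 for the hazard pointer case; this is a sketch with assumed single-slot per-thread hazard pointers, and the explicit seq_cst fence between steps 2 and 3 is the store-load barrier in question.

#include <stdatomic.h>

struct node;

extern _Atomic(struct node *) shared;                 /* the *(T **)x above */
extern _Thread_local _Atomic(struct node *) hazard;   /* this thread's slot */

static struct node *acquire_hazard(void)
{
        struct node *candidate = atomic_load(&shared);        /* step 1 */

        for (;;) {
                atomic_store(&hazard, candidate);              /* step 2 */
                atomic_thread_fence(memory_order_seq_cst);     /* store-load barrier */

                struct node *check = atomic_load(&shared);     /* step 3 */

                if (check == candidate)
                        return candidate;   /* protected until the slot is cleared */
                candidate = check;          /* retry with the newer pointer */
        }
}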
The pattern here is similar to the event count, where the slow path signals the fast path that the latter should do something different. However, where the slow path for event counts wants to wait forever if the fast path never makes progress, hazard pointer and epoch reclamation must detect that case and ignore sleeping threads (that are not in the middle of a read-side SMR critical section).
In this kind of asymmetric synchronisation pattern, we wish to move as much of the overhead as possible to the slow (cold) path. Linux 4.3 gained the membarrier syscall for exactly this use case. The slow path can execute its write(s) before making a membarrier syscall. Once the syscall returns, any fast path write that has yet to be visible (hasn’t retired yet), along with every subsequent instruction in program order, started in a state where the slow path’s writes were visible. As the next diagram shows, this global barrier lets us rule out the one anomalous execution possible under TSO, without adding any special barrier to the fast path.
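A hedged sketch of that slow path, for the hazard pointer case, calling membarrier through syscall(2); the scan_hazard_pointers, defer_free, and free_node helpers are hypothetical.

#define _GNU_SOURCE
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

struct node;
extern int scan_hazard_pointers(struct node *);   /* hypothetical */
extern void free_node(struct node *);             /* hypothetical */
extern void defer_free(struct node *);            /* hypothetical */

static int sys_membarrier(int cmd, int flags)
{
        return syscall(__NR_membarrier, cmd, flags);
}

void retire(struct node **slot, struct node *replacement)
{
        struct node *old = *slot;

        *slot = replacement;   /* unpublish the old pointer */
        /*
         * Global barrier (the slow, unexpedited flavour): after this
         * returns, every thread either sees the new pointer, or has yet
         * to execute its re-check (step 3) and will see it then.
         */
        sys_membarrier(MEMBARRIER_CMD_SHARED, 0);
        if (scan_hazard_pointers(old))
                defer_free(old);   /* some reader still holds it */
        else
                free_node(old);
}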
The problem with membarrier is that it comes in two flavours: slow, or not scalable. The initial, unexpedited, version waits for kernel RCU to run its callback, which, on my machine, takes anywhere between 25 and 50 milliseconds. The reason it’s so slow is that the conditions for an RCU grace period to elapse are more demanding than a global barrier, and may even require multiple such barriers. For example, if we used the same scheme to nest epoch reclamation ten deep, the outermost reclaimer would be 1024 times slower than the innermost one. In reaction to this slowness, potential users of membarrier went back to triggering IPIs, e.g., by mprotect-ing a dummy page.

mprotect isn’t guaranteed to act as a barrier, and does not do so on AArch64, so Linux 4.16 added an “expedited” mode to membarrier. In that expedited mode, each membarrier syscall sends an IPI to every other core… when I look at machines with hundreds of cores, \(n - 1\) IPIs per core, a couple of times per second on each of \(n\) cores, start to sound like a bad idea.
Let’s go back to the observation we made for event count: any interrupt acts as a barrier for us, in that any instruction that retires after the interrupt must observe writes made before the interrupt. Once the hazard pointer slow path has overwritten a pointer, or the epoch slow path advanced the current epoch, we can simply look at the current time, and wait until an interrupt has been handled at a later time on all cores. The slow path can then scan all the fast path state for evidence that they are still using the overwritten pointer or the previous epoch: any fast path that has not published that fact before the interrupt will eventually execute the second and third steps after the interrupt, and that last step will notice the slow path’s update.
There’s a lot of information in /proc that lets us conservatively determine when a new interrupt has been handled on every core. However, it’s either too granular (/proc/stat) or extremely slow to generate (/proc/schedstat). More importantly, even with ftrace, we can’t easily ask to be woken up when something interesting happens, and are forced to poll files for updates (never mind the weirdly hard to productionalise kernel interface).
What we need is a way to read, for each core, the last time it was definitely processing an interrupt. Ideally, we could also block and let the OS wake up our waiter on changes to the oldest “last interrupt” timestamp, across all cores. On x86, that’s enough to get us the asymmetric barriers we need for hazard pointers and epoch reclamation, even if only IRET is serialising, and not interrupt handler entry. Once a core’s update to its “last interrupt” timestamp is visible, any write prior to the update, and thus any write prior to the interrupt, is also globally visible: we can only observe the timestamp update from a different core than the updater, in which case TSO saves us, or after the handler has returned with a serialising IRET.
We can bundle all that logic in a short eBPF program.^{2} The program has a map of thread-local arrays (of one CLOCK_MONOTONIC timestamp each), a map of perf event queues (one per CPU), and an array of one “watermark” timestamp. Whenever the program runs, it gets the current time. That time will go in the thread-local array of interrupt timestamps. Before storing a new value in that array, the program first reads the previous interrupt time: if that time is less than or equal to the watermark, we should wake up userspace by enqueueing an event in perf. The enqueueing is conditional because perf has more overhead than a thread-local array, and because we want to minimise spurious wakeups. A high signal-to-noise ratio lets userspace set up the read end of the perf queue to wake up on every event and thus minimise update latency.
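The real program is hand-written raw eBPF (the footnotes explain why). Purely as an illustration of the logic, it might look roughly like the restricted-C sketch below; the map names, the choice of tracepoint, and the map types are my own guesses, and header and map-definition conventions vary across libbpf versions.

#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

/* Legacy map-definition style, contemporary with the post. */
struct bpf_map_def SEC("maps") last_interrupt = {
        .type = BPF_MAP_TYPE_PERCPU_ARRAY,      /* one timestamp per CPU */
        .key_size = sizeof(__u32),
        .value_size = sizeof(__u64),
        .max_entries = 1,
};

struct bpf_map_def SEC("maps") watermark = {
        .type = BPF_MAP_TYPE_ARRAY,             /* written by the daemon */
        .key_size = sizeof(__u32),
        .value_size = sizeof(__u64),
        .max_entries = 1,
};

struct bpf_map_def SEC("maps") events = {
        .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,  /* one queue per CPU */
        .key_size = sizeof(__u32),
        .value_size = sizeof(__u32),
        .max_entries = 128,
};

SEC("tracepoint/irq/softirq_entry")             /* any interrupt-driven hook */
int track_interrupt(void *ctx)
{
        __u32 zero = 0;
        __u64 now = bpf_ktime_get_ns();
        __u64 *prev = bpf_map_lookup_elem(&last_interrupt, &zero);
        __u64 *mark = bpf_map_lookup_elem(&watermark, &zero);

        if (prev == NULL || mark == NULL)
                return 0;
        /* Only wake userspace when it might be waiting on this CPU. */
        if (*prev <= *mark)
                bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
                                      &now, sizeof(now));
        *prev = now;
        return 0;
}

char _license[] SEC("license") = "GPL";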
We now need a single global daemon to attach the eBPF program to an arbitrary set of software tracepoints triggered by interrupts (or PMU events that trigger interrupts), to hook the perf fds to epoll, and to re-read the map of interrupt timestamps whenever epoll detects a new perf event. That’s what the rest of the code handles: setting up tracepoints, attaching the eBPF program, convincing perf to wake us up, and hooking it all up to epoll. On my fully loaded 24-core E5-46xx running Linux 4.18 with security patches, the daemon uses ~1-2% (much less on 4.16) of a core to read the map of timestamps every time it’s woken up, every ~4 milliseconds. perf shows the non-JITted eBPF program itself uses ~0.1-0.2% of every core.
Amusingly enough, while eBPF offers maps that are safe for concurrent access in eBPF programs, the same maps come with no guarantee when accessed from userspace, via the syscall interface. However, the implementation uses a hand-rolled long-by-long copy loop, and, on x86-64, our data all fit in longs. I’ll hope that the kernel’s compilation flags (e.g., -ffreestanding) suffice to prevent GCC from recognising memcpy or memmove, and that we thus get atomic stores and loads on x86-64. Given the quality of eBPF documentation, I’ll bet that this implementation accident is actually part of the API. Every BPF map is single-writer (either per-CPU in the kernel, or single-threaded in userspace), so this should work.
Once the barrierd daemon is running, any program can mmap its data file to find out the last time we definitely know each core had interrupted userspace, without making any further syscall or incurring any IPI. We can also use regular synchronisation to let the daemon wake up threads waiting for interrupts as soon as the oldest interrupt timestamp is updated. Applications don’t even need to call clock_gettime to get the current time: the daemon also works in terms of a virtual time that it updates in the mmap-ed data file.
The barrierd data file also includes an array of per-CPU structs with each core’s timestamps (both from CLOCK_MONOTONIC and in virtual time). A client that knows it will only execute on a subset of CPUs, e.g., cores 2-6, can compute its own “last interrupt” timestamp by only looking at entries 2 to 6 in the array. The daemon even wakes up any futex waiter on the per-CPU values whenever they change. The convenience interface is pessimistic, and assumes that client code might run on every configured core. However, anyone can mmap the same file and implement tighter logic.
Again, there’s a snag with tickless kernels. In the default configuration already, a fully idle core might not process timer interrupts. The barrierd daemon detects when a core is falling behind, and starts looking for changes to /proc/stat. This backup path is slower and coarser grained, but always works with idle cores. More generally, the daemon might be running on a system with dedicated cores. I thought about causing interrupts by re-affining RT threads, but that seems counterproductive. Instead, I think the right approach is for users of barrierd to treat dedicated cores specially. Dedicated threads can’t (shouldn’t) be interrupted, so they can regularly increment a watchdog counter with a serialising instruction. Waiters will quickly observe a change in the counters for dedicated threads, and may use barrierd to wait for barriers on preemptively shared cores. Maybe dedicated threads should be able to hook into barrierd and check in from time to time. That would break the isolation between users of barrierd, but threads on dedicated cores are already in a privileged position.
I quickly compared the barrier latency on an unloaded 4-way E5-46xx running Linux 4.16, with a sample size of 20000 observations per method (I had to remove one outlier at 300 ms). The synchronous methods, mprotect (which abuses mprotect to send IPIs by removing and restoring permissions on a dummy page) or explicit IPI via expedited membarrier, are much faster than the others (unexpedited membarrier with kernel RCU, or barrierd, which counts interrupts). We can zoom in on the IPI-based methods, and see that an expedited membarrier (IPI) is usually slightly faster than mprotect; IPI via expedited membarrier hits a worst case of 0.041 ms, versus 0.046 ms for mprotect.
The performance of IPI-based barriers should be roughly independent of system load. However, we did observe a slowdown for expedited membarrier (between \(68.4\%\) and \(73.0\%\) of the time, \(p < 10^{-12}\) according to a binomial test^{3}) on the same 4-way system, when all CPUs were running CPU-intensive code at low priority. In this second experiment, we have a sample size of one million observations for each method, and the worst case for IPI via expedited membarrier was 0.076 ms (0.041 ms on an unloaded system), compared to a more stable 0.047 ms for mprotect.
Now for non-IPI methods: they should be slower than methods that trigger synchronous IPIs, but hopefully have lower overhead and scale better, while offering usable latencies.
On an unloaded system, the interrupts that drive barrierd are less frequent, sometimes outright absent, so unexpedited membarrier achieves faster response times. We can even observe barrierd’s fallback logic, which scans /proc/stat for evidence of idle CPUs after 10 ms of inaction: that’s the spike at 20 ms. The values for vtime show the additional slowdown we can expect if we wait on barrierd’s virtual time, rather than directly reading CLOCK_MONOTONIC. Overall, the worst-case latencies for barrierd (53.7 ms) and membarrier (39.9 ms) aren’t that different, but I should add another fallback mechanism based on membarrier to improve barrierd’s performance on lightly loaded machines.
When the same 4-way, 24-core system is under load, interrupts are fired much more frequently and reliably, so barrierd shines, but everything has a longer tail, simply because of preemption of the benchmark process. Out of the one million observations we have for each of unexpedited membarrier, barrierd, and barrierd with virtual time on this loaded system, I eliminated 54 values over 100 ms (18 for membarrier, 29 for barrierd, and 7 for virtual time). The rest is shown below. barrierd is consistently much faster than membarrier, with a geometric mean speedup of 23.8x. In fact, not only can we expect barrierd to finish before an unexpedited membarrier \(99.99\%\) of the time (\(p < 10^{-12}\) according to a binomial test), but we can even expect barrierd to be 10 times as fast \(98.3\%\) to \(98.5\%\) of the time (\(p < 10^{-12}\)). The gap is so wide that even the opportunistic virtual-time approach is faster than membarrier (geometric mean of 5.6x), but this time with a mere three 9s (as fast as membarrier \(99.91\%\) to \(99.96\%\) of the time, \(p < 10^{-12}\)).
With barrierd, we get implicit barriers with worse overhead than unexpedited membarrier (which is essentially free since it piggybacks on kernel RCU, another sunk cost), but 1/10th the latency (0-4 ms instead of 25-50 ms). In addition, interrupt tracking is per-CPU, not per-thread, so it only has to happen in a global single-threaded daemon; the rest of userspace can obtain the information it needs without causing additional system overhead. More importantly, threads don’t have to block if they use barrierd to wait for a system-wide barrier. That’s useful when, e.g., a thread pool worker is waiting for a reverse barrier before sleeping on a futex. When that worker blocks in membarrier for 25 ms or 50 ms, there’s a potential hiccup where a work unit could sit in the worker’s queue for that amount of time before it gets processed. With barrierd (or the event count described earlier), the worker can spin and wait for work units to show up until enough time has passed to sleep on the futex.
While I believe that information about interrupt times should be made available without tracepoint hacks, I don’t know if a syscall like membarrier is really preferable to a shared daemon like barrierd. The one salient downside is that barrierd slows down when some CPUs are idle; that’s something we can fix by including a membarrier fallback, or by sacrificing power consumption and forcing kernel ticks, even for idle cores.
When we write lock-free code in userspace, we always have preemption in mind. In fact, the primary reason for lock-free code in userspace is to ensure consistent latency despite potentially adversarial scheduling. We spend so much effort to make our algorithms work despite interrupts and scheduling that we can fail to see how interrupts can help us. Obviously, there’s a cost to making our code preemption-safe, but opting out of preemption isn’t an option. Much like garbage collection in managed languages, preemption is a feature we can’t turn off. Unlike GC, it’s not obvious how to make use of preemption in lock-free code, but this post shows it’s not impossible.
We can use preemption to get asymmetric barriers, nearly for free, with a daemon like barrierd. I see a duality between preemption-driven barriers and techniques like Bounded TSO: the former are relatively slow, but offer hard bounds, while the latter guarantee liveness, usually with negligible latency, but without any time bound.
I used preemption to make single-writer event counts faster (comparable to a regular non-atomic counter), and to provide a lower-latency alternative to membarrier’s asymmetric barrier. In a similar vein, SPeCK uses time bounds to ensure scalability, at the expense of a bit of latency, by enforcing periodic TLB reloads instead of relying on synchronous shootdowns. What else can we do with interrupts, timer or otherwise?
Thank you Samy, Gabe, and Hanes for discussions on an earlier draft. Thank you Ruchir for improving this final version.
The single-producer event count specialisation relies on non-atomic read-modify-write instructions, which are hard to find outside x86. I think the flag flip pattern in epoch and hazard pointer reclamation shows that’s not the only option.
We need two control words, one for the version counter, and another for the sleepers flag. The version counter is only written by the incrementer, with regular non-atomic instructions, while the flag word is written to by multiple producers, always with atomic instructions.
The challenge is that OS blocking primitives like futex only let us conditionalise the sleep on a single word. We could try to pack a pair of 16-bit shorts in a 32-bit int, but that doesn’t give us a lot of room to avoid wrap-around. Otherwise, we can guarantee that the sleepers flag is only cleared immediately before incrementing the version counter. That suffices to let sleepers only conditionalise on the version counter… but we still need to trigger a wakeup if the sleepers flag was flipped between the last clearing and the increment.
On the increment side, the logic looks like
must_wake = false
if sleepers flag is set:
  must_wake = true
  clear sleepers flag
increment version
if must_wake or sleepers flag is set:
  wake up waiters
and, on the waiter side, we find
if version has changed
  return
set sleepers flag
sleep if version has not changed
The separate “sleepers flag” word doubles the space usage, compared to the single flag bit in the x86 single-producer version. Composite OS uses that two-word solution in blockpoints, and the advantages seem to be simplicity and additional flexibility in data layout. I don’t know that we can implement this scheme more efficiently in the single-producer case, under other memory models than TSO. If this two-word solution is only useful for non-x86 TSO, that’s essentially SPARC, and I’m not sure that platform still warrants the maintenance burden.
But, we’ll see, maybe we can make the above work on AArch64 or POWER.
I actually prefer another, more intuitive, explanation that isn’t backed by official documentation. The store buffer in x86-TSO doesn’t actually exist in silicon: it represents the instructions waiting to be retired in the out-of-order execution engine. Precise interrupts seem to imply that even entering the interrupt handler flushes the OOE engine’s state, and thus acts as a full barrier that flushes the conceptual store buffer.↩
I used raw eBPF instead of the C frontend because that frontend relies on a ridiculous amount of runtime code that parses an ELF file when loading the eBPF snippet to know what eBPF maps to set up and where to backpatch their fd numbers. I also find there’s little advantage to the C frontend for the scale of eBPF programs (at most 4096 instructions, usually much fewer). I did use clang to generate a starting point, but it’s not that hard to tighten 30 instructions in ways that a compiler can’t without knowing what part of the program’s semantics is essential. The bpf syscall can also populate a string buffer with additional information when loading a program. That’s helpful to know that something was assembled wrong, or to understand why the verifier is rejecting your program.↩
I computed these extreme confidence intervals with my old code to test statistical SLOs.↩
Getting these non-blocking protocols right is still challenging, but the challenge is one fundamental for reliable systems. The same problems, solutions, and space/functionality trade-offs appear in all distributed systems. Some would even argue that the kind of interfaces that guarantee lock- or wait-freedom are closer to the object-oriented ideals.
Of course, there is still a place for clever instruction sequences that avoid internal locks, for code that may be paused anywhere without freezing the whole system: interrupts can’t always be disabled, read operations should avoid writing to shared memory if they can, and a single atomic read-modify-write operation may be faster than locking. The key point for me is that this complexity is opt-in: we can choose to tackle it incrementally, as a performance problem rather than as a prerequisite for correctness.
We don’t have the same luxury in userspace. We can’t start by focusing on the fundamentals of a non-blocking algorithm, and only implement interruptable sequences where it makes sense. Userspace can’t disable preemption, so we must think about the minutiae of interruptable code sequences from the start; non-blocking algorithms in userspace are always in hard mode, where every step of the protocol might be paused at any instruction.
Specifically, the problem with non-blocking code in userspace isn’t that threads or processes can be preempted at any point, but rather that the preemption can be observed. It’s a PCLSRing issue! Even Unices guarantee programmers won’t observe a thread in the middle of a syscall: when a thread (process) must be interrupted, any pending syscall either runs to completion, or returns with an error^{1}. What we need is a similar guarantee for steps of our own non-blocking protocols^{2}.
Hardware transactional memory kind of solves the problem (preemption aborts any pending transaction), but is a bit slow^{3} and needs a fallback mechanism. Other emulation schemes for PCLSRing userspace code divide the problem in two: detecting that the holder of a critical section has been preempted, and preventing the preempted code from later resuming execution in the middle of that critical section.
The first part is relatively easy. For per-CPU data, it suffices to observe that we are running on a given CPU (e.g., core #4), and that another thread claims to own the same CPU’s (core #4’s) data. For global locks, we can instead spin for a while before entering a slow path that determines whether the holder has been preempted, by reading scheduling information in /proc.
The second part is harder. I have played with schemes that relied on signals, but was never satisfied: I found Linux perf will rarely, but not never, drop interrupts when I used it to “profile” context switches, and signaling when we determine that the holder has been preempted has memory visibility issues for per-CPU data^{5}.
Until earlier this month, the best known solution on mainline Linux involved cross-modifying code! When a CPU executes a memory write instruction, that write is affected by the registers, virtual memory mappings, and the instruction’s bytes. Contemporary operating systems rarely let us halt and tweak another thread’s general purpose registers (Linux won’t let us self-ptrace, nor pause an individual thread). Virtual memory mappings are per-process, and can’t be modified from the outside. The only remaining angle is modifying the preemptee’s machine code.
That’s what Facebook’s experimental library Rseq (restartable sequences) actually does.
I’m not happy with that solution either: while it “works,” it requires per-thread clones of each critical section, and makes us deal with cross-modifying code. I’m not comfortable with leaving code pages writable, and we also have to guarantee the preemptee’s writes are visible. For me, the only defensible implementation is to modify the code by mmap-ing pages in place, which incurs an IPI per modification. The total system overhead thus scales superlinearly with the number of CPUs.
With Mathieu Desnoyers’s, Paul Turner’s, and Andrew Hunter’s patch to add an rseq syscall to Linux 4.18, we finally have a decent answer. Rather than triggering special code when a thread detects that another thread has been preempted in the middle of a critical section, userspace can associate recovery code with the address range for each restartable critical section’s instructions. Whenever the kernel preempts a thread, it detects whether the interruptee is in such a restartable sequence, and, if so, redirects the instruction pointer to the associated recovery code. This essentially means that critical sections must be read-only except for the last instruction in the section, but that’s not too hard to satisfy. It also means that we incur recovery even when no one would have noticed, but the overhead should be marginal (there’s at most one recovery per timeslice), and we get a simpler programming model in return.
Earlier this year, I found another way to prevent critical sections from resuming normal execution after being preempted. It’s a total hack that exercises a state saving defect in Linux/x8664, but I’m comfortable sharing it now that Rseq is in mainline: if anyone needs the functionality, they can update to 4.18, or backport the feature.
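The original listing did not survive extraction; as a hedged reconstruction, a read_value function exercising the quirk could be as simple as a load through the GS segment (GCC inline assembly, x86-64; the name matches the prose, everything else is illustrative).

#include <stdint.h>

/* Read one byte through the GS segment: with a GS base of 0 this is a
 * plain load; with a non-zero base set via arch_prctl(ARCH_SET_GS, ...),
 * the load is shifted by that base. */
static unsigned char read_value(const char *ptr)
{
        unsigned char ret;

        __asm__ volatile("movb %%gs:(%1), %0" : "=q"(ret) : "r"(ptr));
        return ret;
}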
With an appropriate setup, the read_value function above will return a different value once the executing thread is switched out. No, the kernel isn’t overwriting read-only data while we’re switched out. When I listed the set of inputs that affect a memory store or load instruction (general purpose registers, virtual memory mappings, and the instruction bytes), I left out one last x86 thing: segment registers.
Effective addresses on x86oids are about as feature-rich as it gets: they sum a base address, a shifted index, a constant offset, and, optionally, a segment base. Today, we simply use segment bases to implement thread-local storage (each thread’s FS or GS offset points to its thread-local block), but that usage repurposes memory segmentation, an old 8086 feature… and x86-64 still maintains some backward compatibility with its 16-bit ancestor. There’s a lot of unused complexity there, so it’s plausible that we’ll find information leaks or otherwise flawed architectural state switching by poking around segment registers.
After learning about this trick to observe interrupts from userland, I decided to do a close reading of Linux’s task switching code on x86-64 and eventually found this interesting comment^{6}.
Observing a value of 0 in the FS or GS registers can mean two things: either the segment base really is 0, or userspace wrote a 0 in there before setting up the segment base directly, with WR{FS,GS}BASE or by writing to a model-specific register (MSR), and the non-zero base is still in effect. Hardware has to efficiently keep track of which is actually the case. If userspace wrote a 0 in FS or GS, prefixing an instruction with that segment has no impact; if the MSR write is still active (and is non-zero), using that segment must impact effective address computation.
There’s no easy way to do the same in software. Even in ring 0, the only surefire way to distinguish between the two cases is to actually read the current segment base value, and that’s slow. Linux instead fast-paths the common case, where the segment register is 0 because the kernel is handling segment bases. It prioritises that use case so much that the code knowingly sacrifices correctness when userspace writes 0 in a segment register after asking the kernel to set up its segment base directly.
This incorrectness is acceptable because it only affects the thread that overwrites its segment register, and no one should go through that sequence of operations. Legacy code can still manipulate segment descriptor tables and address them in segment registers. However, being legacy code, it won’t use the modern syscall that directly manipulates the segment base. Modern code can let the kernel set the segment base without playing with descriptor tables, and has no reason to look at segment registers.
The only way to observe the buggy state saving is to go looking for it, with something like the code below (which uses GS because FS is already taken by glibc to implement thread-local storage).
[Listing omitted: h4x.c, the test program whose output is discussed below.]
Running the above on my Linux 4.14/x86-64 machine yields

$ gcc-6 -std=gnu99 h4x.c && ./a.out
Reads: XXYX
Re-reads: XYX
The first set of reads shows that:

- the GS base starts at 0 (reads[0] == values[0]);
- explicitly writing a 0 in GS does not change that (reads[1] == values[0]);
- setting the GS base to 1 with arch_prctl does work (reads[2] == values[1]);
- resetting the GS selector to 0 resets the base (reads[3] == values[0]).

The second set of reads shows that:

- the base is still 0 immediately after those reads (re_reads[0] == values[0]);
- being switched out reverts the GS base to the arch_prctl value (re_reads[1] == values[1]);
- writing a 0 in GS resets the base again (re_reads[2] == values[0]).
The property demonstrated in the hack above is that, after our call to arch_prctl, we can write a 0 in GS with a regular instruction to temporarily reset the GS base to 0, and know it will revert to the arch_prctl offset again when the thread resumes execution, after being suspended.
We now have to ensure our restartable sequences are no-ops when the GS base is reset to the arch_prctl offset, and that the no-op is detected as such. For example, we could set the arch_prctl offset to something small, like 4 or 8 bytes, and make sure that any address we wish to mutate in a critical section is followed by 4 or 8 bytes of padding that can be detected as such. If a thread is switched out in the middle of a critical section, its GS base will be reset to 4 or 8 when the thread resumes execution; we must guarantee that this offset will make the critical section’s writes fail.
If a write is a compare-and-swap, we only have to make sure the padding’s value is unambiguously different from real data: reading the padding instead of the data will make the compare-and-swap fail, and the old value will tell us that it failed because we read padding, which should only happen after the section is preempted. We can play similar tricks with fetch-and-add (e.g., real data is always even, while the padding is odd), or atomic bitwise operations (steal the sign bit).
If we’re willing to eat a signal after a context switch, we can set the arch_prctl offset to something very large, and take a segmentation fault after being rescheduled. Another option is to set the arch_prctl offset to 1, and use a double-wide compare-and-swap (CMPXCHG16B), or turn on the AC (alignment check) bit in EFLAGS. After a context switch, our destination address will be misaligned, which will trigger a SIGBUS that we can handle.
The last two options aren’t great, but, if we make sure to regularly write a 0 in GS, signals should be triggered rarely, only when preemption happens between the last write to GS and a critical section. They also have the advantages of avoiding the need for padding, and making it trivial to detect when a restartable section was interrupted. Detection is crucial because it often isn’t safe to assume an operation failed when it succeeded (e.g., unwittingly succeeding at popping from a memory allocator’s freelist would leak memory). When a GS-prefixed instruction fails, we must be able to tell from the instruction’s result, and nothing else. We can’t just check if the segment base is still what we expect, after the fact: our thread could have been preempted right after the special GS-prefixed instruction, before our check.
Once we have restartable sections, we can use them to implement per-CPU data structures (instead of per-thread), or to let threads acquire locks and hold them until they are preempted: with restartable sections that only write if there was no preemption between the lock acquisition and the final store instruction, we can create a revocable lock abstraction and implement wait-free coöperation or flat combining.
Unfortunately, our restartable sections will always be hard to debug: observing a thread’s state in a regular debugger like GDB will reset the GS base and abort the section. That’s not unique to the segment hack approach. Hardware transactional memory will abort critical sections when debugged, and there’s similar behaviour with the official rseq syscall. It’s hard enough to PCLSR userspace code; it would be even harder to PCLSR-except-when-the-interruption-is-for-debugging.
The null GS
hack sounds like it only works because of a pile of
questionable design decisions. However, if we look at the historical
context, I’d say everything made sense.
Intel came up with segmentation back when 16-bit pointers were big, but 64KB of RAM not quite capacious enough. They didn’t have 32-bit (never mind 64-bit) addresses in mind, nor threads; they only wanted to address 1 MB of RAM with their puny registers. When thread libraries abused segments to implement thread-local storage, the only other options were to over-align the stack and hide information there, or to steal a register. Neither sounds great, especially with x86’s six-and-a-half general purpose registers. Finally, when AMD decided to rip out segmentation, but keep FS and GS, they needed to make porting x86 code as easy as possible, since that was the whole value proposition for AMD64 over Itanium.
I guess that’s what systems programming is about. We take our tools, get comfortable with their imperfections, and use that knowledge to build new tools by breaking the ones we already have (#Mickens).
Thank you Andrew for a fun conversation that showed the segment
hack might be of interest to someone else, and to Gabe for snarkily
reminding us Rseq
is another Linux/Silicon Valley
reinvention.
That’s not as nice as rewinding the PC to just before the syscall, with a fixed-up state that will resume the operation, but is simpler to implement, and usually good enough. Classic worse is better (Unix semantics are also safer with concurrency, but that could have been opt-in…).↩
That’s not a new observation, and SUN heads like to point to prior art like Dice’s and Garthwaite’s Mostly Lock-Free Malloc, Garthwaite’s, Dice’s, and White’s work on Preemption notification for per-CPU buffers, or Harris’s and Fraser’s Revocable locks. Linux sometimes has to reinvent everything with its special flavour.↩
For instance, SuperMalloc optimistically uses TSX to access per-CPU caches, but TSX is slow enough that SuperMalloc first tries to use a per-thread cache. Dice and Harris explored the use of hardware transactional lock elision solely to abort on context switches; they maintained high system throughput under contention by trying the transaction once before falling back to a regular lock.↩
I did not expect systems programming to get near multi-agent epistemic logic ;)↩
Which is fixable with LOCKed instructions, but that defeats the purpose of per-CPU data.↩
I actually found the logic bug before the Spectre/Meltdown fire drill and was worried the hole would be plugged. This one survived the purge. fingers crossed↩
I recently resumed thinking about balls and bins for hash tables. This time, I’m looking at large bins (on the order of one 2MB huge page). There are many hashing methods with solid worst-case guarantees that unfortunately query multiple uncorrelated locations; I feel like we could automatically adapt them to modern hierarchical storage (or address translation) to make them more efficient, for a small loss in density.
In theory, large enough bins can be allocated statically with a minimal waste of space. I wanted some actual non-asymptotic numbers, so I ran numerical experiments and got the following distribution of global utilisation (fill rate) when the first bin fills up.
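To make the experiment concrete, here is a hypothetical sketch of that simulation (not the code used for this post): throw balls into bins uniformly at random, and report the global fill rate at the moment the first bin reaches its capacity.

(defun fill-rate-at-first-full-bin (bins capacity)
  "Simulate throwing balls into BINS bins of size CAPACITY, uniformly at
   random, and return the global fill rate when the first bin fills up."
  (let ((counts (make-array bins :initial-element 0))
        (total 0))
    (loop (let ((bin (random bins)))
            (incf (aref counts bin))
            (incf total)
            (when (= (aref counts bin) capacity)
              (return (float (/ total (* bins capacity)) 1d0)))))))

For example, (fill-rate-at-first-full-bin 1000 30000) yields one observation for the 1000 bins @ 30K ball/bin setup.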
It looks like, even with one thousand bins of thirty thousand values, we can expect almost 98% space utilisation until the first bin saturates. I want something more formal.
Could I establish something like a service level objective, “When distributing balls randomly between one thousand bins with individual capacity of thirty thousand balls, we can utilise at least 98% of the total space before a bin fills up, x% of the time?”
The natural way to compute the “x%” that makes the proposition true is to first fit a distribution on the observed data, then find out the probability mass for that distribution that lies above 98% fill rate. Fitting distributions takes a lot of judgment, and I’m not sure I trust myself that much.
Alternatively, we can observe independent identically distributed fill rates, check if they achieve 98% space utilisation, and bound the success rate for this Bernoulli process.
There are some nontrivial questions associated with this approach.
Thankfully, I have been sitting on a software package to compute satisfaction rates for exactly this kind of SLO-type property, properties of the form “this indicator satisfies $PREDICATE x% of the time,” with arbitrarily bounded false positive rates.

The code takes care of adaptive stopping, generates a credible interval, and spits out a report like this: we see the threshold (0.98), the empirical success rate estimate (0.993 ≫ 0.98), a credible interval for the success rate, and the shape of the probability mass for success rates.
This post shows how to compute credible intervals for the Bernoulli’s success rate, how to implement a dynamic stopping criterion, and how to combine the two while compensating for multiple hypothesis testing. It also gives two examples of converting more general questions to SLO form, and answers them with the same code.
If we run the same experiment \(n\) times, and observe \(a\) successes (\(b = n - a\) failures), it’s natural to ask for an estimate of the success rate \(p\) for the underlying Bernoulli process, assuming the observations are independent and identically distributed.
Intuitively, that estimate should be close to \(a / n\), the empirical success rate, but that’s not enough. I also want something that reflects the uncertainty associated with small \(n\), much like in the following ridge line plot, where different phrases are assigned not only a different average probability, but also a different spread.
I’m looking for an interval of plausible success rates \(p\) that responds to both the empirical success rate \(a / n\) and the sample size \(n\); that interval should be centered around \(a / n\), be wide when \(n\) is small, and become gradually tighter as \(n\) increases.
The Bayesian approach is straightforward, if we’re willing to shut up and calculate. Once we fix the underlying success rate \(p = \hat{p}\), the conditional probability of observing \(a\) successes and \(b\) failures is
\[P((a, b) \mid p = \hat{p}) \sim \hat{p}\sp{a} \cdot (1 - \hat{p})\sp{b},\]
where the right-hand side is a proportion^{1}, rather than a probability.
We can now apply Bayes’s theorem to invert the condition and the event. The inversion will give us the conditional probability that \(p = \hat{p}\), given that we observed \(a\) successes and \(b\) failures. We only need to impose a prior distribution on the underlying rate \(p\). For simplicity, I’ll go with the uniform \(U[0, 1]\), i.e., every success rate is equally plausible, at first. We find
\[P(p = \hat{p} \mid (a, b)) = \frac{P((a, b) \mid p = \hat{p}) P(p = \hat{p})}{P(a, b)}.\]
We already picked the uniform prior, \(P(p = \hat{p}) = 1\,\forall \hat{p}\in [0,1],\) and the denominator is a constant with respect to \(\hat{p}\). The expression simplifies to
\[P(p = \hat{p} \mid (a, b)) \sim \hat{p}\sp{a} \cdot (1 - \hat{p})\sp{b},\]
or, if we normalise to obtain a probability,
\[P(p = \hat{p} \mid (a, b)) = \frac{\hat{p}\sp{a} \cdot (1 - \hat{p})\sp{b}}{\int\sb{0}\sp{1} \hat{p}\sp{a} \cdot (1 - \hat{p})\sp{b}\, d\hat{p}} = \textrm{Beta}(a+1, b+1).\]
A bit of calculation, and we find that our credibility estimate for the underlying success rate follows a Beta distribution. If one is really into statistics, they can observe that the uniform prior distribution is just the \(\textrm{Beta}(1, 1)\) distribution, and rederive that the Beta is the conjugate distribution for the Binomial distribution.
For me, it suffices to observe that the distribution \(\textrm{Beta}(a+1, b+1)\) is unimodal, does peak around \(a / (a + b)\), and becomes tighter as the number of observations grows. In the following image, I plotted three Beta distributions, all with empirical success rate 0.9; red corresponds to \(n = 10\) (\(a = 9\), \(b = 1\), \(\textrm{Beta}(10, 2)\)), black to \(n = 100\) (\(\textrm{Beta}(91, 11)\)), and blue to \(n = 1000\) (\(\textrm{Beta}(901, 101)\)).
We calculated, and we got something that matches my intuition. Before trying to understand what it means, let’s take a detour to simply plot points from that unnormalised proportion function \(\hat{p}\sp{a} \cdot (1 - \hat{p})\sp{b}\), on an arbitrary \(y\) axis.
Let \(\hat{p} = 0.4\), \(a = 901\), \(b = 101\). Naïvely entering the expression at the REPL yields nothing useful.
CL-USER> (* (expt 0.4d0 901) (expt (- 1 0.4d0) 101))
0.0d0
The issue here is that the unnormalised proportion is so small that it underflows double floats and becomes a round zero. We can guess that the normalisation factor \(\frac{1}{\mathrm{Beta}(\cdot,\cdot)}\) quickly grows very large, which will bring its own set of issues when we do care about the normalised probability.
How can we renormalise a set of points without underflow? The usual trick to handle extremely small or large magnitudes is to work in the log domain. Rather than computing \(\hat{p}\sp{a} \cdot (1 - \hat{p})\sp{b}\), we shall compute
\[\log\left[\hat{p}\sp{a} \cdot (1 - \hat{p})\sp{b}\right] = a \log\hat{p} + b \log (1 - \hat{p}).\]
CL-USER> (+ (* 901 (log 0.4d0)) (* 101 (log (- 1 0.4d0))))
-877.1713374189787d0
CL-USER> (exp *)
0.0d0
That’s somewhat better: the log-domain value is not \(-\infty\), but converting it back to a regular value still gives us 0.
The \(\log\) function is monotonic, so we can find the maximum proportion value for a set of points, and divide everything by that maximum value to get plottable points. There’s one last thing that should change: when \(x\) is small, \(1 - x\) will round most of \(x\) away. Instead of (log (- 1 x)), we should use (log1p (- x)) to compute \(\log (1 + (-x)) = \log (1 - x)\). Common Lisp did not standardise log1p, but SBCL does have it in internals, as a wrapper around libm. We’ll just abuse that for now.
CL-USER> (defun proportion (x) (+ (* 901 (log x)) (* 101 (sb-kernel:%log1p (- x)))))
PROPORTION
CL-USER> (defparameter *points* (loop for i from 1 upto 19 collect (/ i 20d0)))
*POINTS*
CL-USER> (reduce #'max *points* :key #'proportion)
-327.4909190001001d0
We have to normalise in the log domain, which is simply a subtraction: \(\log(x / y) = \log x - \log y\). In the case above, we will subtract \(-327.49\ldots\), or add a massive \(327.49\ldots\) to each log proportion (i.e., multiply by \(10\sp{142}\)). The resulting values should have a reasonably nonzero range.
CL-USER> (mapcar (lambda (x) (cons x (exp (- (proportion x) *)))) *points*)
((0.05d0 . 0.0d0)
 (0.1d0 . 0.0d0)
 [...]
 (0.35d0 . 3.443943164733533d-288)
 [...]
 (0.8d0 . 2.0682681158181894d-16)
 (0.85d0 . 2.6252352579425913d-5)
 (0.9d0 . 1.0d0)
 (0.95d0 . 5.65506756824607d-10))
There’s finally some signal in there. This is still just an unnormalised proportion function, not a probability density function, but that’s already useful to show the general shape of the density function, something like the following, for \(\mathrm{Beta}(901, 101)\).
Finally, we have a probability density function for the Bayesian update of our belief about the success rate after \(n\) observations of a Bernoulli process, and we know how to compute its proportion function. Until now, I’ve carefully avoided the question of what all these computations even mean. No more (:
The Bayesian view assumes that the underlying success rate (the value we’re trying to estimate) is unknown, but sampled from some distribution. In our case, we assumed a uniform distribution, i.e., that every success rate is a priori equally likely. We then observe \(n\) outcomes (successes or failures), and assign an updated probability to each success rate. It’s like a many-worlds interpretation in which we assume we live in one of a set of worlds, each with a success rate sampled from the uniform distribution; after observing 900 successes and 100 failures, we’re more likely to be in a world where the success rate is 0.9 than in one where it’s 0.2. With Bayes’s theorem to formalise the update, we assign posterior probabilities to each potential success rate value.
We can compute an equal-tailed credible interval from that \(\mathrm{Beta}(a+1,b+1)\) posterior distribution by excluding the leftmost values, \([0, l)\), such that the Beta CDF (cumulative distribution function) at \(l\) is \(\varepsilon / 2\), and doing the same with the rightmost values to cut away \(\varepsilon / 2\) of the probability density. The CDF for \(\mathrm{Beta}(a+1,b+1)\) at \(x\) is the incomplete beta function, \(I\sb{x}(a+1,b+1)\). That function is really hard to compute (this technical report detailing Algorithm 708 deploys five different evaluation strategies), so I’ll address that later.
The more orthodox “frequentist” approach to confidence intervals treats the whole experiment, from data collection to analysis (to publication, independent of the observations 😉) as an Atlantic City algorithm: if we allow a false positive rate of \(\varepsilon\) (e.g., \(\varepsilon=5\%\)), the experiment must return a confidence interval that includes the actual success rate (population statistic or parameter, in general) with probability \(1 - \varepsilon\), for any actual success rate (or underlying population statistic / parameter). When the procedure fails, with probability at most \(\varepsilon\), it is allowed to fail in an arbitrary manner.
The same Atlantic City logic applies to \(p\)-values. An experiment (data collection and analysis) that accepts when the \(p\)-value is at most \(0.05\) is an Atlantic City algorithm that returns a correct result (including “don’t know”) with probability at least \(0.95\), and is otherwise allowed to yield any result with probability at most \(0.05\). The \(p\)-value associated with a conclusion, e.g., “success rate is more than 0.8” (the confidence level associated with an interval) means something like “I’m pretty sure that the success rate is more than 0.8, because the odds of observing our data if that were false are small (less than 0.05).” If we set that threshold (of 0.05, in the example) ahead of time, we get an Atlantic City algorithm to determine if “the success rate is more than 0.8” with failure probability 0.05. (In practice, reporting is censored in all sorts of ways, so…)
There are ways to recover a classical confidence interval, given \(n\) observations from a Bernoulli. However, they’re pretty convoluted, and, as Jaynes argues in his note on confidence intervals, the classical approach gives values that are roughly the same^{2} as the Bayesian approach… so I’ll just use the Bayesian credibility interval instead.
See this stackexchange post for a lot more details.
The way statistics are usually deployed is that someone collects a data set, as rich as is practical, and squeezes that static data set dry for significant results. That’s exactly the setting for the credible interval computation I sketched in the previous section.
When studying the properties of computer programs or systems, we can usually generate additional data on demand, given more time. The problem is knowing when it’s ok to stop wasting computer time, because we have enough data… and how to determine that without running into multiple hypothesis testing issues (ask anyone who’s run A/B tests).
Here’s an example of an intuitive but completely broken dynamic stopping criterion. Let’s say we’re trying to find out if the success rate is less than or greater than 90%, and are willing to be wrong 5% of the time. We could get \(k\) data points, run a statistical test on those data points, and stop if the data let us conclude with 95% confidence that the underlying success rate differs from 90%. Otherwise, collect \(2k\) fresh points, run the same test; collect \(4k, \ldots, 2\sp{i}k\) points. Eventually, we’ll have enough data.
The issue is that each time we execute the statistical test that determines if we should stop, we run a 5% risk of being totally wrong. For an extreme example, if the success rate is exactly 90%, we will eventually stop, with probability 1. When we do stop, we’ll inevitably conclude that the success rate differs from 90%, and we will be wrong. The worstcase (over all underlying success rates) false positive rate is 100%, not 5%!
In my experience, programmers tend to sidestep the question by wasting CPU time with a large, fixed, number of iterations… people are then less likely to run our statistical tests, since they’re so slow, and everyone loses (the other popular option is to impose a reasonable CPU budget, with error thresholds so lax we end up with a smoke test).
Robbins, in Statistical Methods Related to the Law of the Iterated Logarithm, introduces a criterion that, given a threshold success rate \(p\) and a sequence of (infinitely many!) observations from the same Bernoulli with unknown success rate parameter, will be satisfied infinitely often when \(p\) differs from the Bernoulli’s success rate. Crucially, Robbins also bounds the false positive rate, the probability that the criterion be satisfied even once in the infinite sequence of observations if the Bernoulli’s unknown success rate is exactly equal to \(p\). That criterion is
\[{n \choose a} p\sp{a} (1-p)\sp{n-a} \leq \frac{\varepsilon}{n+1},\]
where \(n\) is the number of observations, \(a\) the number of successes, \(p\) the threshold success rate, and \(\varepsilon\) the error (false positive) rate. As the number of observation grows, the criterion becomes more and more stringent to maintain a bounded false positive rate over the whole infinite sequence of observations.
There are similar “Confidence Sequence” results for other distributions (see, for example, this paper of Lai), but we only care about the Binomial here.
More recently, Ding, Gandy, and Hahn showed that Robbins’s criterion also guarantees that, when it is satisfied, the empirical success rate (\(a/n\)) lies on the correct side of the threshold \(p\) (same side as the actual unknown success rate) with probability \(1\varepsilon\). This result leads them to propose the use of Robbins’s criterion to stop Monte Carlo statistical tests, which they refer to as the Confidence Sequence Method (CSM).
(defun csm-stop-p (successes failures threshold eps)
  "Pseudocode, this will not work on a real machine.
   CHOOSE stands for the binomial coefficient."
  (let ((n (+ successes failures)))
    (<= (* (choose n successes)
           (expt threshold successes)
           (expt (- 1 threshold) failures))
        (/ eps (1+ n)))))
We may call this predicate at any time with more independent and identically distributed results, and stop as soon as it returns true.
The CSM is simple (it’s all in Robbins’s criterion), but still provides good guarantees. The downside is that it is conservative when we have a limit on the number of observations: the method “hedges” against the possibility of having a false positive in the infinite number of observations after the limit, observations we will never make. For computergenerated data sets, I think having a principled limit is pretty good; it’s not ideal to ask for more data than strictly necessary, but not a blocker either.
In practice, there are still real obstacles to implementing the CSM on computers with finite precision (floating point) arithmetic, especially since I want to preserve the method’s theoretical guarantees (i.e., make sure rounding is one-sided to overestimate the left-hand side of the inequality).

If we implement the expression well, the effect of rounding on correctness should be less than marginal. However, I don’t want to be stuck wondering if my bad results are due to known approximation errors in the method, rather than errors in the code. Moreover, if we do have a tight expression with little rounding error, adjusting it to make the errors one-sided should have almost no impact. That seems like a good tradeoff to me, especially if I’m going to use the CSM semi-automatically, in continuous integration scripts, for example.
One look at csm-stop-p shows we’ll have the same problem we had with the proportion function for the Beta distribution: we’re multiplying very small and very large values. We’ll apply the same fix: work in the log domain and exploit \(\log\)’s monotonicity.
\[{n \choose a} p\sp{a} (1-p)\sp{n-a} \leq \frac{\varepsilon}{n+1}\]
becomes
\[\log {n \choose a} + a \log p + (n-a)\log (1-p) \leq \log\varepsilon - \log(n+1),\]
or, after some more expansions, and with \(b = n - a\),
\[\log n! - \log a! - \log b! + a \log p + b \log(1 - p) + \log(n+1) \leq \log\varepsilon.\]
The new obstacle is computing the factorial \(x!\), or the log-factorial \(\log x!\). We shouldn’t compute the factorial iteratively: otherwise, we could spend more time in the stopping criterion than in the data generation subroutine. Robbins has another useful result for us:
\[\sqrt{2\pi} n\sp{n + 1/2} \exp(-n) \exp\left(\frac{1}{12n+1}\right) < n! < \sqrt{2\pi} n\sp{n + 1/2} \exp(-n) \exp\left(\frac{1}{12n}\right),\]
or, in the log domain,
\[\log\sqrt{2\pi} + \left(n + \frac{1}{2}\right)\log n - n + \frac{1}{12n+1} < \log n! < \log\sqrt{2\pi} + \left(n + \frac{1}{2}\right)\log n - n +\frac{1}{12n}.\]
This double inequality gives us a way to over-approximate \(\log {n \choose a} = \log \frac{n!}{a! b!} = \log n! - \log a! - \log b!,\) where \(b = n - a\):
\[\log {n \choose a} < -\log\sqrt{2\pi} + \left(n + \frac{1}{2}\right)\log n - n +\frac{1}{12n} - \left(a + \frac{1}{2}\right)\log a + a - \frac{1}{12a+1} - \left(b + \frac{1}{2}\right)\log b + b - \frac{1}{12b+1},\]
where the rightmost expression in Robbins’s double inequality replaces \(\log n!\), which must be over-approximated, and the leftmost \(\log a!\) and \(\log b!\), which must be under-approximated.
Robbins’s approximation works well for us because it is one-sided, and guarantees that the (relative) error in \(n!\), \(\frac{\exp\left(\frac{1}{12n}\right) - \exp\left(\frac{1}{12n+1}\right)}{n!},\) is small, even for small values like \(n = 5\) (error \(< 0.0023\%\)), and decreases with \(n\): as we perform more trials, the approximation is increasingly accurate, thus less likely to spuriously prevent us from stopping.
Now that we have a conservative approximation of Robbins’s criterion
that only needs the four arithmetic operations and logarithms (and
log1p
), we can implement it on a real computer. The only challenge
left is regular floating point arithmetic stuff: if rounding must
occur, we must make sure it is in a safe (conservative) direction for
our predicate.
Hardware usually lets us manipulate the rounding mode to force floating point arithmetic operations to round up or down, instead of the usual round to even. However, that tends to be slow, so most language (implementations) don’t support changing the rounding mode, or do so badly… which leaves us in a multi-decade hardware/software co-evolution Catch-22.
I could think hard and derive tight bounds on the roundoff error, but I’d rather apply a bit of brute force. IEEE-754 compliant implementations must round the four basic operations correctly. This means that \(z = x \oplus y\) is at most half a ULP away from \(x + y,\) and thus either \(z = x \oplus y \geq x + y,\) or the next floating point value after \(z,\) \(z^\prime \geq x + y\). We can find this “next value” portably in Common Lisp, with decode-float/scale-float, and some handwaving for denormals.
(defun next (x &optional (delta 1))
  "Increment x by delta ULPs. Very conservative for
   small (0/denormalised) values."
  (declare (type double-float x)
           (type unsigned-byte delta))
  (let* ((exponent (nth-value 1 (decode-float x)))
         (ulp (max (scale-float double-float-epsilon exponent)
                   least-positive-normalized-double-float)))
    (+ x (* delta ulp))))
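As a quick sanity check (not from the original post), nudging 1d0 up by one of these conservative ULPs should yield the next representable double:

CL-USER> (next 1d0)
1.0000000000000002d0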
I prefer to manipulate IEEE-754 bits directly. That’s theoretically not portable, but the platforms I care about make sure we can treat floats as sign-magnitude integers.
[Listing omitted: double-float-bits and bits-double-float, which convert between double floats and a two’s complement integer encoding with the same ordering.]
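The original listing isn’t reproduced here; the following is a hypothetical reconstruction of the two conversion functions, using SBCL internals (sb-kernel) to reach the raw IEEE-754 bits. It matches the behaviour shown in the REPL session below, but it is a sketch, not the post’s code.

(defun %raw-double-bits (x)
  "Unsigned 64-bit IEEE-754 bit pattern of X."
  (declare (type double-float x))
  (logior (ash (ldb (byte 32 0) (sb-kernel:double-float-high-bits x)) 32)
          (sb-kernel:double-float-low-bits x)))

(defun double-float-bits (x)
  "Map X to an integer that preserves the ordering of doubles:
   non-negative floats map to their bit pattern, negative floats to
   (lognot magnitude), so -0d0 becomes -1."
  (declare (type double-float x))
  (let ((raw (%raw-double-bits x)))
    (if (logbitp 63 raw)                ; sign bit set
        (lognot (ldb (byte 63 0) raw))
        raw)))

(defun bits-double-float (bits)
  "Inverse of DOUBLE-FLOAT-BITS."
  (let* ((raw (if (minusp bits)
                  (logior (ash 1 63) (ldb (byte 63 0) (lognot bits)))
                  bits))
         (hi (ldb (byte 32 32) raw))
         (lo (ldb (byte 32 0) raw)))
    ;; sb-kernel:make-double-float expects the high half as a signed 32-bit value.
    (sb-kernel:make-double-float (if (logbitp 31 hi) (- hi (ash 1 32)) hi) lo)))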
CL-USER> (double-float-bits pi)
4614256656552045848
CL-USER> (double-float-bits (- pi))
-4614256656552045849

The two’s complement value for -pi is one less than (- (double-float-bits pi)) because two’s complement does not support signed zeros.

CL-USER> (eql 0 (- 0))
T
CL-USER> (eql 0d0 (- 0d0))
NIL
CL-USER> (double-float-bits 0d0)
0
CL-USER> (double-float-bits -0d0)
-1

We can quickly check that the round trip from float to integer and back is an identity.

CL-USER> (eql pi (bits-double-float (double-float-bits pi)))
T
CL-USER> (eql (- pi) (bits-double-float (double-float-bits (- pi))))
T
CL-USER> (eql 0d0 (bits-double-float (double-float-bits 0d0)))
T
CL-USER> (eql -0d0 (bits-double-float (double-float-bits -0d0)))
T

We can also check that incrementing or decrementing the integer representation does increase or decrease the floating point value.

CL-USER> (< (bits-double-float (1- (double-float-bits pi))) pi)
T
CL-USER> (< (bits-double-float (1- (double-float-bits (- pi)))) (- pi))
T
CL-USER> (bits-double-float (1- (double-float-bits 0d0)))
-0.0d0
CL-USER> (bits-double-float (1+ (double-float-bits -0d0)))
0.0d0
CL-USER> (bits-double-float (1+ (double-float-bits 0d0)))
4.9406564584124654d-324
CL-USER> (bits-double-float (1- (double-float-bits -0d0)))
-4.9406564584124654d-324
The code doesn’t handle special values like infinities or NaNs, but that’s out of scope for the CSM criterion anyway. That’s all we need to nudge the result of the four operations to guarantee an over- or under-approximation of the real value. We can also look at the documentation for our libm (e.g., for GNU libm) to find error bounds on functions like log; GNU claims their log is never off by more than 3 ULP. We can round up to the fourth next floating point value to obtain a conservative upper bound on \(\log x\).
[Listing omitted: conservatively rounded helpers (e.g., an upper bound for log) built on the bit manipulation above.]
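A minimal sketch of what such a helper could look like, assuming (as the post does) that libm’s log is within 3 ULPs of the true result; the names are hypothetical and this is not the post’s listing.

(defun log-up (x)
  "Conservative upper bound for (log x): nudge libm's result up by 4 ULPs
   in the double-float-bits encoding."
  (declare (type double-float x))
  (bits-double-float (+ (double-float-bits (log x)) 4)))

(defun log-down (x)
  "Conservative lower bound for (log x), under the same 3-ULP assumption."
  (declare (type double-float x))
  (bits-double-float (- (double-float-bits (log x)) 4)))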
I could go ahead and use the building blocks above (ULP nudging for directed rounding) to directly implement Robbins’s criterion,
\[\log {n \choose a} + a \log p + b\log (1-p) + \log(n+1) \leq \log\varepsilon,\]
with Robbins’s factorial approximation,
\[\log {n \choose a} < -\log\sqrt{2\pi} + \left(n + \frac{1}{2}\right)\log n - n +\frac{1}{12n} - \left(a + \frac{1}{2}\right)\log a + a - \frac{1}{12a+1} - \left(b + \frac{1}{2}\right)\log b + b - \frac{1}{12b+1}.\]
However, even in the log domain, there’s a lot of cancellation: we’re taking the difference of relatively large numbers to find a small result. It’s possible to avoid that by reassociating some of the terms above, e.g., for \(a\):
\[\left(a + \frac{1}{2}\right) \log a + a - a \log p = \frac{\log a}{2} + a (\log a + 1 - \log p).\]
Instead, I’ll just brute force things (again) with Kahan summation. Shewchuk’s presentation in Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates highlights how the only step where we may lose precision to rounding is when we add the current compensation term to the new summand. We can implement Kahan summation with directed rounding in only that one place: all the other operations are exact!
[Listing omitted: compensated summation with one conservatively rounded addition, used to over- or under-approximate sums of terms.]
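As an illustration only (this is not the post’s listing, and the real code is more careful), here is one way to sum terms while guaranteeing an upper bound: use Knuth’s TwoSum to track the accumulated rounding error exactly, and nudge the single lossy addition upward.

(defun next-up (x)
  "Next representable double above X, via the double-float-bits encoding."
  (declare (type double-float x))
  (bits-double-float (1+ (double-float-bits x))))

(defun two-sum (a b)
  "Knuth's TwoSum: return S and ERR such that A + B = S + ERR exactly."
  (declare (type double-float a b))
  (let* ((s (+ a b))
         (bv (- s a))
         (av (- s bv))
         (be (- b bv))
         (ae (- a av)))
    (values s (+ ae be))))

(defun sum-up (terms)
  "Upper bound for the sum of the double floats in TERMS."
  (let ((sum 0d0)
        (err 0d0))   ; exact running error of SUM
    (dolist (term terms)
      ;; the only lossy step: fold the accumulated error into the new term,
      ;; rounding it up so the invariant (exact sum <= sum + err) holds.
      (multiple-value-setq (sum err)
        (two-sum sum (next-up (+ err term)))))
    (next-up (+ sum err))))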
We need one last thing to implement \(\log {n \choose a}\), and then Robbins’s confidence sequence: a safely rounded floating-point approximation of \(\log \sqrt{2 \pi}\). I precomputed one with computable-reals:
CL-USER> (computable-reals:log-r
          (computable-reals:sqrt-r computable-reals:+2pi-r+))
+0.91893853320467274178...
CL-USER> (computable-reals:ceiling-r
          (computable-reals:*r * (ash 1 53)))
8277062471433908
-0.65067431749790398594...
CL-USER> (* 8277062471433908 (expt 2d0 -53))
0.9189385332046727d0
CL-USER> (computable-reals:-r (rational *)
                              ***)
+0.00000000000000007224...
We can safely replace \(\log\sqrt{2\pi}\) with 0.9189385332046727d0, or, equivalently, (scale-float 8277062471433908.0d0 -53), for an upper bound. If we wanted a lower bound, we could decrement the integer significand by one.
[Listing omitted: robbins-log-choose, a conservative approximation of \(\log {n \choose a}\) built from the pieces above.]
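The listing isn’t preserved here; as a rough sketch of the approximation (ignoring the directed-rounding machinery, so it is only approximately conservative, and handling s = 0 or s = n exactly), one might write:

(defconstant +log-sqrt-2pi+ 0.9189385332046727d0
  "Upper bound for log(sqrt(2 pi)), from the computable-reals session above.")

(defun robbins-log-factorial-high (n)
  "Upper bound for (log n!) from Robbins's double inequality."
  (let ((n (float n 1d0)))
    (+ +log-sqrt-2pi+ (* (+ n 0.5d0) (log n)) (- n) (/ 1d0 (* 12d0 n)))))

(defun robbins-log-factorial-low (n)
  "Lower bound for (log n!)."
  (let ((n (float n 1d0)))
    (+ +log-sqrt-2pi+ (* (+ n 0.5d0) (log n)) (- n) (/ 1d0 (+ (* 12d0 n) 1d0)))))

(defun robbins-log-choose (n s)
  "Over-approximate log(n choose s)."
  (if (or (= s 0) (= s n))
      0d0                               ; log(1) is exact
      (- (robbins-log-factorial-high n)
         (robbins-log-factorial-low s)
         (robbins-log-factorial-low (- n s)))))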
We can quickly check against an exact implementation with computable-reals and a brute-force factorial.
CL-USER> (defun cr-log-choose (n s)
           (computable-reals:-r
            (computable-reals:log-r (alexandria:factorial n))
            (computable-reals:log-r (alexandria:factorial s))
            (computable-reals:log-r (alexandria:factorial (- n s)))))
CR-LOG-CHOOSE
CL-USER> (computable-reals:-r (rational (robbins-log-choose 10 5))
                              (cr-log-choose 10 5))
+0.00050526703375914436...
CL-USER> (computable-reals:-r (rational (robbins-log-choose 1000 500))
                              (cr-log-choose 1000 500))
+0.00000005551513197557...
CL-USER> (computable-reals:-r (rational (robbins-log-choose 1000 5))
                              (cr-log-choose 1000 5))
+0.00025125559085509706...
That’s not obviously broken: the error is pretty small, and always positive.
Given a function to over-approximate log-choose, the Confidence Sequence Method’s stopping criterion is straightforward.
[Listing omitted: the csm function, which evaluates the stopping criterion.]
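A bare-bones version of that criterion, without the directed-rounding bookkeeping, might look like the following sketch; the argument order matches the csm:csm calls below, and it returns the decision plus the left-hand side of the log-domain inequality as a second value.

(defun csm (n alpha s log-eps)
  "Can we stop after N trials and S successes, for threshold success rate
   ALPHA and LOG-EPS = (log epsilon)?  Returns the decision and the
   left-hand side of the log-domain criterion."
  (let ((log-level (+ (robbins-log-choose n s)
                      (* s (log alpha))
                      (* (- n s) (log (- 1 alpha)))
                      (log (+ n 1d0)))))
    (values (<= log-level log-eps) log-level)))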
The other, much harder, part is computing credible (Bayesian) intervals for the Beta distribution. I won’t go over the code, but the basic strategy is to invert the CDF, a monotonic function, by bisection^{3}, and to assume we’re looking for improbable (\(\mathrm{cdf} < 0.5\)) thresholds. This assumption lets us pick a simple hypergeometric series that is normally useless, but converges well for \(x\) that correspond to such small cumulative probabilities; when the series converges too slowly, it’s always conservative to assume that \(x\) is too central (not extreme enough).
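The inversion step itself is just bisection on a monotone function; a generic sketch (hypothetical names, and none of the conservative tweaks or the series evaluation) looks like this:

(defun invert-increasing (f target &key (lo 0d0) (hi 1d0) (bits 53))
  "Find a bracket around x such that (funcall f x) = TARGET, assuming F is
   monotonically increasing on [LO, HI].  Each iteration adds about one bit
   of precision."
  (dotimes (i bits (values lo hi))
    (let ((mid (+ lo (* 0.5d0 (- hi lo)))))
      (if (< (funcall f mid) target)
          (setf lo mid)
          (setf hi mid)))))

For example, (invert-increasing (lambda (x) (* x x)) 2d0 :lo 1d0 :hi 2d0) brackets \(\sqrt{2}\).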
That’s all we need to demo the code. Looking at the distribution of fill rates for the 1000 bins @ 30K ball/bin facet, it looks like we almost always hit at least 97.5% global density, let’s say with probability at least 98%. We can ask the CSM to tell us when we have enough data to confirm or disprove that hypothesis, with a 0.1% false positive rate.
Instead of generating more data on demand, I’ll keep things simple and prepopulate a list with new independently observed fill rates.
CL-USER> (defparameter *observations* '(0.978518900
                                        0.984687300
                                        0.983160833
                                        [...]))
CL-USER> (defun test (n)
           (let ((count (count-if (lambda (x) (>= x 0.975))
                                  *observations*
                                  :end n)))
             (csm:csm n 0.98d0 count (log 0.001d0))))
CL-USER> (test 10)
NIL
2.1958681996231784d0
CL-USER> (test 100)
NIL
2.5948497850893184d0
CL-USER> (test 1000)
NIL
3.0115331544604658d0
CL-USER> (test 2000)
NIL
4.190687115879456d0
CL-USER> (test 4000)
T
-17.238559826956475d0
We can also use the inverse Beta CDF to get a 99.9% credible interval. After 4000 trials, we found 3972 successes.
CL-USER> (count-if (lambda (x) (>= x 0.975))
                   *observations*
                   :end 4000)
3972
These values give us the following lower and upper bounds on the 99.9% CI.
CL-USER> (csm:beta-icdf 3972 (- 4000 3972) 0.001d0)
0.9882119750976562d0
1.515197753898523d-5
CL-USER> (csm:beta-icdf 3972 (- 4000 3972) 0.001d0 t)
0.9963832682169742d0
2.0372679238045424d-13
And we can even reuse and extend the Beta proportion code from earlier to generate this embeddable SVG report.
There’s one small problem with the sample usage above: if we compute the stopping criterion with a false positive rate of 0.1%, and do the same for each end of the credible interval, our total false positive (error) rate might actually be 0.3%! The next section will address that, and the equally important problem of estimating power.
It’s not always practical to generate data forever. For example, we might want to bound the number of iterations we’re willing to waste in an automated testing script. When there is a bound on the sample size, the CSM is still correct, just conservative.
We would then like to know the probability that the CSM will stop
successfully when the underlying success rate differs from the
threshold rate \(p\) (alpha
in the code). The problem here is
that, for any bounded number of iterations, we can come up with an
underlying success rate so close to \(p\) (but still different) that
the CSM can’t reliably distinguish between the two.
If we want to be able to guarantee any termination rate, we need two thresholds: the CSM will stop whenever it’s likely that the underlying success rate differs from either of them. The hardest probability to distinguish from both thresholds is close to the midpoint between them.
With two thresholds and the credible interval, we’re running three tests in parallel. I’ll apply a Bonferroni correction, and use \(\varepsilon / 3\) for each of the two CSM tests, and \(\varepsilon / 6\) for each end of the CI.
That logic is encapsulated in csm-driver. We only have to pass a success value generator function to the driver. In our case, the generator is itself a call to csm-driver, with fixed thresholds (e.g., 96% and 98%), and a Bernoulli sampler (e.g., return T with probability 97%). We can see if the driver returns successfully and correctly at each invocation of the generator function, with the parameters we would use in production, and recursively compute an estimate for that procedure’s success rate with CSM. The following expression simulates a CSM procedure with thresholds at 96% and 98%, the (usually unknown) underlying success rate in the middle, at 97%, a false positive rate of at most 0.1%, and an iteration limit of ten thousand trials. We pass that simulation’s result to csm-driver, and ask whether the simulation’s success rate differs from 99%, while allowing one in a million false positives.
[Listing omitted: the nested csm-driver expression that runs this simulation.]
We find that yes, we can expect the 96%/98%/0.1% false positive/10K iterations setup to succeed more than 99% of the time. The code above is available as csm-power, with a tighter outer false positive rate of 1e-9. If we only allow 1000 iterations, csm-power quickly tells us that, with one CSM success in 100 attempts, we can expect the CSM success rate to be less than 99%.
CL-USER> (csm:csm-power 0.97d0 0.96d0 1000 :alpha-hi 0.98d0 :eps 1d-3 :stream *standard-output*)
1 0.000e+0 1.250e-10 10.000e-1 1.699e+0
10 0.000e+0 0.000e+0 8.660e-1 1.896e+1
20 0.000e+0 0.000e+0 6.511e-1 3.868e+1
30 0.000e+0 0.000e+0 5.099e-1 5.851e+1
40 2.500e-2 5.518e-7 4.659e-1 7.479e+1
50 2.000e-2 4.425e-7 3.952e-1 9.460e+1
60 1.667e-2 3.694e-7 3.427e-1 1.144e+2
70 1.429e-2 3.170e-7 3.024e-1 1.343e+2
80 1.250e-2 2.776e-7 2.705e-1 1.542e+2
90 1.111e-2 2.469e-7 2.446e-1 1.741e+2
100 1.000e-2 2.223e-7 2.232e-1 1.940e+2
100 iterations, 1 successes (false positive rate < 1.000000e-9)
success rate p ~ 1.000000e-2
confidence interval [2.223495e-7, 0.223213 ]
p < 0.990000
max inner iteration count: 816
T
T
0.01d0
100
1
2.2234953205868331d-7
0.22321314110840665d0
Until now, I’ve only used the Confidence Sequence Method (CSM) for Monte Carlo simulation of phenomena that are naturally seen as boolean success / failures processes. We can apply the same CSM to implement an exact test for null hypothesis testing, with a bit of resampling magic.
Looking back at the balls and bins grid, the average fill rate seems to be slightly worse for 100 bins @ 60K ball/bin, than for 1000 bins @ 128K ball/bin. How can we test that with the CSM?
First, we should get a fresh dataset for the two setups we wish to compare.
CL-USER> (defparameter *100-60k* #(0.988110167
                                   0.990352500
                                   0.989940667
                                   0.991670667
                                   [...]))
CL-USER> (defparameter *1000-128k* #(0.991456281
                                     0.991559578
                                     0.990970109
                                     0.990425805
                                     [...]))
CL-USER> (alexandria:mean *100-60k*)
0.9897938
CL-USER> (alexandria:mean *1000-128k*)
0.9909645
CL-USER> (- * **)
0.0011706948
The mean for 1000 bins @ 128K ball/bin is slightly higher than that
for 100 bins @ 60k ball/bin. We will now simulate the null hypothesis
(in our case, that the distributions for the two setups are
identical), and determine how rarely we observe a difference of
0.00117
in means. I only use a null hypothesis where the
distributions are identical for simplicity; we could use the same
resampling procedure to simulate distributions that, e.g., have
identical shapes, but one is shifted right of the other.
In order to simulate our null hypothesis, we want to be as close to the test we performed as possible, with the only difference being that we generate data by reshuffling from our observations.
CL-USER> (defparameter *resampling-data* (concatenate 'simple-vector *100-60k* *1000-128k*))
*RESAMPLING-DATA*
CL-USER> (length *100-60k*)
10000
CL-USER> (length *1000-128k*)
10000
The two observation vectors have the same size, 10000 values; in general, that’s not always the case, and we must make sure to replicate the sample sizes in the simulation. We’ll generate our simulated observations by shuffling the *resampling-data* vector, and splitting it in two subvectors of ten thousand elements.
CL-USER> (let* ((shuffled (alexandria:shuffle *resampling-data*))
                (60k (subseq shuffled 0 10000))
                (128k (subseq shuffled 10000)))
           (- (alexandria:mean 128k) (alexandria:mean 60k)))
6.2584877e-6
We’ll convert that to a truth value by comparing the difference of simulated means with the difference we observed in our real data, \(0.00117\ldots\), and declare success when the simulated difference is at least as large as the actual one. This approach gives us a one-sided test; a two-sided test would compare the absolute values of the differences.
CL-USER> (csm:csm-driver
          (lambda (_)
            (declare (ignore _))
            (let* ((shuffled (alexandria:shuffle *resampling-data*))
                   (60k (subseq shuffled 0 10000))
                   (128k (subseq shuffled 10000)))
              (>= (- (alexandria:mean 128k) (alexandria:mean 60k))
                  0.0011706948)))
          0.005 1d-9 :alpha-hi 0.01 :stream *standard-output*)
1 0.000e+0 7.761e-11 10.000e-1 2.967e1
10 0.000e+0 0.000e+0 8.709e-1 9.977e1
20 0.000e+0 0.000e+0 6.577e-1 1.235e+0
30 0.000e+0 0.000e+0 5.163e-1 1.360e+0
40 0.000e+0 0.000e+0 4.226e-1 1.438e+0
50 0.000e+0 0.000e+0 3.569e-1 1.489e+0
60 0.000e+0 0.000e+0 3.086e-1 1.523e+0
70 0.000e+0 0.000e+0 2.718e-1 1.546e+0
80 0.000e+0 0.000e+0 2.427e-1 1.559e+0
90 0.000e+0 0.000e+0 2.192e-1 1.566e+0
100 0.000e+0 0.000e+0 1.998e-1 1.568e+0
200 0.000e+0 0.000e+0 1.060e-1 1.430e+0
300 0.000e+0 0.000e+0 7.207e-2 1.169e+0
400 0.000e+0 0.000e+0 5.460e-2 8.572e1
500 0.000e+0 0.000e+0 4.395e-2 5.174e1
600 0.000e+0 0.000e+0 3.677e-2 1.600e1
700 0.000e+0 0.000e+0 3.161e-2 2.096e1
800 0.000e+0 0.000e+0 2.772e-2 5.882e1
900 0.000e+0 0.000e+0 2.468e-2 9.736e1
1000 0.000e+0 0.000e+0 2.224e-2 1.364e+0
2000 0.000e+0 0.000e+0 1.119e-2 5.428e+0
NIL
T
0.0d0
2967
0
0.0d0
0.007557510165262294d0
We tried to replicate the difference 2967 times, and did not succeed even once. The CSM stopped us there, and we find a CI for the probability of observing our difference, under the null hypothesis, of [0, 0.007557] (i.e., \(p < 0.01\)).
We can also test for a lower \(p\)-value by changing the thresholds and running the simulation more times (around thirty thousand iterations for \(p < 0.001\)).
This experiment lets us conclude that the difference in mean fill rate between 100 bins @ 60K ball/bin and 1000 @ 128K is probably not due to chance: it’s unlikely that we would observe a difference that extreme between data sampled from the same distribution. In other words, “I’m confident that the fill rate for 1000 bins @ 128K ball/bin is greater than for 100 bins @ 60K ball/bins, because it would be highly unlikely to observe a difference in means that extreme if they had the same distribution (\(p < 0.01\))”.
In general, we can use this exact test when we have two sets of observations, \(X\sb{0}\) and \(Y\sb{0}\), and a statistic \(f\sb{0} = f(X\sb{0}, Y\sb{0})\), where \(f\) is a pure function (the extension to three or more sets of observations is straightforward).
The test lets us determine the likelihood of observing \(f(X, Y) \geq f\sb{0}\) (we could also test for \(f(X, Y) \leq f\sb{0}\)), if \(X\) and \(Y\) were taken from similar distributions, modulo simple transformations (e.g., \(X\)’s mean is shifted compared to \(Y\)’s, or the latter’s variance is double the former’s).
We answer that question by repeatedly sampling without replacement from \(X\sb{0} \cup Y\sb{0}\) to generate \(X\sb{i}\) and \(Y\sb{i}\), such that \(|X\sb{i}| = |X\sb{0}|\) and \(|Y\sb{i}| = |Y\sb{0}|\) (e.g., by shuffling a vector and splitting it in two). We can apply any simple transformation here (e.g., increment every value in \(Y\sb{i}\) by \(\Delta\) to shift its mean by \(\Delta\)). Finally, we check if \(f(X\sb{i}, Y\sb{i}) \geq f\sb{0} = f(X\sb{0}, Y\sb{0})\); if so, we return success for this iteration, otherwise failure.
The loop above is a Bernoulli process that generates independent, identically distributed (assuming the random sampling is correct) truth values, and its success rate is equal to the probability of observing a value for \(f\) “as extreme” as \(f\sb{0}\) under the null hypothesis. We use the CSM with false positive rate \(\varepsilon\) to know when to stop generating more values and compute a credible interval for the probability under the null hypothesis. If that probability is low (less than some predetermined threshold, like \(\alpha = 0.001\)), we infer that the null hypothesis does not hold, and declare that the difference in our sample data points at a real difference in distributions. If we do everything correctly (cough), we will have implemented an Atlantic City procedure that fails with probability \(\alpha + \varepsilon\).
Personally, I often just set the threshold and the false positive rate unreasonably low and handwave some Bayes.
I pushed the code above, and much more, to github, in Common Lisp, C, and Python (probably Py3, although 2.7 might work). Hopefully anyone can run with the code and use it not only to test SLO-type properties, but also to answer more general questions, with an exact test. I’d love to have ideas or contributions on the usability front.
I have some
throwaway code in attic/
,
which I used to generate the SVG in this post, but it’s not great. I
also feel like I can do something to make it easier to stick the logic
in shell scripts and continuous testing pipelines.
When I passed around a first draft for this post, many readers who could have used the CSM got stuck on the process of moving from mathematical expressions to computer code; not just how to do it, but, more fundamentally, why we can’t just transliterate Greek to C or CL. I hope this revised post is clearer. Also, I hope it’s clear that the reason I care so much about not introducing false positives via rounding isn’t that I believe they’re likely to make a difference, but simply that I want peace of mind with respect to numerical issues; I really don’t want to be debugging some issue in my tests and have to wonder if it’s all just caused by numerical errors.
The reason I care so much about making sure users can understand what the CSM code does (and why it does what it does) is that I strongly believe we should minimise dependencies whose inner workings we’re unable to (legally) explore. Every abstraction leaks, and leakage is particularly frequent in failure situations. We may not need to understand magic if everything works fine, but everything breaks eventually, and that’s when expertise is most useful. When shit’s on fire, we must be able to break the abstraction and understand how the magic works, and how it fails.
This post only tests ideal SLO-type properties (and regular null hypothesis tests translated to SLO properties), properties of the form “I claim that this indicator satisfies $PREDICATE x% of the time, with false positive rate y%” where the indicator’s values are independent and identically distributed.
The last assumption is rarely truly satisfied in practice. I’ve seen an interesting choice, where the service level objective is defined in terms of a sample of production requests, which can be replayed, shuffled, etc. to ensure i.i.d.-ness. If the nature of the traffic changes abruptly, the SLO may not be representative of behaviour in production; but, then again, how could the service provider have guessed the change was about to happen? I like this approach because it is amenable to predictive statistical analysis, and incentivises communication between service users and providers, rather than users assuming the service will gracefully handle radically new crap being thrown at it.
Even if we have a representative sample of production, it’s not true that the service level indicators for individual requests are distributed identically. There’s an easy fix for the CSM and our credible intervals: generate i.i.d. sets of requests by resampling (e.g., shuffle the requests sample) and count successes and failures for individual requests, but only test for CSM termination after each resampled set.
On a more general note, I see the Binomial and Exact tests as instances of a general pattern that avoids intuitive functional decompositions that create subproblems that are harder to solve than the original problem. For example, instead of trying to directly determine how frequently the SLI satisfies some threshold, it’s natural to first fit a distribution on the SLI, and then compute percentiles on that distribution. Automatically fitting an arbitrary distribution is hard, especially with the weird outliers computer systems spit out. Reducing to a Bernoulli process before applying statistics is much simpler. Similarly, rather than coming up with analytical distributions in the Exact test, we brute-force the problem by resampling from the empirical data. I have more examples from online control systems… I guess the moral is to be wary of decompositions where internal subcomponents generate intermediate values that are richer in information than the final output.
Thank you Jacob, Ruchir, Barkley, and Joonas for all the editing and restructuring comments.
Proportions are unscaled probabilities that don’t have to sum or integrate to 1. Using proportions instead of probabilities tends to make calculations simpler, and we can always get a probability back by rescaling a proportion by the inverse of its integral.↩
Instead of a \(\mathrm{Beta}(a+1, b+1)\), they tend to bound with a \(\mathrm{Beta}(a, b)\). The difference is marginal for double-digit \(n\).↩
I used the bisection method instead of more sophisticated ones with better convergence, like Newton’s method or the derivative-free Secant method, because bisection already adds one bit of precision per iteration, only needs a predicate that returns “too high” or “too low,” and is easily tweaked to be conservative when the predicate declines to return an answer.↩
The question is interesting because stream processing in constant space is a subset of L (or FL), and thus probably not P-complete, let alone Turing complete. Having easily characterisable subsets of stream processing that can be implemented in constant space would be a boon for the usability of stream DSLs.
I think I find this academic trope as suspicious as @DRMacIver does, so I have mixed feelings about the fact that this one still feels true seven years later.
Is it just me or do impossibility theorems which claim "these three obviously desirable properties cannot simultaneously be satisfied" always include at least one obviously undesirable or at least suspicious property?
— David R. MacIver (@DRMacIver) June 19, 2018
The main reason I believe in this conjecture is the following example, F(S(X), X), where S is the function that takes a stream and outputs every other value. Or, more formally, \(F\sb{i} = f(X\sb{2i}, X\sb{i})\).
Let’s say X
is some stream of values that can’t be easily
recomputed (e.g., each output value is the result of a slow
computation). How do we then compute F(S(X), X)
without either
recomputing the stream X
, or buffering an unbounded amount of past
values from that stream? I don’t see a way to do so, not just in any
stream processing DSL (domain specific language), but also in any
general purpose language.
For me, the essence of the problem is that the two inputs to F are out of sync with respect to the same source of values, X: one consumes two values of X per invocation of F, and the other only one. This issue could also occur if we forced stream transducers (processing nodes) to output a fixed number of values at each invocation: let S repeat each value of X twice, i.e., interleave X with X (\(F\sb{i} = f(X\sb{\lfloor i / 2\rfloor}, X\sb{i})\)).
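To make the buffering problem concrete, here is a hypothetical sketch (with made-up names) of a direct implementation of \(F\sb{i} = f(X\sb{2i}, X\sb{i})\) that pulls each value of X exactly once: by the time it reads \(X\sb{2i}\), it still needs \(X\sb{i}, \ldots, X\sb{2i}\), so its buffer grows linearly with the amount of input consumed.

(defun f-of-s-x (x-generator f n)
  "Return the list (F_0 ... F_{n-1}) where F_i = (funcall f X_{2i} X_i), and
   X-GENERATOR is a thunk yielding successive values of the stream X.  The
   buffer keeps every value seen so far; even a smarter implementation must
   retain X_i ... X_{2i}, i.e., a linearly growing window."
  (let ((buffer (make-array 0 :adjustable t :fill-pointer t)))
    (loop for i below (* 2 n)
          for x = (funcall x-generator)   ; pull the next value of X, once
          do (vector-push-extend x buffer)
          when (evenp i)
            collect (funcall f x (aref buffer (floor i 2))))))

For instance, (f-of-s-x (let ((i -1)) (lambda () (incf i))) #'list 4) returns ((0 0) (2 1) (4 2) (6 3)).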
Forcing each invocation of a transducer to always produce exactly one value is one way to rule out this class of stream processing network. Two other common options are to forbid either forks (everything is single-use, or subtrees are copied and recomputed for each reuse) or joins (only single-input stream processing nodes).
I don’t think this turtle-and-hare desynchronisation problem is a weakness in stream DSLs; I only see a reasonable task that can’t be performed in constant space. Given the existence of such tasks, I’d like to see stream processing DSLs be explicit about the tradeoffs they make to balance performance guarantees, expressiveness, and usability, especially when it comes to the performance model.