Paul Khuong: some Lisp

Fixed Points and Strike Mandates

Feb 19th, 2012

Many tasks in compilation and program analysis (in symbolic computation in general, I suppose) amount to finding solutions to systems of the form \(x = f(x)\). However, when asked to define algorithms to find such fixed points, we rarely stop and ask “which fixed point are we looking for?”

In practice, we tend to be interested in fixed points of monotone functions: given a partial order \((\prec)\), we have \(a \prec b \Rightarrow f(a) \prec f(b)\). Now, in addition to being a fairly reasonable hypothesis, this condition usually lets us exploit Tarski’s fixed point theorem. If the domain of \(f\) (with \(\prec\)) forms a complete lattice, so does the set of fixed points of \(f\)! As a corollary, there is then exactly one least and one greatest fixed point under \(\prec\).
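In fact, the theorem gives explicit characterisations of the two extremes (writing \(\preceq\) for the non-strict order, \(\bigwedge\) for meet and \(\bigvee\) for join):

\[
\operatorname{lfp}(f) = \bigwedge\{x : f(x) \preceq x\},
\qquad
\operatorname{gfp}(f) = \bigvee\{x : x \preceq f(x)\}.
\]

The least fixed point is the meet of all the prefixed points, and the greatest one the join of all the postfixed points.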

This is extremely useful, because we can usually define natural meet and join operations, and enjoy a complete lattice. For example, for a domain that’s the power set of a given set, we can use \(\subseteq\) as the order relation, \(\cup\) as join, and \(\cap\) as meet. However, what I find interesting to note is that, when we don’t pay attention to which fixed point we wish to find, humans seem to consistently develop algorithms that converge to the least or greatest one, depending on the problem. It’s as though we all have a common blind spot covering one of the extreme fixed points.
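On a finite power set, we can even compute either extreme fixed point by naive (Kleene-style) iteration. Here’s a minimal Common Lisp sketch, with sets represented as lists; the helper names are mine, not from any library:

;; Sets as lists; two lists are equal as sets when each is a
;; subset of the other.
(defun set-equal (a b)
  (and (subsetp a b) (subsetp b a)))

(defun fixed-point (f start &key (test #'set-equal))
  "Iterate F from START until the result stops changing."
  (let ((next (funcall f start)))
    (if (funcall test next start)
        start
        (fixed-point f next :test test))))

For a monotone \(f\) on a finite domain, iterating from the empty set climbs to the least fixed point, while iterating from the whole universe descends to the greatest one.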

A simple example is dead value (useless variable) elimination. When I ask people how they’d identify such variables in a program, the naïve solutions tend to be very similar. They exploit the observation that a value is useless if it’s only used to compute values that are themselves useless. The routines start out with every value live (used), and prune away useless values, until there’s nothing left to remove.
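A sketch of such a routine, reusing the fixed-point helper above. The program representation here is a made-up stand-in for real IR: an alist mapping each value to the values its definition reads, plus a set of root values (returns and stores).

(defun prune-step (defs roots live)
  "Keep V when it is a root, or when some still-live value reads it."
  (remove-if-not
   (lambda (v)
     (or (member v roots)
         (some (lambda (u) (member v (cdr (assoc u defs)))) live)))
   live))

(defun dead-value-prune (defs roots)
  "Start with every value live, then prune until nothing changes."
  (fixed-point (lambda (live) (prune-step defs roots live))
               (mapcar #'car defs)))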

These algorithms converge to solutions that are correct, but suboptimal (except for cycle-free code). We wish to identify as many useless values as possible, to eliminate as many computations as possible. Yet, if we start by assuming that all values are live, our algorithm will fail to identify some obviously-useless values, like x in:

for (...)
  x = x

We could keep adding more special cases. However, the correct (and simplest) solution is to try to identify live values, rather than dead ones. A value is live if it’s used to compute a live value. Moreover, return values and writes to memory are always live. Our routine now starts out by assuming that only those values are live, and adjoins live values as it finds them, until there’s nothing left to add.
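The same sketch, flipped around: start from the roots and grow.

(defun grow-step (defs roots live)
  "Adjoin every value read by the definition of a live value."
  (remove-duplicates
   (append roots live
           (mapcan (lambda (v) (copy-list (cdr (assoc v defs)))) live))))

(defun live-values (defs roots)
  "Start from the roots alone, then adjoin until nothing changes."
  (fixed-point (lambda (live) (grow-step defs roots live)) roots))

;; The loop above, where x is only read to compute x itself:
(defparameter *defs* '((x . (x)) (y . ()) (ret . (y))))
(defparameter *roots* '(ret))
;; (dead-value-prune *defs* *roots*) => (X Y RET)  ; x stays "live"
;; (live-values *defs* *roots*)      => (RET Y)    ; x correctly dead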

In this case, the intuitive solution converges to the greatest fixed point, but we’re looking for the least fixed point. Setting the right initial value ensures convergence to the right fixed point.

Other common instances of this pattern include reference counting instead of marking (tracing), and type propagation that starts by assigning the top type to all values (as SBCL does).

I recently found a use for fixed point computations outside of math and computer science.

Most university or CEGEP student unions in Québec will vote (or already have voted) on strike mandates to help organize protests against rising university tuition fees this winter and spring. There are hundreds of such unions across the province representing, in total, around four hundred thousand students. The vast majority of these unions comprise a couple hundred (or fewer) students, and many feel it would be counter-productive for only a tiny number of students to be on strike. Thus, strike mandates commonly include conditions regarding the minimal number of other students who also hold strike mandates, along with additional lower bounds on the number of unions and universities or colleges involved. As far as I know, all the mandates adopted so far are monotone: if they are satisfied by a set of striking unions, they are also satisfied by all of its supersets.

Tarski’s theorem applies (again, with \((\subseteq, \cup, \cap)\) on the power set of the set of student unions). Which fixed point are we looking for?

It’s clear to me that we’re looking for the fixed point with the largest set of striking unions. In some situations, the least fixed point could trivially be the empty set (or contain only the unions that did not adopt any lower bound). Moreover, the mandates are usually presented with an explanation to the effect that, if unions representing at least \(n_0\) students adopt the same mandate, then all unions that have adopted the mandate will go on strike simultaneously.

I asked fellow graduate students in computer science to sketch an algorithm to determine which unions should go on strike given their mandates; they started with the set of student unions currently on strike, and adjoined unions for which all the conditions were met. Such algorithms converge toward the least fixed point. For example, there could be two unions, each comprising 5,000 students, with the same strike floor of 10,000 students, and these algorithms would leave both unions deadlocked, each waiting for the other to go on strike.

Instead, we should start by assuming that all the unions (with a strike mandate) are on strike, and iteratively remove unions whose conditions are not all met, until we hit the greatest fixed point. I’m fairly sure this will end up being a purely theoretical concern, but it’s a pretty neat case of abstract mathematics helping us interpret a real-world situation.
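To make the contrast concrete, here’s the same kind of sketch for the deadlocked example above. The data and the encoding are made up: a union is a (name students floor) triple, where floor is the minimum total number of striking students the mandate demands.

(defparameter *unions* '((a 5000 10000) (b 5000 10000)))

(defun strike-step (striking)
  "Keep the unions whose floor is met by the current striking total."
  (let ((total (reduce #'+ striking :key #'second)))
    (remove-if-not (lambda (u) (>= total (third u))) striking)))

;; Greatest fixed point: assume everyone strikes, then remove.
;; (fixed-point #'strike-step *unions*)
;;   => ((A 5000 10000) (B 5000 10000))
;; Least fixed point: grow from the empty set, and deadlock at once.
;; (fixed-point (lambda (striking)
;;                (remove-if-not
;;                 (lambda (u) (>= (reduce #'+ striking :key #'second)
;;                                 (third u)))
;;                 *unions*))
;;              '())
;;   => NIL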

This pattern of intuitively converging toward a suboptimal solution seems to come up a lot when computing fixed points. It’s not necessarily a bad choice: conservative initial values tend to lead to faster convergence, and often have the property that intermediate solutions are always correct (feasible). When we need quick results, it may make sense to settle for suboptimal solutions. However, it ought to be a deliberate choice, rather than a consequence of failing to consider other possibilities.
