The Promise of Polynomial-based Local Search to Boost Boolean MAX-CSP Solvers

This paper presents the P-Optimal algorithms from a local search perspective, proposes new requirements for MAX-CSP solvers, and reports encouraging experimental results showing that the preprocessing technique is of practical interest.

```
@INPROCEEDINGS{Evergreen:lscs-07,
AUTHOR = "Christine D. Hang and Ahmed Abdelmeged and Daniel Rinehart and Karl J. Lieberherr",
TITLE = "{The Promise of Polynomial-based Local Search to Boost Boolean MAX-CSP Solvers}",
YEAR = 2007,
booktitle = "{Proceedings of Fourth International Workshop on Local Search Techniques in Constraint Satisfaction, CP2007, Providence, Rhode Island, September 2007}",
MONTH = "September"
}
```

## Software available

There is an interesting connection to: SATzilla-07: The Design and Analysis of an Algorithm Portfolio for SAT. L. Xu, F. Hutter, H. Hoos, K. Leyton-Brown. Principles and Practice of Constraint Programming (CP), Providence, 2007.

SATzilla-07 advocates a per-instance choice of algorithm, using machine learning to make the choice. The Evergreen project uses a similar per-instance approach, but one based on mathematical analysis rather than machine learning.

Given a formula (an instance), our algorithm determines the type of the formula (consisting of several "features"). This type determines a polynomial that is used to find an approximation.

While empirical hardness models use the per-instance approach to predict running time, we use it to predict the existence of a solution of a given quality.

## Definition of maximal

The definition of maximal(F,J), where F is a formula and J an assignment, can be made more intuitive. The notion of using a biased coin with bias p to generate assignments is a well-understood concept.

maximal(F,J) iff, for n-map(F,J), the expected value of the fraction of satisfied constraints under a coin with bias p attains its maximum at p=0. We use the shorter phrase: maximal(F,J) iff for n-map(F,J) the maximum biased coin has p=0.

It is also useful to have a single argument predicate: maximal(F) iff for F the maximum biased coin has p=0.
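To make the definition concrete, here is a minimal numeric sketch (our own clause representation and grid check, not code from the paper): a clause is a list of signed integers, the mean polynomial is the expected fraction of satisfied clauses under a bias-p coin, and maximal(F) is tested by sampling that polynomial on a grid over [0,1].

```python
def mean_poly(clauses, p):
    """Expected fraction of satisfied clauses when every variable is
    independently set to true with probability p.  A clause is a list of
    non-zero integers: v means variable v, -v its negation."""
    total = 0.0
    for clause in clauses:
        pos = sum(1 for lit in clause if lit > 0)
        neg = len(clause) - pos
        # The clause fails only if all positive literals come up false
        # and all negative literals come up true.
        total += 1.0 - (1 - p) ** pos * p ** neg
    return total / len(clauses)

def is_maximal(clauses, grid=101):
    """maximal(F): the mean polynomial attains its maximum over [0,1]
    at p = 0 (checked numerically on a grid)."""
    m0 = mean_poly(clauses, 0.0)
    return all(mean_poly(clauses, i / (grid - 1)) <= m0 + 1e-12
               for i in range(grid))
```

For example, for F = {!x1, !x1 or !x2} the mean polynomial is (2 - p - p^2)/2, which is maximal at p=0, while for F = {x1} it is p, maximal at p=1.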

### Elevator speech for ELS(F)

Our local search / preprocessing algorithm starts with a formula F and turns it into an equally satisfiable formula for which the maximum biased coin has p=0.

### Elevator speech for EvergreenPlayer(F)

First we compute the maximum bias for F. Then we iterate the following: the algorithm performs a Shannon decomposition F = x*F[x] + !x*F[!x] and computes the biases for the positive and negative Shannon cofactors F[x] and F[!x] based on the bias for F. If F[x] achieves the better expected value we set x to true, and otherwise to false. We apply the same process to the better cofactor and iterate until all variables are set. This is a linear-time algorithm.
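The greedy pass above can be sketched as follows. This is our own illustration, not the paper's code: clauses are lists of signed integers (with `True` marking an already-satisfied clause), and a grid search over [0,1] stands in for the paper's analytic computation of the cofactor biases.

```python
def cofactor(clauses, lit):
    """Shannon cofactor: set the variable of `lit` so that `lit` holds.
    `True` marks a clause that is already satisfied."""
    out = []
    for c in clauses:
        if c is True or lit in c:
            out.append(True)
        else:
            out.append([l for l in c if l != -lit])  # may become []
    return out

def expected_fraction(clauses, p):
    """Mean polynomial: expected fraction of satisfied clauses when each
    unset variable is independently true with probability p."""
    total = 0.0
    for c in clauses:
        if c is True:
            total += 1.0
        else:
            pos = sum(1 for l in c if l > 0)
            neg = len(c) - pos
            total += 1.0 - (1 - p) ** pos * p ** neg
    return total / len(clauses)

def best_bias(clauses, grid=101):
    """Approximately maximize the mean polynomial over [0, 1]."""
    return max((expected_fraction(clauses, i / (grid - 1)), i / (grid - 1))
               for i in range(grid))

def evergreen_player(clauses, n_vars):
    """One greedy pass: for each variable, keep the cofactor whose mean
    polynomial has the larger maximum, then recurse on it."""
    assignment = {}
    for v in range(1, n_vars + 1):
        pos, neg = cofactor(clauses, v), cofactor(clauses, -v)
        if best_bias(pos)[0] >= best_bias(neg)[0]:
            assignment[v], clauses = True, pos
        else:
            assignment[v], clauses = False, neg
    return assignment
```

On a small formula such as {x1 or x2, !x1 or x3, !x2 or !x3}, the pass returns a complete assignment after one sweep through the variables.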

Evergreen Local Search, ELS(F) iterates EvergreenPlayer(F) combined with an n-map resulting in a formula F' for which maximal(F') holds.

### Stochastic Local Search by Holger Hoos and Thomas Stuetzle

Our current local search algorithm is not a stochastic local search algorithm, but only a local search algorithm that iterates through neighborhoods. It qualifies as an Iterative Improvement (II) algorithm, as described by Hoos and Stuetzle, but without the stochastic property:
```
determine initial candidate solution s;
while s is not a local optimum:
    choose a neighbor s' of s such that g(s') < g(s);
    s := s'
```
We can make our local search algorithm stochastic by using a random permutation of the variables for the successive Shannon decompositions. So in EvergreenPlayer, we use a different variable order each time we go through the loop. This variable reordering does not affect the postcondition that we prove for our algorithm, but it has the effect that upon a restart we will most likely explore a different sequence of increasingly better assignments.
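The restart scheme with random variable orders can be sketched generically, independent of the underlying player. In the sketch below, `run_player` stands in for one greedy pass over a given variable order and `score` for the quality of an assignment; both names are ours, not the paper's.

```python
import random

def stochastic_restarts(n_vars, run_player, score, restarts=10, seed=0):
    """Iterative improvement with random variable orders: each restart
    runs one greedy pass over a fresh permutation of the variables and
    keeps the best assignment seen so far."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        order = list(range(1, n_vars + 1))
        rng.shuffle(order)  # a different variable order on every restart
        assignment = run_player(order)
        if best is None or score(assignment) > score(best):
            best = assignment
    return best
```

Because the postcondition of the player is independent of the variable order, each restart is free to explore a different trajectory through the search space.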

The SLS book also presents variable neighborhood approaches. In the case of the Evergreen Local Search, we analytically choose a most promising neighborhood by maximizing over averages. There are interesting variations of our algorithm. Instead of using the k where the mean polynomial is maximum, we could also search for the k where the mean polynomial is the ith highest, for i=2,3,4 and 5, for example.

### Name changes for the algorithms

The algorithms in this paper already appear in the Journal of Algorithms (1982) paper.
```1982         2007
MAXMEAN  EvergreenPlayer
MAXMEAN* EvergreenLocalSearch
```
What is new in this paper is the implementation and the experimental results. Also new are a randomized version of EvergreenPlayer and its analysis, as well as the reformulation using local search neighborhoods. The paper also postulates two laws that future MAX-CSP algorithms will hopefully follow.