This paper presents the P-Optimal algorithms from a local search perspective, proposes new requirements for MAX-CSP solvers, and reports encouraging experimental results indicating that the preprocessing technique is of practical interest.
@INPROCEEDINGS{Evergreen:lscs-07,
  AUTHOR    = "Christine D. Hang and Ahmed Abdelmeged and Daniel Rinehart and Karl J. Lieberherr",
  TITLE     = "{The Promise of Polynomial-based Local Search to Boost Boolean MAX-CSP Solvers}",
  BOOKTITLE = "{Proceedings of the Fourth International Workshop on Local Search Techniques in Constraint Satisfaction, CP2007, Providence, Rhode Island}",
  YEAR      = 2007,
  MONTH     = "September"
}
Comments by Karl Lieberherr
There is an interesting connection to: SATzilla-07: The Design and Analysis of an Algorithm Portfolio for SAT. L. Xu, F. Hutter, H. Hoos, K. Leyton-Brown. Principles and Practice of Constraint Programming (CP), Providence, 2007.
SATzilla-07 advocates a per-instance choice of algorithm, and it uses machine learning to make that choice. The Evergreen project uses a similar per-instance approach, but one based on mathematical analysis rather than machine learning.
Given a formula (an instance), our algorithm determines the type of the formula (consisting of several "features"). This type determines a polynomial that is used to find an approximation.
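The per-instance idea above can be sketched in code. In this hedged illustration (all names, the two relation kinds, and their polynomials are assumptions, not the paper's actual feature set), the "type" of an instance is the count of each relation kind it contains, each kind contributes a polynomial in p (the probability that a random assignment, setting each variable true independently with probability p, satisfies a constraint of that kind), and the type alone determines the mean polynomial that is then maximized:

```python
# Illustrative relation kinds and their satisfaction polynomials in p.
KIND_POLY = {
    "or2_pos": lambda p: 1 - (1 - p) ** 2,   # (x or y)
    "or2_neg": lambda p: 1 - p * p,          # (not x or not y)
}

def mean_satisfied(type_counts, p):
    """Expected fraction of satisfied constraints at bias p."""
    total = sum(type_counts.values())
    return sum(n * KIND_POLY[k](p) for k, n in type_counts.items()) / total

def best_bias(type_counts, grid=1000):
    """Approximate the maximizing bias by evaluating the polynomial on a grid."""
    return max((mean_satisfied(type_counts, i / grid), i / grid)
               for i in range(grid + 1))

# The type {or2_pos: 3, or2_neg: 1} fixes the polynomial
# (3*(1-(1-p)^2) + (1-p^2)) / 4, which peaks at p = 0.75 with value 0.8125.
value, p = best_bias({"or2_pos": 3, "or2_neg": 1})
print(p, value)   # 0.75 0.8125
```

The point of the sketch: once the type is known, the guaranteed fraction of satisfiable constraints follows analytically, with no training data involved.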
While in empirical hardness models the per-instance approach is used to predict running time, we use it to predict the existence of a solution of a given quality.
maximal(F,J) iff for n-map(F,J) the expected value of the fraction of satisfied constraints for a coin with probability p is maximum for p=0. We use the shorter phrase: maximal(F,J) iff for n-map(F,J) the maximum biased coin has p=0.
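The maximal predicate can be made concrete with a small sketch (a grid-based check, an assumption for illustration; the paper's own test is analytic): evaluate the mean polynomial, the expected fraction of satisfied constraints when each variable is set true independently with probability p, and check that its maximum occurs at p = 0.

```python
def is_maximal(mean_poly, grid=1000):
    """maximal(F): the bias p maximizing the mean polynomial is p = 0.
    Ties are broken toward smaller p, so a constant polynomial counts as maximal."""
    best_i = max(range(grid + 1), key=lambda i: (mean_poly(i / grid), -i))
    return best_i == 0

# A formula with only negative unit clauses has mean polynomial 1 - p,
# maximized at p = 0, so maximal(F) holds.
print(is_maximal(lambda p: 1 - p))   # True
# A formula with only positive unit clauses (mean polynomial p) is not maximal.
print(is_maximal(lambda p: p))       # False
```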
It is also useful to have a single argument predicate: maximal(F) iff for F the maximum biased coin has p=0.
Evergreen Local Search, ELS(F), iterates EvergreenPlayer(F) combined with an n-map, resulting in a formula F' for which maximal(F') holds.
    determine initial candidate solution s;
    while s is not a local optimum:
        choose a neighbor s' of s such that g(s') < g(s);
        s := s'

We can make our local search algorithm stochastic by using a random permutation of the variables for the successive Shannon decompositions. So in EvergreenPlayer, we use a different variable order each time we go through the loop. This variable reordering does not affect the post-condition that we prove for our algorithm, but it has the effect that, upon a restart, we will most likely explore a different sequence of increasingly better assignments.
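The iterative-improvement scheme above can be made runnable. This is a minimal sketch, not the paper's algorithm: the MAX-CSP instance is a toy clause list, g counts unsatisfied clauses, and the neighborhood is the usual 1-flip neighborhood (all of these are assumptions for illustration).

```python
import random

def num_unsatisfied(clauses, s):
    """g(s): clauses with no satisfied literal (negative int = negated variable)."""
    return sum(not any((s[l - 1] if l > 0 else not s[-l - 1]) for l in c)
               for c in clauses)

def iterative_improvement(clauses, n, rng):
    s = [rng.random() < 0.5 for _ in range(n)]        # initial candidate solution
    while True:
        g = num_unsatisfied(clauses, s)
        better = None
        for v in range(n):                            # scan the 1-flip neighborhood
            s[v] = not s[v]
            if num_unsatisfied(clauses, s) < g:
                better = v
            s[v] = not s[v]                           # undo the trial flip
            if better is not None:
                break                                 # first-improvement step
        if better is None:                            # s is a local optimum
            return s
        s[better] = not s[better]

clauses = [(1, 2), (-1, 3), (-2, -3)]                 # toy instance
s = iterative_improvement(clauses, 3, random.Random(0))
print(num_unsatisfied(clauses, s))
```

Restarting with a fresh random initial candidate (or, as in the text, a fresh variable order) is what lets repeated runs explore different improvement sequences.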
The SLS book also presents variable neighborhood approaches. In the case of the Evergreen Local Search, we analytically choose a most promising neighborhood by maximizing over averages. There are interesting variations of our algorithm. Instead of using the k where the mean polynomial is maximum, we could also search for the k where the mean polynomial is the ith highest, for i = 2, 3, 4, and 5, for example.
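The i-th highest variation can be sketched generically (the candidate scores below are made-up illustrative values, not computed mean polynomials):

```python
def ith_best(candidates, score, i=1):
    """Return the candidate whose score is the i-th highest (1-based);
    i = 1 is the standard maximizing choice."""
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[min(i, len(ranked)) - 1]

means = {0: 0.50, 1: 0.75, 2: 0.60}            # mean-polynomial value per k
print(ith_best(list(means), means.get, i=1))   # 1  (the usual maximum)
print(ith_best(list(means), means.get, i=2))   # 2  (second highest)
```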
    1982        2007
    MAXMEAN     EvergreenPlayer
    MAXMEAN*    EvergreenLocalSearch

What is new in this paper is the implementation and the experimental results. Also new is a randomized version of EvergreenPlayer and its analysis, as well as the reformulation using local search neighborhoods. The paper also postulates two laws that future MAX-CSP algorithms will hopefully follow.