Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning

Author(s): Gil Lederman, Markus N. Rabe, Edward A. Lee, and Sanjit A. Seshia

Abstract
We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning. We focus on a backtracking search algorithm, which can already solve formulas of impressive size - up to hundreds of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For a family of challenging problems in 2QBF we learn a heuristic that solves significantly more formulas compared to the existing handwritten heuristics.
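As a rough illustration of the representation challenge the abstract mentions, the sketch below encodes a toy CNF matrix as a bipartite literal-clause graph and runs a couple of rounds of message passing to produce a score per variable, the kind of quantity a learned branching heuristic could rank on. This is a minimal NumPy sketch under the assumption of a graph-neural-network-style encoding with untrained, randomly initialized weights; it is not the authors' exact architecture or training setup.

    # Minimal sketch (assumed GNN-style encoding, random weights; not the paper's exact model).
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy matrix of a 2QBF instance: clauses over variables 1..4, literals as signed ints.
    clauses = [[1, -3], [-1, 3, 4], [2, -4], [-2, 3]]
    num_vars = 4
    dim = 8  # embedding dimension

    # Bipartite incidence matrix: rows = 2*num_vars literal nodes, cols = clause nodes.
    A = np.zeros((2 * num_vars, len(clauses)))
    for c, clause in enumerate(clauses):
        for lit in clause:
            node = 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)
            A[node, c] = 1.0

    # Initial embeddings and (untrained) message-passing weights.
    L = rng.normal(size=(2 * num_vars, dim))    # literal embeddings
    C = rng.normal(size=(len(clauses), dim))    # clause embeddings
    W_lc = rng.normal(size=(dim, dim))
    W_cl = rng.normal(size=(dim, dim))

    deg_c = np.maximum(A.sum(axis=0), 1.0)[:, None]  # clause degrees
    deg_l = np.maximum(A.sum(axis=1), 1.0)[:, None]  # literal degrees

    for _ in range(2):  # two rounds of literal <-> clause message passing
        C = np.tanh(((A.T @ L) / deg_c) @ W_lc)  # aggregate literals into clauses
        L = np.tanh(((A @ C) / deg_l) @ W_cl)    # aggregate clauses back into literals

    # Score each variable by combining the embeddings of its two literals.
    w_out = rng.normal(size=(2 * dim,))
    scores = np.array([w_out @ np.concatenate([L[2 * v], L[2 * v + 1]])
                       for v in range(num_vars)])
    print("branching scores per variable:", scores)

In the paper these scores would be produced by a trained network and used inside a backtracking QBF solver to pick the next branching decision; here the output is only meant to show how a per-variable prediction can be computed from the formula's graph structure regardless of formula size.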

Electronic Downloads

  • PDF: https://ptolemy.berkeley.edu/publications/papers/20/LedermanEtAl_QBF_ICLR_2020.pdf

Citation Formats

  • APA

    Lederman, G., Rabe, M. N., Lee, E. A., & Seshia, S. A. (2020). Learning heuristics for quantified Boolean formulas through reinforcement learning. International Conference on Learning Representations (ICLR).

  • MLA

    Lederman, Gil, et al. "Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning." International Conference on Learning Representations (ICLR), 2020.

  • Chicago

    Lederman, Gil, Markus N. Rabe, Edward A. Lee, and Sanjit A. Seshia. "Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning." In International Conference on Learning Representations (ICLR). 2020.

  • BibTeX

    @inproceedings{LedermanEtAl:20:LearningQBF,
      author    = {Gil Lederman and Markus N. Rabe and Edward A. Lee and Sanjit A. Seshia},
      title     = {Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning},
      booktitle = {International Conference on Learning Representations (ICLR)},
      month     = {April 26--May 1},
      year      = {2020},
      abstract  = {We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning. We focus on a backtracking search algorithm, which can already solve formulas of impressive size - up to hundreds of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For a family of challenging problems in 2QBF we learn a heuristic that solves significantly more formulas compared to the existing handwritten heuristics.},
      URL       = {https://ptolemy.berkeley.edu/publications/papers/20/LedermanEtAl_QBF_ICLR_2020.pdf}
    }