Papers
- Completeness for First-Order Properties on Sparse Structures with Algorithmic Applications. Properties definable in first-order logic are algorithmically interesting for both theoretical and pragmatic reasons. Many of the most studied algorithmic problems, such as Hitting Set and Orthogonal Vectors, are first-order, and first-order properties naturally arise as relational database queries. A relatively straightforward algorithm for evaluating a property with $k+1$ quantifiers takes time $O(m^k)$ and, assuming the Strong Exponential Time Hypothesis (SETH), some such properties cannot be solved in time $O(m^{k-\epsilon})$ for any $\epsilon > 0$. (Here, $m$ is the size of the input structure, i.e., the number of tuples in all relations.)
We give algorithms for every first-order property that improve this upper bound to $m^k/2^{\Theta(\sqrt{\log n})}$, i.e., an improvement by more than any poly-log factor, but less than the polynomial factor required to refute SETH. Moreover, we show that any further improvement is equivalent to improving algorithms for sparse instances of the well-studied Orthogonal Vectors problem. Surprisingly, both results are obtained by showing completeness of the Sparse Orthogonal Vectors problem for the class of first-order properties under fine-grained reductions. To obtain the improved algorithms, we apply the fast Orthogonal Vectors algorithms of [Abboud, Vassilevska Williams, Yu 2015] and [Chan, Williams 2016]. While fine-grained reductions (reductions that closely preserve the conjectured complexities of problems) have been used to relate the hardness of disparate specific problems both within P and beyond, this is the first such completeness result for a standard complexity class.
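For reference, the Orthogonal Vectors problem central to the completeness result can be stated concretely. A minimal brute-force sketch (the quadratic baseline that the cited fast algorithms improve upon, not the paper's algorithm):

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Brute-force Orthogonal Vectors: return True iff some a in A and
    b in B are orthogonal, i.e. share no coordinate where both are 1.
    Runs in O(|A| * |B| * d) time for d-dimensional 0/1 vectors."""
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a, b in product(A, B))

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))  # (1,0,1) and (0,1,0) are orthogonal -> True
```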
ACM-SIAM Symposium on Discrete Algorithms (SODA) 2017 (pdf) - Learning Algorithms from Natural Proofs.
Based on Håstad's (1986) circuit lower bounds, Linial, Mansour, and Nisan (1993) gave a quasipolynomial-time learning algorithm for $AC^0$ (constant-depth circuits with AND, OR, and NOT gates) in the PAC model over the uniform distribution. It was an open question to get a learning algorithm of any kind for the class of $AC^0[p]$ circuits (constant-depth, with AND, OR, NOT, and MOD$_p$ gates for a prime $p$). Our main result is a quasipolynomial-time learning algorithm for $AC^0[p]$ in the PAC model over the uniform distribution with membership queries. This algorithm is an application of a general connection we show to hold between natural proofs (in the sense of Razborov and Rudich (1997)) and learning algorithms. We argue that a natural proof of a circuit lower bound against any (sufficiently powerful) circuit class yields a learning algorithm for the same circuit class. As the lower bounds against $AC^0[p]$ by Razborov (1987) and Smolensky (1987) are natural, we obtain our learning algorithm for $AC^0[p]$.
Conference on Computational Complexity (CCC) 2016: 10:1-10:24 (pdf)
Electronic Colloquium on Computational Complexity (ECCC) 23: 8 (2016) (pdf) - Expander Construction in VNC^1.
We give a combinatorial analysis, using edge expansion, of a variant of the iterative expander construction due to Reingold, Vadhan, and Wigderson [Annals of Mathematics, 2002], and show that this analysis can be formalized in the bounded-arithmetic system VNC^{1} corresponding to ``NC^{1} reasoning''. As a corollary, we prove the assumption made by Jeřábek [Annals of Pure and Applied Logic, 2011] that a construction of certain bipartite expander graphs can be formalized in VNC^{1}. This in turn implies that every proof in Gentzen's sequent calculus LK of a monotone sequent can be simulated in the monotone version of LK (MLK) with only polynomial blowup in proof size, strengthening the quasipolynomial simulation result of Atserias, Galesi, and Pudlák [Journal of Computer and System Sciences, 2002].
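The edge-expansion quantity used in the analysis above has a simple combinatorial definition. A brute-force sketch for tiny graphs (meant only to illustrate the quantity, not any part of the construction):

```python
from itertools import combinations

def edge_expansion(n, edges):
    """Edge expansion h(G): minimum over nonempty S with |S| <= n/2 of
    |E(S, V\\S)| / |S|, computed by brute force over all subsets
    (exponential in n, so only for tiny illustrative graphs)."""
    best = float('inf')
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, cut / len(S))
    return best

# 4-cycle: the worst set is a pair of adjacent vertices (2 cut edges / 2 vertices)
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(edge_expansion(4, cycle4))  # 1.0
```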
Innovations in Theoretical Computer Science (ITCS) 2017 (pdf)
Electronic Colloquium on Computational Complexity (ECCC) 23: 144 (2016) (pdf) - Mining Circuit Lower Bound Proofs for Meta-Algorithms.
We show that circuit lower bound proofs based on the method of random restrictions yield non-trivial compression algorithms for ``easy'' Boolean functions from the corresponding circuit classes. The compression problem is defined as follows: given the truth table of an n-variate Boolean function f computable by some unknown small circuit from a known class of circuits, find in deterministic time poly(2^{n}) a circuit C (no restriction on the type of C) computing f so that the size of C is less than the trivial circuit size 2^{n}/n. We get non-trivial compression for functions computable by AC^{0} circuits, (de Morgan) formulas, and (read-once) branching programs of the size for which the lower bounds for the corresponding circuit class are known.
These compression algorithms rely on structural characterizations of ``easy'' functions, which are useful both for proving circuit lower bounds and for designing ``meta-algorithms'' (such as Circuit-SAT). For (de Morgan) formulas, such a structural characterization is provided by the ``shrinkage under random restrictions'' results [Subbotovskaya61, Hastad98], strengthened to the ``high-probability'' version by [Santhanam10, IMZ12, KR12]. We give a new, simple proof of the ``high-probability'' version of the shrinkage result for (de Morgan) formulas, with improved parameters. We use this shrinkage result to get both compression and #SAT algorithms for (de Morgan) formulas of size about n^{2}. We also use this shrinkage result to get an alternative proof of the recent result by Komargodski and Raz [KR12] of the average-case lower bound against small (de Morgan) formulas.
Finally, we show that the existence of any non-trivial compression algorithm for a circuit class C in P/poly would imply a circuit lower bound for NEXP against C. This complements Williams's result [Williams10] that any non-trivial Circuit-SAT algorithm for a circuit class C would imply a superpolynomial lower bound against C for a language in NEXP.
Computational Complexity 24(2): 333-392 (2015) (pdf)
IEEE Conference on Computational Complexity 2014: 262-273 (pdf)
Electronic Colloquium on Computational Complexity (ECCC) 20: 57 (2013) (pdf) - Complexity of alignment and decoding problems: restrictions and approximations.
We study the computational complexity of the Viterbi alignment and relaxed decoding problems for IBM model 3, focusing on the problem of finding a solution which has significant overlap with an optimal one. That is, an approximate solution is considered good if it looks like some optimal solution with a few mistakes, where mistakes can be wrong values (such as a word aligned incorrectly or a wrong word in decoding), as well as insertions and deletions (spurious/missing words in decoding).
In this setting, we show that it is computationally hard to find a solution which is correct on more than half (plus an inverse polynomial fraction) of the words. More precisely, if there is a polynomial-time algorithm computing an alignment for IBM model 3 which agrees with some Viterbi alignment on $l/2+l^\epsilon$ words, where $l$ is the length of the English sentence, or producing a decoding with $l/2+l^\epsilon$ correct words, then P=NP. We also present a similar solution-structure inapproximability result for phrase-based alignment.
As these strong lower bounds hold for the general definitions of the Viterbi alignment and decoding problems, we also consider, from a parameterized complexity perspective, which properties of the input make these problems intractable. As a first step in this direction, we show that Viterbi alignment has a fixed-parameter tractable algorithm with respect to limiting the range of words in the target sentence that a source word can be aligned to. By comparison, we note that limiting the maximal fertility, even to 3, does not remove NP-hardness.
Machine Translation 29(3-4): 163-187 (2015) (pdf) - Tighter Connections between Derandomization and Circuit Lower Bounds.
We tighten the connections between circuit lower bounds and derandomization for each of the following three types of derandomization:
- general derandomization of promise-BPP (connected to Boolean circuits),
- derandomization of Polynomial Identity Testing, PIT, over fixed finite fields (connected to arithmetic circuit lower bounds over the same field), and
- derandomization of PIT over the integers (connected to arithmetic circuit lower bounds over the integers).
We show how to make these connections uniform equivalences, although at the expense of using somewhat less common versions of complexity classes and a less studied notion of inclusion.
Our main results are as follows:
- We give the first proof that a non-trivial nondeterministic subexponential-time algorithm for PIT over a fixed finite field yields arithmetic circuit lower bounds.
- We get a similar result for the case of PIT over the integers, strengthening a result of Jansen and Santhanam, 2012 (by removing the need for advice).
- We derive a Boolean circuit lower bound for NEXP$\cap$coNEXP from the assumption of sufficiently strong non-deterministic derandomization of promise-BPP (without advice), as well as from the assumed existence of an NP-computable non-empty property of Boolean functions useful for proving superpolynomial circuit lower bounds in the sense of the natural proofs of Razborov and Rudich (1997); this strengthens the related results of Impagliazzo et al. (2002).
- Finally, we turn all of these implications into equivalences for appropriately defined promise classes and for a notion of robust inclusion/separation (inspired by Fortnow and Santhanam (2011)) that lies between the classical ``almost everywhere'' and ``infinitely often'' notions.
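For concreteness, Polynomial Identity Testing asks whether a given arithmetic circuit computes the identically zero polynomial. The standard randomized test (Schwartz-Zippel) that these results aim to derandomize can be sketched as follows, with polynomials represented as plain Python callables over a prime field (an illustrative simplification, not the circuit model of the paper):

```python
import random

def probably_zero(poly, num_vars, degree, p, trials=20):
    """Schwartz-Zippel randomized identity test over GF(p), p prime.
    If poly is not identically zero, a single random point detects this
    with probability >= 1 - degree/p, so errors go only one way
    (a nonzero polynomial may, rarely, be declared zero)."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        if poly(point) % p != 0:
            return False  # definitely nonzero
    return True  # identically zero with high probability

# (x+y)^2 - (x^2 + 2xy + y^2) is identically zero; (x+y)^2 - x^2 - y^2 = 2xy is not.
zero = lambda v: (v[0] + v[1])**2 - (v[0]**2 + 2*v[0]*v[1] + v[1]**2)
nonzero = lambda v: (v[0] + v[1])**2 - v[0]**2 - v[1]**2
print(probably_zero(zero, 2, 2, 10**9 + 7))     # True
print(probably_zero(nonzero, 2, 2, 10**9 + 7))  # False (with overwhelming probability)
```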
APPROX-RANDOM 2015: 645-658 (pdf) - Glider mission planning using generic solvers
In this paper we describe several approaches to the AUV (glider) mission planning problem and investigate their complexity. At the heart of such mission planning are variants of an NP-hard (Non-deterministic Polynomial-time) Asymmetric Travelling Salesman Problem (ATSP); however, some modern-day heuristics can solve this problem optimally in a reasonable amount of time (although providing a proof of optimality slows down the computation). A glider mission plan often has to accommodate a variety of other constraints, such as scheduling restrictions, specific time windows or orders in which to visit selected points, and so on. Here we consider a general AUV mission planning problem which, although based on ATSP, can incorporate other constraints. Thus, the use of general-purpose solvers such as Integer Linear Programming or Satisfiability-based solvers may be desirable. With this goal, we have developed a software package for the glider mission planning problem that utilizes a variety of existing solvers to compute an optimal order of goal points to visit, subject to travel time as well as user-provided additional constraints. Then, to evaluate the feasibility of this setting using state-of-the-art solvers, we analyze the performance of a variety of solvers on the core ATSP problem.
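The core ATSP instance can be solved exactly for a small number of goal points with the classical Held-Karp dynamic program. A minimal sketch (a generic exact baseline, not the solver package described in the paper):

```python
def atsp_held_karp(dist):
    """Exact ATSP via Held-Karp dynamic programming, O(n^2 * 2^n) time.
    dist[i][j] is the (possibly asymmetric) travel time from point i to
    point j; the tour starts and ends at point 0."""
    n = len(dist)
    # dp[(mask, j)]: cheapest cost to start at 0, visit exactly the set
    # of points in `mask` (bit i = point i; 0 always included), ending at j.
    dp = {(1, 0): 0}
    for mask in range(1, 1 << n):
        if not mask & 1:
            continue
        for j in range(n):
            if (mask, j) not in dp:
                continue
            for k in range(n):
                if mask & (1 << k):
                    continue
                new = (mask | (1 << k), k)
                cost = dp[(mask, j)] + dist[j][k]
                if cost < dp.get(new, float('inf')):
                    dp[new] = cost
    full = (1 << n) - 1
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 1, 9], [9, 0, 1], [1, 9, 0]]  # asymmetric: cheap cycle 0->1->2->0
print(atsp_held_karp(dist))  # 3
```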
Journal of Ocean Technology 2:19, 2014, pp. 47-67 (pdf) -
Approximating solution structure of the Weighted Sentence Alignment problem.
We study the complexity of approximating solution structure of the bijective weighted sentence alignment problem of DeNero and Klein (2008). In particular, we consider the complexity of finding an alignment that has a significant overlap with an optimal alignment.
We discuss ways of representing the solution for the general weighted sentence alignment as well as the phrases-to-words alignment problem, and show that computing a string which agrees with the optimal sentence partition on more than half (plus an arbitrarily small polynomial fraction) of the positions is NP-hard for phrases-to-words alignment. For the general weighted sentence alignment we obtain such a bound from agreement on a little over 2/3 of the bits.
Additionally, we generalize the Hamming distance approximation of a solution structure to approximating it with respect to the edit distance metric, obtaining similar lower bounds.
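The edit distance metric used in this generalization is the standard Levenshtein distance; a minimal sketch of the classical dynamic program:

```python
def edit_distance(s, t):
    """Levenshtein distance between sequences s and t: minimum number of
    insertions, deletions, and substitutions turning s into t, via the
    standard O(|s|*|t|) dynamic program."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,      # delete s[i-1]
                           dp[i][j - 1] + 1,      # insert t[j-1]
                           dp[i - 1][j - 1] + (s[i - 1] != t[j - 1]))
    return dp[m][n]

print(edit_distance("alignment", "assignment"))  # 2 (one substitution, one insertion)
```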
CoRR abs/1409.2433 (2014) (pdf) - Compression of Boolean Functions.
We consider the problem of compression for ``easy'' Boolean functions: given the truth table of an n-variate Boolean function f computable by some unknown small circuit from a known class of circuits, find in deterministic time poly(2^{n}) a circuit C (no restriction on the type of C) computing f so that the size of C is less than the trivial circuit size 2^{n}/n.
We get both positive and negative results. On the positive side, we show that several circuit classes for which lower bounds are proved by the method of random restrictions:
- AC^{0},
- (de Morgan) formulas, and
- (read-once) branching programs,
allow non-trivial compression for circuits up to the size for which lower bounds are known. On the negative side, we show that compressing functions from any class C⊆P/poly implies superpolynomial lower bounds against C for a function in NEXP; we also observe that compressing monotone functions of polynomial circuit complexity or functions computable by large-size AC^{0} circuits would also imply new superpolynomial circuit lower bounds.
Finally, we apply the ideas used for compression to get zero-error randomized #SAT-algorithms for de Morgan and complete-basis formulas, as well as branching programs, on n variables of about quadratic size that run in expected time 2^{n}/2^{n^{ε}}, for some ε>0 (dependent on the size of the formula/branching program).
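#SAT asks for the number of satisfying assignments of a formula. The trivial 2^n enumeration that the algorithms above beat can be sketched as:

```python
from itertools import product

def count_sat(formula, n):
    """Trivial #SAT baseline: enumerate all 2^n assignments and count the
    satisfying ones. The compression-based algorithms improve on this
    2^n bound by a 2^{n^eps} factor for near-quadratic-size formulas."""
    return sum(1 for bits in product([False, True], repeat=n) if formula(bits))

# (x0 OR x1) AND NOT x2 over 3 variables: 3 satisfying assignments
print(count_sat(lambda v: (v[0] or v[1]) and not v[2], 3))  # 3
```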
Electronic Colloquium on Computational Complexity (ECCC) 20: 24 (2013) (pdf)
- On the Complexity of Model Expansion. We study the complexity of model expansion (MX), which is the problem of expanding a given finite structure with additional relations to produce a finite model of a given formula. This is the logical task underlying many practical constraint languages and systems for representing and solving search problems, and our work is motivated by the need to provide theoretical foundations for these. We present results on both data and combined complexity of MX for several fragments and extensions of FO that are relevant for this purpose, in particular the guarded fragment GF_{k} of FO and extensions of FO and GF_{k} with inductive definitions. We present these in the context of the two closely related, but more studied, problems of model checking and finite satisfiability. To obtain results on FO(ID), the extension of FO with inductive definitions, we provide translations between FO(ID) and FO(LFP), which are of independent interest. International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR) 2010: 447-458 (pdf)
- An axiomatic approach to algebrization. Non-relativization of complexity issues can be interpreted as giving some evidence that these issues cannot be resolved by ``black-box'' techniques. In the early 1990s, a sequence of important non-relativizing results was proved, mainly using algebraic techniques. Two approaches have been proposed to understand the power and limitations of these algebraic techniques:
- Fortnow gives a construction of a class of oracles which have a similar algebraic and logical structure, although they are arbitrarily powerful. He shows that many of the non-relativizing results proved using algebraic techniques hold for all such oracles, but he does not show, e.g., that the outcome of the ``P vs. NP'' question differs between different oracles in that class.
- Aaronson and Wigderson give definitions of algebrizing separations and collapses of complexity classes, by comparing classes relative to one oracle to classes relative to an algebraic extension of that oracle. Using these definitions, they show both that the standard collapses and separations ``algebrize'' and that many of the open questions in complexity fail to ``algebrize'', suggesting that the arithmetization technique is close to its limits. However, it is unclear how to formalize algebrization of more complicated complexity statements than collapses or separations, and whether the algebrizing statements are, e.g., closed under modus ponens; so it is conceivable that several algebrizing premises could imply (in a relativizing way) a non-algebrizing conclusion.
In this paper, building on the work of Arora, Impagliazzo, and Vazirani, we propose an axiomatic approach to ``algebrization'', which complements and clarifies the approaches of Fortnow and Aaronson&Wigderson. We present logical theories formalizing the notion of algebrizing techniques in the following sense: most known complexity results proved using arithmetization are provable within our theories, while many open questions are independent of the theories. So provability in the proposed theories can serve as a surrogate for provability using the arithmetization technique.
Our theories extend Arora et al.'s theory with a new axiom, Arithmetic Checkability, which intuitively says that all NP languages have verifiers that are efficiently computable low-degree polynomials (over the integers). We show the following:
- Arithmetic checkability holds relative to arbitrarily powerful oracles (since Fortnow's algebraic oracles all satisfy the Arithmetic Checkability axiom).
- Most of the algebrizing collapses and separations from Aaronson&Wigderson, such as IP=PSPACE, NP ⊂ ZKIP if one-way functions exist, MA-EXP ⊄ P/poly, etc., are provable from Arithmetic Checkability.
- Many of the open complexity questions (including most of those shown to require non-algebrizing techniques in Aaronson&Wigderson), such as ``P vs. NP'', ``NP vs. BPP'', etc., cannot be proved from Arithmetic Checkability.
- Arithmetic Checkability is also insufficient to prove one known result, NEXP=MIP (although relative to an oracle satisfying Arithmetic Checkability, NEXP^{O} restricted to poly-length queries is contained in MIP^{O}, mirroring a similar result from Aaronson&Wigderson).
STOC 2009: 695-704 (pdf) - Many Facets of Complexity in Logic. There are many ways to define complexity in logic. In finite model theory, it is the complexity of describing properties, whereas in proof complexity it is the complexity of proving properties in a proof system. Here we consider several notions of complexity in logic, the connections among them, and their relationship with computational complexity. In particular, we show how the complexity of logics in the setting of finite model theory is used to obtain results in bounded arithmetic, stating which functions are provably total in certain weak systems of arithmetic. For example, the transitive closure function (testing reachability between two given points in a directed graph) is definable using only NL-concepts (where NL is the non-deterministic log-space complexity class), and its totality is provable within NL-reasoning. CiE 2008: 316-325 (pdf)
- Complexity of expanding a finite structure and related tasks. The authors of [MT05] proposed a declarative constraint programming framework based on classical logic extended with non-monotone inductive definitions. In the framework, a problem instance is a finite structure, and a problem specification is a formula defining the relationship between an instance and its solutions. Thus, problem solving amounts to expanding a finite structure with new relations to satisfy the formula. We present here the complexities of model expansion for a number of logics, alongside those of satisfiability and model checking. As the task is equivalent to witnessing the existential quantifiers in $\exists SO$ model checking, the paper is in large part a survey of this area, together with some new results. In particular, we describe the combined and data complexity of FO(ID), first-order logic extended with inductive definitions [DT04], and the guarded and k-guarded logics of [AvBN98] and [GLS01]. International workshop on logic and computational complexity (LCC) 2006 (pdf)
- Closure Properties of Weak Systems of Bounded Arithmetic. In this paper we study the properties of systems of bounded arithmetic capturing small complexity classes and state conditions sufficient for such systems to capture the corresponding complexity class tightly. Our class of systems of bounded arithmetic is the class of second-order systems with a comprehension axiom for a syntactically restricted class of formulas $\Phi \subset \Sigma_1^B$ based on a logic in the descriptive complexity setting. This work generalizes the results of [CK03] and [CK04]. We show that if the system 1) extends $V_0$ (the second-order version of $I\Delta_0$), 2) $\Delta_1$-defines all functions with bitgraphs from $\Phi$, and 3) proves witnessing for all theorems from $\Phi$, then the class of $\Sigma_1^B$-definable functions of the resulting system is exactly the class expressed by $\Phi$ in the descriptive complexity setting, provably in this system. CSL 2005: 369-383 (pdf)
- A Second-Order Theory for NL. We introduce a second-order theory V-Krom of bounded arithmetic for nondeterministic log space. This system is based on Grädel's characterization of NL by second-order Krom formulae with only universal first-order quantifiers, which in turn is motivated by the result that the decision problem for 2-CNF satisfiability is complete for coNL (and hence for NL). This theory has the style of the authors' theory $V_1$-Horn [APAL 124 (2003)] for polynomial time. Both theories use Zambella's elegant second-order syntax, and are axiomatized by a set 2-BASIC of simple formulae, together with a comprehension scheme for either second-order Horn formulae (in the case of $V_1$-Horn) or second-order Krom (2CNF) formulae (in the case of V-Krom). Our main result for V-Krom is a formalization of the Immerman-Szelepcsényi theorem that NL is closed under complementation. This formalization is necessary to show that the NL functions are $\Sigma^B_1$-definable in V-Krom. The only other theory for NL in the literature relies on the Immerman-Szelepcsényi result rather than proving it. LICS 2004: 398-407 (pdf)
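The 2-CNF satisfiability problem underlying Grädel's characterization can be decided by the classical implication-graph argument, which reduces it to reachability questions (hence its place in NL/coNL). A minimal SCC-based sketch, not drawn from the paper itself:

```python
def two_sat(n, clauses):
    """Decide 2-CNF satisfiability via the implication graph: each clause
    (a OR b) yields edges (not a -> b) and (not b -> a); the formula is
    unsatisfiable iff some x and not x share a strongly connected
    component. Literals are +i / -i for variable i, 1-based."""
    idx = lambda lit: 2 * (abs(lit) - 1) + (lit < 0)   # literal -> node id
    graph = [[] for _ in range(2 * n)]
    for a, b in clauses:
        graph[idx(-a)].append(idx(b))
        graph[idx(-b)].append(idx(a))
    # Kosaraju's algorithm: order nodes by finish time, then sweep the
    # reverse graph in decreasing finish order to label components.
    order, seen = [], [False] * (2 * n)
    for s in range(2 * n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(graph[s]))]
        while stack:
            node, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(graph[v])))
                    break
            else:
                order.append(node)
                stack.pop()
    rev = [[] for _ in range(2 * n)]
    for u in range(2 * n):
        for v in graph[u]:
            rev[v].append(u)
    comp = [-1] * (2 * n)
    for c, root in enumerate(reversed(order)):
        if comp[root] == -1:
            stack, comp[root] = [root], c
            while stack:
                x = stack.pop()
                for v in rev[x]:
                    if comp[v] == -1:
                        comp[v] = c
                        stack.append(v)
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))

print(two_sat(2, [(1, 2), (-1, 2), (1, -2)]))  # True (x1=True, x2=True works)
print(two_sat(1, [(1, 1), (-1, -1)]))          # False (forces x1 and not x1)
```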
- A second-order system for polytime reasoning based on Grädel's theorem. We introduce a second-order system $V_1$-Horn of bounded arithmetic formalizing polynomial-time reasoning, based on Grädel's second-order Horn characterization of P. Our system has comprehension over P predicates (defined by Grädel's second-order Horn formulas), and only finitely many function symbols. Other systems of polynomial-time reasoning either allow induction on NP predicates (such as Buss's $S_2^1$ or the second-order $V_1^1$), and hence are more powerful than our system (assuming the polynomial hierarchy does not collapse), or use Cobham's theorem to introduce function symbols for all polynomial-time functions (such as Cook's $PV$ and Zambella's P-def). We prove that our system is equivalent to $QPV$ and Zambella's P-def. Using our techniques, we also show that $V_1$-Horn is finitely axiomatizable and, as a corollary, that the class of $\forall\Sigma_1^b$ consequences of $S_2^1$ is finitely axiomatizable as well, thus answering an open question. Annals of Pure and Applied Logic 124(1-3): 193-231 (2003) (pdf)
Surveys
- Complexity barriers as independence. After many years of effort, the main questions of complexity theory remain unresolved, even though the concepts involved are simple. Understanding the main idea behind the statement of the ``P vs NP'' problem does not require much background (``is it easier to check answers than to produce them?''). Yet, we are as far from resolving it as ever. Much work has been done to unravel the intricate structure of the complexity world; the ``complexity zoo'' contains hosts of inhabitants. But the main questions are still elusive.
So a natural question comes to mind: is there an intrinsic reason why this is still unknown? Is there a rationale for why the proofs are out of our reach? Maybe we are not using the right techniques -- or maybe we are not pushing our techniques far enough? After trying to prove a statement and failing, we try to prove its negation; after failing at that as well, we resort to looking for an explanation that might hint at why our attempts are failing. Indeed, in the world of computational complexity there have been several results of this nature: results stating that current techniques are, in a precise mathematical sense, insufficient to resolve the main open problems. We call these results ``barriers''.
A pessimistic view of the barrier results would be that the questions are hard, intrinsically hard. But there is a more optimistic way of interpreting them. The fact that certain classes of proof techniques, ones that have specific properties, are eliminated gives us a direction to search for new techniques. It gives us a method for discovering ways of approaching questions in places where we might not have been looking, if not for the barrier results. In this paper, we will focus on three major complexity barriers: Relativization, Algebrization, and Natural Proofs. Interestingly enough, all three of these can be recast in the framework of independence from a theory of logic. That is, theories can be constructed which formalize (almost) all known techniques, yet for which the main open questions of complexity theory are independent.
To appear in Barry Cooper and Mariya Soskova (eds.), THE INCOMPUTABLE - Journeys beyond the Turing barrier. Springer Verlag, 2016 (pdf)