Sum-of-squares: proofs, beliefs, and algorithms — Boaz Barak and David Steurer



Approaches to prove the Unique Games Conjecture

Given the sub-exponential time algorithm for unique games (Arora, Barak, and Steurer 2015), it follows that under the exponential time hypothesis (ETH), the unique games conjecture implies the following conjecture:

Intermediate complexity conjecture: There exist some \(1>c>s>0\), \(1>\alpha>\beta>0\) and a CSP \(CSP_\Sigma(\cP)\) such that the \(c\) vs \(s\) problem for \(CSP_\Sigma(\cP)\) can be solved in time \(\exp(O(n^\alpha))\) but it cannot be solved in time faster than \(\exp(\Omega(n^{\beta}))\).

This is a very interesting conjecture in its own right, as it says that, unlike the widely believed situation for exact computation, it is not the case that every CSP approximation problem either can be solved in polynomial time or requires \(\exp(\Omega(n))\) time. Thus, if the Unique Games Conjecture is true, then the complexity landscape of approximation problems for CSP’s is much richer (at least in this sense) than the one for exact computation. This issue of “intermediate complexity” also raises some obstacles for certain approaches to proving the unique games conjecture, and suggests certain directions for doing so.

Subexponential complexity and gadget reductions.

The popular approach to proving hardness of approximation for CSP’s can be called the “label cover + gadget paradigm”.

A label cover predicate is a predicate \(LC\from\Sigma'\times\Sigma'\to\bits\) for which there are \(|\Sigma'|/|\Sigma''|\)-to-one functions \(\pi_1,\pi_2\from\Sigma'\to\Sigma''\) (for some alphabet \(\Sigma''\)) such that \(LC(x,y)=1\) iff \(\pi_1(x)=\pi_2(y)\).

One canonical way to get a label cover instance is the “clause vs clause” construction. Suppose \(I\) is an instance of some CSP, say \(3LIN(2)\) for concreteness. We can define a new label cover instance \(I'\) where for every equation \(x_i+x_j+x_k = b\) we have a variable \(X_{i,j,k}\) over the alphabet \([4]\), which we identify with the set of satisfying assignments to this equation. For every two equations that share a variable, say \(x_i + x_j + x_k = b\) and \(x_i + x_{j'} + x_{k'} = b'\), we put in the constraint that the shared variable \(x_i\) gets the same value in both, which corresponds to checking that two projections of \([4]\) to \(\bits\) agree with one another.
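
As a concrete illustration, here is a minimal Python sketch of this construction; the representation of equations as \(((i,j,k),b)\) pairs is our own choice, not a fixed convention.

```python
from itertools import product

def satisfying_assignments(eq):
    """The alphabet [4] of a clause ((i, j, k), b): the 4 solutions of
    a_i + a_j + a_k = b (mod 2)."""
    _, b = eq
    return [a for a in product([0, 1], repeat=3) if sum(a) % 2 == b]

def clause_vs_clause(equations):
    """For each pair of equations sharing a variable, output the pair of
    projections (dicts from [4] to {0,1}) that the label cover constraint
    requires to agree."""
    constraints = []
    for e1 in equations:
        for e2 in equations:
            if e1 >= e2:
                continue
            for v in set(e1[0]) & set(e2[0]):
                # a clause's "projection" reads off the value it gives to x_v
                pi1 = {a: a[e1[0].index(v)] for a in satisfying_assignments(e1)}
                pi2 = {a: a[e2[0].index(v)] for a in satisfying_assignments(e2)}
                constraints.append((e1, e2, pi1, pi2))
    return constraints

# Two equations sharing only x_0 yield a single label cover constraint.
eqs = [((0, 1, 2), 0), ((0, 3, 4), 1)]
assert len(clause_vs_clause(eqs)) == 1
```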

One can relate the two instances as follows:

Assume that the original \(3LIN(2)\) instance was \(d\)-regular (i.e., every variable participated in exactly \(d\) constraints) and had \(m\) constraints.

  • Prove that if there is an assignment \(x\in\bits^n\) for the original instance satisfying a \(1-\epsilon\) fraction of the constraints, then there is an assignment \(y \in [4]^m\) satisfying at least a \(1-2\epsilon\) fraction of the constraints of the label cover instance.
  • Prove that if there is an assignment \(y\in [4]^m\) satisfying at least a \(1-\delta\) fraction of the constraints of the label cover instance, then there is an assignment \(x\in \bits^n\) satisfying at least a \(1-2\delta\) fraction of the constraints of the original instance. (Both directions are sketched in code below.)
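
A hedged sketch of the two maps involved; the single-bit repair rule and the plurality decoding are natural choices for the exercise, not the only ones.

```python
from collections import Counter

def encode(x, equations):
    """Assign to each clause a satisfying assignment of its equation: use the
    restriction of x if it already satisfies the clause, else flip one bit."""
    y = []
    for (vs, b) in equations:
        a = [x[v] for v in vs]
        if sum(a) % 2 != b:
            a[0] ^= 1  # any single flip repairs a violated XOR equation
        y.append(tuple(a))
    return y

def decode(y, equations, n):
    """Plurality decoding: each clause 'votes' on its three variables,
    and each variable takes its most popular value."""
    votes = [Counter() for _ in range(n)]
    for a, (vs, b) in zip(y, equations):
        for pos, v in enumerate(vs):
            votes[v][a[pos]] += 1
    return [votes[v].most_common(1)[0][0] if votes[v] else 0 for v in range(n)]
```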

Given a label cover instance, a canonical way to reduce it to a CSP instance is the following:

Let \({\mathcal{LC}}\) be a family of label cover predicates mapping \(\Sigma'\times\Sigma'\) to \(\bits\) and \(\cP\) be a family of predicates mapping \(\Sigma^k\) to \(\bits\) for some \(\Sigma,k\). A \((c',s')\mapsto (c,s)\) gadget reduction from \(CSP({\mathcal{LC}})\) to \(CSP(\cP)\) consists of an encoding map \(E\from \Sigma'\to\Sigma^t\) and a gadget map that takes a predicate \(LC\in {\mathcal{LC}}\) to a \(CSP(\cP)\) instance \(\cG_{LC}\) on \(2t\) variables, such that for every \(n\)-variable instance \(I'\) of \(CSP({\mathcal{LC}})\), if we let \(I\) be the \(nt\)-variate instance in which for every constraint \(LC(x_i,x_j)=1\) we place the \(\cG_{LC}\) instance on the \(2t\) variables of the \(i\)-th and \(j\)-th blocks, then it holds that:

  • If \(x'\in {\Sigma'}^n\) satisfies at least a \(c'\) fraction of the constraints of \(I'\), then \(x= (E(x'_1),\ldots,E(x'_n)) \in \Sigma^{nt}\) satisfies at least a \(c\) fraction of the constraints of \(I\).
  • If \(x\in \Sigma^{tn}\) satisfies at least an \(s\) fraction of the constraints of \(I\), then there exists some \(x' \in {\Sigma'}^n\) that satisfies at least an \(s'\) fraction of the constraints of \(I'\). (A schematic sketch of applying such a reduction appears below.)
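
A schematic sketch of applying such a reduction, assuming the gadget map is given as a function from a predicate to a list of \(\cP\)-constraints over indices in \(\{0,\ldots,2t-1\}\) (this representation is our own):

```python
def apply_gadget(lc_constraints, gadget, t):
    """lc_constraints: list of (i, j, LC) triples over n label cover variables.
    gadget(LC): list of (P, local_vars) pairs, where local_vars are indices
    into the 2t variables of the two blocks. Returns the CSP(P) instance."""
    out = []
    for (i, j, LC) in lc_constraints:
        # the 2t 'global' variables: block i followed by block j
        block = list(range(i * t, (i + 1) * t)) + list(range(j * t, (j + 1) * t))
        for (P, local_vars) in gadget(LC):
            out.append((P, tuple(block[v] for v in local_vars)))
    return out

# Each of the m label cover constraints contributes at most (2t)^k * |P|
# gadget constraints, so the new instance has size O(m) for constant t, k, |P|.
```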

Note that for every \(t\), a gadget reduction maps an instance \(I'\) of \(n\) variables and \(m\) constraints into an instance \(I\) of \(nt\) variables and at most \(m(2t)^k|\cP|\) constraints, which for \(t,k,|\cP|\) constant means that the size of \(I\) is linear in the size of \(I'\). Hence in particular one can show the following:

Prove that if there is a \((c',s')\mapsto (c,s)\) gadget reduction from \(CSP({\mathcal{LC}})\) to \(CSP(\cP)\) with parameter \(t\), then a \(T(n)\) time algorithm for the \(c\) vs \(s\) problem for \(CSP(\cP)\) yields a \(T(Cn)\) time algorithm for the \(c'\) vs \(s'\) problem for \(CSP({\mathcal{LC}})\), where \(C\) is a constant depending only on \(|\cP|,k,t\).

Prove that under the assumptions above, if \(I'\) is a \(CSP({\mathcal{LC}})\) instance that has a degree \(d\) pseudo-distribution \(\mu'\) such that \(\pE_{\mu'(x)} \tfrac{1}{|I'|}\sum_{f\in I'} f(x) \geq c'\), then there exists a degree \(d/C\) pseudo-distribution \(\mu\) such that \(\pE_{\mu(x)} \tfrac{1}{|I|}\sum_{f\in I}f(x) \geq c\), where \(C\) is a constant depending only on \(|\cP|,k,t\).
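
One natural route (a sketch of the construction only, with the verification left to the exercise) is to push \(\mu'\) forward along the encoding map: for every polynomial \(q\) of degree at most \(d/C\) define

\[ \pE_{\mu(x)} q(x) := \pE_{\mu'(x')} q\bigl(E(x'_1),\ldots,E(x'_n)\bigr). \]

Since each coordinate of \(E(x'_i)\) is a function on the finite alphabet \(\Sigma'\), it is a polynomial of degree depending only on \(|\Sigma'|\) in (the indicator variables of) \(x'_i\); hence \(q\circ E\) has degree at most \(d\), and the nonnegativity conditions for \(\mu\) follow from those for \(\mu'\). The value bound additionally needs the gadget’s completeness guarantee to be certified by a low-degree sos proof, which is the heart of the exercise.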

In particular this means that if we assume that the original label cover problem cannot be solved in time \(\exp(n^{1-\epsilon})\), then the same holds for the resulting instance, and if we had an \(\Omega(n)\) lower bound on the sos degree for the original instance, then that lower bound carries over to the resulting instance. If the unique games problem is NP-hard (i.e., if the unique games conjecture is true), then assuming the ETH, the corresponding computational problem cannot be solved in time \(\exp(n^{o(1)})\), while we know that it can be solved by an \(\exp(n^\epsilon)\) time algorithm for some small \(\epsilon>0\), and in fact by the degree \(n^\epsilon\) sos program. This means that if we want to establish the UGC via a gadget reduction, we’d better start with a label cover instance that has intermediate complexity, in both the running time and the sos degree senses.

On label cover instances with intermediate complexity.

Some of the approaches to prove the unique games conjecture involve gadget reductions on top of certain label cover instances. Thus these approaches attempt to first prove (variants of) the “intermediate complexity conjecture” and then use that to derive the unique games conjecture. This raises the question of which properties of label cover instances could lead to their having intermediate complexity in certain approximation regimes. Assuming the unique games conjecture, having 1-to-1 projections (or even \(O(1)\)-to-1 ones) is one such property, but is it easier to show this for other properties? Can we use sos to get some intuition on whether we expect this to be true?

The original way to manufacture label cover instances that are very hard to approximate was to start with a label cover instance over alphabet \(\Sigma\) with say \(1\) vs \(1-\epsilon\) hardness (e.g., by starting from 3SAT) and then transform it into an instance over alphabet \(\Sigma^{t}\) with gap, say, \(1\) vs \((1-\epsilon^{O(1)})^{\Omega(t)}\) using an amplification result such as the parallel repetition theorem (Raz 1995). For example, if we started with the label cover corresponding to a 3XOR instance, we would get a label cover instance over alphabet \(\Sigma = [4]^t\) where the projection maps \(\Sigma\) to an alphabet of size \(2^t = \sqrt{|\Sigma|}\) and the hardness of approximation would be \(1\) vs \(|\Sigma|^{-\epsilon}\) for some \(\epsilon>0\).
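
To make the blowup concrete, here is a minimal sketch of \(t\)-fold parallel repetition of a label cover instance. The representation of constraints as \((u,v,\pi_1,\pi_2)\) tuples is our own; the actual content of the theorem is the soundness analysis of the repeated game, which the code does not address.

```python
from itertools import product

def repeat_instance(constraints, t):
    """constraints: list of (u, v, pi1, pi2) where pi1, pi2 are dicts mapping
    Sigma to Sigma''. Returns the t-fold repeated instance, whose variables
    are t-tuples of variables and whose alphabet is Sigma^t."""
    rep = []
    for combo in product(constraints, repeat=t):
        u = tuple(c[0] for c in combo)
        v = tuple(c[1] for c in combo)
        # the repeated projections act coordinate-wise on Sigma^t
        pi1 = lambda a, combo=combo: tuple(c[2][x] for c, x in zip(combo, a))
        pi2 = lambda b, combo=combo: tuple(c[3][x] for c, x in zip(combo, b))
        rep.append((u, v, pi1, pi2))
    return rep

# An instance with m constraints becomes one with m^t constraints: this is
# the n -> N = n^t size blowup discussed below.
```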

The parallel repetition theorem blows up an instance of size \(n\) to size \(N=n^t\), and so one could a priori conjecture that the label cover problem with a gap of \(1\) vs \(\epsilon\) has intermediate complexity, in the sense that it is NP-hard but has an algorithm that runs in time \(2^{N^{1/t}}\) or so, where \(N\) is the instance size. However, this turns out to be false. Moshkovitz and Raz (2010) showed an alternative construction that obtains \(1\) vs \(\epsilon\) hardness for label cover using only quasilinear blowup from, say, 3SAT. In the sos world, this is even easier. If we consider the \(3LIN(GF(2^t))\) problem, where the equations \(x_i + x_j + x_k = b\) are taken in the field \(GF(2^t)\), then the same proof as Grigoriev’s shows that a random instance (in which one cannot satisfy more than a \(2^{-t}+o(1)\) fraction of the constraints) has a pseudo-distribution that pretends to be completely satisfiable. Here too the corresponding label cover would involve a projection from an alphabet of size \(2^{2t}\) to one of size \(2^t\), i.e., from an alphabet \(\Sigma\) to an alphabet of size \(\sqrt{|\Sigma|}\).
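
Since addition in \(GF(2^t)\) is bitwise XOR of \(t\)-bit strings, sampling such a random instance and spot-checking the \(2^{-t}\) soundness takes only a few lines (a sketch; field multiplication is not needed for pure addition constraints):

```python
import random

def random_3lin(n, m, t, seed=0):
    """m random equations x_i + x_j + x_k = b over GF(2^t), with field
    elements represented as integers in [0, 2^t)."""
    rng = random.Random(seed)
    return [(tuple(rng.sample(range(n), 3)), rng.randrange(2 ** t))
            for _ in range(m)]

def frac_satisfied(x, equations):
    # addition in GF(2^t) is XOR of the t-bit representations
    sat = sum(1 for (vs, b) in equations
              if x[vs[0]] ^ x[vs[1]] ^ x[vs[2]] == b)
    return sat / len(equations)

n, m, t = 50, 500, 3
eqs = random_3lin(n, m, t)
rng = random.Random(1)
x = [rng.randrange(2 ** t) for _ in range(n)]
print(frac_satisfied(x, eqs))  # close to 2**-t = 0.125 for a random assignment
```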

Another amplification construction is the “match/confuse games” of Feige and Kilian (1994). In this construction, one takes a basic instance (such as the \(3LIN(2)\) instance) and transforms it into a label cover where each variable corresponds to a \(t\)-tuple of constraints, and we put a constraint between pairs of tuples in which \(t-t'\) of the constraints are identical (for some \(t'\ll t\)) and the rest share a variable. One can show that this again amplifies the gap to something like \(1\) vs \(2^{-\Omega(t')}\), but now the projections are “smoother”, or closer to being 1-to-1, in the sense that they map the alphabet \(\Sigma = [4]^t\) (in the case when the underlying CSP is \(3LIN(2)\)) to an alphabet of size \(4^{t-t'}2^{t'}=|\Sigma|^{1-o(1)}\), since when two constraints are identical we require their projections to be identical too. Label cover instances of this type are sometimes known as smooth label cover. One can also think of a smooth label cover as a CSP over a family of \(k\)-ary predicates \(\cP\subseteq \bits^{\Sigma^k}\) satisfying that for every \(P\in\cP\), \(P^{-1}(1)\) is an error-correcting code of minimum distance \(k-1\) (i.e., every two distinct vectors in it agree on at most a single coordinate). It is an interesting open question whether there are instances of smooth label cover that require linear sos degree to obtain a \(1\) vs \(\epsilon\) approximation; a positive answer can be interpreted as an obstacle to various approaches to proving the UGC. The right notion of approximation here seems to be strong soundness, where in the soundness case not only does every assignment \(x\) satisfy at most an \(\epsilon\) fraction of the constraints, but even the average fractional Hamming distance between the projection of \(x\) to a clause and \(P^{-1}(1)\) is at least \(1-\epsilon\).
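
As a tiny sanity check of the coding view (a sketch, for \(k=3\), using a single 3XOR equation):

```python
from itertools import product, combinations

# The satisfying assignments of x_1 + x_2 + x_3 = 0 (mod 2) should form a
# code of minimum distance k - 1 = 2, i.e., pairwise agreement on at most
# one coordinate.
sols = [a for a in product([0, 1], repeat=3) if sum(a) % 2 == 0]
dists = {sum(u != v for u, v in zip(a, b)) for a, b in combinations(sols, 2)}
print(sols)   # [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]
print(dists)  # {2}: every pair differs in exactly 2 of the 3 coordinates
```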

References

Arora, Sanjeev, Boaz Barak, and David Steurer. 2015. “Subexponential Algorithms for Unique Games and Related Problems.” J. ACM 62 (5): 42:1–42:25.

Feige, Uriel, and Joe Kilian. 1994. “Two Prover Protocols: Low Error at Affordable Rates.” In STOC, 172–83. ACM.

Moshkovitz, Dana, and Ran Raz. 2010. “Sub-Constant Error Probabilistically Checkable Proof of Almost-Linear Size.” Computational Complexity 19 (3): 367–422.

Raz, Ran. 1995. “A Parallel Repetition Theorem.” In STOC, 447–56. ACM.