Sum-of-squares: proofs, beliefs, and algorithms — Boaz Barak and David Steurer



Integrality Gap for the Knapsack Problem

Note: These notes are still somewhat rough.

Suppose we have \(n\) items of unit size and we want to pack as many as possible in a knapsack of size \(r\). Then, we can pack at most \(\lfloor r \rfloor\) items in the knapsack. In this lecture, we show that this seemingly simple piece of reasoning - that one cannot pack a “fractional” item - is not captured by the low-degree SoS algorithm. Specifically, we’ll prove Grigoriev’s theorem (Grigoriev 2001):

For any \(r \leq n/2\), there is a pseudodistribution of degree \(\Omega(r)\) over \(\bits^n\) that satisfies the constraint \[ \sum_{i = 1}^n x_i = r. \label{eq:knapsack} \]

Observe that for non-integral \(r\), no actual distribution over \(\bits^n\) can satisfy \eqref{eq:knapsack}, so the above yields a degree-\(\Omega(r)\) integrality gap for the special case of knapsack presented above.

Grigoriev’s original proof of this result was an elegant argument that analyzed a natural, maximally symmetric pseudodistribution, which we will describe momentarily. Along the way, this argument essentially reinvented several basic results about the spectrum of matrices from the Johnson scheme studied in algebraic combinatorics. We highly recommend reading Grigoriev’s original proof - in this lecture, however, we will rely on standard results from the theory of association schemes and obtain a shorter proof. This argument was first presented by Meka and Wigderson (Meka and Wigderson 2013) in a work that attempted to show the first SoS lower bound for the Planted Clique problem.

The Pseudodistribution

Fix \(d = \Theta(r)\), with the constant to be chosen later. The idea for constructing a degree-\(2d\) pseudodistribution is very natural - the constraint \(\sum_{i}x_i = r\) is symmetric under permutations of the variables. Thus, if we had an arbitrary degree-\(2d\) pseudodistribution satisfying \eqref{eq:knapsack}, we could average it over all permutations \(\sigma\) of \([n]\) and obtain another degree-\(2d\) pseudodistribution over the hypercube that is symmetric w.r.t. permutations of \([n]\) and still satisfies \eqref{eq:knapsack}.
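
Concretely, writing \(\pE'\) for the averaged operator, \[ \pE'[p(x)] = \E_{\sigma \sim S_n} \pE\left[p(x_{\sigma(1)}, \ldots, x_{\sigma(n)})\right]. \] Positivity is preserved since \(\pE'[p^2]\) is an average of the non-negative quantities \(\pE[(p \circ \sigma)^2]\), and the constraint is preserved since it is itself permutation-invariant.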

Thus, for any symmetric pseudodistribution \(\mu\), there exists a function \(f\) such that for every multilinear monomial \(x_S = \prod_{i \in S} x_i\), \(\pE[x_S] = f(|S|).\) It turns out that there is no choice in \(f\) either.

First, \(\sum_{i = 1}^n \pE_{\mu}[x_i] = n f(1)\). On the other hand, since \(\mu\) satisfies \eqref{eq:knapsack}, \(\sum_{i = 1}^n\pE[x_i] = r\). Thus, \(\pE[x_i] = f(1) = r/n\) for every \(i\). Next, for any \(j \neq i\), using the constraint and \(x_i^2 = x_i\), \[r f(1) = r \pE[x_i] = \pE[x_i(\sum_{j = 1}^n x_j)]= \pE[x_i] + (n-1) \pE[x_ix_j] = f(1) + (n-1) f(2),\] which implies that \(f(2) = \frac{r-1}{n-1} f(1) = \frac{r(r-1)}{n(n-1)}\). One can repeat this argument to obtain that \[ \pE[x_S] = f(|S|) = \frac{{r \choose |S|}}{{n \choose |S|}}, \label{eq:def-pE} \] for every \(S\) such that \(|S| \leq 2d\), where \({r \choose |S|}\) denotes the generalized binomial coefficient \(r(r-1)\cdots(r-|S|+1)/|S|!\), since \(r\) need not be an integer.

It is easy to check that \(\pE\) constructed here satisfies the constraint \eqref{eq:knapsack}. Thus, as usual, we are left with showing the positivity of \(\pE\).
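
As a quick sanity check, here is a minimal numerical sketch, assuming the small parameters \(n = 10\), \(r = 4.5\), \(d = 2\) (an arbitrary choice in the regime \(d = O(r)\)): it constructs \(\pE\) and verifies both the knapsack constraint and positivity of the moment matrix.

```python
# Minimal numerical sketch: build the symmetric pseudo-expectation
# pE[x_S] = C(r,|S|)/C(n,|S|) for small, non-integral r, and check
# (i) the knapsack constraint and (ii) positivity of the moment matrix.
# The parameters n, r, d below are an assumed small instance.
from itertools import combinations
from math import comb
import numpy as np

def gen_binom(r, k):
    """Generalized binomial coefficient C(r, k) = r(r-1)...(r-k+1)/k! for real r."""
    out = 1.0
    for j in range(k):
        out *= (r - j) / (j + 1)
    return out

n, r, d = 10, 4.5, 2
f = lambda k: gen_binom(r, k) / comb(n, k)   # pE[x_S] = f(|S|)

# moment matrix indexed by all subsets of [n] of size at most d
subsets = [frozenset(S) for k in range(d + 1) for S in combinations(range(n), k)]
M = np.array([[f(len(S | T)) for T in subsets] for S in subsets])

# (i) linear constraint: sum_i pE[x_S x_i] = r * pE[x_S]  (using x_i^2 = x_i)
for S in subsets:
    assert abs(sum(f(len(S | {i})) for i in range(n)) - r * f(len(S))) < 1e-9

# (ii) positivity: the moment matrix is PSD
assert np.linalg.eigvalsh(M).min() > -1e-9
print("pE is a valid degree-%d pseudo-expectation for n=%d, r=%.1f" % (2 * d, n, r))
```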

Reduction to Positivity over Squared Homogeneous Polynomials

Because of the linear equality constraint \eqref{eq:knapsack}, it turns out that it is enough to prove positivity of \(\pE[p^2]\) for every homogeneous degree-\(d\) polynomial \(p\).

Let \(M\) be any linear operator on polynomials of degree at most \(2d\) that is consistent with the constraints \(x_i^2 = x_i\) for every \(i \in [n]\) and \(\sum_{i = 1}^n x_i - r = 0.\) Suppose \(M(p^2) \geq 0\) for every homogeneous degree-\(d\) polynomial \(p\). Then, \(M(q^2) \geq 0\) for every polynomial \(q\) of degree at most \(d\).

The idea is that any polynomial \(p\) of degree \(d\) can be written as \[ p_1 + \sum_{i = 1}^n (x_i^2 - x_i) p_i + q (\sum_{i = 1}^n x_i - r) \label{eq:homogenization} \] for a homogeneous degree-\(d\) polynomial \(p_1\), polynomials \(p_i\) of degree at most \(d-2\), and a polynomial \(q\) of degree at most \(d-1\). This can be shown by polynomial division, for example.
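
For instance, in the simplest case \(d = 1\), a polynomial \(p(x) = c + \sum_{i = 1}^n a_i x_i\) can be homogenized using the knapsack constraint alone: \[ c + \sum_{i = 1}^n a_i x_i = \sum_{i = 1}^n \Bigl(a_i + \frac{c}{r}\Bigr) x_i - \frac{c}{r} \Bigl(\sum_{i = 1}^n x_i - r\Bigr), \] so that \(p_1 = \sum_{i}(a_i + c/r) x_i\) is homogeneous of degree \(1\), all the \(p_i\) vanish, and \(q = -c/r\) has degree \(0\).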

Once we have the above representation, squaring the right hand side of \eqref{eq:homogenization} yields \(p_1^2\) plus terms that each have either \((x_i^2-x_i)\) or \((\sum_{i = 1}^n x_i - r)\) as a factor. Applying \(M\) to the RHS, using linearity and the fact that \(M\) satisfies both the Boolean and the knapsack constraints, thus yields \(M(p^2) = M(p_1^2) \geq 0\).

We can now pass to the matrix view of things in order to show positivity of \(\pE\) on homogeneous degree-\(d\) polynomials - we have seen this argument several times in previous lectures.

Let \(\cM \in \R^{\nchoose{d} \times \nchoose{d}}\) be defined by \(\cM(S,T) = \pE[ x_{S \cup T}] = f(2d-|S\cap T|)\) for any sets \(S, T \subseteq [n]\) of size \(d\). Show that \(\pE[p^2] \geq 0\) for every homogeneous degree-\(d\) polynomial \(p\) if and only if \(\cM \succeq 0.\)

Note that \(\cM\) is the \({n \choose d} \times {n \choose d}\)-dimensional principal submatrix of the usual moment matrix of \(\pE\), indexed by the sets of size exactly \(d\).

Johnson Scheme

The discussion here is based on (Meka and Wigderson 2013), which in turn is based on (Godsil 1993). Association schemes are well-studied objects in algebraic combinatorics. For our purposes, we can think of an association scheme as a commutative algebra of square matrices - i.e., adding or multiplying any two matrices from the set yields another matrix in the set, and for any two matrices \(A,B\) in the set, \(AB = BA\). We are interested in one such well-studied scheme.

Let \(n, d \in \N\) with \(d < n/2\) be parameters. The Johnson scheme \(\cJ_{n,d}\) of order \(d\) on \([n]\) is the linear subspace of all matrices \(A\) in \(\R^{\nchoose{d}\times \nchoose{d}}\) that are set-symmetric, i.e., such that \(A(I,J) = h(|I \cap J|)\) for some function \(h\) and all \(I,J \in \nchoose{d}\). In other words, any entry of any matrix in the subspace depends only on the size of the intersection of the row and column index sets.

We now define two bases for \(\cJ_{n,d}\).

For \(0 \leq \ell \leq d \leq n\), let \(D_{\ell} \in \R^{\nchoose{d} \times \nchoose{d}}\) be the matrix defined by \[ D_{\ell}(I,J) = \begin{cases} 1 & \text{ if } |I \cap J| = \ell\\ 0 & \text{ otherwise.} \end{cases} \]

\(D_0\) is then the well-studied Set Disjointness matrix from communication complexity. It is easy to check that the matrices \(D_{\ell}\) for \(\ell \leq d\) span \(\cJ_{n,d}\). Further, it’s also easy to verify that the \(D_{\ell}\)’s commute with each other (see the sketch below) - and thus every pair of matrices in \(\cJ_{n,d}\) commutes, establishing that \(\cJ_{n,d}\) is indeed a commutative algebra of matrices.
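
Here is a minimal numerical sketch of the commutation check, assuming the arbitrary small parameters \(n = 6\), \(d = 2\):

```python
# Check that the D_ell matrices pairwise commute for a small instance.
from itertools import combinations
import numpy as np

n, d = 6, 2
sets = [frozenset(S) for S in combinations(range(n), d)]

def D(ell):
    """The 0/1 matrix with D(I, J) = 1 iff |I intersect J| = ell."""
    return np.array([[float(len(I & J) == ell) for J in sets] for I in sets])

Ds = [D(ell) for ell in range(d + 1)]
for A in Ds:
    for B in Ds:
        assert np.allclose(A @ B, B @ A)
```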

For the purposes of proving PSDness, another basis of \(\cJ_{n,d}\) is very useful - the \(P\)-basis.

For \(0 \leq t \leq d\), let \(P_t \in \R^{\nchoose{d} \times \nchoose{d}}\) be defined by \[ P_t(I,J) = {{|I \cap J|} \choose t}, \] where it’s understood that \({m \choose t} = 0\) whenever \(m < t\).

There’s an equivalent definition of the \(P_t\)’s that’s helpful in calculations.

Let \(R_T\) be the rank-one matrix defined by \(R_T(I,J) = \1(I \supseteq T) \1(J \supseteq T)\). Show that \(P_t = \sum_{T \subseteq [n], |T|= t} R_T.\) In particular, \(P_t \succeq 0\), being a sum of rank-one PSD matrices.
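
A minimal numerical sketch of this identity, assuming the arbitrary small parameters \(n = 6\), \(d = 2\), \(t = 1\), also confirms that \(P_t\) is PSD:

```python
# Verify P_t = sum over |T| = t of R_T, where R_T = v v^T is rank one and PSD.
from itertools import combinations
from math import comb
import numpy as np

n, d, t = 6, 2, 1
sets = [frozenset(S) for S in combinations(range(n), d)]

P_t = np.array([[float(comb(len(I & J), t)) for J in sets] for I in sets])

R_sum = np.zeros((len(sets), len(sets)))
for T in combinations(range(n), t):
    v = np.array([float(set(T) <= I) for I in sets])  # indicator that I contains T
    R_sum += np.outer(v, v)                           # R_T = v v^T

assert np.allclose(P_t, R_sum)
assert np.linalg.eigvalsh(P_t).min() > -1e-9          # hence P_t is PSD
```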

The above exercise can be used to obtain explicit basis-change coefficients between the \(D\)-basis and the \(P\)-basis.

Show that

  1. For \(0\leq t \leq d\), we have \(P_t = \sum_{\ell = t}^d {\ell \choose t} D_{\ell}.\)
  2. For \(0 \leq \ell \leq d\), we have \(D_{\ell} = \sum_{t = \ell}^{d} (-1)^{t-\ell} {t \choose \ell} P_t.\)
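
Both identities can be checked numerically; a minimal sketch, again assuming the small parameters \(n = 6\), \(d = 2\):

```python
# Verify the two basis-change identities between the D- and P-bases.
from itertools import combinations
from math import comb
import numpy as np

n, d = 6, 2
sets = [frozenset(S) for S in combinations(range(n), d)]
D = lambda l: np.array([[float(len(I & J) == l) for J in sets] for I in sets])
P = lambda t: np.array([[float(comb(len(I & J), t)) for J in sets] for I in sets])

# P_t = sum_{l >= t} C(l, t) D_l
for t in range(d + 1):
    assert np.allclose(P(t), sum(comb(l, t) * D(l) for l in range(t, d + 1)))

# D_l = sum_{t >= l} (-1)^{t - l} C(t, l) P_t
for l in range(d + 1):
    assert np.allclose(D(l), sum((-1) ** (t - l) * comb(t, l) * P(t)
                                 for t in range(l, d + 1)))
```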

The main result from the theory of association schemes that is of interest to us is the characterization of the eigenspaces and eigenvalues of the matrices in \(\cJ_{n,d}\). It is obtained using the fact that there’s a natural action of \(S_n\) (induced by relabeling the elements of \([n]\)) that commutes with every matrix in \(\cJ_{n,d}\) (because of the set-symmetry of \(\cJ_{n,d}\)-matrices). This implies that the matrices in \(\cJ_{n,d}\) preserve the invariant subspaces of this action of the symmetric group \(S_n\), and these subspaces turn out to be common eigenspaces. The latter are well understood as representations of \(S_n\), and one can use this understanding to arrive at an explicit description of the eigenvalues and eigenspaces of matrices in \(\cJ_{n,d}\).

For our purposes, we will just state the results we need.

For \(P_t = P_{t,n,d}\) defined as above, there exist pairwise orthogonal subspaces \(V_0, V_1, V_2, \ldots, V_d\) such that

  1. \(V_0, V_1, V_2,\ldots,V_d\) are eigenspaces for \(P_t\) for every \(0 \leq t \leq d\) and consequently for every matrix in \(\cJ_{n,d}\).
  2. \(\dim(V_j) = {n \choose j} - {n \choose {j-1}}\) (with the convention \({n \choose {-1}} = 0\)).
  3. For any matrix \(J \in \cJ_{n,d}\), let \(\lambda_i(J)\) denote the eigenvalue of \(J\) on the subspace \(V_i\). Then, \[ \lambda_i(P_t) = \begin{cases} {{n-t-i} \choose {d-t}} \cdot {{d-i} \choose {t-i}} & \text{ if } i \leq t\\ 0 & \text{ otherwise.} \end{cases} \]
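
A minimal numerical sketch, assuming the arbitrary small parameters \(n = 8\), \(d = 3\), compares the exact spectrum of each \(P_t\) with the eigenvalues and multiplicities predicted by the lemma:

```python
# Compare the spectrum of P_t with the lemma's predicted eigenvalues,
# each with multiplicity dim(V_j) = C(n, j) - C(n, j-1).
from itertools import combinations
from math import comb
import numpy as np

n, d = 8, 3
sets = [frozenset(S) for S in combinations(range(n), d)]

for t in range(d + 1):
    P_t = np.array([[float(comb(len(I & J), t)) for J in sets] for I in sets])
    predicted = []
    for j in range(d + 1):
        lam = comb(n - t - j, d - t) * comb(d - j, t - j) if j <= t else 0
        mult = comb(n, j) - (comb(n, j - 1) if j > 0 else 0)   # dim(V_j)
        predicted += [lam] * mult
    assert np.allclose(sorted(np.linalg.eigvalsh(P_t)), sorted(predicted))
```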

The above lemma helps us estimate the eigenvalues of any matrix that can be written as a linear combination of the \(P_t\)’s or the \(D_{\ell}\)’s. In particular, we will use the following estimate on the eigenvalues of such matrices in our analysis of \(\cM\) from the previous section.

Let \(Q= \sum_{\ell} \alpha_{\ell} D_{\ell} \in \cJ_{n,d}\) with \(\alpha_{\ell} \geq 0\), and let \(\beta_t = \sum_{\ell \leq t} {t \choose \ell} \alpha_{\ell}\). Then, for \(0 \leq j \leq d\), \[ \lambda_j(Q) \leq \sum_{t \geq j} \beta_t {{n-t-j} \choose {d-t}} {{d-j} \choose {t-j}}. \]

\[ \begin{aligned} Q = \sum_{\ell} \alpha_{\ell} D_{\ell} &= \sum_{\ell} \alpha_{\ell} \Bigl(\sum_{t \geq \ell}(-1)^{t-\ell}{t \choose \ell} P_t\Bigr) \\ &= \sum_{t} \Bigl(\sum_{\ell \leq t} (-1)^{t-\ell} {t \choose \ell} \alpha_{\ell}\Bigr) P_t \\ &\preceq \sum_{t} \Bigl(\sum_{\ell \leq t} {t \choose \ell} \alpha_{\ell}\Bigr) P_t = \sum_{t} \beta_t P_t, \end{aligned} \] where the inequality is in the PSD order: dropping the signs can only increase the coefficient of each \(P_t\), and each \(P_t \succeq 0\) by the exercise above.

Using that \(Q\) and the \(P_t\)’s share eigenspaces and applying the eigenspace lemma above to \(\sum_t \beta_t P_t\), we obtain the estimate claimed.

PSDness of \(\pE\)

\(\cM \succeq 0\).

We are going to use the eigenvalue estimates from the previous section. For any non-negative \(\alpha_0, \alpha_1, \ldots, \alpha_{d}\), note that: \[ 0 \preceq \sum_{t = 0}^{d} \alpha_{t} P_t = \sum_{\ell= 0}^{d} \Bigl( \sum_{t = 0}^{\ell} \alpha_t {\ell \choose t}\Bigr) D_{\ell}, \] where the positivity uses \(P_t \succeq 0\) and the equality uses the basis-change identity from the exercise above.

From the definition of \(\cM\), we know that \(\cM = \sum_{\ell = 0}^d f(2d-\ell) D_{\ell}.\) We will thus be done from the above if we can find non-negative \(\alpha_{t}\) such that \(\sum_{t = 0}^{\ell} \alpha_t {\ell \choose t} = f( 2d-\ell)\) for every \(0 \leq \ell \leq d\).

Observe that \(f(2d-\ell) = f(2d) \cdot \frac{{{n-2d+\ell} \choose \ell}}{{{r-2d+\ell} \choose \ell}}.\)

Choose \(\alpha_t = f(2d) \cdot \frac{{{n-r} \choose t}}{{{r-2d+t} \choose t}}.\) Note that when \(2d < r + 1\), which we can ensure by the choice of the constant in \(d = \Theta(r)\), every factor appearing in \(f(2d)\) and in these generalized binomial coefficients is positive, so \(\alpha_t \geq 0\).

We can now verify, using Vandermonde’s identity \({{a+b} \choose \ell} = \sum_{t=0}^{\ell} {a \choose t}{b \choose {\ell-t}}\) with \(a = n-r\) and \(b = r-2d+\ell\): \[ \begin{aligned} f(2d) {{n-2d+\ell} \choose \ell} &= f(2d) \sum_{t = 0}^{\ell} {{n-r} \choose t} \cdot {{r-2d+\ell} \choose {\ell-t}}\\ &= \sum_{t = 0}^{\ell} \alpha_t \cdot {{r-2d+t} \choose t} {{r-2d+\ell} \choose {\ell-t}}\\ &= \sum_{t = 0}^{\ell} \alpha_t {\ell \choose t} \cdot {{r-2d+\ell} \choose \ell}, \end{aligned} \] where the second equality uses the definition of \(\alpha_t\) and the third uses the identity \({{r-2d+t} \choose t}{{r-2d+\ell} \choose {\ell-t}} = {\ell \choose t}{{r-2d+\ell} \choose \ell}\). Dividing by \({{r-2d+\ell} \choose \ell}\) and using the observation above yields \(\sum_{t = 0}^{\ell} \alpha_t {\ell \choose t} = f(2d-\ell)\), as required.

The lemma now follows. In fact, since \(\lambda_d(P_t) = 0\) for \(t < d\) and \(\lambda_d(P_d) = 1\), the argument also shows that the minimum eigenvalue of \(\cM\) is exactly \(\alpha_d\).
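
A minimal numerical sketch, assuming the same small parameters \(n = 10\), \(r = 4.5\), \(d = 2\) as before, verifies the decomposition \(\cM = \sum_t \alpha_t P_t\) with non-negative \(\alpha_t\) and checks that \(\lambda_{\min}(\cM) = \alpha_d\):

```python
# Verify the final argument: M = sum_t alpha_t P_t with alpha_t >= 0,
# and the minimum eigenvalue of M equals alpha_d.
from itertools import combinations
from math import comb
import numpy as np

def gen_binom(r, k):
    """Generalized binomial coefficient C(r, k) for real r."""
    out = 1.0
    for j in range(k):
        out *= (r - j) / (j + 1)
    return out

n, r, d = 10, 4.5, 2
f = lambda k: gen_binom(r, k) / comb(n, k)
alpha = [f(2 * d) * gen_binom(n - r, t) / gen_binom(r - 2 * d + t, t)
         for t in range(d + 1)]

sets = [frozenset(S) for S in combinations(range(n), d)]
M = np.array([[f(2 * d - len(S & T)) for T in sets] for S in sets])
P = lambda t: np.array([[float(comb(len(I & J), t)) for J in sets] for I in sets])

assert min(alpha) >= 0
assert np.allclose(M, sum(a * P(t) for t, a in enumerate(alpha)))
assert np.isclose(np.linalg.eigvalsh(M).min(), alpha[d])
```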

References

Godsil, Chris D. 1993. Algebraic Combinatorics. Chapman and Hall Mathematics Series. Chapman & Hall.

Grigoriev, Dima. 2001. “Complexity of Positivstellensatz Proofs for the Knapsack.” Computational Complexity 10 (2): 139–54.

Meka, Raghu, and Avi Wigderson. 2013. “Association Schemes, Non-Commutative Polynomial Concentration, and Sum-of-Squares Lower Bounds for Planted Clique.” Electronic Colloquium on Computational Complexity (ECCC) 20: 105.