Sum-of-squares: proofs, beliefs, and algorithms — Boaz Barak and David Steurer


Grothendieck-type inequalities

Suppose \(A\in\R^{n\times m}\) is a linear operator from \(\R^m\) to \(\R^n\) represented by an \(n\)-by-\(m\) matrix. An important parameter of \(A\) is its operator norm, the smallest number \(c\ge 0\) such that \(\norm{A x} \le c \cdot \norm{x}\) for all \(x\in\R^m\). This quantity depends on the choice of norms for the input and output spaces of the operator—\(\R^m\) and \(\R^n\) in our case. The most common choice is the Euclidean norm. In this case, the operator norm is simply the largest singular value of \(A\), which can be computed in polynomial time.
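To make the Euclidean case concrete, here is a minimal numpy sketch (variable names are ours) checking that the largest singular value indeed bounds the ratio \(\norm{Ax}/\norm{x}\):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# The Euclidean operator norm of A is its largest singular value.
sigma_max = np.linalg.svd(A, compute_uv=False)[0]

# Sanity check against the definition: ||Ax|| / ||x|| never exceeds it.
xs = rng.standard_normal((3, 10000))
ratios = np.linalg.norm(A @ xs, axis=0) / np.linalg.norm(xs, axis=0)
assert ratios.max() <= sigma_max + 1e-9
```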

Suppose that we wanted to bound the maximum operator norm of \(A\) over all choices of \(\ell_p\) norms for \(\R^m\) and \(\R^n\)—fixing the norms of the coordinate basis vectors to be \(1\). It turns out that the worst choice of norms is \(\ell_\infty\) for the input space \(\R^m\) and \(\ell_1\) for the output space \(\R^n\). The reason is that \(\ell_\infty\) is the smallest \(\ell_p\) norm on \(\R^m\) and \(\ell_1\) is the largest \(\ell_p\) norm on \(\R^n\) (when we fix the norms of the coordinate basis vectors). We let \(\norm{A}_{\infty\to 1}\) denote the operator norm of \(A\) for this choice of norms for the input and output space, \[ \norm{A}_{\infty\to 1} = \max_{x\in \R^m - 0} \frac{\norm{A x}_1}{\norm {x}_\infty}\,. \] Unlike the largest singular value, this operator norm is NP-hard to compute. However, as we will see, there exists a polynomial-time algorithm that approximates this norm within a constant factor (the constant is bigger than \(\tfrac12\)).

The following lemma shows that \(\norm{A}_{\infty \to 1}\) is the optimum value of a quadratic optimization problem over the hypercube. For convenience, we work with the set \(\sbits^n\) instead of \(\bits^n\).

For every matrix \(A\in \R^{n\times m}\), \[ \norm{A}_{\infty \to 1} = \max_{x\in \sbits^m, y\in \sbits^n} \iprod{Ax,y}\,. \]

The lemma follows from the fact that for every vector \(z\in\R^n\), the maximum of \(\iprod{z,y}\) over all \(y\in\sbits^n\) is equal to \(\norm{z}_1\), attained by taking \(y_i = \sign(z_i)\) for every coordinate \(i\).
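The lemma immediately suggests a brute-force algorithm: it suffices to enumerate \(x\in\sbits^m\) and take the largest \(\norm{Ax}_1\). The following sketch (ours, assuming numpy) does exactly that; its running time is exponential in \(m\), consistent with the NP-hardness mentioned above:

```python
import itertools
import numpy as np

def inf_to_one_norm(A):
    """Exact ||A||_{inf->1} by enumerating x in {-1,+1}^m.

    By the lemma, max_{y in {-1,+1}^n} <Ax, y> = ||Ax||_1 (take
    y_i = sign((Ax)_i)), so only x needs to be enumerated.  The running
    time is exponential in m -- fine for tiny instances only.
    """
    m = A.shape[1]
    return max(np.abs(A @ np.array(x)).sum()
               for x in itertools.product([-1, 1], repeat=m))
```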

One application of the \(\infty\)-to-\(1\) norm is to approximate the cut norm of a matrix \(A=(a_{ij})\in\R^{n\times m}\), which is the maximum of \(\bigabs{\sum_{i\in S,j\in T} a_{ij}}\) over all subsets \(S\subseteq [n],T\subseteq [m]\).

Prove that for every matrix \(A\), the cut norm of \(A\) is between \(\norm{A}_{\infty \to 1}/4\) and \(\norm{A}_{\infty \to 1}\).
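As a numerical sanity check (not a proof), one can verify the claimed sandwich on small random matrices, using the same brute-force enumeration as in the sketch above:

```python
import itertools
import numpy as np

def cut_norm(A):
    """Exact cut norm: max of |sum_{i in S, j in T} A_ij| over subsets."""
    n, m = A.shape
    return max(abs(np.array(S) @ A @ np.array(T))
               for S in itertools.product([0, 1], repeat=n)
               for T in itertools.product([0, 1], repeat=m))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
# inf->1 norm via the lemma, as in the previous sketch.
g = max(np.abs(A @ np.array(x)).sum()
        for x in itertools.product([-1, 1], repeat=A.shape[1]))
c = cut_norm(A)
assert g / 4 - 1e-9 <= c <= g + 1e-9   # the claimed sandwich
```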

Alexander Grothendieck (1928–2014) was one of the leading mathematicians of the 20th century, transforming the field of algebraic geometry. One of his early works established a result he called “the fundamental theorem in the metric theory of tensor products,” now known as Grothendieck’s inequality. This inequality has found applications in a diverse variety of fields including Banach spaces, \(C^*\)-algebras, quantum mechanics, and computer science. The surveys of Pisier (2012) and Khot and Naor (2012) are good sources for the amazing array of applications.

Grothendieck’s inequality is equivalent to the following theorem about degree-2 pseudo-distributions (see Alon and Naor (2004)).

There exists an absolute constant \(K_\mathrm{G}\) such that for every matrix \(A\in \R^{n\times m}\) and every degree-\(2\) pseudo-distribution \(\mu\from \sbits^{m}\times \sbits^n\to \R\), \[ \pE_{\mu(x,y)}\iprod{Ax,y} \le K_{\mathrm{G}} \cdot \max_{x\in \sbits^m, y\in \sbits^n} \iprod{Ax,y}\,. \]

Up to now we have defined pseudo-distributions only over \(\bits^\ell\) for some \(\ell\in\N\), but here it is convenient to work with pseudo-distributions over \(\sbits^\ell\). We can simply use the affine map \(x \mapsto \Ind - 2x\) to translate between the two sets, but it is also easy to define the notions of pseudo-distribution and pseudo-expectation directly over the signed Boolean cube \(\sbits^\ell\). The only difference is that when reducing a general polynomial to a multilinear one, in the \(\sbits\) case we use the identity \(x_i^2=1\) instead of \(x_i^2=x_i\) as we did in the \(\bits\) case.
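Concretely, a degree-\(2\) pseudo-distribution over \(\sbits^{m+n}\) is specified by its second-moment matrix, which must be psd and—because of the constraints \(x_i^2=1\) and \(y_j^2=1\)—have all ones on its diagonal, so maximizing \(\pE_{\mu(x,y)}\iprod{Ax,y}\) is a semidefinite program. Here is a minimal sketch, assuming the cvxpy modeling library with a bundled SDP solver such as SCS (our choice; any SDP solver would do):

```python
import cvxpy as cp  # assumption: cvxpy with an SDP solver (e.g. SCS)

def sdp_value(A):
    """Max of pE <Ax, y> over degree-2 pseudo-distributions on {+-1}^(m+n).

    A degree-2 pseudo-distribution is represented by its second-moment
    matrix M = pE (x,y)(x,y)^T, which must be psd with unit diagonal
    (from x_i^2 = 1 and y_j^2 = 1).  The objective only reads the
    off-diagonal block pE x y^T, since pE <Ax, y> = Tr(A pE x y^T).
    """
    n, m = A.shape
    M = cp.Variable((m + n, m + n), PSD=True)
    Mxy = M[:m, m:]                       # the block pE x y^T, shape (m, n)
    problem = cp.Problem(cp.Maximize(cp.trace(A @ Mxy)),
                         [cp.diag(M) == 1])
    return problem.solve()
```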

By the duality between pseudo-distributions and sos certificates, Grothendieck’s inequality is also equivalent to the statement that the polynomial \(K_G\cdot\norm{A}_{\infty\to 1}-\iprod{Ax,y}\) has a degree-\(2\) sos certificate.

The smallest value of \(K_{\mathrm G}\) satisfying this inequality is known as Grothendieck’s constant. Computing the exact numerical value of this constant is a longstanding open problem, though we know that it is around \(1.7\). In 1977, Krivine proved that \(K_{\mathrm G} \leq \tfrac{\pi}{2\log(1+\sqrt{2})} \approx 1.782\) and conjectured that this bound is tight. However, this conjecture was disproved by Braverman et al. (2011). Raghavendra and Steurer (2009) showed that one can compute \(K_{\mathrm G}\) up to accuracy \(\epsilon\) in time double exponential in \(1/\epsilon\).

We show a proof of Grothendieck’s inequality due to Krivine (see Alon and Naor (2004)).

As in the proof for Max Cut, we may assume that \(\mu\) has mean \(0\). We will show that there are jointly Gaussian vectors \(\xi,\zeta\) such that \[ \pE_{\mu(x,y)} x\transpose y = K_{\mathrm{Krivine}} \cdot \E_{\xi,\zeta} (\sign \circ \xi)\transpose {(\sign \circ \zeta)} \, \label{eq:krivine} \] where \(K_{\mathrm{Krivine}}\) is an absolute constant to be determined later. (Here, \((\sign\circ \xi)\in\sbits^m\) and \((\sign \circ\zeta)\in \sbits^n\) denote the vectors obtained by taking the signs coordinate-wise for \(\xi\) and \(\zeta\).) Equation \eqref{eq:krivine} implies the theorem because \[ \begin{aligned} \pE_\mu \iprod{Ax,y} & = \Tr A \pE_\mu x\transpose y \\ & = K_{\mathrm{Krivine}} \cdot \Tr A \E_{\xi,\zeta} (\sign \circ\xi)\transpose{ (\sign \circ \zeta)}\\ & = K_{\mathrm{Krivine}} \cdot \E_{\xi,\zeta} \bigiprod{A (\sign\circ \xi), (\sign\circ \zeta)}\\ & \le K_{\mathrm{Krivine}} \cdot \norm{A}_{\infty\to 1}\,. \end{aligned} \] It remains to show the existence of Gaussian vectors such that \eqref{eq:krivine} holds. We will choose the Gaussian vectors such that the diagonals of the covariance matrices \(\E \dyad \xi\) and \(\E \dyad \zeta\) are all ones. Then, as in the proof for Max Cut (see also the first exercise below), \[ \E_{\xi,\zeta} (\sign \circ\xi)\transpose {(\sign\circ \zeta)} = \tfrac 2\pi\cdot \arcsin\circ \Paren{\E \xi \transpose \zeta}\,, \] where we apply the \(\arcsin\) function entry-wise to the matrix \(\E \xi\transpose \zeta\). Therefore, our goal is to choose the distribution of \(\xi,\zeta\) such that \[ \sin\circ\Paren{c \cdot\pE_\mu x\transpose y} = \E \xi\transpose \zeta\,, \] where \(c=\tfrac \pi {2 {K_{\mathrm{Krivine}}}}\) and we apply the \(\sin\) function again entry-wise. By the last exercise below (on \(\sin\) and \(\sinh\) applied to block psd matrices), the following matrix is positive semidefinite: \[ \Paren{\begin{matrix} \sinh \circ \Paren{c \pE_\mu \dyad x} & \sin \circ \Paren{c \pE_{\mu} x\transpose y} \\ \sin \circ \Paren{c \pE_{\mu} y\transpose x} & \sinh \circ \Paren{c \pE_\mu \dyad y}\\ \end{matrix}}\,. \] It follows that we can choose \((\xi,\zeta)\) to be Gaussian vectors with the above matrix as covariance. Recall that we required the entries of \(\xi\) and \(\zeta\) to have variance \(1\). Since \(\pE_\mu \dyad x\) and \(\pE_\mu \dyad y\) have all ones on their diagonals, this requirement translates to the condition \(\sinh(c)=1\). The solution to this equation is \(c=\sinh^{-1}(1)=\ln(1+\sqrt 2)\). Therefore we can choose \({K_{\mathrm{Krivine}}}=\frac \pi{2 \ln(1+\sqrt 2)}\le 1.783\) for the conclusion of the theorem.
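The proof translates directly into a rounding procedure: obtain the second-moment matrix \(M\) (e.g. from the SDP sketch above), transform its diagonal blocks by \(\sinh\) and its off-diagonal blocks by \(\sin\), sample Gaussians with the resulting covariance, and take signs. The following numpy sketch mirrors these steps; the function name and the best-of-several-trials loop are our additions, since in expectation a single sample already achieves \(\pE_\mu\iprod{Ax,y}/K_{\mathrm{Krivine}}\):

```python
import numpy as np

def krivine_round(A, M, trials=200, seed=0):
    """Krivine rounding sketch: second-moment matrix M -> a +-1 assignment.

    M is the (m+n) x (m+n) matrix pE (x,y)(x,y)^T with unit diagonal.
    With c = arcsinh(1) = ln(1+sqrt(2)), applying sinh to the diagonal
    blocks and sin to the off-diagonal blocks yields a psd matrix with
    unit diagonal -- the covariance of the Gaussians (xi, zeta) in the proof.
    """
    n, m = A.shape
    c = np.arcsinh(1.0)                        # sinh(c) = 1
    C = np.empty_like(M)
    C[:m, :m] = np.sinh(c * M[:m, :m])         # covariance of xi
    C[m:, m:] = np.sinh(c * M[m:, m:])         # covariance of zeta
    C[:m, m:] = np.sin(c * M[:m, m:])          # cross-covariance
    C[m:, :m] = np.sin(c * M[m:, :m])
    rng = np.random.default_rng(seed)
    best, best_val = None, -np.inf
    for g in rng.multivariate_normal(np.zeros(m + n), C, size=trials):
        x, y = np.sign(g[:m]), np.sign(g[m:])  # sign o xi, sign o zeta
        val = y @ A @ x                        # <Ax, y>
        if val > best_val:
            best, best_val = (x, y), val
    return best, best_val
```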

Exercises to complete Krivine’s proof of Grothendieck’s inequality

The following exercises ask you to fill in some details for Krivine’s proof of Grothendieck’s inequality.

Show that for every \(\rho\in\R\) with \(-1\le \rho\le 1\) \[ \E_{s,t\sim \cN(0,1)} \sign s \cdot \sign \Paren{\rho \cdot s + \sqrt{1-\rho^2}\cdot t} = \tfrac 2 \pi \arcsin\rho\,. \]
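This identity (the Grothendieck identity referred to in the proof above) is easy to test by Monte Carlo, e.g. with numpy; with \(10^6\) samples the two sides should agree to about \(10^{-3}\):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6
s, t = rng.standard_normal((2, 10**6))
lhs = np.mean(np.sign(s) * np.sign(rho * s + np.sqrt(1 - rho**2) * t))
rhs = (2 / np.pi) * np.arcsin(rho)
print(lhs, rhs)   # agree to ~1e-3 at this sample size
```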

For every two matrices \(M,N\) of the same dimension we define the Hadamard product of \(M\) and \(N\), denoted as \(M\odot N\), as the matrix \(H\) where \(H_{i,j} = M_{i,j}N_{i,j}\) for all \(i,j\). Prove that if \(M\) and \(N\) are psd then so is \(M\odot N\).
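This fact is known as the Schur product theorem; the proof is the exercise, but a quick numerical check on random psd matrices can build confidence:

```python
import numpy as np

rng = np.random.default_rng(2)
B, C = rng.standard_normal((2, 6, 6))
M, N = B @ B.T, C @ C.T                       # two random psd matrices
H = M * N                                     # Hadamard (entrywise) product
assert np.linalg.eigvalsh(H).min() >= -1e-9   # psd up to roundoff
```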

Let \(p\) be a univariate polynomial with nonnegative coefficients in the monomial basis. Show that for every positive semidefinite matrix \(M\in\R^{n\times n}\), the matrix \(N=p\circ M\) with entries \(N_{i,j}=p(M_{i,j})\) is also positive semidefinite.

Let \(p=\sum_i p_i x^i\) be a univariate polynomial and let \(p_+ = \sum_i \abs{p_i} x^i\) be the corresponding polynomial with only nonnegative coefficients. Show that for every 2-by-2 block psd matrix \(\Paren{\begin{smallmatrix} A & B \\ \transpose B & D\end{smallmatrix}}\), the following matrix is also positive semidefinite, \[ \Paren{\begin{matrix} p_+ \circ A & p \circ B \\ p\circ \transpose B & p_+ \circ D \end{matrix}}\,. \]

Show that there exists a sequence of univariate polynomials \(\set{\super p k}_{k\in \N}\) that converges point-wise to the \(\sin\) function (i.e., \(\lim_{k\to \infty} \super p k(x)=\sin x\) for every \(x\in\R\)), and that the corresponding polynomials \(\set{\super p k _+}_{k\in \N}\) with nonnegative coefficients in the monomial basis converge point-wise to the \(\sinh\) function. Hint: look up the Taylor series expansions of the \(\sin\) and \(\sinh\) functions.
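For intuition, the two partial sums can be compared numerically; the following sketch (ours) contrasts a Taylor partial sum of \(\sin\) with its absolute-coefficient counterpart:

```python
import math
import numpy as np

def sin_partial(x, k):
    """Degree-(2k+1) Taylor partial sum of sin at x."""
    return sum((-1) ** i * x ** (2 * i + 1) / math.factorial(2 * i + 1)
               for i in range(k + 1))

def sinh_partial(x, k):
    """Same coefficients in absolute value: a partial sum of sinh."""
    return sum(x ** (2 * i + 1) / math.factorial(2 * i + 1)
               for i in range(k + 1))

x = 1.3
print(sin_partial(x, 10) - np.sin(x))    # ~ 0
print(sinh_partial(x, 10) - np.sinh(x))  # ~ 0
```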

Show that for every 2-by-2 block psd matrix \(\Paren{\begin{smallmatrix} A & B \\ \transpose B & D\end{smallmatrix}}\), the following matrix is also positive semidefinite, \[ \Paren{\begin{matrix} \sinh \circ A & \sin \circ B \\ \sin\circ \transpose B & \sinh \circ D \end{matrix}}\,. \]

More general Grothendieck-type inequalities

The proof above crucially used the fact that we optimize over disjoint sets of variables \(x_1,\ldots,x_n\) and \(y_1,\ldots,y_n\): we only needed to fix the two off-diagonal blocks of the covariance matrix of the Gaussian vector, and so had the freedom to choose the two diagonal blocks in a way that makes the matrix psd. More generally, one can consider maximization problems of the form \(x^\top A x\) where \(x\in\sbits^{2n}\) and \(A\) is an arbitrary matrix whose support (i.e., set of nonzero entries) is contained in some graph \(H\). The Grothendieck constant of \(H\) is the maximum over all such matrices of the ratio between the pseudo-distribution value and the actual value. The standard Grothendieck constant corresponds to the case that \(H\) is bipartite, but one can study the question for other graphs as well. For some graphs \(H\), the Grothendieck constant is not an absolute constant but depends on \(H\). Specifically, Alon et al. (2005) showed that there are absolute constants \(c,C\) such that the Grothendieck constant of \(H\) always lies in \([c\log \omega(H),C\log \chi(H)]\), where \(\omega(H)\) denotes the clique number of \(H\) and \(\chi(H)\) its chromatic number.

References

Alon, Noga, and Assaf Naor. 2004. “Approximating the Cut-Norm via Grothendieck’s Inequality.” In STOC, 72–80. ACM.

Alon, Noga, Konstantin Makarychev, Yury Makarychev, and Assaf Naor. 2005. “Quadratic Forms on Graphs.” In STOC, 486–93. ACM.

Braverman, Mark, Konstantin Makarychev, Yury Makarychev, and Assaf Naor. 2011. “The Grothendieck Constant Is Strictly Smaller Than Krivine’s Bound.” In FOCS, 453–62. IEEE Computer Society.

Khot, Subhash, and Assaf Naor. 2012. “Grothendieck-Type Inequalities in Combinatorial Optimization.” Comm. Pure Appl. Math. 65 (7): 992–1035. doi:10.1002/cpa.21398.

Pisier, Gilles. 2012. “Grothendieck’s Theorem, Past and Present.” Bull. Amer. Math. Soc. (N.S.) 49 (2): 237–323. doi:10.1090/S0273-0979-2011-01348-9.

Raghavendra, Prasad, and David Steurer. 2009. “Towards Computing the Grothendieck Constant.” In SODA, 525–34. SIAM.