Vanishing Ideals Of Finitely Many Points

Let $I$ be the vanishing ideal of a finite set of points in $\mathbb{R}^n$.

(a) Prove that $p(x)$ is nonnegative over $V_{\mathbb{R}}(I)$ if and only if it is a sum of squares modulo the ideal $I$.

(b) Using the above fact, prove that for $\mathcal{B}$ a $\theta$-basis of $\mathbb{R}[x]/I$, the spectrahedron $\{ y \in \mathbb{R}^{\mathcal{B}} \,:\, M_{\mathcal{B}}(y) \succeq 0, y_0 = 1 \}$ is the simplex whose vertices are the vectors $(f_i(s) \,:\, f_i+I \in \mathcal{B})$ as $s$ varies over the finitely many points in $V_\mathbb{R}(I)$.


Solution

(a) The "if" direction is immediate: if $p = \sigma + q$ with $\sigma$ a sum of squares (SOS) and $q \in I$, then $p(s) = \sigma(s) \geq 0$ for every $s \in V_{\mathbb{R}}(I)$. For the converse, the idea is to construct an SOS polynomial $g$ that takes the same values as $p$ on $V_{\mathbb{R}}(I)$. Given such a $g$, we are done: $p - g$ vanishes on $V_{\mathbb{R}}(I)$, so $p - g \in I$ because $I$ is the vanishing ideal of that set, and hence $p$ is a sum of squares mod $I$.
We construct $g$ by Lagrange interpolation. Let $a_1,\dots,a_N$ be the $N$ points of $V_{\mathbb{R}}(I)$. For $i \in [N]$, consider the polynomial $g_i(x) = \prod_{j \neq i} \|x - a_j\|_2^2 / \prod_{j \neq i} \|a_i - a_j\|_2^2$. Each $g_i$ is SOS, being a positive multiple of a product of SOS polynomials, and satisfies $g_i(a_i) = 1$ and $g_i(a_j) = 0$ for all $j \neq i$. Since $p$ is nonnegative on $V_{\mathbb{R}}(I)$, the coefficients $p(a_i)$ are nonnegative, so the polynomial $g(x) = \sum_{i=1}^N p(a_i) g_i(x)$ is SOS; moreover $g(a_i) = p(a_i)$ for all $i \in [N]$, i.e., $p = g$ on $V_{\mathbb{R}}(I)$. Hence $p - g \in I$, and $p$ is a sum of squares mod $I$.
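As an illustration of this construction, here is a minimal computational sketch using sympy; the three points and the polynomial $p$ below are arbitrary choices made for the example, not data from the exercise.

```python
import sympy as sp

x, y = sp.symbols('x y')
points = [(0, 0), (1, 0), (0, 1)]   # a toy choice of V_R(I) in R^2
p = x + y                           # nonnegative on these three points

def sq_dist(u, a):
    # the polynomial ||u - a||_2^2
    return sum((ui - ai)**2 for ui, ai in zip(u, a))

# Lagrange-type SOS interpolators: g_i(a_i) = 1 and g_i(a_j) = 0 for j != i
g = sp.Integer(0)
for i, ai in enumerate(points):
    num = sp.Mul(*[sq_dist((x, y), aj) for j, aj in enumerate(points) if j != i])
    den = sp.Mul(*[sq_dist(ai, aj) for j, aj in enumerate(points) if j != i])
    g += p.subs({x: ai[0], y: ai[1]}) * num / den

# g is SOS (a nonnegative combination of products of SOS factors) and agrees
# with p on every point, so p - g lies in the vanishing ideal I.
for a in points:
    assert (p - g).subs({x: a[0], y: a[1]}).simplify() == 0
```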

Note: part (a) also appeared in the exercise "Stability number of a graph".

(b) Let $P$ be the polytope

(1)
\begin{align} P = \operatorname{conv}\{\, [f_i(v)]_{i \in \mathcal B} \;:\; v \in V_{\mathbb{R}}(I) \,\} \subset \mathbb{R}^{\mathcal B} \end{align}

and $S$ the spectrahedron

(2)
\begin{align} S = \{ y \in \mathbb{R}^{\mathcal{B}} \,:\, M_{\mathcal{B}}(y) \succeq 0, \; y_0 = 1 \} \end{align}

We have to show that $P = S$.

In this question we identify $\mathbb{R}^{\mathcal B}$ with $\mathbb{R}[x]/I$ where a vector $\alpha \in \mathbb{R}^{\mathcal B}$ is identified with the element $\sum_{i \in \mathcal B} \alpha_i f_i \in \mathbb{R}[x]/I$.

Recall that if $y \in \mathbb{R}^{\mathcal B}$, the matrix $M_{\mathcal B}(y)$ is defined by:

(3)
\begin{align} M_{\mathcal B}(y)_{i,j} = \sum_{\ell \in \mathcal B} \lambda_{i,j}^{\ell} y_{\ell} \end{align}

where $\lambda_{i,j}^{\ell}$ are the coefficients in the expansion of $f_i f_j \in \mathbb{R}[x]/I$ in the basis $(f_{\ell})_{\ell \in \mathcal B}$:

(4)
\begin{align} f_i f_j = \sum_{\ell \in \mathcal B} \lambda_{i,j}^{\ell} f_{\ell} \end{align}
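
As a concrete toy example of these multiplication coefficients (our own choice, for illustration): take $n = 1$ and $V_{\mathbb{R}}(I) = \{0, 1\}$, so $I = (x^2 - x)$, and take $\mathcal B = \{f_0 + I, f_1 + I\}$ with $f_0 = 1$, $f_1 = x$. Since $f_0 f_0 = f_0$, $f_0 f_1 = f_1$ and $f_1 f_1 = x^2 \equiv x = f_1 \pmod I$, equation (3) gives

\begin{align} M_{\mathcal B}(y) = \begin{pmatrix} y_0 & y_1 \\ y_1 & y_1 \end{pmatrix}. \end{align}

We will come back to this example below.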

The following statement says what it means for $M_{\mathcal B}(y)$ to be positive semidefinite:

Fact: For $y \in \mathbb{R}^{\mathcal B}$ we have

(5)
\begin{aligned} M_{\mathcal B}(y) \succeq 0 \; &\Leftrightarrow \; \forall p \text{ sos mod } I, \; \langle y, p \rangle \geq 0\\ &\Leftrightarrow \; y \in \Sigma(I)^* \end{aligned}

where $\langle \cdot, \cdot \rangle$ is the canonical inner product on $\mathbb{R}[x]/I$ (after identification of $\mathbb{R}[x]/I$ with $\mathbb{R}^{\mathcal B}$ via the basis $\mathcal B$) and where $\Sigma(I) \subset \mathbb{R}[x]/I$ is the cone of polynomials that are sos mod $I$.

Proof: The proof of the equivalence follows directly from the observation that if $\alpha \in \mathbb{R}^{\mathcal B}$ then

(6)
\begin{align} \alpha^T M_{\mathcal{B}}(y) \alpha = \langle y, g_{\alpha}^2 \rangle. \end{align}

where $g_{\alpha} = \sum_{i \in \mathcal B} \alpha_i f_i$.
To prove the equality above we simply use the definition of $M_{\mathcal B}(y)$:

(7)
\begin{aligned} \langle y, (\sum_{i \in \mathcal B} \alpha_i f_i)^2 \rangle &= \langle y, \sum_{i,j \in \mathcal B} \alpha_i \alpha_j f_i f_j \rangle\\ &= \langle y, \sum_{i,j \in \mathcal B} \alpha_i \alpha_j \sum_{\ell \in \mathcal B} \lambda_{i,j}^{\ell} f_{\ell} \rangle\\ &= \sum_{\ell \in \mathcal B} y_{\ell} \sum_{i,j \in \mathcal B} \alpha_i \alpha_j \lambda_{i,j}^{\ell}\\ &= \sum_{i,j \in \mathcal B} \alpha_i \alpha_j \sum_{\ell \in \mathcal B} \lambda_{i,j}^{\ell} y_{\ell} = \sum_{i,j \in \mathcal B} \alpha_i \alpha_j M_{\mathcal B}(y)_{i,j} = \alpha^T M_{\mathcal{B}}(y) \alpha \end{aligned}

Equivalence (5) now follows from identity (6): $M_{\mathcal B}(y) \succeq 0$ means $\alpha^T M_{\mathcal B}(y) \alpha \geq 0$ for all $\alpha \in \mathbb{R}^{\mathcal B}$, which by (6) says exactly that $\langle y, g^2 \rangle \geq 0$ for every $g \in \mathbb{R}[x]/I$ (every such $g$ equals $g_{\alpha}$ for some $\alpha$), and by linearity of $\langle y, \cdot \rangle$ this is equivalent to $\langle y, p \rangle \geq 0$ for every $p$ that is sos mod $I$.
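
Identity (6) can also be spot-checked numerically on the toy ideal $I = (x^2 - x)$ introduced above. The following sketch (our own illustration, using numpy) verifies it for random $y$ and $\alpha$:

```python
import numpy as np

# Toy ideal I = (x^2 - x) with basis B = {1, x}: the multiplication
# coefficients give M_B(y) = [[y0, y1], [y1, y1]]  (since x^2 = x mod I).
rng = np.random.default_rng(0)
for _ in range(1000):
    y0, y1 = rng.standard_normal(2)
    a = rng.standard_normal(2)              # alpha = (alpha_0, alpha_1)
    M = np.array([[y0, y1], [y1, y1]])      # M_B(y)
    # g_alpha = alpha_0 + alpha_1 x, so mod I:
    # g_alpha^2 = alpha_0^2 * 1 + (2 alpha_0 alpha_1 + alpha_1^2) * x
    g_sq = np.array([a[0]**2, 2*a[0]*a[1] + a[1]**2])
    assert np.isclose(a @ M @ a, np.array([y0, y1]) @ g_sq)
```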

We now prove the equality $P=S$.

  • We first show the inclusion $P \subseteq S$. Since $S$ is convex, it suffices to show that the vertices of $P$ belong to $S$. Let $y = [f_i(v)]_{i \in \mathcal B} \in \mathbb{R}^{\mathcal B}$ be a vertex of $P$, for some $v \in V_{\mathbb{R}}(I)$. Since $f_0 = 1$ by definition of a $\theta$-basis, $y_0 = f_0(v) = 1$. To show that $M_{\mathcal B}(y) \succeq 0$, by the Fact it suffices to show that $\langle y, p \rangle \geq 0$ whenever $p$ is sos mod $I$. By definition of $y$ we have $\langle y, p \rangle = p(v)$ for any $p \in \mathbb{R}[x]/I$; if $p$ is moreover sos mod $I$, then $\langle y, p \rangle = p(v) \geq 0$ since $v \in V_{\mathbb{R}}(I)$.
  • We now show the reverse inclusion $S \subseteq P$. Let $y \in S$. To show that $y \in P$, we show that $\ell(y) \geq 0$ for every affine function $\ell$ such that $\ell \geq 0$ on $P$. Let $\ell(z) = a_0 + \sum_{i \in \mathcal B} a_i z_i$ be such that $\ell(z) \geq 0$ for all $z \in P$, and consider the polynomial $p(x) = a_0 f_0(x) + \sum_{i \in \mathcal B} a_i f_i(x) \in \mathbb{R}[x]$. Since $\ell$ is nonnegative on $P$, the polynomial $p$ is nonnegative on $V_{\mathbb{R}}(I)$, so by part (a), $p$ is sos mod $I$. Since $M_{\mathcal B}(y) \succeq 0$, the Fact gives $\langle y, p \rangle \geq 0$, and because $y_0 = 1$ this means $\ell(y) \geq 0$. As this holds for every affine function $\ell$ that is nonnegative on $P$, we conclude $y \in P$ (a polytope is the intersection of the halfspaces containing it). A numerical sanity check of the equality $P = S$ on a toy example is sketched below.
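
Continuing the toy example $I = (x^2 - x)$ with $\mathcal B = \{1 + I, x + I\}$: part (b) predicts that $S$ is the segment (a one-dimensional simplex) with vertices $(f_0(0), f_1(0)) = (1, 0)$ and $(f_0(1), f_1(1)) = (1, 1)$. A minimal numerical check of this, again using numpy:

```python
import numpy as np

def in_S(t):
    # is y = (1, t) in the spectrahedron S, i.e., is M_B(y) PSD?
    M = np.array([[1.0, t], [t, t]])    # M_B(y) with y_0 = 1, y_1 = t
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))

# S should be exactly the segment {(1, t) : 0 <= t <= 1}, whose
# endpoints are the vertices (1, 0) and (1, 1) predicted by part (b).
for t in np.linspace(-0.5, 1.5, 201):
    assert in_S(t) == (-1e-9 <= t <= 1 + 1e-9)
```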