Stability number of a graph

The stability number $\alpha(G)$ of a graph $G$ is the cardinality of a largest stable set. Recall that a stable set of $G$ is a subset of the vertices such that no two of them are connected by an edge. Define the ideal $I = \langle x_i(1-x_i),\; i \in V;\;\; x_i x_j,\; (i,j) \in E\rangle$.
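
For instance, if $G$ is the path on three vertices with edges $\{1,2\}$ and $\{2,3\}$, then the stable sets are $\emptyset, \{1\}, \{2\}, \{3\}, \{1,3\}$, so $\alpha(G) = 2$, and $I = \langle x_1(1-x_1),\, x_2(1-x_2),\, x_3(1-x_3),\, x_1 x_2,\, x_2 x_3\rangle$.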

(a) Show that $\alpha(G)$ is exactly given by

(1)
\begin{align} \min \gamma \quad : \quad \gamma - \sum_{i \in V} x_i \text{ is SOS mod } I \end{align}

[Hint: recall from Rekha's lecture (or prove!) that if $I$ is zero-dimensional and radical, then $p(x)\geq 0$ on $V(I)$ if and only if $p(x)$ is SOS mod $I$.]

(b) Recall that a polynomial is 1-SOS if it can be written as a sum of squares of affine (degree 1) polynomials. Show that an upper bound on $\alpha(G)$ can be obtained by solving

(2)
\begin{align} \min \gamma \quad : \quad \gamma - \sum_{i \in V} x_i \text{ is 1-SOS mod } I \end{align}

(c) Show that the given generators of the ideal $I$ are already a Gröbner basis. Show that there is a natural bijection between standard monomials and stable sets of $G$.

(d) As a consequence of the previous fact, show that $\alpha(G)$ is equal to the degree of the Hilbert series of $\mathbb{R}[x]/I$.

Now let $G=(V,E)$ be the Petersen graph (a picture of the Petersen graph can be found on Wikipedia).

(a) Find a stable set in the Petersen graph of maximum cardinality.

(b) Solve problem (2) for the Petersen graph. What is the corresponding upper bound?

(c) Compute the Hilbert series of $\mathbb{R}[x]/I$ using Macaulay2, and verify that the answer is consistent with your previous results.


Solution

(a) Since the points of $V(I)$ are exactly the characteristic vectors of the stable sets of $G$, and at such a point $\sum_{i \in V} x_i$ equals the size of the corresponding stable set, the stability number can be written as:

(3)
\begin{align} \alpha(G) &= \max \sum_{i \in V} x_i \quad : \quad x \in V(I)\\ &= \min \gamma \quad : \quad \gamma - \sum_{i \in V} x_i \geq 0 \text{ on } V(I) \end{align}

Using the result mentioned in the hint, we get:

(4)
\begin{align} \alpha(G) = \min \gamma \quad : \quad \gamma - \sum_{i \in V} x_i \text{ is SOS mod } I \end{align}

We now show the result mentioned in the hint, namely that if $I$ is a radical ideal and $V(I)\subseteq \mathbb{R}^n$ is finite, then $p \in \mathbb{R}[x]$ is nonnegative on $V(I)$ iff $p$ is SOS mod $I$.

Proof: The idea is to construct another polynomial $f$ that takes the same values as $p$ on $V(I)$ and that is SOS. If we can construct such a polynomial we are done: $p-f = 0$ on $V(I)$, hence $p-f \in I$ since $I$ is radical, and thus $p = f + (p-f)$ is SOS mod $I$.
We can construct the polynomial $f$ using Lagrange interpolation. Let $a_1,\dots,a_N$ be the $N$ points of $V(I)$. For $i \in [N]$, consider the polynomial $f_i(x) = \prod_{j \neq i} \|x - a_j\|_2^2 / \prod_{j \neq i} \|a_i - a_j\|_2^2$. Note that $f_i$ is SOS since it is a product of SOS polynomials. Also, $f_i$ satisfies $f_i(a_i) = 1$ and $f_i(a_j) = 0$ for all $j \neq i$. Hence the polynomial $f(x) = \sum_{i=1}^N p(a_i) f_i(x)$ is SOS (recall that $p(a_i) \geq 0$) and satisfies $f(a_i) = p(a_i)$ for all $i \in [N]$, i.e., $f=p$ on $V(I)$. Since $I$ is radical, we conclude that $p-f \in I$ and thus $p$ is SOS mod $I$.
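
To illustrate the construction on a tiny example: take $I = \langle x(1-x)\rangle \subset \mathbb{R}[x]$, so that $V(I) = \{0,1\}$, and $p(x) = x$, which is nonnegative on $V(I)$ but of course not globally nonnegative. With $a_1 = 0$ and $a_2 = 1$, the interpolation gives $f_1(x) = (x-1)^2/(0-1)^2 = (x-1)^2$ and $f_2(x) = (x-0)^2/(1-0)^2 = x^2$, so $f(x) = p(a_1) f_1(x) + p(a_2) f_2(x) = x^2$, which is SOS, and $p - f = x - x^2 = x(1-x) \in I$. Hence $p(x) = x$ is SOS mod $I$.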

(Update: note that in the argument above we have assumed that all the points of $V(I)$ are real. If $V(I)$ has complex points, the polynomials $f_i$ as defined above will have complex coefficients, which is something we do not want (nonnegativity only makes sense for polynomials with real coefficients). One easy way to deal with this is to split the points $a_i \in V(I)$ into their real and imaginary parts. For the stable set ideal this issue does not arise: every point of $V(I)$ is a $0/1$ vector, hence real.)

In order to use this result we need to show that $I$ is zero-dimensional and radical. Clearly $I$ is zero-dimensional, since $V(I)$ is the set of characteristic vectors of the stable sets of $G$, which is finite. To show that $I$ is radical, we have to show that if $f \in \mathbb{R}[x]$ is zero on $V(I)$ then $f \in I$. Let $f \in \mathbb{R}[x]$ and assume that $f=0$ on $V(I)$. Consider a division of $f$ by the generators of the ideal $I$:

(5)
\begin{align} f(x) = \sum_{i \in V} a_i(x) x_i(1-x_i) + \sum_{ij \in E} b_{i,j}(x) x_i x_j + r(x) \end{align}

where $r(x)$ is the remainder. We will show that necessarily $r(x)=0$. We know that the monomials of $r(x)$ are not divisible by any leading term of the generators of $I$. Thus if $x^\alpha$ is a monomial of $r(x)$ then $x_i^2 \nmid x^\alpha$ for all $i$, which means $0 \leq \alpha_i \leq 1$, and $x_i x_j \nmid x^\alpha$ for all $ij \in E$, which means $\alpha_i \alpha_j = 0$ whenever $ij \in E$. This shows that any monomial that appears in $r(x)$ is of the form $x^\alpha$ with $\alpha$ the characteristic vector of a stable set of $G$, i.e., $\alpha \in V(I)$. Write $r(x) = \sum_\alpha c_\alpha x^\alpha$, the sum running over such $\alpha$. Since the generators of $I$ vanish on $V(I)$ and $f = 0$ on $V(I)$, we have $r(\alpha) = f(\alpha) = 0$ for every $\alpha \in V(I)$. Now, at a $0/1$ point $\alpha$ we have $r(\alpha) = \sum_{\beta \leq \alpha} c_\beta$, since a monomial $x^\beta$ equals $1$ at $\alpha$ exactly when the support of $\beta$ is contained in that of $\alpha$, and equals $0$ otherwise. By induction on the size of the support of $\alpha$ (starting with $\alpha = 0$, which gives $c_0 = 0$), we deduce that $c_\alpha = 0$ for all $\alpha$. Hence $r(x) = 0$ and thus $f \in I$.
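
To illustrate the last step on a small example, take $G$ to be the path on vertices $\{1,2,3\}$ with edges $\{1,2\}$ and $\{2,3\}$. A remainder then has the form $r(x) = c_0 + c_1 x_1 + c_2 x_2 + c_3 x_3 + c_{13} x_1 x_3$, and evaluating at the points of $V(I)$ in order of increasing support gives $r(0,0,0) = c_0 = 0$, then $r(1,0,0) = c_0 + c_1 = 0$ so $c_1 = 0$ (similarly $c_2 = c_3 = 0$), and finally $r(1,0,1) = c_0 + c_1 + c_3 + c_{13} = 0$ so $c_{13} = 0$.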

(b) If $\gamma-\sum_{i \in V} x_i$ is 1-SOS mod $I$, then it is in particular SOS mod $I$. Thus the quantity

(6)
\begin{align} \min \gamma \quad : \quad \gamma - \sum_{i \in V} x_i \text{ is 1-SOS mod } I \end{align}

gives an upper bound on the quantity in part (a), which is equal to $\alpha(G)$.
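
Here is a sketch of why the 1-SOS condition can be checked by semidefinite programming. Writing $z = (1, x_1, \dots, x_n)^T$, the condition "$\gamma - \sum_{i \in V} x_i$ is 1-SOS mod $I$" amounts to the existence of a matrix $Q \succeq 0$ and scalars $a_i$, $b_{ij}$ such that

\begin{align} \gamma - \sum_{i \in V} x_i = z^T Q z + \sum_{i \in V} a_i\, x_i(1-x_i) + \sum_{ij \in E} b_{ij}\, x_i x_j \end{align}

as polynomials (one can check, using the Gröbner basis from part (c) and a degree-compatible monomial order, that the multipliers of the generators may indeed be taken to be constants). Matching the coefficients of the monomials $1$, $x_i$, $x_i^2$ and $x_i x_j$ on both sides gives linear constraints on $(\gamma, Q, a, b)$, so minimizing $\gamma$ subject to these constraints and $Q \succeq 0$ is an SDP. The YALMIP code in the Petersen graph section below works with the corresponding moment-matrix (theta body) formulation of this SDP.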

(c) To show that the generators $x_i(1-x_i)$, $i \in V$, and $x_i x_j$, $ij \in E$, are already a Gröbner basis of $I$, note first that for any monomial order the leading monomials of the generators are $x_i^2$ and $x_i x_j$. We can then compute the $S$-pairs:

(7)
\begin{equation} S(x_i(1-x_i),x_j(1-x_j)) = x_j^2 x_i - x_i^2 x_j \end{equation}
(8)
\begin{equation} S(x_i(1-x_i),x_k x_l) = x_i x_k x_l \end{equation}
(9)
\begin{equation} S(x_i x_j, x_k x_l) = 0 \end{equation}

In the second and third cases the $S$-pairs clearly reduce to zero. In the first case they also reduce to zero, since division gives $x_j^2 x_i - x_i^2 x_j = -x_i [x_j(1-x_j)] + x_j [x_i (1-x_i)]$ with zero remainder. By Buchberger's criterion, the given generators therefore form a Gröbner basis of $I$.

A standard monomial $x^\alpha$ is one that is not divisible by the leading monomial of any element of the Gröbner basis. Since $x_i^2 \nmid x^\alpha$, we have $0 \leq \alpha_i \leq 1$ for all $i$, and since $x_i x_j \nmid x^\alpha$ for all $(i,j) \in E$, we have $\alpha_i \alpha_j = 0$ whenever $(i,j) \in E$. In other words, $x^\alpha$ is a standard monomial iff $\alpha$ is the characteristic vector of a stable set of $G$. Observe that the total degree $\sum_{i=1}^n \alpha_i$ of a standard monomial is the size of the associated stable set of $G$.
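
Both facts can be checked computationally on a small example. The following Macaulay2 snippet (a small check using the path graph on vertices $\{1,2,3\}$ with edges $\{1,2\}$ and $\{2,3\}$, separate from the Petersen graph computation below) verifies that the generators already form a Gröbner basis and lists the standard monomials, which correspond to the five stable sets $\emptyset, \{1\}, \{2\}, \{3\}, \{1,3\}$:

R = QQ[x_1,x_2,x_3];
I = ideal(x_1*(1-x_1), x_2*(1-x_2), x_3*(1-x_3), x_1*x_2, x_2*x_3); -- stable set ideal of the path 1-2-3
gens gb I  -- returns the same five generators (up to sign and ordering)
basis(R/I) -- standard monomials: 1, x_1, x_2, x_3, x_1*x_3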

(d) The Hilbert series of $\mathbb{R}[x]/I$ is defined by (see for example these lecture notes by Pablo http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-972-algebraic-techniques-and-semidefinite-optimization-spring-2006/lecture-notes/lecture_15.pdf)

(10)
\begin{align} H_I(t) = \sum_{k=0}^{\infty} \dim(\mathbb{R}[x]/I \cap \mathbb{R}[x]_k) t^k \end{align}

where $\mathbb{R}[x]/I \cap \mathbb{R}[x]_k$ is the space spanned by the standard monomials of total degree exactly $k$. Since the standard monomial with the largest total degree has degree $\alpha(G)$ this means that the sum defining $H_I(t)$ is actually finite (so $H_I(t)$ is a polynomial), and the degree of $H_I(t)$ is precisely $\alpha(G)$.
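
For example, for the path graph on three vertices with edges $\{1,2\}$ and $\{2,3\}$, the standard monomials are $1, x_1, x_2, x_3, x_1 x_3$, so $H_I(t) = 1 + 3t + t^2$, whose degree is $2 = \alpha(G)$.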

Petersen graph

(a) We can find a stable set of size 4, for example $\{1,3,9,10\}$ with the vertex labeling used in the code below.

(b) The translation of problem (2) into an explicit SDP form can be found in the paper on Theta Bodies available on the arXiv (the SDP formulation of the 1st theta body of the stable set ideal is given on page 13). The SDP associated to problem (2) actually corresponds to the "Lovasz theta number" of the graph, a well-known quantity introduced by Lovasz in his paper "On the Shannon capacity of a graph" (there is also a Wikipedia page on this: Lovasz number).

Here is YALMIP code that computes the Lovasz theta number of the Petersen graph. The value we find is 4, so the upper bound is tight in this case.

% Compute the Lovasz number of the Petersen graph
n = 10; % number of vertices
% edges is a |E|x2 matrix
edges = [[1,2];[2,3];[3,4];[4,5];[5,1];[1,6];[2,7];[3,8];[4,9];[5,10];[6,8];[6,9];[7,9];[7,10];[8,10]];
M = sdpvar(n+1,n+1,'symmetric');
y = sdpvar(n,1);
Constraints = [M(1,1) == 1, ...
               M(2:(n+1),1) == y, ...
               diag(M(2:(n+1),2:(n+1))) == y, ...
               M(sub2ind(size(M),edges(:,1)+1,edges(:,2)+1)) == 0, ... % M_{ij} = 0 if ij \in E
               M >= 0];  % PSD constraint (>= on a symmetric sdpvar is a semidefinite constraint in YALMIP)
sol = solvesdp(Constraints,-sum(y)); % maximize sum(y) by minimizing -sum(y)
double(sum(y)) % optimal value (the Lovasz theta number of the graph)
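
(Note: on recent versions of YALMIP the commands solvesdp and double have been renamed optimize and value; an SDP solver such as SeDuMi or SDPT3 needs to be installed for the code to run.)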

(c) The following Macaulay2 code computes the Hilbert series of $\mathbb{R}[x]/I$ for the Petersen graph:

R = QQ[x_1..x_10];
-- Define edges of the Petersen graph
edges = {{1,2},{2,3},{3,4},{4,5},{5,1},{1,6},{2,7},{3,8},{4,9},{5,10},{6,8},{6,9},{7,9},{7,10},{8,10}};
-- Form the generators of the ideal
la = for i from 1 to 10 list x_i*(1-x_i);
lb = for i from 0 to (#edges-1) list x_(edges#i#0)*x_(edges#i#1);
I = ideal(flatten {la,lb}); -- Stable set ideal
hilbertSeries(I, Reduce => true) -- reduced Hilbert series

The Hilbert series we obtain is

(11)
\begin{equation} H_I(t) = 1 + 10 t + 30 t^2 + 30 t^3 + 5 t^4 \end{equation}

The degree of $H_I(t)$ is indeed 4.
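
Note also that, by the bijection of part (c), the coefficient of $t^k$ in $H_I(t)$ counts the stable sets of size $k$ in the Petersen graph: for instance the coefficient of $t^2$ is $\binom{10}{2} - |E| = 45 - 15 = 30$, the number of pairs of nonadjacent vertices, and the coefficient $5$ of $t^4$ means that the Petersen graph has exactly five maximum stable sets.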
