
A practical way to check if a matrix is positive-definite

$\begingroup$

Let $A$ be a symmetric $n\times n$ matrix.

I found a method on the web to check if $A$ is positive definite:

$A$ is positive-definite if all the diagonal entries are positive, and each diagonal entry is greater than the sum of the absolute values of all other entries in the corresponding row/column.
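(To make the rule concrete, here is a minimal sketch of the check in NumPy; the function name `looks_positive_definite` is mine, just for illustration.)

    import numpy as np

    def looks_positive_definite(A):
        # The rule quoted above: positive diagonal, and every diagonal entry strictly
        # greater than the sum of the absolute values of the other entries in its row
        # (equivalently its column, since A is symmetric).
        A = np.asarray(A, dtype=float)
        d = np.diag(A)
        off = np.abs(A).sum(axis=1) - np.abs(d)
        return bool(np.all(d > 0) and np.all(d > off))

    print(looks_positive_definite([[4, 1, -1], [1, 3, 1], [-1, 1, 5]]))  # True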

I couldn't find a proof for this statement. I also couldn't find a reference in my linear algebra books.

I have a few questions.

  1. How do we prove the above statement?

  2. Is the following slightly weaker statement true?

A symmetric matrix $A$ is positive-definite if all the diagonal entries are positive, each diagonal entry is greater than or equal to the sum of the absolute values of all other entries in the corresponding row/column, and there exists one diagonal entry which is strictly greater than the sum of the absolute values of all other entries in the corresponding row/column.

$\endgroup$ 4

5 Answers

$\begingroup$

These matrices are called (strictly) diagonally dominant. The standard way to show they are positive definite is with the Gershgorin Circle Theorem. Your weaker condition does not give positive definiteness; a counterexample is $ \left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{matrix} \right] $.
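A quick way to confirm the counterexample numerically (a small sketch with NumPy, assuming it is available):

    import numpy as np

    # The counterexample above: it meets the weaker condition (row 1 is strictly
    # dominant, rows 2 and 3 only weakly), yet x = (0, 1, -1) gives x^T A x = 0.
    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 1.0, 1.0]])

    print(np.linalg.eigvalsh(A))                     # approximately [0., 1., 2.]
    print(bool(np.all(np.linalg.eigvalsh(A) > 0)))   # False: not positive definite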

$\endgroup$ 3 $\begingroup$

Here is another way to see this. Define $R_i = A_{ii} - \sum_{j\neq i} \lvert A_{ij} \rvert$. Your condition is that $R_i>0$ for all $i$.

Let $s_{ij} = \frac{A_{ij}}{\lvert A_{ij}\rvert}$ be the sign of $A_{ij}$ (for $A_{ij}\neq 0$; terms with $A_{ij}=0$ contribute nothing below). Then you can check algebraically (just match coefficients) that for all $x\in\mathbb{R}^n$ \[x^T A x = \sum_{i=1}^n R_ix_i^2 + \sum_{i=1}^n \sum_{j>i} \lvert A_{ij}\rvert (x_i + s_{ij} x_j)^2. \] Since squares are nonnegative and the $R_i$ are assumed positive, all summands are nonnegative for all $x\in\mathbb{R}^n$. Furthermore, if $x\neq 0$ then $x_i\neq 0$ for some $i$, so $x^TAx\geq R_ix_i^2>0$. Therefore $A$ is positive definite.
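If you want to sanity-check this identity numerically, here is a small NumPy sketch (mine, purely illustrative): it builds a random strictly diagonally dominant symmetric matrix and compares both sides of the identity at a random $x$.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5

    # Random symmetric matrix with zero diagonal, then make it strictly
    # diagonally dominant: A_ii = sum_{j != i} |A_ij| + margin, so R_i = margin > 0.
    B = rng.uniform(-1, 1, size=(n, n))
    A = (B + B.T) / 2
    np.fill_diagonal(A, 0.0)
    margin = rng.uniform(0.1, 1.0, size=n)
    np.fill_diagonal(A, np.abs(A).sum(axis=1) + margin)

    x = rng.standard_normal(n)
    lhs = x @ A @ x

    # Right-hand side of the identity, term by term.
    R = 2 * np.diag(A) - np.abs(A).sum(axis=1)       # equals the margins above
    rhs = float(np.sum(R * x**2))
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] != 0:
                s = np.sign(A[i, j])
                rhs += abs(A[i, j]) * (x[i] + s * x[j]) ** 2

    print(np.isclose(lhs, rhs))                      # True
    print(bool(np.all(np.linalg.eigvalsh(A) > 0)))   # True: A is positive definite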

This expression for $x^TAx$ can be alternatively viewed as expressing $A$ as a weighted sum $A = \sum_k c_k v_kv_k^T$, where each $c_k>0$ and each $v_k\in\mathbb{R}^n$ is a vector with support (number of nonzero entries) at most two, each of which is $\pm 1$. But $v_kv_k^T$ is always positive semidefinite for $v_k\in\mathbb{R}^n$ and positive combinations of positive semidefinite matrices are positive semidefinite. Since each $R_i>0$, we can decrease some of the $c_k$ slightly (those which correspond to the $v_k$ which are standard unit vectors) and instead write $A = cI + \sum_k c_k v_kv_k^T$ for $c>0$, which shows that $A$ is in fact positive definite.

One nice thing about this proof: Every positive definite (or semidefinite) matrix can be written as a positive combination of matrices $vv^T$, but this proof shows that for diagonally dominant matrices we can take all the $v$ to have support at most $2$. This gives some intuition for why "most" positive definite matrices are not diagonally dominant. For example if $v$ is any vector of support size at least three then for small enough $c$, $cI + vv^T$ is positive definite but not diagonally dominant.
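To make the last remark concrete, here is one numerical instance (mine, for illustration): take $v = (1,1,1)^T$ and $c = \tfrac{1}{10}$. Then

$$ cI + vv^T = \left( \begin{array}{ccc} \frac{11}{10} & 1 & 1 \\ 1 & \frac{11}{10} & 1 \\ 1 & 1 & \frac{11}{10} \end{array} \right), $$

which has eigenvalues $\tfrac{1}{10}, \tfrac{1}{10}, \tfrac{31}{10}$ (all positive), yet every diagonal entry $\tfrac{11}{10}$ is smaller than the corresponding off-diagonal row sum $2$, so the matrix is positive definite but not diagonally dominant.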

$\endgroup$ 2 $\begingroup$

Before continuing, let me add the caution that a symmetric matrix can violate your rules and still be positive definite. Consider, for example,

$$ H \; = \; \left( \begin{array}{rrr} 3 & 2 & 0 \\ 2 & 3 & 2 \\ 0 & 2 & 3 \end{array} \right) . $$ The middle row fails your test ($3 < 2 + 2$), yet $H$ is positive definite, by Sylvester's Law of Inertia applied to the congruence below.

A proof can be given as a consequence of Gershgorin's circle theorem. For additional information, just Google "diagonally dominant symmetric".

Later: a methodology amounting to repeatedly completing the square:


$$ P^T H P = D $$

$$\left( \begin{array}{rrr} 1 & 0 & 0 \\ - \frac{ 2 }{ 3 } & 1 & 0 \\ \frac{ 4 }{ 5 } & - \frac{ 6 }{ 5 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 & 2 & 0 \\ 2 & 3 & 2 \\ 0 & 2 & 3 \\ \end{array} \right) \left( \begin{array}{rrr} 1 & - \frac{ 2 }{ 3 } & \frac{ 4 }{ 5 } \\ 0 & 1 & - \frac{ 6 }{ 5 } \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & \frac{ 5 }{ 3 } & 0 \\ 0 & 0 & \frac{ 3 }{ 5 } \\ \end{array} \right) $$

$$ Q^T D Q = H $$

$$\left( \begin{array}{rrr} 1 & 0 & 0 \\ \frac{ 2 }{ 3 } & 1 & 0 \\ 0 & \frac{ 6 }{ 5 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & \frac{ 5 }{ 3 } & 0 \\ 0 & 0 & \frac{ 3 }{ 5 } \\ \end{array} \right) \left( \begin{array}{rrr} 1 & \frac{ 2 }{ 3 } & 0 \\ 0 & 1 & \frac{ 6 }{ 5 } \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 & 2 & 0 \\ 2 & 3 & 2 \\ 0 & 2 & 3 \\ \end{array} \right) $$


The algorithm is the reverse Hermite method for symmetric matrices, as taught in some linear algebra references; it is defined by the recursion
$$ D_0 = H $$
$$ E_j^T D_{j-1} E_j = D_j $$
$$ P_{j-1} E_j = P_j $$
$$ E_j^{-1} Q_{j-1} = Q_j $$
$$ P_j Q_j = Q_j P_j = I $$
$$ P_j^T H P_j = D_j $$
$$ Q_j^T D_j Q_j = H $$

$$ H = \left( \begin{array}{rrr} 3 & 2 & 0 \\ 2 & 3 & 2 \\ 0 & 2 & 3 \\ \end{array} \right) $$

==============================================

$$ E_{1} = \left( \begin{array}{rrr} 1 & - \frac{ 2 }{ 3 } & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$$$ P_{1} = \left( \begin{array}{rrr} 1 & - \frac{ 2 }{ 3 } & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q_{1} = \left( \begin{array}{rrr} 1 & \frac{ 2 }{ 3 } & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D_{1} = \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & \frac{ 5 }{ 3 } & 2 \\ 0 & 2 & 3 \\ \end{array} \right) $$

==============================================

$$ E_{2} = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & - \frac{ 6 }{ 5 } \\ 0 & 0 & 1 \\ \end{array} \right) $$$$ P_{2} = \left( \begin{array}{rrr} 1 & - \frac{ 2 }{ 3 } & \frac{ 4 }{ 5 } \\ 0 & 1 & - \frac{ 6 }{ 5 } \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q_{2} = \left( \begin{array}{rrr} 1 & \frac{ 2 }{ 3 } & 0 \\ 0 & 1 & \frac{ 6 }{ 5 } \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D_{2} = \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & \frac{ 5 }{ 3 } & 0 \\ 0 & 0 & \frac{ 3 }{ 5 } \\ \end{array} \right) $$

==============================================
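For completeness, here is a short NumPy sketch of the same reduction (my own code; `congruence_diagonalize` is just an illustrative name). It builds one $E_j$ per pivot and returns $P$ and $D$ with $P^T H P = D$, assuming no zero pivot is met along the way.

    import numpy as np

    def congruence_diagonalize(H):
        # Repeatedly complete the square: at each pivot j build E_j that clears
        # row/column j past the diagonal, and accumulate P = E_1 E_2 ... so that
        # P^T H P = D.  Assumes no pivot is zero (true when H is positive definite).
        H = np.array(H, dtype=float)
        n = H.shape[0]
        D, P = H.copy(), np.eye(n)
        for j in range(n - 1):
            E = np.eye(n)
            E[j, j + 1:] = -D[j, j + 1:] / D[j, j]
            D = E.T @ D @ E
            P = P @ E
        return P, D

    P, D = congruence_diagonalize([[3, 2, 0], [2, 3, 2], [0, 2, 3]])
    print(np.round(np.diag(D), 6))         # [3.  1.666667  0.6]  i.e. 3, 5/3, 3/5
    print(bool(np.all(np.diag(D) > 0)))    # True: positive definite by Sylvester's Law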


$\endgroup$ 5 $\begingroup$

This is in part a rehash of Noah Stein's answer, without too much clutter. More importantly, it gives you an algorithmic way to determine what type of matrix you have.

It's easy if you follow the Cholesky method.

A square matrix is positive definite if it is symmetric (!) and $x^TAx>0$ for any $x\neq0$. Then by the Cholesky decomposition theorem $A$ can be decomposed in exactly one way into a product

$$ A = R^TR $$

where $R$ is upper triangular and $r_{ii}>0$.

If this is true, then (see the reference!), the diagonal elements of $R$ must fulfill

$$ r_{ii} = +\sqrt{a_{ii}-\sum_{k=1}^{i-1}r_{ki}^2}\,, \qquad i=1,\dots,n, $$

where $n$ is the dimension of the underlying real vector space. If at some point the quantity under the square root is negative or zero (so that $r_{ii}$ would be imaginary or zero), then $A$ cannot be positive definite.

Nice and neat! You just have to square, subtract, and take square roots to decide whether $A$ is positive definite. It is easy to program this method, even for arbitrary $n$.
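Here is one possible NumPy rendering of that recurrence (an illustrative sketch; the function name and the tolerance are my own choices, not from a library):

    import numpy as np

    def is_positive_definite(A, tol=1e-12):
        # Attempt the Cholesky recurrence A = R^T R row by row; report failure
        # as soon as a pivot (the quantity under the square root) is not positive.
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        R = np.zeros((n, n))
        for i in range(n):
            pivot = A[i, i] - R[:i, i] @ R[:i, i]
            if pivot <= tol:
                return False
            R[i, i] = np.sqrt(pivot)
            R[i, i + 1:] = (A[i, i + 1:] - R[:i, i] @ R[:i, i + 1:]) / R[i, i]
        return True

    print(is_positive_definite([[3, 2, 0], [2, 3, 2], [0, 2, 3]]))   # True
    print(is_positive_definite([[1, 0, 0], [0, 1, 1], [0, 1, 1]]))   # False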

$\endgroup$ $\begingroup$

For the two statements mentioned above, the first one is true just by the Gershgorin Circle Theorem. Actually, the second statement is also true if you add one more condition on $A$: that $A$ is irreducible. The counterexample provided above then no longer applies, since it is reducible.

$\endgroup$ 1
