Fisher information matrix
Given a statistical model $\{f_{\mathbf{X}}(\boldsymbol{x}\mid\boldsymbol{\theta})\}$ of a random vector $\mathbf{X}$, the Fisher information matrix, $\mathcal{I}(\boldsymbol{\theta})$, is the variance

$$\mathcal{I}(\boldsymbol{\theta})=\operatorname{Var}\bigl(U(\boldsymbol{\theta})\bigr)$$

of the score function

$$U(\boldsymbol{\theta})=\nabla_{\boldsymbol{\theta}}\ln f_{\mathbf{X}}(\boldsymbol{x}\mid\boldsymbol{\theta}).$$

So, entrywise, $\mathcal{I}(\boldsymbol{\theta})_{ij}=\operatorname{Cov}\bigl(U_i(\boldsymbol{\theta}),U_j(\boldsymbol{\theta})\bigr)$.
If there is only one parameter $\theta$ involved, then $\mathcal{I}(\theta)$ is simply called the Fisher information or information of $f_{\mathbf{X}}(\boldsymbol{x}\mid\theta)$.
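As a concrete sketch of this definition, the variance of the score can be estimated by Monte Carlo. The Bernoulli model used here is an illustration chosen for this note, not part of the entry above; its information is known in closed form to be $1/(p(1-p))$.

```python
import numpy as np

# Illustrative sketch: estimate the Fisher information of a Bernoulli(p)
# model as the variance of the score function
#   U(p) = d/dp ln f(x | p) = x/p - (1 - x)/(1 - p).
# The closed form is I(p) = 1/(p(1 - p)).
rng = np.random.default_rng(0)
p = 0.3
x = rng.binomial(1, p, size=1_000_000)   # draws from the model
score = x / p - (1 - x) / (1 - p)        # score evaluated at the true p
fisher_mc = score.var()                  # Var(U) approximates I(p)
fisher_exact = 1.0 / (p * (1.0 - p))
print(fisher_mc, fisher_exact)           # both should be near 4.7619
```

With $10^6$ draws the Monte Carlo estimate agrees with the closed form to a few decimal places.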
Remarks
- If $f_{\mathbf{X}}(\boldsymbol{x}\mid\boldsymbol{\theta})$ belongs to the exponential family, $\operatorname{E}[U(\boldsymbol{\theta})]=\boldsymbol{0}$, so that $\mathcal{I}(\boldsymbol{\theta})=\operatorname{E}\bigl[U(\boldsymbol{\theta})U(\boldsymbol{\theta})^{\mathrm{T}}\bigr]$. Furthermore, with some regularity conditions imposed, we have

  $$\mathcal{I}(\boldsymbol{\theta})_{ij}=-\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta_{i}\,\partial\theta_{j}}\ln f_{\mathbf{X}}(\boldsymbol{x}\mid\boldsymbol{\theta})\right].$$
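The second-derivative identity above can be checked in closed form for the same illustrative Bernoulli model (an assumption of this sketch, not an example from the entry): differentiating $\ln f(x\mid p)=x\ln p+(1-x)\ln(1-p)$ twice and taking $-\operatorname{E}[\cdot]$ recovers $1/(p(1-p))$.

```python
# Sketch, assuming a Bernoulli(p) model:
#   d^2/dp^2 ln f(x|p) = -x/p^2 - (1-x)/(1-p)^2,
# so, using E[x] = p,
#   -E[d^2/dp^2 ln f] = 1/p + 1/(1-p) = 1/(p(1-p)),
# which matches the variance-of-score definition of the information.
p = 0.3
neg_expected_hessian = p / p**2 + (1 - p) / (1 - p)**2
fisher = 1 / (p * (1 - p))
print(neg_expected_hessian, fisher)   # both equal 4.7619...
```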
- As an example, the normal distribution $N(\mu,\sigma^{2})$, with density

  $$f(x\mid\boldsymbol{\theta})=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right),$$

  belongs to the exponential family and its log-likelihood function is

  $$\ln f(x\mid\boldsymbol{\theta})=-\ln(\sqrt{2\pi}\,\sigma)-\frac{(x-\mu)^{2}}{2\sigma^{2}},$$

  where $\boldsymbol{\theta}=(\mu,\sigma)$. Then the score function is given by

  $$U(\boldsymbol{\theta})=\left(\frac{\partial\ln f}{\partial\mu},\ \frac{\partial\ln f}{\partial\sigma}\right)=\left(\frac{x-\mu}{\sigma^{2}},\ \frac{(x-\mu)^{2}}{\sigma^{3}}-\frac{1}{\sigma}\right).$$

  Taking the derivatives with respect to $\boldsymbol{\theta}$ again, we have

  $$\frac{\partial^{2}\ln f}{\partial\mu^{2}}=-\frac{1}{\sigma^{2}},\qquad\frac{\partial^{2}\ln f}{\partial\mu\,\partial\sigma}=-\frac{2(x-\mu)}{\sigma^{3}},\qquad\frac{\partial^{2}\ln f}{\partial\sigma^{2}}=\frac{1}{\sigma^{2}}-\frac{3(x-\mu)^{2}}{\sigma^{4}}.$$

  Since $\operatorname{E}[x-\mu]=0$ and $\operatorname{E}[(x-\mu)^{2}]=\sigma^{2}$, the Fisher information matrix is

  $$\mathcal{I}(\boldsymbol{\theta})=-\operatorname{E}\!\left[\nabla^{2}\ln f\right]=\begin{pmatrix}\dfrac{1}{\sigma^{2}}&0\\[4pt]0&\dfrac{2}{\sigma^{2}}\end{pmatrix}.$$
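The normal-distribution result above can also be checked numerically. The sketch below estimates $\operatorname{E}[UU^{\mathrm{T}}]$ from simulated draws (valid here since the score has mean zero) and compares it with $\operatorname{diag}(1/\sigma^{2},\,2/\sigma^{2})$; the particular values of $\mu$ and $\sigma$ are arbitrary choices for the demonstration.

```python
import numpy as np

# Monte Carlo check of the N(mu, sigma^2) Fisher information matrix
# diag(1/sigma^2, 2/sigma^2), using I = E[U U^T] for the mean-zero score.
rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=1_000_000)
# Score components with respect to (mu, sigma), as derived above:
u = np.stack([(x - mu) / sigma**2,
              (x - mu)**2 / sigma**3 - 1 / sigma])
fisher_mc = (u @ u.T) / x.size               # sample estimate of E[U U^T]
fisher_exact = np.diag([1 / sigma**2, 2 / sigma**2])
print(np.round(fisher_mc, 3))
print(fisher_exact)
```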
- Now, in a linear regression model $\boldsymbol{Y}=X\boldsymbol{\beta}+\boldsymbol{\varepsilon}$ with constant variance $\sigma^{2}$, so that $\boldsymbol{\varepsilon}\sim N(\boldsymbol{0},\sigma^{2}I)$, it can be shown that the Fisher information matrix for $\boldsymbol{\beta}$ is

  $$\mathcal{I}(\boldsymbol{\beta})=\frac{1}{\sigma^{2}}X^{\mathrm{T}}X,$$

  where $X$ is the design matrix of the regression model.
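For a small made-up design matrix (an intercept plus one covariate, chosen only for illustration), the regression information matrix and its inverse look like this:

```python
import numpy as np

# Minimal sketch with a made-up design matrix: the Fisher information
# for beta in Y = X beta + eps, eps ~ N(0, sigma^2 I), is X^T X / sigma^2.
sigma = 2.0
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])              # intercept column + one covariate
fisher = X.T @ X / sigma**2
# Its inverse, sigma^2 (X^T X)^{-1}, is the covariance matrix of the
# least-squares estimator of beta.
cov_beta = np.linalg.inv(fisher)
print(fisher)
print(cov_beta)
```

Note that the inverse of this information matrix is exactly the covariance of the ordinary least-squares estimate, which previews the Cramer-Rao bound discussed next.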
- In general, the Fisher information measures how much “information” is known about a parameter $\theta$. If $T$ is an unbiased estimator of $\theta$, it can be shown that

  $$\operatorname{Var}(T)\geq\frac{1}{\mathcal{I}(\theta)}.$$

  This is known as the Cramer-Rao inequality, and the number $1/\mathcal{I}(\theta)$ is known as the Cramer-Rao lower bound. The smaller the variance of the estimate of $\theta$, the more information we have on $\theta$. If there is more than one parameter, the above can be generalized by saying that

  $$\operatorname{Var}(\boldsymbol{T})-\mathcal{I}(\boldsymbol{\theta})^{-1}$$

  is positive semidefinite, where $\mathcal{I}(\boldsymbol{\theta})$ is the Fisher information matrix.
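As a closing sketch of the Cramer-Rao bound: for $n$ i.i.d. $N(\mu,\sigma^{2})$ observations the information about $\mu$ is $n/\sigma^{2}$, so any unbiased estimator of $\mu$ has variance at least $\sigma^{2}/n$; the sample mean attains this bound. The simulation parameters below are arbitrary choices for the demonstration.

```python
import numpy as np

# Sketch of the Cramer-Rao bound: the sample mean of n i.i.d.
# N(mu, sigma^2) draws is unbiased for mu with variance sigma^2/n,
# which is exactly the Cramer-Rao lower bound 1/(n I(mu)).
rng = np.random.default_rng(2)
mu, sigma, n = 0.0, 1.0, 25
reps = 200_000
samples = rng.normal(mu, sigma, size=(reps, n))
var_mean = samples.mean(axis=1).var()   # Monte Carlo variance of x-bar
crlb = sigma**2 / n                     # Cramer-Rao lower bound for mu
print(var_mean, crlb)                   # var_mean should be near 0.04
```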
| Title | Fisher information matrix |
| Canonical name | FisherInformationMatrix |
| Date of creation | 2013-03-22 14:30:15 |
| Last modified on | 2013-03-22 14:30:15 |
| Owner | CWoo (3771) |
| Last modified by | CWoo (3771) |
| Numerical id | 14 |
| Author | CWoo (3771) |
| Entry type | Definition |
| Classification | msc 62H99 |
| Classification | msc 62B10 |
| Classification | msc 62A01 |
| Synonym | information matrix |
| Defines | Fisher information |
| Defines | information |
| Defines | Cramer-Rao inequality |
| Defines | Cramer-Rao lower bound |