It's well known that certain distributions have maximum entropy under moment constraints. For instance, among distributions with a fixed variance, entropy H(X) is maximized by the Gaussian. More generally, for any positive integer k and a given constant c, there are algorithms to find the distribution maximizing H(X) subject to E[X^k] = c.
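As a quick numerical sanity check of the variance case (my own sketch, not from the post): the Gaussian's differential entropy exceeds that of another distribution with the same variance, here the Laplace, using their standard closed-form entropies.

```python
import numpy as np

# Closed-form differential entropies (in nats) for two distributions
# sharing the same variance sigma^2.

def gaussian_entropy(sigma):
    # H = 0.5 * log(2*pi*e*sigma^2) for N(mu, sigma^2)
    return 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

def laplace_entropy(sigma):
    # Laplace with scale b has variance 2*b^2, so b = sigma/sqrt(2);
    # its entropy is 1 + log(2*b).
    b = sigma / np.sqrt(2)
    return 1 + np.log(2 * b)

for sigma in (0.5, 1.0, 3.0):
    # The Gaussian wins at every scale, as the maximum entropy
    # property predicts.
    assert gaussian_entropy(sigma) > laplace_entropy(sigma)
```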

However, here's what I learned yesterday: if we change the constraint to E[X^-k] = c, no such maximum entropy distribution exists, because you can achieve arbitrarily high entropy subject to the constraint.

At first I thought this was surprising, but then I realized it's not too hard to think of an example that shows why it's true. Think about it for a few minutes -- I'll post a hint in the comments.

## 2 comments:

Here's the hint: Since |X^-k| -> 0 as X -> infinity, you can pack lots of entropy into very large values of X without affecting E[X^-k] very much.
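To make the hint concrete, here's a numerical sketch (my own construction, a discrete analogue of the argument, not the commenter's): a distribution on positive values that satisfies E[X^-k] = c exactly, yet whose entropy grows without bound as a small probability mass eps is spread over more and more huge support points.

```python
import numpy as np

def constrained_family(k=2, c=1.0, eps=1e-3, N=10, M=1e6):
    """Mixture: probability 1-eps at a single point x0, and probability
    eps/N at each of the huge values M, 2M, ..., N*M.  x0 is solved for
    so that E[X^-k] = c holds exactly."""
    tail = M * np.arange(1, N + 1)                  # huge support points
    tail_moment = (eps / N) * np.sum(tail ** -float(k))
    # Choose x0 so the total inverse moment is exactly c.
    x0 = ((c - tail_moment) / (1 - eps)) ** (-1.0 / k)
    probs = np.concatenate(([1 - eps], np.full(N, eps / N)))
    support = np.concatenate(([x0], tail))
    moment = np.sum(probs * support ** -float(k))   # stays pinned at c
    entropy = -np.sum(probs * np.log(probs))        # grows like eps*log(N)
    return moment, entropy

for N in (10, 10_000, 1_000_000):
    moment, entropy = constrained_family(N=N)
    print(f"N={N:>8}  E[X^-2]={moment:.12f}  H={entropy:.4f}")
```

As N grows the constraint value never moves, while the entropy increases roughly like eps * log(N); since N is unbounded, so is the achievable entropy, exactly because x^-k vanishes on the large values where the entropy is being packed.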

Another interesting fact: if you constrain E[X^3], non-negativity constraints can become active and push the solution outside the realm of exponential families. Here's an example: http://yaroslavvb.blogspot.com/2008/01/maximum-entropy-with-skewness.html
