I have regular meetings in the Bahen Centre at U of T, which is where most of their communications people are located. However, the math department is located on the 6th floor, and if I'm early for a meeting, I usually hang out in their library.
On one recent visit to the math library, the Fibonacci Quarterly caught my eye. The lead-off article was on escalator numbers. Unfortunately I can't find an online version of the article, but another article by the same author (Grundman) is here.
A note on notation: in the text, take a[i] to mean a with subscript i. An escalator sequence a[1], a[2], ... has the property that, for all n,
a[1] + a[2] + ... + a[n] = a[1] * a[2] * ... * a[n].  (Eq. 1)
The escalator numbers A[i] are the partial sums (or products) of the escalator sequence:
A[n] = a[1] + a[2] + ... + a[n] = a[1] * a[2] * ... * a[n].
Escalator sequences are defined by their starting element a[1], which cannot be equal to 1. It is easy to show that
a[n] = A[n-1] / (A[n-1] - 1) for n > 1,  (Eq. 2)
and
A[n] = A[n-1]^2 / (A[n-1] - 1) for n > 1.  (Eq. 3)
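As a sanity check, here is a minimal Python sketch (mine, not from the article; the function name escalator_sequence is just for illustration) that generates the sequence from its starting element via (Eq. 2) and uses exact rational arithmetic to confirm that the partial sums and partial products in (Eq. 1) stay equal:

from fractions import Fraction

def escalator_sequence(a1, n_terms):
    """Generate a[1], ..., a[n_terms] from a starting element a1 != 1,
    using a[n] = A[n-1] / (A[n-1] - 1)  (Eq. 2), where A[n-1] is the
    (n-1)th escalator number (partial sum = partial product)."""
    a = [Fraction(a1)]
    A = Fraction(a1)              # A[1] = a[1]
    for _ in range(n_terms - 1):
        a_next = A / (A - 1)      # (Eq. 2)
        a.append(a_next)
        A *= a_next               # A[n] = A[n-1] * a[n]
    return a

seq = escalator_sequence(2, 6)
for n in range(1, len(seq) + 1):
    partial_sum = sum(seq[:n])
    partial_prod = Fraction(1)
    for x in seq[:n]:
        partial_prod *= x
    print(n, partial_sum, partial_prod, partial_sum == partial_prod)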
The escalator number relation (Eq. 1) reminds me a lot of the arithmetic-geometric mean inequality (AGMI): for positive numbers x[1], x[2], ..., x[n],
(x[1] + x[2] + ... + x[n]) / n >= (x[1] * x[2] * ... * x[n])^(1/n).
From (Eq. 2) and (Eq. 3), we see that if A[i-1] > 1, then a[i] > 1 and A[i] > 1. Since A[1] = a[1], we can use an inductive argument to show that if a[1] > 1, then a[j] > 1 for all j, and therefore a[j] is positive for all j. Thus, if a[1] > 1, the AGMI can be applied to the escalator sequence.
After playing around a bit, I got the following (hopefully nontrivial) result. Using the AGMI, if a[1] > 1, we can show that
A[n] >= n^(n/(n-1)) for all n > 1.  (Eq. 4)
This is shown as follows. First, we have
A[n] / n = (a[1] + a[2] + ... + a[n]) / n >= (a[1] * a[2] * ... * a[n])^(1/n) = A[n]^(1/n),
where the middle inequality is the AGMI. The rest follows from algebraic manipulation.
The bound in (Eq. 4) is surprisingly tight for large n when a[1] is not too large. For instance, if a[1] = 2, then A[100] = 106.4308..., while the bound gives 104.7616... .
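Here is a quick float-arithmetic sketch (again mine; escalator_number is an illustrative name) that should reproduce that comparison by iterating (Eq. 3) and evaluating the bound from (Eq. 4):

def escalator_number(a1, n):
    """Compute A[n] by iterating A[k] = A[k-1]^2 / (A[k-1] - 1)  (Eq. 3), with A[1] = a1."""
    A = float(a1)
    for _ in range(n - 1):
        A = A * A / (A - 1)
    return A

n = 100
print(escalator_number(2, n))     # roughly 106.43
print(n ** (n / (n - 1)))         # the bound in (Eq. 4), roughly 104.76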
Three questions. First, is this result already known? Second, is the bound asymptotically tight for large n? Third, can we get an upper bound?
The engineer in me can't help trying to think about applications for escalator numbers, but I haven't thought of anything yet.
Wednesday, September 30, 2009
Tuesday, September 22, 2009
The Globe points a finger
First Simpson, now Wente: the Globe and Mail thinks it knows what's wrong with Canada's universities. The answer? Research! Oh, and also: overpaid professors.
So did you know that university research is holding Canada back? Simpson (emphasis added):
If big universities spent half as much time and sustained effort trying to improve undergraduate teaching as they do searching for more research money, they, the students and the country might be better off.
Or that sixty-hour weeks pounding out papers are just one big fat vacation? Wente (again, emphasis added):
But the full professors ... have a very pleasant life. They can make $125,000 a year, with a good pension and six months off each year to do as they please.
But economic data suggests that private industry is not ready to take over for my lazy six-months-off ass. According to Parliament's Standing Committee on Industry, Science, and Technology (link):
business R&D intensity (expenditure as a percentage of GDP) in Canada is lower than the OECD average, and that the business sector both funds and performs a lower percentage of total national R&D than does the business sector in other OECD countries.
What's more, Simpson knows this. Concerning the collapse of Nortel, he wrote in favor of government intervention:
Even in decline, Nortel continued to spend $1.8-billion a year on research, in a country starved for private-sector research.
So that's right, Mr. Simpson and Ms. Wente. In a country starved for private-sector research, let's pretend that the really important thing is to use universities as an extension of high school, whose job is largely to churn out B.A. and B.Sc. grads. Not at all to advance the economy into the 21st century with new ideas and new technologies, no sir! By the way, where are all those grads -- in a "better off" Canada -- going to work?
And the complaints about professors' pay -- both Simpson's and Wente's -- are just ridiculous. Even on their face, they are just out-of-context numbers. But in my own situation, according to this, the median salary for a PhD in electrical engineering with 1-4 years of experience is $88,647 (US) -- considerably more than I'm making. And anecdotally, my friends in industry seem to be making 10-20% more than me. So I would suggest that I'm a bargain.
Monday, September 21, 2009
Toronto versus New York
Toronto: Bike thief followed by earnest good Samaritan, who snaps an iPhone picture. Theft reported to police; to help find witnesses, photo posted to the internet.
New York: Bike thief gets ass beat by enraged bike owner and friends. Ass beaters, proud of themselves, record the beating; video posted to the internet.
(Tip of the hat to commenter tapesonthefloor at Torontoist.)
Tuesday, September 15, 2009
Trace Tricks
The trace of a square matrix is the sum of the elements on the main diagonal. That is, for an n by n square matrix A with entries a[i,j], the trace of A is
tr(A) = a[1,1] + a[2,2] + ... + a[n,n].
This might not seem too exciting at first. However, the trace operator has a neat quasi-commutative property: for matrices U and V, so long as the internal dimensions work out, it is true that
tr(UV) = tr(VU).
The proof isn't too hard so I'll skip it. If we had a third matrix W (again assuming the internal dimensions work out), since matrix multiplication is associative, it is also true that
tr(UVW) = tr(VWU) = tr(WUV).
It's not truly commutative, since you can only do cyclic shifts of the arguments. So, e.g., tr(UVW) is not equal to tr(WVU) in general.
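A quick numerical illustration of both points, using numpy with random square matrices (the sizes and seed are arbitrary; this is just a sketch):

import numpy as np

rng = np.random.default_rng(0)
U, V, W = (rng.standard_normal((4, 4)) for _ in range(3))

t = np.trace(U @ V @ W)
print(np.isclose(t, np.trace(V @ W @ U)))   # True: a cyclic shift
print(np.isclose(t, np.trace(W @ U @ V)))   # True: a cyclic shift
print(np.isclose(t, np.trace(W @ V @ U)))   # generally False: not a cyclic shift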
What can you do with this? For one thing, note that the trace of a scalar a is itself: tr(a) = a. So if you have a matrix multiplication that results in a scalar, you can use trace to rearrange the arguments.
For instance, let U be a 1 by n row vector, and let V be an n by n matrix. If U' is the transpose of U, then UVU' is a scalar. This kind of expression comes up pretty often in jointly Gaussian distributions.
Now say U is a zero-mean vector with covariance matrix E[U'U], and I want to know E[UVU']. Using the trace trick, I can express this expectation in terms of E[U'U]: first, we can write
UVU' = tr(UVU') = tr(U'UV),
and since expectation distributes over the trace sum, we have
E[UVU'] = E[tr(U'UV)] = tr(E[U'UV]) = tr(E[U'U] V),
where the last step pulls the constant matrix V out of the expectation.
As a result, if you know the covariance E[U'U], there's no need to recalculate any expectations.
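Here is a short numpy sketch of that trick (my own illustration; the covariance Sigma and the matrix V below are arbitrary choices): draw zero-mean Gaussian row vectors U with a known covariance, and compare the sample average of UVU' to tr(E[U'U] V):

import numpy as np

rng = np.random.default_rng(1)
n = 3

V = rng.standard_normal((n, n))            # a fixed (constant) n by n matrix
B = rng.standard_normal((n, n))
Sigma = B @ B.T                            # the covariance E[U'U]

num_samples = 200_000
U = rng.multivariate_normal(np.zeros(n), Sigma, size=num_samples)  # each row is one draw of U

quad_forms = np.einsum('ki,ij,kj->k', U, V, U)   # U V U' for each sample
print(quad_forms.mean())                   # Monte Carlo estimate of E[U V U']
print(np.trace(Sigma @ V))                 # tr(E[U'U] V): should be close

# The trick also holds exactly for sample averages: with the empirical
# second moment Sigma_hat = (1/K) * sum of U'U, tr(Sigma_hat V) equals
# the sample mean of U V U' up to floating-point rounding.
Sigma_hat = U.T @ U / num_samples
print(np.trace(Sigma_hat @ V))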