Saturday, September 18, 2010

Embedding fonts on Mac (or, another reason why I love my MacBook)

I just learned something remarkable, and I have to write it down so I don't forget:

IEEE conferences often require you to submit a PDF with embedded fonts. If you're forced to do this, no doubt you're familiar with a hugely irritating fact about most LaTeX toolchains: they don't embed fonts by default. There are workarounds, like the ps2pdf command-line parameters that force font embedding, but using them can be awkward (e.g., if your LaTeX target isn't PostScript, or if you're using a GUI LaTeX editor).
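For the record, if you do have to go the command-line route, the usual incantation looks something like the following (these are standard Ghostscript options passed through ps2pdf; `paper.ps` and `paper.pdf` are placeholder filenames, and `pdffonts` comes from poppler-utils):

```shell
# Force Ghostscript to embed (and subset) all fonts when converting PS -> PDF
ps2pdf -dPDFSETTINGS=/prepress -dEmbedAllFonts=true -dSubsetFonts=true paper.ps paper.pdf

# Sanity check: every font listed should have "yes" in the "emb" column
pdffonts paper.pdf
```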

But if you're a Mac user, like me, here's all you have to do: Open the PDF in Preview, and then save it again. It's saved with fonts embedded. That's it, seriously!

Friday, September 17, 2010

Odds and ends

Odds and ends from the past week:

  • If you're the kind of person who turns off WiFi SSID broadcast because you think it makes your network more secure, don't bother.
  • Once in a while, I write an exam question which compares the data rate of an internet connection with, e.g., physically carrying a hard drive from place to place. Turns out somebody did this in real life using carrier pigeons. Result: in rural England, pigeons with microSD cards beat the internet by a wide margin. (via)

Tuesday, September 14, 2010

PhD positions open at UIC

I got the following last night. For any senior EE undergrads or master's students out there, this is a great opportunity to do some interesting information theory work. (Let me also say that Chicago is one of my favorite cities in the USA.)

Click the poster to see full size, or click here for the PDF version. I don't have any more details than are in the announcement -- contact the professors directly for more info.

Thursday, September 9, 2010

A question about physical layer, quote, security, endquote

Physical layer security sure is hot these days. Its proponents claim provable security, something the cryptographic community hasn't yet been able to provide. Sounds great!

Physical layer security is largely based on the wire-tap channel; here (pdf) is one of the seminal papers on the subject. The great achievement in the wire-tap channel is to allow transmission at capacity on the channel from the source to the legitimate receiver, while the mutual information between the source and the wire-tapper is zero, even if the wire-tapper knows the codebook. Thus, we have "proven" that communication is secure, because the wire-tapper can never accurately guess the message that was sent to the legitimate receiver.

But here's the thing. Look at Figures 1 and 2 in the linked paper. Everything relies on the wire-tapper having a physically degraded channel with respect to the legitimate receiver. This makes sense: if, somehow, the situation were reversed and the legitimate receiver were degraded with respect to the wire-tapper, it would obviously be impossible to prevent the wire-tapper from decoding the messages. Put another way, there is no security unless the wire-tapper has a worse channel than the legitimate receiver.
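To state the dependence on degradation compactly: Wyner's secrecy capacity for the degraded wire-tap channel can be written (in modern notation, not the paper's original) as

```latex
% Secrecy capacity of the degraded wire-tap channel (Wyner):
% Y is the legitimate receiver's channel output and Z is the
% wire-tapper's output, a degraded version of Y.
C_s = \max_{p(x)} \bigl[ I(X;Y) - I(X;Z) \bigr]
```

Since Z is degraded, the bracketed difference is nonnegative, but it shrinks to zero as the wire-tapper's channel approaches the legitimate receiver's -- which is exactly the dependence the argument below is worried about.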

Here is my question: isn't it misleading to call this "secure"? It is unlikely that the wire-tapper would oblige us by providing his channel state information. Thus, we merely exchange one set of uncertainties for another: uncertainty about the hardness of the factoring problem for uncertainty about the wire-tapper's channel state. And factoring is very widely believed to be intractable, whereas it's not hard to imagine a committed adversary finding a good channel for a wire-tap.

Saturday, September 4, 2010

One up, one down

Maxim Raginsky kicks off his new blog, The Information Structuralist.

Meanwhile, Mitzenmacher signs off his blog to focus on his new administrative appointment.

(Raginsky's blog via)

Thursday, September 2, 2010

Five-Minute Info Theory Problem

I can't believe it's September already. Astonishingly, I managed to get eight of my nine summer to-do list items done, at least partly.

Here's a simple problem that came up yesterday when I was looking for a quick bound on entropy ... I thought it would be a nice warmup to kick off the term:

Let A and B be independent random variables. WLOG, suppose H(A) <= H(B).

Show that H(A) <= H(B) <= H(A+B) <= H(A)+H(B).
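(If you want to sanity-check the chain numerically before proving it, here's a quick sketch. The pmfs are arbitrary choices of mine; the only real content is that, by independence, the pmf of A+B is the convolution of the two pmfs.)

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector, ignoring zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Independent RVs A on {0,1} and B on {0,1,2,3}, with arbitrarily chosen pmfs
# satisfying H(A) <= H(B).
pA = [0.7, 0.3]
pB = [0.4, 0.3, 0.2, 0.1]

# Independence => pmf of A+B is the convolution of the individual pmfs.
pAB = np.convolve(pA, pB)

hA, hB, hSum = entropy(pA), entropy(pB), entropy(pAB)
print(f"H(A)={hA:.3f}, H(B)={hB:.3f}, H(A+B)={hSum:.3f}")

# The claimed chain of inequalities:
assert hA <= hB <= hSum <= hA + hB
```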

Post your answer in the comments.