Saturday, December 18, 2010

A look back

At the beginning of the year, I resolved to blog more seriously in 2010.  I didn't quite hit my goal of 52 blog posts, but I'm still pretty happy with how it turned out.

This blog will be on hiatus until the new year.  Happy holidays, and see you in 2011.

Thursday, December 2, 2010

Two new papers on molecular communication

I just got back from lovely Cambridge, Mass., where I managed to sneak in a day to see some of the interesting talks at Bionetics before storming back to Toronto to cover my teaching schedule.  December in Boston, what more could you ask for?

I'm a coauthor on a couple of new molecular communication papers that just came out, one at Bionetics, and the other on arXiv:
  • For Bionetics, my Ph.D. student, Nariman Farsad, and our collaborators, Satoshi Hiyama and Yuki Moritani of NTT DOCOMO, continued some of our earlier work on microchannels.  This was a short "work-in-progress" paper, where we considered the performance of active transport systems in a more realistic simulation environment.  Surprisingly, Brownian motion with drift usually does better than molecular motors -- probably because every molecule can start propagating right away, rather than waiting to be picked up by a motor. [N. Farsad, A. W. Eckford, S. Hiyama, and Y. Moritani, "Information rates of active propagation in microchannel molecular communication," in Proc. 5th International Conference on Bio-Inspired Models of Network, Information, and Computing Systems, Boston, MA, USA, 2010. (pdf)]
  • On arXiv (and also submitted to Trans. Info. Theory), K. V. Srinivas, Ravi Adve, and I extended Sachin's earlier work on Brownian motion with drift.  We're looking at molecular timing channels, where information is encoded in the transmission time x. The receiver sees the reception time y, where y = x + n, and where n is the first arrival time of a Brownian motion; thus, these are additive noise channels. It turns out that the first arrival time distribution of a Brownian motion with positive drift is given by the inverse Gaussian distribution (kind of an odd name at first glance, since it's not the reciprocal of a Gaussian PDF).  So a molecular timing channel in the presence of positive drift is an additive inverse Gaussian noise channel. There is a rich literature on the inverse Gaussian, which lets us get closed-form solutions and bounds for a number of important quantities, like ML detectors, probabilities of error, and capacities -- sadly, not as clean and nice as the solutions for the AWGN, but we do what we can. [K. V. Srinivas, R. S. Adve, and A. W. Eckford, "Molecular communication in fluid media: The additive inverse Gaussian noise channel," arXiv:1012.0081 (submitted to IEEE Transactions on Information Theory).]
You can also check out more of my molecular communication work here.
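If you want to play with the timing channel from the second paper, here's a minimal Python sketch. The distance, drift, and variance values are hypothetical placeholders (not the paper's simulation settings), and the inverse Gaussian sampler is the standard Michael-Schucany-Haas transform:

```python
import math
import random

def sample_inverse_gaussian(mu, lam, rng=random):
    """Sample from IG(mu, lam) using the Michael-Schucany-Haas transform."""
    nu = rng.gauss(0.0, 1.0)
    y = nu * nu
    x = mu + (mu * mu * y) / (2.0 * lam) \
        - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + mu * mu * y * y)
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x

def timing_channel(x, distance=1.0, drift=1.0, sigma2=1.0, rng=random):
    """y = x + n, where n is the first arrival time of a Brownian motion
    with positive drift: n ~ IG(distance/drift, distance**2/sigma2)."""
    n = sample_inverse_gaussian(distance / drift, distance ** 2 / sigma2, rng)
    return x + n

random.seed(1)
arrivals = [timing_channel(0.0) for _ in range(20000)]
mean_delay = sum(arrivals) / len(arrivals)
# for these parameters the mean of the IG noise is distance/drift = 1.0
```

The empirical mean delay should settle near distance/drift, which is one way to check the sampler against the known IG mean.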

    Thursday, November 25, 2010

    Sixty years of Hamming codes

    Via Slashdot, this year marks the 60th anniversary of the venerable Hamming code.  Arriving just two years after Shannon's seminal paper, Hamming codes were the first nontrivial error-correcting codes, and Hamming's contributions to the theory of error-correcting codes (using metrics which are now known as Hamming distance and Hamming weight) are still taught today.

    Here (pdf) is Hamming's original paper (from BSTJ), and here is my Hamming code lecture from last spring's information theory course.
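    As a small companion to the lecture, here is a sketch of the classic (7,4) Hamming code in Python. The layout (parity bits at the power-of-two positions, so the syndrome spells out the error location) follows Hamming's original construction; the code itself is just an illustration:

```python
from functools import reduce
from operator import xor

DATA_POS = [2, 4, 5, 6]    # zero-based slots 3, 5, 6, 7 carry the data bits
PARITY_POS = [0, 1, 3]     # slots 1, 2, 4 (the powers of two) carry parity

def encode(data4):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    cw = [0] * 7
    for p, b in zip(DATA_POS, data4):
        cw[p] = b
    for p in PARITY_POS:
        # the parity bit at slot p+1 covers every slot whose 1-based
        # index has that bit set in its binary expansion
        cw[p] = reduce(xor, (cw[i] for i in DATA_POS if (i + 1) & (p + 1)), 0)
    return cw

def decode(cw):
    """Correct at most one bit error, then return the 4 data bits."""
    cw = list(cw)
    syndrome = reduce(xor, ((i + 1) for i, b in enumerate(cw) if b), 0)
    if syndrome:               # the syndrome spells out the error position
        cw[syndrome - 1] ^= 1
    return [cw[p] for p in DATA_POS]

# exhaustive check: every single-bit error on every codeword is corrected
for m in range(16):
    data = [(m >> k) & 1 for k in range(4)]
    cw = encode(data)
    for e in range(7):
        corrupted = cw[:]
        corrupted[e] ^= 1
        assert decode(corrupted) == data
```

    The exhaustive loop at the end is the whole point: with minimum distance 3, every single-bit error lands in a distinct coset, so the syndrome identifies it uniquely.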

    Friday, November 12, 2010

    Wednesday, November 10, 2010

    Capacity in the presence of intractability

    Shannon's channel capacity, C = max_p(x) I(X;Y), gives the highest rate at which information can be reliably transmitted over a noisy channel.  But the simplicity of the equation is deceptive: there are very few channels for which capacity is known.  Normally, the difficulty is in optimizing over all input distributions p(x), but what if I(X;Y) is itself intractable?

    I(X;Y) can be hard to calculate if f(y) or f(y|x) is hard to compute.  The former can happen in intersymbol interference channels with long memories, especially under a binary (or discrete) input constraint.  The latter can happen in timing channels, when x is a vector of inputs, and the output is of the form y = sort(x+n), where n is the vector of timing noise. In this case, calculating f(y|x) requires the calculation of a matrix permanent, a classic intractable problem for vectors of practical length. (See also: the Bapat-Beg theorem.)
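    To make the permanent connection concrete, here is a brute-force sketch in Python. The exponential noise density is a hypothetical stand-in; the point is only that f(y|x) for the sorted output is a sum over all n! pairings of inputs to outputs, i.e., a matrix permanent:

```python
import itertools
import math

def noise_pdf(t, rate=1.0):
    """A stand-in i.i.d. timing-noise density (exponential, chosen
    purely for illustration)."""
    return rate * math.exp(-rate * t) if t >= 0 else 0.0

def sorted_output_density(y_sorted, x, pdf=noise_pdf):
    """f(y|x) for y = sort(x + n): a matrix permanent, evaluated here
    by brute force over all n! pairings of inputs to sorted outputs."""
    n = len(x)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        prod = 1.0
        for i, j in enumerate(perm):
            prod *= pdf(y_sorted[j] - x[i])
        total += prod
    return total
```

    The n! loop is exactly the intractability in question: no algorithm is known that avoids exponential cost for general densities.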

    However, there is an elegant trick you can use to find a lower bound on I(X;Y), as long as you can (tractably) generate random variables x and y according to their true distributions f(x) and f(y|x).  Let g(y|x) be any valid conditional distribution of y given x.  Further, let

    g(y) = sum_x f(x) g(y|x).

    Then

    I(X;Y) >= E[log g(Y|X)/g(Y)],

    where expectation is always taken with respect to the true distribution.  That is, if you can pick a tractable approximate distribution g(y|x), and you can simulate both x and y, then you can perform a Monte Carlo expectation on log g(y|x)/g(y), and get a lower bound on I(X;Y).  You can then optimize with respect to the bound to find a lower bound on capacity.
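    Here's a toy sanity check of the recipe in Python. To keep it verifiable, the "true" channel is just a BSC(0.1) with uniform inputs, where I(X;Y) is known in closed form, and the auxiliary law g is a mismatched BSC(0.15); in a real application the true channel would only be accessible through simulation. All the parameter values are made up:

```python
import math
import random

def mc_lower_bound(flip_true, flip_aux, trials=200000, rng=random):
    """Monte Carlo lower bound on I(X;Y) for a BSC(flip_true) with a
    uniform input, using a mismatched auxiliary law g = BSC(flip_aux).
    With a uniform input, g(y) = 1/2 for either output, so the
    estimator is (1/N) * sum of log2( g(y|x) / (1/2) )."""
    acc = 0.0
    for _ in range(trials):
        x = rng.randrange(2)
        y = x ^ (rng.random() < flip_true)      # simulate the true channel
        g_y_given_x = (1 - flip_aux) if y == x else flip_aux
        acc += math.log2(2 * g_y_given_x)
    return acc / trials

random.seed(0)
bound = mc_lower_bound(0.10, 0.15)
true_mi = 1 - (-0.1 * math.log2(0.1) - 0.9 * math.log2(0.9))
# bound should land below true_mi (about 0.531 bits), and close to it,
# since g is only mildly mismatched
```

    With a well-chosen g the gap to the true mutual information is small, which is exactly the relative-entropy penalty described below the proof.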

    The proof is straightforward.  First let g(x|y) = g(y|x)f(x)/g(y). Then:

    I(X;Y) - E[log g(Y|X)/g(Y)]
        = E[log f(Y|X)/f(Y)] - E[log g(Y|X)/g(Y)]
        = E[log f(X|Y)/f(X)] - E[log g(X|Y)/f(X)]
        = E[log f(X|Y)/g(X|Y)]
        = E[D(f(x|Y) || g(x|Y))]
        >= 0.

    So the accuracy of the bound is equal to the relative entropy between the true f(x|y) and the approximate g(x|y).

    This result has been around for a while, but a nice recent application is here:

    D. M. Arnold, H.-A. Loeliger, P. O. Vontobel, A. Kavcic, and W. Zeng, “Simulation-based computation of information rates for channels with memory,” IEEE Trans. Inform. Theory, vol. 52, pp. 3498–3508, Aug. 2006.

    Tuesday, November 2, 2010

    List of blogs in machine learning

    Here is a thread with a huge list of good machine learning blogs.

    The most interesting discovery from the list: Radford Neal, the co-re-discoverer of LDPC codes, has a blog. I took his graphical inference course when I was a Ph.D. student at U of T.

    Tuesday, October 26, 2010

    Announcing the Nano Networks blog

    As one of my volunteer service positions, I am chair of the IEEE ComSoc Emerging Technologies Subcommittee on Nanoscale, Molecular, and Quantum Networking.  I've decided to transform our committee's home page into a blog, to share research news and other details about this field.

    You can read the blog at http://nano-networks.blogspot.com/.  You can also subscribe to our Twitter feed: @NanoNetworks.

    The blog will be newsletter-style; if you have anything interesting to post (new paper, profile of yourself or your lab, upcoming conference, or anything else), please send me a short blurb, preferably something ready to post.  I will try to make use of anything sent to me.

    Thursday, October 14, 2010

    Entropy fact of the day

    It's well known that certain distributions have maximum entropy under moment constraints. For instance, maximum entropy H(X) under a variance constraint is achieved by the Gaussian distribution. Generally, for any positive integer k and any given constant c, there exist algorithms to find the distribution maximizing H(X) subject to E[X^k] = c.
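    As a quick illustration of the variance-constrained case, compare the differential entropy of a Gaussian against a uniform distribution with the same variance; the Gaussian wins, as the maximum-entropy property demands. (A sketch only; both closed forms are standard.)

```python
import math

def h_gaussian(sigma2):
    """Differential entropy (bits) of a zero-mean Gaussian with variance sigma2."""
    return 0.5 * math.log2(2 * math.pi * math.e * sigma2)

def h_uniform(sigma2):
    """Differential entropy (bits) of a uniform RV with variance sigma2;
    its width w satisfies w**2 / 12 = sigma2, and h = log2(w)."""
    return math.log2(math.sqrt(12 * sigma2))

# the Gaussian beats the variance-matched uniform at every variance
for s2 in (0.1, 1.0, 25.0):
    assert h_uniform(s2) < h_gaussian(s2)
```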

    However, here's what I learned yesterday: if we change the constraint to E[X^-k] = c, there exists no such maximum entropy distribution -- because you can always achieve unbounded entropy subject to the constraint.

    At first I thought this was surprising, but then I realized it's not too hard to think of an example that shows why it's true. Think about it for a few minutes -- I'll post a hint in the comments.

    Friday, October 1, 2010

    How to get my attention

    Every couple of months or so, a wave of emails from prospective PhD applicants will flood my inbox. A typical email will go something like this:
    Dear Professor Andrew Eckford,

    I saw your web page and am very interested in your research on computer networks. I have the same research interests as you and would like to apply for a PhD position under your supervision. My master's thesis was entitled "[topic I'm not interested in]" and I published my results in [conference/journal I've never heard of]. I am attaching my resume to this email and look forward to your positive reply.
    The messages are so similar, both in terms of content and geographic origin, that I have to think there is some agency behind them. But particularly because they are so similar, it's easy to dismiss them as spam and ignore them. It probably doesn't hurt an applicant to send these emails, but I would never remember such a message at admission time, so it certainly doesn't help either.

    What would get my attention is an applicant who makes it clear that s/he is not just interested in the PhD slot. In fact it's probably unnecessary to mention in the first email that you're a Ph.D. applicant; what I really want are bright students who are interested in research in general, and my research in particular. I've never received an email like this from an applicant, but I would certainly remember it if I did:
    Dear Professor, I read your recent papers on fractional cooperation, and thought the idea was interesting. I have the following comments and questions about your scheme ... [some insightful commentary follows].
    Of course, a more personal touch works for me, because my group is small and I can afford to be very selective with my Ph.D. applicants. Maybe the form letter approach works for profs whose groups are measured in the dozens.

    Saturday, September 18, 2010

    Embedding fonts on Mac (or, another reason why I love my MacBook)

    I just learned something remarkable, and I have to write it down so I don't forget:

    Often IEEE conferences will require you to submit a PDF with embedded fonts. If you're forced to do this, no doubt you're familiar with a hugely irritating fact about most LaTeX packages: they don't embed fonts by default. There are workarounds, like the ps2pdf command line parameters that force font embedding, but using them can be awkward (e.g., if your LaTeX target isn't postscript, or if you're using a GUI LaTeX editor).

    But if you're a Mac user, like me, here's all you have to do: Open the PDF in Preview, and then save it again. It's saved with fonts embedded. That's it, seriously!

    Friday, September 17, 2010

    Odds and ends

    Odds and ends from the past week:

    • If you're the kind of person who turns off WiFi SSID broadcast because you think it makes your network more secure, don't bother.
    • Once in a while, I write an exam question which compares the data rate of an internet connection with, e.g., physically carrying a hard drive from place to place. Turns out somebody did this in real life using carrier pigeons. Result: in rural England, pigeons with microSD cards beat the internet by a wide margin. (via)

    Tuesday, September 14, 2010

    PhD positions open at UIC

    I got the following last night. For any senior EE undergrads or master's students out there, this is a great opportunity to do some interesting information theory work. (Let me also say that Chicago is one of my favorite cities in the USA.)

    Click the poster to see full size, or click here for the PDF version. I don't have any more details than are in the announcement -- contact the professors directly for more info.

    Thursday, September 9, 2010

    A question about physical layer, quote, security, endquote

    Physical layer security sure is hot these days. Its proponents claim provable security, something the cryptographic community hasn't yet been able to provide. Sounds great!

    Physical layer security is largely based on the wire-tap channel; here (pdf) is one of the seminal papers on the subject. The great achievement in the wire-tap channel is to allow transmission at capacity from the source to the legitimate receiver, while the mutual information from source to wire-tapper is zero, even if the wire-tapper knows the code book. Thus, we have "proven" that communication is secure, because the wire-tapper can never accurately guess the message that was sent to the legitimate receiver.

    But here's the thing. Look at figures 1 and 2 in the paper I link. Everything relies on the wire-tapper having a physically degraded channel with respect to the legitimate receiver. This makes sense: if, somehow, the situation were reversed and the legitimate receiver were degraded with respect to the wire-tapper, it would obviously be impossible to prevent the wire-tapper from decoding the messages. Put another way, there is no security unless the wire-tapper has a worse channel than the legitimate receiver.
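    For the binary symmetric special case, the degradation requirement shows up directly in the secrecy rate. The formula below is the standard degraded-BSC result (not taken from the linked paper), sketched in Python:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_secrecy_capacity(p_main, p_tap):
    """Secrecy capacity of a degraded BSC wire-tap channel:
    h2(p_tap) - h2(p_main), and zero whenever the wire-tapper's channel
    is at least as good as the legitimate receiver's."""
    return max(0.0, h2(p_tap) - h2(p_main))

# positive secrecy rate only while the wire-tapper is worse off
assert bsc_secrecy_capacity(0.05, 0.20) > 0
assert bsc_secrecy_capacity(0.20, 0.05) == 0.0
```

    Note how the rate collapses to zero the instant the wire-tapper's crossover probability matches the main channel's: the whole guarantee hinges on that gap.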

    Here is my question: isn't it misleading to call this "secure"? It is unlikely that the wire-tapper would oblige us by providing his channel state information. Thus, we merely exchange one set of uncertainties for another: namely, exchanging uncertainty about the hardness of the factoring problem for uncertainty about the wire-tapper's channel state -- except that factoring is very widely believed to be intractable, whereas it's not hard to imagine a committed adversary being able to find a good channel for a wire-tap.

    Saturday, September 4, 2010

    One up, one down

    Maxim Raginsky kicks off his new blog, The Information Structuralist.

    Meanwhile, Mitzenmacher signs off his blog to focus on his new administrative appointment.


    (Raginsky's blog via)

    Thursday, September 2, 2010

    Five-Minute Info Theory Problem

    I can't believe it's September already. Astonishingly, I managed to get eight of my nine summer to-do list items done, at least partly.

    Here's a simple problem that came up yesterday when I was looking for a quick bound on entropy ... I thought it would be a nice warmup to kick off the term:

    Let A and B be independent random variables. WLOG, suppose H(A) <= H(B).

    Show that H(A) <= H(B) <= H(A+B) <= H(A)+H(B).

    Post your answer in the comments.
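    If you'd like to check your reasoning numerically before peeking at the comments, here's a small Python sketch using an arbitrary pair of independent discrete PMFs (the distributions are just made-up examples):

```python
import math
from collections import defaultdict

def H(pmf):
    """Shannon entropy, in bits, of a dict mapping value -> probability."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def sum_pmf(pa, pb):
    """PMF of A + B for independent discrete A and B."""
    ps = defaultdict(float)
    for a, p in pa.items():
        for b, q in pb.items():
            ps[a + b] += p * q
    return dict(ps)

# two made-up independent PMFs with H(A) <= H(B)
A = {0: 0.7, 1: 0.3}
B = {0: 0.4, 1: 0.35, 2: 0.25}
hA, hB, hS = H(A), H(B), H(sum_pmf(A, B))
assert hA <= hB <= hS <= hA + hB
```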

    Thursday, August 19, 2010

    Tuesday, August 17, 2010

    Paper at IEEE NANO: Brownian motion versus motors for molecular communication

    IEEE NANO is happening this week, and I have a paper:

    A. W. Eckford, N. Farsad, S. Hiyama, and Y. Moritani, “Microchannel molecular communication with nanoscale carriers: Brownian motion versus active transport,” in Proc. IEEE International Conference on Nanotechnology, Seoul, Korea, 2010. [PDF] [Poster PDF]

    Building on my earlier molecular communication work, this paper looks at communication in microchannels, like you would find in a lab-on-chip device. In these devices, molecules can propagate either randomly by Brownian motion, or by piggy-backing on a molecular motor. We compared the information rates achievable by each, and found two distinct regimes where each method works best. When the number of information-bearing molecules is small, motors are superior, since their motion is much less random than Brownian motion. However, for large numbers of molecules, Brownian motion is best, because all the molecules can start propagating immediately, rather than waiting for a motor to pick them up.

    This was joint work with my Ph.D. student, Nariman Farsad, as well as Satoshi Hiyama and Yuki Moritani from NTT DOCOMO. Nariman is presenting the paper.

    Friday, August 13, 2010

    Making the list

    In case you're wondering, the 2010 Academic Ranking of World Universities (PDF) is out. How'd we do? York is in the 401-500 category alongside Canadian universities like Carleton, Concordia, Sherbrooke, and the Université du Québec system. Some other schools in the category were Boston College, Lehigh University, and the University of Tehran.

    Altogether there were 23 Canadian universities in the top 500, with U of T topping the Canadian contingent at 27th spot.

    Of course there are lots of problems with university rankings -- probably more meaningful to compare individual departments, and even then not so much. But it's nice to be on the list anyway.

    (List via @InklessPW.)

    Monday, August 9, 2010

    P vs. NP: Solved? (Update: No)

    There's a paper under review claiming to prove that P ≠ NP. The claimed proof is being treated credibly: Stephen Cook is quoted as saying, "This appears to be a relatively serious claim to have solved P vs NP." Dick Lipton, who maintains a blog on complexity theory and who has a summary of the paper, is optimistic but points to some possible problems with the proof.

    Complexity touches information theory at a number of points. For instance, many optimal detection and error-control coding problems are known to be NP-hard: syndrome decoding of a linear code (1), minimization of the trellis complexity of a linear code (2), and optimal multiuser detection (3) are good examples.

    Update (Aug. 11): This BBC article doesn't add anything new, but has one of the nicest layman's descriptions of P versus NP that I've ever read:

    Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology (MIT) in the US, explained to BBC News why this problem was so significant. "People sometimes use the analogy of a jigsaw - it can be hard to complete the jigsaw, but if someone has done it, it's pretty easy to check - you just look at it," he said.

    P vs NP poses the following question: If there is a problem that has this property - whereby you could recognise the correct answer when someone gives it to you - then is there some way to automatically find that correct answer? "There's always one way a computer can find the answer - just by trying all the possible combinations one by one," said Dr Aaronson. "But if you're trying to break a cryptographic code, for example, that could take an astronomical amount of time. P vs NP is asking - can creativity be automated?"

    Meanwhile, the theoretical CS community is going bonkers over this result. Aaronson has bet $200,000 that the proof will not stand up to scrutiny. Lipton has collected the four main objections in early readings of the proof so far, and there is now a wiki devoted to analysis and criticism of the proof.

    Also, from here, a nice quote about proofs in general: "A proof only becomes a proof after the social act of 'accepting it as a proof'." -- Yuri Manin

    Update (Aug. 12): In a comment on Lipton's blog, the awesome Terry Tao more or less declares the proof to be toast. Aaronson regrets his bet, even though his money is most likely safe.

    Final update (Aug. 13): A highly technical post on Lipton's blog (which I don't pretend to understand at all) seems to torpedo the proof. The post's commenters agree.

    Wednesday, July 28, 2010

    On rejection

    Early on, a major issue with evolutionary theory was the lack of "early" animal fossils: complex animals, like trilobites, were well known in the Cambrian fossil record, but there was no evidence of their evolutionary precursors.

    All this changed in 1946, when geologist Reginald Sprigg made a startling discovery. The New York Times has the story:

    Reginald Sprigg, a geologist for the South Australia government, was checking out some old mines in the Ediacaran Hills of the Flinders Range several hundred miles north of Adelaide. Sprigg noticed some striking disc-shaped impressions up to four inches in diameter on the exposed surfaces of rocks nearby. Sprigg interpreted the patterns as the [pre-Cambrian] fossil remains of soft-bodied creatures like jellyfish or their relatives.

    The kind of career-making discovery a young scientist could only dream of! But the story goes on (emphasis added):
    When Sprigg first showed the imprints to leading authorities, they dismissed them as artifacts made by the weathering of the rocks. ... Later that year, when Sprigg found the frond-like forms he called Dickinsonia, he was certain that such geometrical impressions could have been made only by living creatures. But despite their potential importance, Sprigg’s discoveries were ignored at an international geology meeting and his paper describing the fossils was rejected by the leading journal [Nature]. Sprigg moved on to other, more rewarding pursuits in the oil, gas, and mining industries.

    It took another decade for Sprigg's earth-shattering contribution to be widely recognized. Something to keep in mind both as an author and as a reviewer.

    Friday, July 23, 2010

    The other Blackwell channel

    In memory of the late David Blackwell (see obits here and here), I'd like to talk about the Blackwell channel. Not this one, which is the one everyone thinks of, but the other one -- otherwise known as the trapdoor channel, the billiard-ball channel, or the chemical channel. (Find me another information theorist with two channels named after them.)

    The channel works like this. Information is transmitted using coloured balls (red to represent 0 and black to represent 1, say). The "channel" is a bag containing a single (red or black) ball. At each channel use, the transmitter inserts a single ball into the bag, and the receiver then removes one of the two balls, selecting uniformly at random between the two. It should be obvious that this model can be generalized in various ways.

    Blackwell proposed this model in his 1961 book as an example of a channel with memory: the channel output is clearly dependent on prior channel inputs. Remarkably for such a simple model, capacity is still unknown. As far as I know, the best result is the channel capacity with feedback, found by Permuter et al. to be the logarithm of the golden ratio. Since the transmitter is free to disregard the feedback, this must also be an upper bound on the capacity without feedback.

    This channel also admits zero-error codes. For example, even if the initial channel state is unknown, a threefold repetition code (i.e., red, red, red or black, black, black) can be decoded without errors, since there can be at most one wrong-coloured ball at the output; thus, the zero-error capacity is (trivially) at least 1/3. Zero-error capacity is also an open problem for this channel.
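    The zero-error claim is easy to verify exhaustively in a few lines of Python (a sketch; ball colours are 0 and 1, and the bag state is the single leftover ball):

```python
def trapdoor_outputs(inputs, state):
    """All output sequences the trapdoor channel can produce: the bag
    holds `state` plus the current input, and the receiver removes
    either ball (colours are 0 and 1)."""
    results = set()
    def step(i, bag_ball, out):
        if i == len(inputs):
            results.add(tuple(out))
            return
        # either the new input is drawn out, or the resident ball is
        for kept, emitted in ((bag_ball, inputs[i]), (inputs[i], bag_ball)):
            step(i + 1, kept, out + [emitted])
    step(0, state, [])
    return results

def decode(block):
    """Majority vote over a threefold repetition block."""
    return 1 if sum(block) >= 2 else 0

# whatever the unknown initial state and whichever balls the receiver
# happens to draw, at most one wrong-coloured ball can appear, so
# majority decoding is zero-error
for bit in (0, 1):
    for state in (0, 1):
        for out in trapdoor_outputs([bit] * 3, state):
            assert decode(out) == bit
```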

    Monday, July 12, 2010

    Why the rage against Engage?

    The Engage grant is a new NSERC initiative for kick-starting collaborations between industry and academia. The main features of these grants are: short duration (6 months max), limited value ($25,000 max), industry focus (an industrial partner, who will own all the IP, is mandatory), and quick decisions (6 week turnaround, which is only possible with no peer review). I found out on Friday that my Engage proposal would be funded; I'll write more about it as we publish, but the project is in the broad area of fractional resource sharing for femtocells, similar to the earlier work we've done on fractional cooperation (e.g., 1, 2, 3).

    So imagine my surprise to learn that the CAUT has come out strongly opposed to Engage, passing the following resolution:
    NSERC must remain a granting council that enables peer-reviewed fundamental research. Whereas linkages between industry and universities consistent with academic freedom are possible, the targeting of granting council funds to private industry’s needs erodes Canada’s capacity to contribute to the general advancement of knowledge in the public interest.
    Of course I'm biased because I'm a recipient of NSERC's largesse under this program. But CAUT's position is bizarre on a number of counts. First, it treats all scientific research as equally separate from industrial applications. In wireless communications, there is no tension between the "general advancement of knowledge" and "industry's needs" -- they are one and the same, because essentially all of the applications are industrial. Second, it assumes that only non-industrial research is in the public interest. Again, in wireless, it's the opposite -- the type of research that has a real impact, say by improving your mobile's performance (like this), is coming out of industrial labs, not out of academia. Third, CAUT has completely missed the point of this program: its limited scope, duration, and funding make it useful only as a door-opener, to give a researcher an easy way to approach a company, and to give that company a low-risk way to say yes to a collaborative project. What's more, Engage grants can only be approved for companies and researchers that have never collaborated before, so this granting program can't be part of some great outsourcing of corporate R&D to universities.

    My view on these projects is broadly held across engineering faculties. So CAUT is misunderstanding its membership by opposing the Engage grant, and promoting a "pure" vision of academia that is potentially harmful in some disciplines.

    Tuesday, July 6, 2010

    Molecular communication using Brownian motion with drift

    I'm third author on this paper, and Sachin and Ravi rightly deserve most of the credit. However, we got a writeup in Technology Review, so I thought I would blog a bit about the paper.

    In molecular communication, a transmitter sends a message to a receiver by releasing a pattern of molecules into a shared medium. One way to communicate using molecules is to use timing -- i.e., releasing a molecule at different times to express different messages. Using Brownian motion, the timing message is distorted by the random propagation time from transmitter to receiver.

    My original work on molecular communication (1, 2) considered the case of drift-free Brownian motion. In this paper, we extend these results to Brownian motion with drift -- it's a much easier case to deal with, since the first arrival distribution without drift has a very heavy tail, as well as infinite mean. We also have some nice results about modulation: even when multiple molecules are available, pulse position modulation turns out to work quite well if the drift velocity is low.

    The paper was submitted to IEEE Trans. Nanobioscience, and we're expecting the first reviews back any day now.

    Tuesday, June 22, 2010

    Just like being there

    I didn't catch ISIT this year -- my first miss since 2003. Sarwate (1, 2, 3, 4, 5) and Mitzenmacher (1, 2) have some nice summaries. Congrats to Shlomo Shamai for his Shannon Award win.

    Thursday, June 3, 2010

    You should be tweeting

    Surprisingly few academics in my field are on Twitter. I'm one of the few, and did my share of tweeting through ICC. Other than me, only four people used the #icc2010 hashtag: @jabriffa, @csgrad, @gvrooyen, and @mayanm. This is at a conference with around a thousand attendees! Why the low uptake? It's not as though Twitter is brand new.

    At ICC, it struck me that Twitter would be a great tool for participants to get the most out of large conferences. With 20 parallel sessions and over 1000 accepted papers, ICC is enormous -- it would be physically impossible to see more than a small fraction of the available presentations. To deal with this overload of information, Twitter would be an ideal tool to quickly find out about interesting sessions. For example, say you're at a huge conference: tweeting audience members could be giving real-time information on which sessions are worth attending, which presentations are the most engaging, and which papers might interest you. You might change your plans to swing by and check out a session that seems interesting -- or if you didn't have time, you could make a note and check out the papers in the proceedings. Otherwise, seeing these tweets would be a nice way to find out about researchers with interests similar to yours -- and for them to find out about you.

    This networking effect of Twitter is a key feature that goes beyond conferences, especially for junior faculty and graduate students. Research -- even excellent research -- doesn't usually attract attention to itself; especially in a huge field like communications, it's incredibly hard for individual researchers to boost their signal above the background. As a result, researchers need to be advertising their work all the time, using every tool they can find. Twitter is ideal for this -- for instance, whenever I publish a paper or give a talk, I mention it on my Twitter feed, and it is broadcast to all my followers. (Of course, the trick is then to get lots of followers.) This principle applies to all social media, not just Twitter: I maintain a personal website, this blog and a YouTube page as well as Twitter, to make it as easy as possible for people to find out about my work.

    As one commenter on Mitzenmacher's blog put it, any graduate student wanting an academic career should have a blog. I'd go further: graduate students and junior faculty should be blogging, tweeting, and doing whatever else they can to get their name and work out there.

    Monday, May 31, 2010

    Paper at ICC: PDF and video

    I'm back from beautiful Cape Town, where I presented the paper I mentioned earlier.

    Citation:
    N. Farsad and A. W. Eckford, “Resource allocation via linear programming for multi-source, multi-relay wireless networks,” in Proc. IEEE International Conference on Communications (ICC), Cape Town, South Africa, 2010.
    PDF of the paper is here.

    Presentation video (this is a playlist):

    Tuesday, May 25, 2010

    Seven continents

    It's long been a life goal of mine to visit all seven continents, which I managed to cross off the list when I stepped off the plane in Africa.

    Pictures from: Australia (Sydney, Australia, 2005); Europe (Fontainebleau, France, 2008); Asia (Great Wall near Beijing, China, 2008); Africa (Cape of Good Hope, South Africa, 2010); South America (Colonia, Uruguay, 2006); North America (Banff, Canada, 2008); Antarctica (Weddell Sea, 2006).

    Monday, May 17, 2010

    Paper at ICC

    UPDATE: Paper PDF and presentation video are here.

    I'll be heading to Cape Town later this week to present a paper at ICC:

    Resource Allocation via Linear Programming for Multi-Source, Multi-Relay Wireless Networks
    N. Farsad and A. W. Eckford
    CT05: Tuesday, May 25, 1400-1545, Roof Terrace A

    This paper continues my work on fractional cooperation, in which relay nodes only retransmit a fraction of the source's packet. Since the transmission is corrupted both by the channel effects and the possibility that not every source bit is retransmitted, we use LDPC codes to recover the original transmission. There are two key insights here: first, in LDPC decoding, the log-likelihood ratios of the relays are additive, weighted by coefficients equal to the source fractions (i.e., a linear combination); and second, for sufficiently many relaying partners, the sum of the LLRs has approximately a Gaussian distribution. Similarly to the design problem solved by EXIT charts, the problem of assigning source fractions can be posed as a linear program.
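    A toy numerical illustration of the additive-LLR insight, in Python. Everything here is invented for illustration (the relay pool, fractions, and BSC qualities), and real fractional cooperation operates inside LDPC decoding rather than as a single sign decision:

```python
import math
import random

def relay_llr(bit, flip, rng):
    """LLR contributed by one relay that saw `bit` through a BSC(flip);
    positive values favour bit 0."""
    observed = bit ^ (rng.random() < flip)
    magnitude = math.log((1 - flip) / flip)
    return magnitude if observed == 0 else -magnitude

def combined_llr(bit, relays, rng):
    """Receiver-side LLR: the sum over relays that chose to forward the
    bit, each relay forwarding with probability equal to its fraction."""
    total = 0.0
    for fraction, flip in relays:
        if rng.random() < fraction:
            total += relay_llr(bit, flip, rng)
    return total

random.seed(2)
relays = [(0.5, 0.1)] * 50   # invented pool: 50 partners, fraction 1/2, BSC(0.1)
errors = sum(combined_llr(0, relays, random) < 0 for _ in range(2000))
# with this many partners the summed LLR is large and, by the central
# limit theorem, roughly Gaussian, so sign decisions are rarely wrong
```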

    I'll be presenting the paper. Video of the talk and the PDF of the paper will be posted after the conference.

    Friday, May 14, 2010

    Paper at Queen's: PDF

    It was a beautiful day in Kingston on Wednesday, which I mostly spent indoors listening to talks.

    Here are the PDF and citation of the QBSC paper I mentioned earlier:

    S. E. T. Hadley, A. W. Eckford, and W. H. Gage, “Power saving in a biomechanical sensor network using activity detection,” in Proc. 25th Biennial Symposium on Communications, Kingston, ON, pp. 173-176, 2010.

    Monday, May 10, 2010

    Summertime, and the living is busy

    I love it when I tell someone that I'm a professor, and they respond with something like, "Oh, it must be great to have the whole summer off!" Especially today: with my spring grading finally out of the way, I'm making up my lengthy summer to-do list. Some highlights:
    • Clean up my office disaster
    • Attend this conference (and attend an executive committee meeting)
    • Attend and speak at this conference
    • Finally get the revisions done for this paper (which should have been done months ago)
    • Write a journal version of this paper
    • Get a grant proposal done
    • Read and edit theses as two students get ready to defend
    • Get some chapters done for a textbook project that I promised months ago
    • Make some headway on a new consulting contract
    • Lots more that I'm forgetting, for sure
    Yep. Another lazy, coronas-on-the-beach summer.

    Tuesday, May 4, 2010

    Paper at Queen's

    My paper at the Queen's Biennial Symposium on Communications:

    Power Saving in a Biomechanical Sensor Network Using Activity Detection
    Scott E. T. Hadley, Andrew W. Eckford, and William H. Gage
    Wednesday, May 12, 4:50-6:10 PM (second talk in session), Walter Light Hall rm. 210

    This is the first paper out of an interdisciplinary project with Will Gage in Kinesiology. The basic idea is to place accelerometers on stroke patients to monitor their recovery in walking; the main challenge is to design a system with the robustness, endurance, and ease of use needed for the home, rather than the therapist's office. It's a practical project with lots of neat signal processing, and we just got a grant to continue it.

    One of the key issues is to reduce power consumption and extend battery life. In this paper, we show that signal processing can help with power saving: we use activity detection to scale the sampling rate based on what the patient is doing. As a result, we strike a nice balance between power saving and data loss: switching modes, we lose about a second of data (about one step -- because it takes that long to detect acceleration with the reduced sampling rate), whereas we would lose about 30 seconds by putting the node to sleep and waking it up.
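
    The mode-switching idea can be sketched in a few lines. This is my own toy illustration, not the detector from the paper: the sampling rates, the window, and the variance threshold are all hypothetical placeholders.

```python
# Toy activity detector: pick a sampling rate based on the variance of
# recent accelerometer samples. All constants here are illustrative
# placeholders, not values from the paper.

ACTIVE_RATE_HZ, IDLE_RATE_HZ = 100, 10  # hypothetical sampling modes
VAR_THRESHOLD = 0.05                    # hypothetical activity threshold

def variance(window):
    m = sum(window) / len(window)
    return sum((x - m) ** 2 for x in window) / len(window)

def choose_rate(window):
    """Pick a sampling rate from a window of recent acceleration samples."""
    return ACTIVE_RATE_HZ if variance(window) > VAR_THRESHOLD else IDLE_RATE_HZ

# Standing still: tiny fluctuations -> low-power mode.
print(choose_rate([0.01, -0.01, 0.02, 0.0]))  # 10
# Walking: large swings -> full-rate mode.
print(choose_rate([0.9, -0.8, 1.1, -1.0]))    # 100
```

    The point of detecting activity from the data itself, rather than sleeping the node outright, is exactly the trade-off described above: a mode switch costs about a second of data instead of the roughly 30 seconds lost to a sleep/wake cycle.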

    Scott will be presenting the paper. I'll be at Queen's for Wednesday only, so I hope to see you there. PDF of the paper will be posted shortly.

    Monday, April 19, 2010

    What's the worst noise?

    It's a well-known theorem of information theory that, of all additive noise channels with constant noise variance (i.e., constant noise power), the Gaussian channel has the minimum capacity. I learned this long ago, and over time the concept was distilled in my memory down to "Gaussian is the worst noise".

    I was trying to use this "property" to find a quick lower bound for the capacity of a new binary-input channel. But after some funny results, my grad student found an old paper of Shamai and Verdú, which -- much to my surprise and dismay -- shows that Gaussian noise is not worst-case if the input distribution is constrained! In particular, if the input is binary {+1,-1}, the worst constant-power distribution is supported on the discrete set {...,-2,0,+2,...}, and can't generally be expressed in closed form.
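
    Out of curiosity, the effect can be checked numerically. The sketch below is my own illustration, not from the Shamai-Verdú paper: it compares the mutual information of a binary {+1,-1} input under unit-variance Gaussian noise against a unit-variance discrete noise on {-2, 0, +2}, and the discrete noise comes out slightly lower, consistent with their result. The particular discrete distribution is just one convenient member of the family, not the true worst case.

```python
import math

def gauss_pdf(y):
    return math.exp(-y * y / 2) / math.sqrt(2 * math.pi)

def mi_awgn(step=0.001, lim=8.0):
    """I(X;Y) for X uniform on {-1,+1}, Y = X + N, N ~ N(0,1),
    by numerical integration of sum_x P(x) p(y|x) log2(p(y|x)/p(y))."""
    total = 0.0
    for k in range(int(2 * lim / step) + 1):
        y = -lim + k * step
        p_plus, p_minus = gauss_pdf(y - 1), gauss_pdf(y + 1)
        py = 0.5 * (p_plus + p_minus)
        for p in (p_plus, p_minus):
            total += 0.5 * p * math.log2(p / py) * step
    return total

def mi_discrete():
    """I(X;Y) for X uniform on {-1,+1}, N on {-2,0,+2} with probs
    (1/8, 3/4, 1/8), chosen so E[N] = 0 and Var[N] = 1 (same noise power)."""
    noise = {-2: 1 / 8, 0: 3 / 4, 2: 1 / 8}
    py = {}
    for x in (-1, 1):
        for n, pn in noise.items():
            py[x + n] = py.get(x + n, 0.0) + 0.5 * pn
    total = 0.0
    for x in (-1, 1):
        for n, pn in noise.items():
            total += 0.5 * pn * math.log2(pn / py[x + n])
    return total

# Discrete noise gives the (slightly) smaller mutual information.
print(round(mi_awgn(), 3), round(mi_discrete(), 3))
```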

    For more details, see:

    S. Shamai and S. Verdú, "Worst-case power-constrained noise for binary-input channels," IEEE Trans. Info. Theory, vol. 38, no. 5, pp. 1494-1511, 1992.

    Wednesday, April 7, 2010

    Wednesday, March 31, 2010

    Steve Munro Misses the (Swan?) Boat

    Some Toronto-specific stuff now. There are days when I wonder whether Steve Munro is this city's most overrated blogger. Today, for example: here he is arguing that the Toronto Board of Trade was full of it when it claimed that Toronto's commute times are the worst in the world.

    Go read Munro's post and see if you can piece together his rambling argument. The best I could do: the Board of Trade study is wrong because it used a metric, namely average commute time, which under-emphasized proper urban development. But development is important, and development goes hand in hand with transit, so the government should invest in transit.

    It's important to bear in mind that the report was hugely in favor of transit investment, stating unequivocally:

    ... congestion threatens Toronto’s viability over the long-term and serves as an argument for increased investment in public transit and policies that encourage Torontonians to leave their cars behind.


    So Munro is spectacularly missing the point on two fronts. The first is the current political context: the provincial government just postponed (and maybe killed) Transit City, of which Munro is a staunch promoter. The Board of Trade report, coming on the heels of this announcement, is an obvious political win for Transit City advocates. Other commentators are picking up on the obvious ways to shame the province, and keep the issue in the public eye. Not Munro, though - he’d rather quibble about metrics, and take a far more convoluted route to argue about transit funding.

    The second is that Munro completely misunderstands the report, by failing to see the value of time. The economic impact of commuting is most acutely felt in the time spent commuting, not in the distance or speed, since a long commute is both time and effort diverted from other (potentially money-making) tasks. If you’re going to argue that the province should invest in transit, surely the return on the investment should be largely economic, especially since an economic boost will return to the province’s coffers in the form of tax revenues.

    Monday, March 29, 2010

    Talk at UIUC: "Writing on Tiny Paper"

    I'll be giving a seminar next Monday, April 5, at UIUC. Thanks to Todd for the invite. Here are the title and abstract:

    Writing on tiny paper: Towards information-theoretic analysis of molecular communication

    In molecular communication, messages are transmitted as a pattern of molecules propagating from a transmitter to a receiver. This form of communication is useful where electromagnetic communication is impractical, such as in a lab-on-chip device. In this talk, we review current approaches to molecular communication. We also present some of our own results on modelling and analyzing these channels from an information-theoretic perspective, looking at molecular channels both as timing channels and as mass-transport channels.

    Friday, March 26, 2010

    Interview on TalentEgg

    At the beginning of the term, TalentEgg contacted me for an interview on my use of social media in the classroom. They posted the interview to their YouTube channel a few weeks ago, but I just noticed it today:



    (I wish she told me to comb my hair ...)

    Friday, March 12, 2010

    Tenure

    Yesterday, I was informed that my application for tenure has been approved by the university president. I'll be an Associate Professor effective July 1.

    Once again, Futurama shows us how to enjoy tenure properly:
    Mayor: "Dr. Wernstrom, can you save my city?"
    Wernstrom: "Of course, but it'll cost you. First I'll need tenure."
    Mayor: "Done."
    Wernstrom: "And a big research grant."
    Mayor: "You got it!"
    Wernstrom: "Also, access to a lab and five graduate students...at least three of them Chinese."
    Mayor: "Did...all right, done. What's your plan?"
    Wernstrom: "What plan? I'm set for life. Au revoir, suckers!"
    Leela: "That rat! Do something!"
    Mayor: "I wish I could, but he's got tenure."

    Monday, March 8, 2010

    Student rates

    Recently, Mitzenmacher had a nice post about conference budgeting. One interesting tidbit: this year's registration fee for STOC will be around US$500, and he asks, "At what point do registration fees become a noticeable concern?" Many commenters agreed that $500 was outrageous; meanwhile, in the IT community, last year's ISIT registration would have set you back a cool US$850.

    However, concerning student registration rates, Mitzenmacher also asks the following question: "... every student who attends is actually a loss, that has to be covered from elsewhere. Is this the right way to go?"

    At first, it seems like the answer should be: yes, of course; helping students is good. But thinking back to my graduate student days, I never paid for conference registration out of my own pocket -- it was always covered by my supervisor. Thus, the primary beneficiary of a student rate is a professor with many graduate students attending the conference -- who, as a result of his/her large group, is already well funded!

    In fact, if student rates are offered at a loss, that loss is made up by professors (like me) with limited funding, who can only afford to send themselves and maybe their one student. So are student rates actually regressive?

    Monday, March 1, 2010

    You learn something new every day

    Did you know that (x!)^(1/x) approaches x/e + (const) for sufficiently large x? Wolfram Alpha says so.

    Now I'm looking for a proof. Suggestions would be welcome.

    UPDATE: Wrong! Alpha was right, I misinterpreted the result. Looks more like x/e + O(log x).
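
    For the record, the O(log x) behavior falls out of Stirling's approximation: ln(x!) is approximately x ln x - x + (1/2) ln(2*pi*x), so (x!)^(1/x) = exp(ln(x!)/x) is approximately (x/e) * exp(ln(2*pi*x)/(2x)), which expands to x/e + ln(2*pi*x)/(2e) plus lower-order terms. A quick numerical check of that correction term, using math.lgamma for ln(x!):

```python
import math

def factorial_root(x):
    """(x!)^(1/x), computed stably via the log-gamma function."""
    return math.exp(math.lgamma(x + 1) / x)

def stirling_correction(x):
    """Predicted gap (x!)^(1/x) - x/e from Stirling: ln(2*pi*x) / (2e)."""
    return math.log(2 * math.pi * x) / (2 * math.e)

# The actual gap tracks the predicted O(log x) correction closely.
for x in (100, 1000, 10000):
    gap = factorial_root(x) - x / math.e
    print(x, round(gap, 4), round(stirling_correction(x), 4))
```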

    Thursday, February 25, 2010

    Why is ITA invitation-only?

    I was reminded today that ITA came and went earlier this month, so I've kept alive my five-year-long streak of not attending.

    By all accounts, ITA is now a major event on the information theory calendar: the participants page lists over 350 attendees -- maybe a bit less than half of an average ISIT. And the ITSoc boldface names are attending; insofar as the meeting-and-greeting career implications are concerned, I suppose I should be attending too. In the past, my excuse was that ITA was an invitation-only conference, both for attendance and for paper presentation, and I had never been invited. Maybe my impression was wrong, or maybe this year it changed: it seems like the plebs can register, but papers are still by invitation.

    Is there a good reason why ITA is invitation-only? It's clearly not intended to be a small gathering, and never has been: the participants page for the original workshop lists over 200 researchers. Perhaps the organizers are trying to spare themselves the effort of organizing reviews, but the IT community is reasonably self-selecting, and the Allerton organizers allow unsolicited papers.

    On the other hand, I can think of three good reasons why ITA should not be invitation-only. First, I'm not sure ITA should be described as a "workshop" (which suggests "conference") if the purpose for non-invitees is to hear pre-selected lectures; that strikes me as more of a "school". Second, it reinforces the stereotype of ITSoc as a "clubby" society; as an information theorist, you are either one of the inner circle who gets an invite, one of the outer circle who doesn't get an invite (but who goes anyway to hang around with the cool kids), or a nobody. Third, ITA is killing CISS: at last year's conference in Baltimore, the decline in attendance was stark compared to past years; since CISS takes submissions, this removes an opportunity for the less-well-known to publish. Consider the impact of that on cross-disciplinary work.

    On the other hand, maybe ITSoc is sending a message that there's no such thing as a less-well-known information theorist?

    Thursday, February 4, 2010

    Social Media and the Modern Day Classroom

    (I was invited to speak at Social Media Week on Social Media and the Modern Day Classroom. These are my remarks.)

    I was hired at York University in 2006, and -- like most new faculty -- I was full of ideas and energy for transforming undergraduate teaching as we know it. At the time I was also starting to play around with blogs, so it seemed natural to try to bring the blog into the classroom.

    My idea was to replace the standard course website with a blog: the unchanging details of the course (schedule, location, etc.) could still be hosted in some static place, but the blog would communicate the day-to-day details of the course. My first attempt, still online, can be found here. It was an incredible success: the blog formed the core of a vibrant community, which allowed the students to communicate both with me and among themselves. One student was even inspired to start his own blog, transcribing my course notes after each lecture. Not all of the feedback was positive, of course; I kept the commenting system open to allow anonymous comments, and students took full advantage to express their frustrations and problems. But this was also a huge positive -- too often, the professor gets a sterile view of how the students are doing, because nobody is brave enough to speak up.

    So began my efforts to integrate social media into the classroom. In addition to blogs, I've experimented with three other social media sites: Facebook, YouTube, and Twitter. Each has its own unique features. Facebook and Twitter are quite similar: in Facebook, my approach is to create groups for each class; in Twitter, I create a class Twitter account which students follow (and vice versa). These sites are, I guess you could say, "democratic" (although a less charitable word would be "anarchic") -- unlike a blog, there is far less room for the instructor to direct the conversation, and it shows. YouTube, though it re-creates the experience of listening to a lecture, turns out to be surprisingly passive: again unlike blogs, my students are reluctant to use YouTube's commenting features to leave feedback, which takes away from the sense of community (then again, perhaps this is not so surprising, since YouTube is comparable to the highly passive activity of watching television). In comparison, I would argue that the blog is as close as one can get in social media to the lecture: a blog post sets out a particular thesis, and permits an organized discussion on that thesis.

    It's worth considering whether social media can replace the university classroom, and I'm going to cop out by answering "yes and no". For one thing, social media is unlikely to replace the small undergraduate class. To form a community based on social media, you need a critical mass of highly committed "community builders", who are willing to jump in and participate in whatever medium is in use -- be it a blog, Facebook, or Twitter. In small classes, there simply aren't enough people to form a critical mass, so anyone trying to participate is left feeling awkward. However, for larger classes, social media does indeed pose an alternative to the traditional classroom order. The trend towards distance learning shows that students are already willing to miss out on the impersonal 200-student lecture, even without social media tools at their disposal. A well-thought-out social media strategy, coupled with a high-reputation distance-learning program, could indeed recreate much of the classroom experience, and pose a viable alternative (or threat?) to the traditional university experience. The comparison between traditional universities and traditional media is chillingly apt.

    Monday, February 1, 2010

    Not because it is easy, but because it is hard

    Today President Obama announced that the United States would not be returning people to the moon, or sending them anywhere else beyond Earth orbit. Now, people can certainly argue that robots are more efficient, and spaceflight is expensive, and blah blah blah. But I think I'll leave the case for manned space flight to the man who proposed the lunar project in the first place.

    Tuesday, January 26, 2010

    Formspring

    Continuing my tradition of screwing around with "social media" sites, I'm now andreweckford on formspring. Here's Gawker's take on this question-and-answer service, although my goals for the site are more professional than personal.

    Friday, January 22, 2010

    Speaking of YouTube

    White, middle-aged men should never rap -- they just end up looking like dorks. But I'll make an exception for this guy, 'cause if you're going to dork it up, you may as well go all the way. That mofo is ghost riding a Prius, yo:



    [@IBMResearch via Gizmodo]

    Thursday, January 21, 2010

    Disappointing arXiv paper title of the week

    When I saw the paper Real Interference Alignment, I initially read the title as "Real-world interference alignment", but what they actually mean is "Real-number interference alignment".

    This is not to criticize the paper -- Amir Khandani and his group do great work, and this paper looks nice. But I'm still hoping for a paper explaining how to achieve interference alignment under practical, real-world assumptions.

    Thursday, January 14, 2010

    I won't be seeing you in Austin

    For a variety of personal reasons (not the least of which is currently nine months old), I didn't get my act together to submit a paper to ISIT this year. I had a couple that I was working on, but they didn't come together in time. So I'll be missing my first ISIT since 2003 in Yokohama.

    St. Petersburg is going to be a blast, though. Assuming the LHC hasn't destroyed the earth by then.

    Monday, January 11, 2010

    Information theory for the YouTube masses

    I'm teaching a graduate-level course on information theory this term. As I've done with a couple of my other courses, I'm going to make video recordings of the lectures and post them online.

    The course YouTube channel is here. As I've also done elsewhere, I'm organizing the video and course announcements on a blog, which is here. I just posted the first lecture -- this is still kind of experimental, so comments would be appreciated.

    As far as I know, this will be YouTube's first full course on information theory. However, while searching for other IT-related material, I ran across this gem. Plus a slew of raging creationists and evolutionists, each using information theory to somehow try to prove their point.

    Monday, January 4, 2010

    Happy new year

    I'm back at work for the first day after the holidays. Posting has been light for the past while, but I plan to be more disciplined about this blog in the new year. My goal for 2010: 52 blog posts, and at least two per calendar month.