Tuesday, August 17, 2010
Paper at IEEE NANO: Brownian motion versus motors for molecular communication
IEEE NANO is happening this week, and I have a paper:
A. W. Eckford, N. Farsad, S. Hiyama, and Y. Moritani, “Microchannel molecular communication with nanoscale carriers: Brownian motion versus active transport,” in Proc. IEEE International Conference on Nanotechnology, Seoul, Korea, 2010. [PDF] [Poster PDF]
Building on my earlier work on molecular communication, this paper looks at communication in microchannels, like those you would find in a lab-on-a-chip device. In these devices, molecules can propagate either randomly, by Brownian motion, or by piggybacking on a molecular motor. We compared the information rates achievable by each method and found two distinct regimes, in each of which one approach wins. When the number of information-bearing molecules is small, motors are superior, since their motion is much less random than Brownian motion. For large numbers of molecules, however, Brownian motion is best, because all the molecules can start propagating immediately, rather than waiting for a motor to pick them up.
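If you want a feel for the trade-off, here is a toy Monte Carlo sketch (this is not the model or the parameters from the paper; the channel length, diffusion coefficient, motor speed, and pickup delay below are made-up illustrative numbers). Brownian carriers all start moving at once but arrive at widely scattered times, while motor-borne carriers first wait to be loaded and then arrive almost on schedule.

import random

# Toy comparison of carrier arrival times in a 1-D microchannel.
# All parameters are illustrative guesses, not values from the paper.
L = 5.0        # channel length (um)
D = 1.0        # diffusion coefficient (um^2/s)
V = 0.5        # motor speed (um/s)
PICKUP = 2.0   # mean wait for a motor to load a molecule (s)
DT = 0.01      # simulation time step (s)

def brownian_arrival():
    """First-passage time of one diffusing molecule to x = L,
    with a reflecting wall at the transmitter (x = 0)."""
    x, t = 0.0, 0.0
    while x < L:
        x = abs(x + random.gauss(0.0, (2 * D * DT) ** 0.5))
        t += DT
    return t

def motor_arrival():
    """Arrival time under active transport: random wait to be
    picked up, then travel at (nearly) constant speed."""
    return random.expovariate(1.0 / PICKUP) + L / V

n = 500
bm = sorted(brownian_arrival() for _ in range(n))
mt = sorted(motor_arrival() for _ in range(n))
print("median arrival   Brownian: %5.1f s   motor: %5.1f s" % (bm[n // 2], mt[n // 2]))
print("90th percentile  Brownian: %5.1f s   motor: %5.1f s" % (bm[9 * n // 10], mt[9 * n // 10]))

The wide spread of the Brownian arrival times, versus the delayed but nearly deterministic motor arrivals, is roughly what produces the two regimes described above.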
This was joint work with my Ph.D. student, Nariman Farsad, as well as Satoshi Hiyama and Yuki Moritani from NTT DOCOMO. Nariman is presenting the paper.
Friday, August 13, 2010
Making the list
In case you're wondering, the 2010 Academic Ranking of World Universities (PDF) is out. How'd we do? York is in the 401-500 category alongside Canadian universities like Carleton, Concordia, Sherbrooke, and the Université du Québec system. Some other schools in the category were Boston College, Lehigh University, and the University of Tehran.
Altogether there were 23 Canadian universities in the top 500, with U of T topping the Canadian contingent in 27th place.
Of course, there are lots of problems with university rankings -- it's probably more meaningful to compare individual departments, and even then not so much. But it's nice to be on the list anyway.
(List via @InklessPW.)
Monday, August 9, 2010
P vs. NP: Solved? (Update: No)
There's a paper under review claiming to prove that P ≠ NP. The claimed proof is being treated credibly: Stephen Cook is quoted as saying, "This appears to be a relatively serious claim to have solved P vs NP." Dick Lipton, who maintains a blog on complexity theory and who has a summary of the paper, is optimistic but points to some possible problems with the proof.
Complexity touches information theory at a number of points. For instance, many optimal detection and error-control coding problems are known to be NP-hard: syndrome decoding of a linear code (1), minimization of the trellis complexity of a linear code (2), and optimal multiuser detection (3) are good examples.
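To see where the exponential cost comes from, here is a brute-force maximum-likelihood syndrome decoder for a toy binary code (a sketch only; the parity-check matrix is a made-up example, not taken from any of the cited papers). It searches all 2^n error patterns, which is the kind of search that, unless P = NP, cannot always be avoided for general linear codes.

from itertools import product

# Brute-force syndrome decoding for a toy binary linear code.
# H is a made-up 3x6 parity-check matrix, purely for illustration.
H = [
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
]
n = len(H[0])

def syndrome(word):
    """Compute H * word^T over GF(2)."""
    return tuple(sum(h * x for h, x in zip(row, word)) % 2 for row in H)

def decode(s):
    """Find a minimum-weight error pattern e with H * e^T = s
    by checking every one of the 2^n candidates."""
    best = None
    for e in product((0, 1), repeat=n):   # exponential in n
        if syndrome(e) == s and (best is None or sum(e) < sum(best)):
            best = e
    return best

received = (1, 0, 1, 1, 1, 0)
s = syndrome(received)
e = decode(s)
corrected = tuple((r + x) % 2 for r, x in zip(received, e))
print("syndrome:", s, "estimated error:", e, "corrected word:", corrected)

For n = 6 this is only 64 patterns, but the search doubles with every added bit, so for codes of practical length it is hopeless.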
Update (Aug. 11): This BBC article doesn't add anything new, but has one of the nicest layman's descriptions of P versus NP that I've ever read:
Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology (MIT) in the US, explained to BBC News why this problem was so significant. "People sometimes use the analogy of a jigsaw - it can be hard to complete the jigsaw, but if someone has done it, it's pretty easy to check - you just look at it," he said.
P vs NP poses the following question: If there is a problem that has this property - whereby you could recognise the correct answer when someone gives it to you - then is there some way to automatically find that correct answer? "There's always one way a computer can find the answer - just by trying all the possible combinations one by one," said Dr Aaronson. "But if you're trying to break a cryptographic code, for example, that could take an astronomical amount of time. P vs NP is asking - can creativity be automated?"
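The jigsaw analogy translates almost directly into code. Here is a toy 3-SAT sketch (my own illustrative instance, nothing to do with the paper under discussion): checking a proposed solution takes one pass over the clauses, while the only obvious way to find a solution is to try all 2^n assignments.

from itertools import product

# A tiny 3-SAT instance: each clause is a tuple of integer literals,
# where k > 0 means "variable k" and k < 0 means "NOT variable |k|".
clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1), (-2, -3, 1)]
n_vars = 3

def check(assignment, clauses):
    """Verification is easy: one pass over the clauses."""
    return all(
        any(assignment[abs(k)] == (k > 0) for k in clause)
        for clause in clauses
    )

def solve(clauses, n_vars):
    """Search is (apparently) hard: try all 2^n assignments."""
    for bits in product((False, True), repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(assignment, clauses):
            return assignment
    return None

print(solve(clauses, n_vars))

P vs NP asks, in effect, whether the gap between check() and solve() is unavoidable.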
Meanwhile, the theoretical CS community is going bonkers over this result. Aaronson has bet $200,000 that the proof will not stand up to scrutiny. Lipton has collected the four main objections in early readings of the proof so far, and there is now a wiki devoted to analysis and criticism of the proof.
Also, from here, a nice quote about proofs in general: "A proof only becomes a proof after the social act of 'accepting it as a proof'." -- Yuri Manin
Update (Aug. 12): In a comment on Lipton's blog, the awesome Terry Tao more or less declares the proof to be toast. Aaronson regrets his bet, even though his money is most likely safe.
Final update (Aug. 13): A highly technical post on Lipton's blog (which I don't pretend to understand at all) seems to torpedo the proof. The post's commenters agree.