Say you've got two genetically engineered bacteria, which you're using as a communication system. Maybe you've engineered the organism's quorum sensing abilities, or its calcium receptors, to do your bidding. Whichever way you're doing it, what you want is to send a message, encoded in a pattern of molecules, that propagates from one bacterium to the other. This is molecular communication.


So exactly how fast can you send information? In other words, what is the Shannon capacity?

Would you believe that it's infinite?

To see why, we'll use a simple example of a molecular communication system: a transmitter releases molecules, which propagate (say, by Brownian motion) to a receiver that records their arrival times.

So what's the capacity of this system? Say our communication session lasts *T* seconds. We'll divide the interval into *n* intervals of *T*/*n* seconds each. Let *a*(*T*/*n*) represent the probability of arrival within *T*/*n* seconds: i.e., the probability that any single molecule released from the transmitter at the beginning of an interval of length *T*/*n* will arrive at the receiver before the end of the interval.

We'll make three assumptions: first, *a*(*T*/*n*) > 0 as long as *T*/*n* > 0 (this is for convenience and can be relaxed); second, the limit of *a*(*T*/*n*) is 1 as *T*/*n* goes to infinity (this is true of, e.g., the Wiener process model of Brownian motion); and third, the arrival times of different molecules are statistically independent.
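To make the second assumption concrete, here's a quick numerical sketch (my own illustration, not from the post: it takes *a*(*t*) to be the first-passage probability of a one-dimensional Wiener process to an absorbing receiver at an assumed distance *d* = 1, which follows from the reflection principle):

```python
import math

def arrival_prob(t, d=1.0, sigma=1.0):
    """P(first arrival <= t) for a 1-D Wiener process hitting level d.

    Reflection principle: P(T_d <= t) = erfc(d / (sigma * sqrt(2*t))).
    """
    if t <= 0:
        return 0.0
    return math.erfc(d / (sigma * math.sqrt(2.0 * t)))

# a(t) > 0 for any t > 0, and a(t) -> 1 as t -> infinity:
for t in [0.1, 1.0, 10.0, 1000.0]:
    print(t, arrival_prob(t))
```

Any arrival model with these two properties (positive for positive *t*, limiting to 1) would serve equally well in what follows.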

The transmitter's job is to select one (and only one) of the *n* intervals, and send *m* > 0 molecules at the *beginning* of that interval. Thus, the transmitter can send at most log2(*n*) bits.

The receiver's job is to measure the arrival time of the *first* molecule to arrive, figure out which of the *n* intervals the molecule arrived in, and decide that the transmitter released all its molecules at the beginning of *that same interval*. An error is made if and only if all molecules take longer than *T*/*n* seconds to arrive.
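As a sanity check, the scheme above can be simulated with a small Monte Carlo sketch (my own illustration; the exponentially distributed propagation delay and all parameter values are stand-ins, chosen only so that arrival within one interval is neither certain nor hopeless):

```python
import random

def simulate(n=8, m=20, T=8.0, mean_delay=0.5, trials=2000, seed=1):
    """Estimate the error rate of the interval-timing scheme.

    An error occurs iff every molecule takes longer than T/n to arrive,
    pushing the first arrival past the end of the chosen interval.
    """
    rng = random.Random(seed)
    slot = T / n
    errors = 0
    for _ in range(trials):
        k = rng.randrange(n)      # transmitter picks one of the n intervals
        start = k * slot
        # each molecule's arrival = release time + random propagation delay
        first = min(start + rng.expovariate(1.0 / mean_delay)
                    for _ in range(m))
        decoded = min(int(first / slot), n - 1)  # receiver's decision
        if decoded != k:
            errors += 1
    return errors / trials

print(simulate(m=1))    # a single molecule: errors are common
print(simulate(m=50))   # many molecules: errors become rare
```

Increasing *m* makes it overwhelmingly likely that at least one molecule lands inside the chosen interval, which is exactly the lever the first argument below pulls on.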

**Bits per second:** Here we'll use a constant, finite value of *T*, but the number of molecules *m* can be as large as we want. The probability of error is therefore (1 − *a*(*T*/*n*))^*m*. However, for any positive *n*, *a*(*T*/*n*) > 0 by assumption, so there exists a large enough *m* to make the probability of error as small as we want. Thus, you can reliably send log2(*n*)/*T* bits per second, for any *n*, and log2(*n*) goes to infinity as *n* goes to infinity. So the capacity in bits per second is infinite.

**Bits per molecule:** Now say *T* can be as large as we want, but *m* = 1 (i.e., we have one molecule to send our message). We'll let *n* = *T*/log(*T*), so the number of bits sent by the transmitter is log2(*T*/log(*T*)) (the base of the inner log doesn't matter). The length of each interval is log(*T*), so the probability of error is now 1 − *a*(log(*T*)). Again, log(*T*) goes to infinity as *T* goes to infinity, and by assumption the limit of *a*(log(*T*)) is 1 as *T* goes to infinity, so there exists a large enough *T* to make the probability of error as small as we want. Since we're doing this with a single molecule, the capacity in bits per molecule is log2(*T*/log(*T*)), and it can be checked that this quantity goes to infinity as *T* goes to infinity.

A couple of comments on this result.
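Both limiting arguments can be checked numerically. Here's a sketch (my own illustration; the particular *a*(*t*) below, a Wiener-process first-passage probability with an assumed receiver distance *d* = 1, is chosen only because it satisfies the three assumptions):

```python
import math

def a(t, d=1.0, sigma=1.0):
    # Illustrative arrival probability: first passage of a Wiener process
    # to distance d within time t (reflection principle). Any a(t) with
    # a(t) > 0 for t > 0 and a(t) -> 1 as t -> infinity would do.
    return math.erfc(d / (sigma * math.sqrt(2.0 * t))) if t > 0 else 0.0

# Bits per second: fix T and n; the error (1 - a(T/n))**m -> 0 as m grows.
T, n = 1.0, 8
for m in [10, 100, 10_000]:
    print(m, (1.0 - a(T / n)) ** m)

# Bits per molecule: m = 1 and n = T/log(T); the error 1 - a(log(T))
# shrinks while the bit count log2(T/log(T)) grows without bound.
for T in [1e2, 1e8, 1e16]:
    print(T, math.log2(T / math.log(T)), 1.0 - a(math.log(T)))
```

With these (arbitrary) numbers, raising *m* drives the bits-per-second error toward zero at fixed rate log2(*n*)/*T*, while raising *T* simultaneously shrinks the single-molecule error and grows the bits per molecule.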

First, an infinite capacity is not achievable in practice, as we have ignored physical features (like saturation) that would prevent you from doing this. Thus, the physical constraints of the system are important in finding the true capacity.

Second, this result is not so different from the realm of conventional electromagnetic communication. In the EM world, it's also true that capacity in bits per second is infinite, if bandwidth is unlimited. As a result, figures of merit are often given in terms of the *spectral efficiency*, in bits/s/Hz.

It seems clear that capacity for molecular communication needs to be measured by some similar figure. Let's call it the **molecular efficiency**, and measure it in bits/s/molecule.

(These results are from [1], and the bits-per-second result is based on an idea from [2].)

References:

[1] A. W. Eckford, “Molecular communication: Physically realistic models and achievable information rates,” arXiv:0812.1554v1 [cs.IT] 8 December 2008.

[2] R. Sundaresan and S. VerdĂș, “Capacity of queues via point-process channels,” *IEEE Trans. Info. Theory*, vol. 52, pp. 2697-2709, 2006.