Classical capacity
In quantum information theory, the classical capacity of a quantum channel is the maximum rate at which classical data can be sent over it error-free in the limit of many uses of the channel. Holevo, Schumacher, and Westmoreland proved the following least upper bound on the classical capacity of any quantum channel <math>\mathcal{N}</math>:
- <math>
\chi(\mathcal{N}) = \max_{\rho^{XA}} I(X;B)_{\mathcal{N}(\rho)} </math>
where <math>\rho^{XA}</math> is a classical-quantum state of the following form:
- <math>
\rho^{XA} = \sum_x p_X(x) \vert x \rangle \langle x \vert^X \otimes \rho_x^A , </math> <math>p_X(x)</math> is a probability distribution, and each <math>\rho_x^A</math> is a density operator that can be input to the channel <math>\mathcal{N}</math>.
Achievability using sequential decoding
We briefly review the HSW coding theorem (the statement of the achievability of the Holevo information rate <math>I(X;B)</math> for communicating classical data over a quantum channel). We first review the minimal amount of quantum mechanics needed for the theorem. We then cover quantum typicality, and finally we prove the theorem using a recent sequential decoding technique.
Review of quantum mechanics
In order to prove the HSW coding theorem, we really just need a few basic things from quantum mechanics. First, a quantum state is a unit trace, positive operator known as a density operator. Usually, we denote it by <math>\rho</math>, <math>\sigma</math>, <math>\omega</math>, etc. The simplest model for a quantum channel is known as a classical-quantum channel:
- <math>
x\mapsto \rho_{x}. </math> The meaning of the above notation is that inputting the classical letter <math>x</math> at the transmitting end leads to a quantum state <math>\rho_{x}</math> at the receiving end. It is the task of the receiver to perform a measurement to determine the input of the sender. If it is true that the states <math>\rho_{x}</math> are perfectly distinguishable from one another (i.e., if they have orthogonal supports such that <math>\mathrm{Tr}\,\left\{ \rho_{x}\rho_{x^{\prime}}\right\} =0</math> for <math>x\neq x^{\prime} </math>), then the channel is a noiseless channel. We are interested in situations for which this is not the case. If it is true that the states <math>\rho_{x}</math> all commute with one another, then this is effectively identical to the situation for a classical channel, so we are also not interested in these situations. So, the situation in which we are interested is that in which the states <math>\rho_{x}</math> have overlapping support and are non-commutative.
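The distinction between these cases can be checked numerically. The following NumPy sketch (the qubit states here are our own illustrative choice, not from the text) verifies orthogonality, overlap, and non-commutativity for two pairs of pure states:

```python
import numpy as np

# Orthogonal supports: Tr{rho_0 rho_1} = 0, so the cq channel is noiseless.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
r0 = np.outer(ket0, ket0)               # |0><0|
r1 = np.outer(ket1, ket1)               # |1><1|
print(np.trace(r0 @ r1))                # 0.0: perfectly distinguishable

# Overlapping, non-commuting supports: the interesting case.
ketp = (ket0 + ket1) / np.sqrt(2)
rp = np.outer(ketp, ketp)               # |+><+|
print(np.trace(r0 @ rp))                # 0.5: overlapping supports
print(np.allclose(r0 @ rp, rp @ r0))    # False: the states do not commute
```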
The most general way to describe a quantum measurement is with a positive operator-valued measure (POVM). We usually denote the elements of a POVM as <math>\left\{ \Lambda_{m}\right\} _{m}</math>. These operators should satisfy positivity and completeness in order to form a valid POVM:
- <math>
\Lambda_{m} \geq0\ \ \ \ \forall m</math>
- <math>\sum_{m}\Lambda_{m} =I.
</math> The probabilistic interpretation of quantum mechanics states that if someone measures a quantum state <math>\rho</math> using a measurement device corresponding to the POVM <math>\left\{ \Lambda_{m}\right\} </math>, then the probability <math>p\left( m\right) </math> for obtaining outcome <math>m</math> is equal to
- <math>
p\left( m\right) =\text{Tr}\left\{ \Lambda_{m}\rho\right\} , </math> and the post-measurement state is
- <math>
\rho_{m}^{\prime}=\frac{1}{p\left( m\right) }\sqrt{\Lambda_{m}}\rho \sqrt{\Lambda_{m}}, </math> if the person measuring obtains outcome <math>m</math>. These rules are sufficient for us to consider classical communication schemes over cq channels.
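The two rules above can be sketched directly in code. This is a minimal NumPy illustration (the function names, state, and POVM are our own choices for the example, not part of any standard API):

```python
import numpy as np

def psd_sqrt(A):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def measure(rho, povm):
    """Born rule p(m) = Tr{Lambda_m rho} and the post-measurement update."""
    results = []
    for Lam in povm:
        p = np.trace(Lam @ rho).real
        s = psd_sqrt(Lam)
        post = s @ rho @ s / p if p > 1e-12 else None
        results.append((p, post))
    return results

# Measure |0><0| with the POVM {|+><+|, |-><-|}.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
ketm = np.array([1.0, -1.0]) / np.sqrt(2)
rho = np.outer(ket0, ket0)
povm = [np.outer(ketp, ketp), np.outer(ketm, ketm)]

for p, post in measure(rho, povm):
    print(p)    # each outcome occurs with probability 0.5
```

Since the POVM elements here are rank-one projectors, the post-measurement states are just <math>\vert+\rangle\langle+\vert</math> and <math>\vert-\rangle\langle-\vert</math>.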
Quantum typicality
The reader can find a good review of this topic in the article about the typical subspace.
Gentle operator lemma
The following lemma is important for our proofs. It demonstrates that a measurement that succeeds with high probability on average does not disturb the state too much on average:
Lemma: [Winter] Given an ensemble <math>\left\{ p_{X}\left( x\right) ,\rho_{x}\right\} </math> with expected density operator <math>\rho\equiv\sum_{x}p_{X}\left( x\right) \rho_{x}</math>, suppose that an operator <math>\Lambda</math> such that <math>I\geq\Lambda\geq0</math> succeeds with high probability on the state <math>\rho</math>:
- <math>
\text{Tr}\left\{ \Lambda\rho\right\} \geq1-\epsilon. </math> Then the subnormalized state <math>\sqrt{\Lambda}\rho_{x}\sqrt{\Lambda}</math> is close in expected trace distance to the original state <math>\rho_{x}</math>:
- <math>
\mathbb{E}_{X}\left\{ \left\Vert \sqrt{\Lambda}\rho_{X}\sqrt{\Lambda} -\rho_{X}\right\Vert _{1}\right\} \leq2\sqrt{\epsilon}. </math> (Note that <math>\left\Vert A\right\Vert _{1}</math> is the nuclear norm of the operator <math>A</math>, so that <math>\left\Vert A\right\Vert _{1}\equiv\text{Tr}\left\{ \sqrt{A^{\dagger}A}\right\} </math>.)
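A quick numerical check of the lemma, with a small illustrative two-state ensemble of our own choosing (a sketch, not part of the proof):

```python
import numpy as np

def trace_norm(A):
    """Nuclear (trace) norm: sum of singular values."""
    return float(np.linalg.svd(A, compute_uv=False).sum())

# An illustrative ensemble {p_X(x), rho_x} and its expected density operator.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ensemble = [(0.9, np.outer(ket0, ket0)), (0.1, np.outer(ket1, ket1))]
rho = sum(p * s for p, s in ensemble)

# An operator 0 <= Lambda <= I that succeeds with high probability on rho.
Lam = np.diag([1.0, 0.5])
eps = 1.0 - np.trace(Lam @ rho).real        # Tr{Lam rho} = 1 - eps
sqrt_lam = np.sqrt(Lam)                     # Lam is diagonal, so elementwise

lhs = sum(p * trace_norm(sqrt_lam @ s @ sqrt_lam - s) for p, s in ensemble)
print(lhs, 2 * np.sqrt(eps))                # 0.05 <= ~0.447: the lemma holds
```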
The following inequality is useful for us as well. It holds for any operators <math>\rho</math>, <math>\sigma</math>, <math>\Lambda</math> such that <math>0\leq\rho,\sigma,\Lambda\leq I</math>:
- <math>
\text{Tr}\left\{ \Lambda\rho\right\} \leq\text{Tr}\left\{ \Lambda \sigma\right\} +\left\Vert \rho-\sigma\right\Vert _{1}. </math>
The quantum information-theoretic interpretation of the above inequality is that the probability of obtaining outcome <math>\Lambda</math> from a quantum measurement acting on the state <math>\rho</math> is upper bounded by the probability of obtaining outcome <math>\Lambda</math> on the state <math>\sigma</math> summed with the distinguishability of the two states <math>\rho</math> and <math>\sigma</math>.
Non-commutative union bound
Lemma: [Sen's bound] The following bound holds for a subnormalized state <math>\sigma</math> such that <math>0\leq\sigma</math> and <math>\text{Tr}\left\{ \sigma\right\} \leq1</math>, with <math>\Pi_{1}</math>, ..., <math>\Pi_{N}</math> being projectors: <math> \text{Tr}\left\{ \sigma\right\} -\text{Tr}\left\{ \Pi_{N}\cdots\Pi _{1}\ \sigma\ \Pi_{1}\cdots\Pi_{N}\right\} \leq2\sqrt{\sum_{i=1}^{N} \text{Tr}\left\{ \left( I-\Pi_{i}\right) \sigma\right\} }. </math>
We can think of Sen's bound as a "non-commutative union bound" because it is analogous to the following union bound from probability theory:
- <math>
\Pr\left\{ \left( A_{1}\cap\cdots\cap A_{N}\right) ^{c}\right\} =\Pr\left\{ A_{1}^{c}\cup\cdots\cup A_{N}^{c}\right\} \leq\sum_{i=1}^{N} \Pr\left\{ A_{i}^{c}\right\} , </math> where <math>A_{1}, \ldots, A_{N}</math> are events. The analogous bound for projector logic would be
- <math>
\text{Tr}\left\{ \left( I-\Pi_{1}\cdots\Pi_{N}\cdots\Pi_{1}\right) \rho\right\} \leq\sum_{i=1}^{N}\text{Tr}\left\{ \left( I-\Pi_{i}\right) \rho\right\} , </math> if we think of <math>\Pi_{1}\cdots\Pi_{N}</math> as a projector onto the intersection of subspaces. However, the above bound holds only if the projectors <math>\Pi_{1}</math>, ..., <math>\Pi_{N}</math> are commuting (choosing <math>\Pi_{1}=\left\vert +\right\rangle \left\langle +\right\vert </math>, <math>\Pi_{2}=\left\vert 0\right\rangle \left\langle 0\right\vert </math>, and <math>\rho=\left\vert 0\right\rangle \left\langle 0\right\vert </math> gives a counterexample). If the projectors are non-commuting, then Sen's bound is the next best thing and suffices for our purposes here.
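Both claims can be verified numerically with the stated counterexample (a quick NumPy check, illustrative only):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)

P1 = np.outer(ketp, ketp)       # |+><+|
P2 = np.outer(ket0, ket0)       # |0><0|
rho = np.outer(ket0, ket0)      # |0><0|
Id = np.eye(2)

# The "commuting" union bound fails for these non-commuting projectors.
lhs = np.trace((Id - P1 @ P2 @ P1) @ rho).real
rhs = (np.trace((Id - P1) @ rho) + np.trace((Id - P2) @ rho)).real
print(lhs, rhs)                 # 0.75 > 0.5: the bound is violated

# Sen's non-commutative union bound still holds for the same data.
sen_lhs = np.trace(rho).real - np.trace(P2 @ P1 @ rho @ P1 @ P2).real
sen_rhs = 2 * np.sqrt(rhs)
print(sen_lhs, sen_rhs)         # 0.75 <= ~1.414
```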
HSW theorem with the non-commutative union bound
We now prove the HSW theorem with Sen's non-commutative union bound. We divide up the proof into a few parts: codebook generation, POVM construction, and error analysis.
Codebook Generation. We first describe how Alice and Bob agree on a random choice of code. They have the channel <math>x\rightarrow\rho_{x}</math> and a distribution <math>p_{X}\left( x\right) </math>. They choose <math>M</math> classical sequences <math>x^{n}</math> according to the i.i.d. distribution <math>p_{X^{n}}\left( x^{n}\right) </math>. After selecting them, they label them with indices as <math>\left\{ x^{n}\left( m\right) \right\} _{m\in\left[ M\right] }</math>. This leads to the following quantum codewords:
- <math>
\rho_{x^{n}\left( m\right) }=\rho_{x_{1}\left( m\right) }\otimes \cdots\otimes\rho_{x_{n}\left( m\right) }. </math> The quantum codebook is then <math>\left\{ \rho_{x^{n}\left( m\right) }\right\} </math>. The average state of the codebook is then
- <math>
\mathbb{E}_{X^{n}}\left\{ \rho_{X^{n}}\right\} =\sum_{x^{n}}p_{X^{n}}\left( x^{n}\right) \rho_{x^{n}}=\rho^{\otimes n}, </math> where <math>\rho=\sum_{x}p_{X}\left( x\right) \rho_{x}</math>.
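The averaging step can be checked numerically for a small toy ensemble (a NumPy sketch; the binary ensemble and block length are our own illustrative choices):

```python
import itertools
import numpy as np

# A toy binary ensemble: channel x -> rho_x with prior p_X.
p = {0: 0.7, 1: 0.3}
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho_x = {0: np.outer(ket0, ket0), 1: np.outer(ketp, ketp)}
rho = sum(p[x] * rho_x[x] for x in p)      # expected density operator

n = 3
dim = 2 ** n
avg = np.zeros((dim, dim))
for xs in itertools.product([0, 1], repeat=n):
    word = rho_x[xs[0]]
    for x in xs[1:]:
        word = np.kron(word, rho_x[x])     # quantum codeword rho_{x^n}
    prob = np.prod([p[x] for x in xs])     # i.i.d. weight p_{X^n}(x^n)
    avg += prob * word

rho_n = rho
for _ in range(n - 1):
    rho_n = np.kron(rho_n, rho)            # rho tensored n times
print(np.allclose(avg, rho_n))             # True: E{rho_{X^n}} = rho^(tensor n)
```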
POVM Construction. Sen's bound from the above lemma suggests a method for Bob to decode a state that Alice transmits. Bob should first ask "Is the received state in the average typical subspace?" He can do this operationally by performing a typical subspace measurement corresponding to <math>\left\{ \Pi_{\rho,\delta} ^{n},I-\Pi_{\rho,\delta}^{n}\right\} </math>. Next, he asks in sequential order, "Is the received codeword in the <math>m^{\text{th}}</math> conditionally typical subspace?" This is in some sense equivalent to the question, "Is the received codeword the <math>m^{\text{th}}</math> transmitted codeword?" He can ask these questions operationally by performing the measurements corresponding to the conditionally typical projectors <math>\left\{ \Pi_{\rho_{x^{n}\left( m\right) },\delta},I-\Pi_{\rho_{x^{n}\left( m\right) },\delta}\right\} </math>.
Why should this sequential decoding scheme work well? The reason is that the transmitted codeword lies in the typical subspace on average:
- <math>
\mathbb{E}_{X^{n}}\left\{ \text{Tr}\left\{ \Pi_{\rho,\delta}\ \rho_{X^{n} }\right\} \right\} =\text{Tr}\left\{ \Pi_{\rho,\delta}\ \mathbb{E} _{X^{n}}\left\{ \rho_{X^{n}}\right\} \right\} </math>
- <math> =\text{Tr}\left\{ \Pi_{\rho,\delta}\ \rho^{\otimes n}\right\} </math>
- <math> \geq1-\epsilon,</math>
where the inequality follows from the unit probability property of the typical subspace (the state <math>\rho^{\otimes n}</math> lies in its typical subspace with probability at least <math>1-\epsilon</math>). Also, the projectors <math>\Pi_{\rho_{x^{n}\left( m\right) },\delta}</math> are "good detectors" for the states <math>\rho_{x^{n}\left( m\right) }</math> (on average) because the following condition holds from conditional quantum typicality:
- <math>
\mathbb{E}_{X^{n}}\left\{ \text{Tr}\left\{ \Pi_{\rho_{X^{n}},\delta} \ \rho_{X^{n}}\right\} \right\} \geq1-\epsilon. </math>
Error Analysis. The probability of detecting the <math>m^{\text{th}}</math> codeword correctly under our sequential decoding scheme is equal to
- <math>
\text{Tr}\left\{ \Pi_{\rho_{x^{n}\left( m\right) },\delta}\hat{\Pi} _{\rho_{x^{n}\left( m-1\right) },\delta}\cdots\hat{\Pi}_{\rho_{x^{n}\left( 1\right) },\delta}\ \Pi_{\rho,\delta}^{n}\ \rho_{x^{n}\left( m\right) }\ \Pi_{\rho,\delta}^{n}\ \hat{\Pi}_{\rho_{x^{n}\left( 1\right) },\delta }\cdots\hat{\Pi}_{\rho_{x^{n}\left( m-1\right) },\delta}\Pi_{\rho _{x^{n}\left( m\right) },\delta}\right\} , </math> where we make the abbreviation <math>\hat{\Pi}\equiv I-\Pi</math>. (Observe that we project into the average typical subspace just once.) Thus, the probability of an incorrect detection for the <math>m^{\text{th}}</math> codeword is given by
- <math>
1-\text{Tr}\left\{ \Pi_{\rho_{x^{n}\left( m\right) },\delta}\hat{\Pi} _{\rho_{x^{n}\left( m-1\right) },\delta}\cdots\hat{\Pi}_{\rho_{x^{n}\left( 1\right) },\delta}\ \Pi_{\rho,\delta}^{n}\ \rho_{x^{n}\left( m\right) }\ \Pi_{\rho,\delta}^{n}\ \hat{\Pi}_{\rho_{x^{n}\left( 1\right) },\delta }\cdots\hat{\Pi}_{\rho_{x^{n}\left( m-1\right) },\delta}\Pi_{\rho _{x^{n}\left( m\right) },\delta}\right\} , </math> and the average error probability of this scheme is equal to
- <math>
1-\frac{1}{M}\sum_{m}\text{Tr}\left\{ \Pi_{\rho_{x^{n}\left( m\right) },\delta}\hat{\Pi}_{\rho_{x^{n}\left( m-1\right) },\delta}\cdots\hat{\Pi }_{\rho_{x^{n}\left( 1\right) },\delta}\ \Pi_{\rho,\delta}^{n}\ \rho _{x^{n}\left( m\right) }\ \Pi_{\rho,\delta}^{n}\ \hat{\Pi}_{\rho _{x^{n}\left( 1\right) },\delta}\cdots\hat{\Pi}_{\rho_{x^{n}\left( m-1\right) },\delta}\Pi_{\rho_{x^{n}\left( m\right) },\delta}\right\} . </math> Instead of analyzing the average error probability, we analyze the expectation of the average error probability, where the expectation is with respect to the random choice of code:
- <math>
1-\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \Pi _{\rho_{X^{n}\left( m\right) },\delta}\hat{\Pi}_{\rho_{X^{n}\left( m-1\right) },\delta}\cdots\hat{\Pi}_{\rho_{X^{n}\left( 1\right) },\delta }\ \Pi_{\rho,\delta}^{n}\ \rho_{X^{n}\left( m\right) }\ \Pi_{\rho,\delta }^{n}\ \hat{\Pi}_{\rho_{X^{n}\left( 1\right) },\delta}\cdots\hat{\Pi} _{\rho_{X^{n}\left( m-1\right) },\delta}\Pi_{\rho_{X^{n}\left( m\right) },\delta}\right\} \right\} . </math> Our first step is to apply Sen's bound to the above quantity. But before doing so, we should rewrite the above expression just slightly, by observing that
- <math>
1 =\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \rho_{X^{n}\left( m\right) }\right\} \right\} </math>
- <math> =\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \Pi
_{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\right\} +\text{Tr}\left\{ \hat{\Pi}_{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\right\} \right\} </math>
- <math> =\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \Pi
_{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta}^{n}\right\} \right\} +\frac{1}{M}\sum_{m}\text{Tr}\left\{ \hat{\Pi}_{\rho,\delta} ^{n}\mathbb{E}_{X^{n}}\left\{ \rho_{X^{n}\left( m\right) }\right\} \right\} </math>
- <math> =\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \Pi
_{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta}^{n}\right\} \right\} +\text{Tr}\left\{ \hat{\Pi}_{\rho,\delta}^{n}\rho^{\otimes n}\right\} </math>
- <math> \leq\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{
\Pi_{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta} ^{n}\right\} \right\} +\epsilon </math> Substituting into the expression above for the expectation of the average error probability (and forgetting about the small <math>\epsilon</math> term for now) gives an upper bound of
- <math>
\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \Pi _{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta}^{n}\right\} \right\} </math>
- <math>
-\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \Pi _{\rho_{X^{n}\left( m\right) },\delta}\hat{\Pi}_{\rho_{X^{n}\left( m-1\right) },\delta}\cdots\hat{\Pi}_{\rho_{X^{n}\left( 1\right) },\delta }\ \Pi_{\rho,\delta}^{n}\ \rho_{X^{n}\left( m\right) }\ \Pi_{\rho,\delta }^{n}\ \hat{\Pi}_{\rho_{X^{n}\left( 1\right) },\delta}\cdots\hat{\Pi} _{\rho_{X^{n}\left( m-1\right) },\delta}\Pi_{\rho_{X^{n}\left( m\right) },\delta}\right\} \right\} . </math> We then apply Sen's bound to this expression with <math>\sigma=\Pi_{\rho,\delta }^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta}^{n}</math> and the sequential projectors as <math>\Pi_{\rho_{X^{n}\left( m\right) },\delta}</math>, <math>\hat{\Pi} _{\rho_{X^{n}\left( m-1\right) },\delta}</math>, ..., <math>\hat{\Pi}_{\rho _{X^{n}\left( 1\right) },\delta}</math>. This gives the upper bound <math> \mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}2\left[ \text{Tr}\left\{ \left( I-\Pi_{\rho_{X^{n}\left( m\right) },\delta}\right) \Pi_{\rho ,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta}^{n}\right\} +\sum_{i=1}^{m-1}\text{Tr}\left\{ \Pi_{\rho_{X^{n}\left( i\right) },\delta }\Pi_{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta} ^{n}\right\} \right] ^{1/2}\right\} . </math> Due to concavity of the square root, we can bound this expression from above by
- <math>
2\left[ \mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{
\left( I-\Pi_{\rho_{X^{n}\left( m\right) },\delta}\right) \Pi_{\rho ,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta}^{n}\right\} +\sum_{i=1}^{m-1}\text{Tr}\left\{ \Pi_{\rho_{X^{n}\left( i\right) },\delta }\Pi_{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta} ^{n}\right\} \right\} \right] ^{1/2}</math>
- <math> \leq2\left[ \mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{
\left( I-\Pi_{\rho_{X^{n}\left( m\right) },\delta}\right) \Pi_{\rho ,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta}^{n}\right\} +\sum_{i\neq m}\text{Tr}\left\{ \Pi_{\rho_{X^{n}\left( i\right) },\delta }\Pi_{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta} ^{n}\right\} \right\} \right] ^{1/2}, </math> where the second bound follows by summing over all of the codewords not equal to the <math>m^{\text{th}}</math> codeword (this sum can only be larger).
We now focus exclusively on showing that the term inside the square root can be made small. Consider the first term:
- <math>
\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \left(
I-\Pi_{\rho_{X^{n}\left( m\right) },\delta}\right) \Pi_{\rho,\delta} ^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta}^{n}\right\} \right\} </math>
- <math> \leq\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \left(
I-\Pi_{\rho_{X^{n}\left( m\right) },\delta}\right) \rho_{X^{n}\left( m\right) }\right\} +\left\Vert \rho_{X^{n}\left( m\right) }-\Pi _{\rho,\delta}^{n}\rho_{X^{n}\left( m\right) }\Pi_{\rho,\delta} ^{n}\right\Vert _{1}\right\} </math>
- <math> \leq\epsilon+2\sqrt{\epsilon},
</math> where the first inequality follows from the operator inequality <math>\text{Tr}\left\{ \Lambda\rho\right\} \leq\text{Tr}\left\{ \Lambda \sigma\right\} +\left\Vert \rho-\sigma\right\Vert _{1}</math> given earlier, and the second inequality follows from the gentle operator lemma and the properties of unconditional and conditional typicality. Consider now the second term and the following chain of inequalities:
- <math>
\sum_{i\neq m}\mathbb{E}_{X^{n}}\left\{ \text{Tr}\left\{ \Pi_{\rho
_{X^{n}\left( i\right) },\delta}\ \Pi_{\rho,\delta}^{n}\ \rho_{X^{n}\left( m\right) }\ \Pi_{\rho,\delta}^{n}\right\} \right\} </math>
- <math> =\sum_{i\neq m}\text{Tr}\left\{ \mathbb{E}_{X^{n}}\left\{ \Pi
_{\rho_{X^{n}\left( i\right) },\delta}\right\} \ \Pi_{\rho,\delta} ^{n}\ \mathbb{E}_{X^{n}}\left\{ \rho_{X^{n}\left( m\right) }\right\} \ \Pi_{\rho,\delta}^{n}\right\} </math>
- <math> =\sum_{i\neq m}\text{Tr}\left\{ \mathbb{E}_{X^{n}}\left\{ \Pi
_{\rho_{X^{n}\left( i\right) },\delta}\right\} \ \Pi_{\rho,\delta} ^{n}\ \rho^{\otimes n}\ \Pi_{\rho,\delta}^{n}\right\} </math>
- <math> \leq\sum_{i\neq m}2^{-n\left[ H\left( B\right) -\delta\right]
}\ \text{Tr}\left\{ \mathbb{E}_{X^{n}}\left\{ \Pi_{\rho_{X^{n}\left( i\right) },\delta}\right\} \ \Pi_{\rho,\delta}^{n}\right\} </math> The first equality follows because the codewords <math>X^{n}\left( m\right) </math> and <math>X^{n}\left( i\right) </math> are selected independently for <math>i\neq m</math>. The second equality follows because the expected codeword state is <math>\rho^{\otimes n}</math>. The first inequality follows from the equipartition property of the typical projector: <math>\Pi_{\rho,\delta}^{n}\ \rho^{\otimes n}\ \Pi_{\rho,\delta}^{n}\leq2^{-n\left[ H\left( B\right) -\delta\right] }\Pi_{\rho,\delta}^{n}</math>. Continuing, we have
- <math>
\leq\sum_{i\neq m}2^{-n\left[ H\left( B\right) -\delta\right]
}\ \mathbb{E}_{X^{n}}\left\{ \text{Tr}\left\{ \Pi_{\rho_{X^{n}\left( i\right) },\delta}\right\} \right\} </math>
- <math> \leq\sum_{i\neq m}2^{-n\left[ H\left( B\right) -\delta\right]
}\ 2^{n\left[ H\left( B|X\right) +\delta\right] }</math>
- <math> =\sum_{i\neq m}2^{-n\left[ I\left( X;B\right) -2\delta\right] }</math>
- <math> \leq M\ 2^{-n\left[ I\left( X;B\right) -2\delta\right] }.
</math> The first inequality follows from <math>\Pi_{\rho,\delta}^{n}\leq I</math> and exchanging the trace with the expectation. The second inequality follows from the rank bound on the conditionally typical projector: <math>\text{Tr}\left\{ \Pi_{\rho_{x^{n}},\delta}\right\} \leq2^{n\left[ H\left( B|X\right) +\delta\right] }</math>. The next two are straightforward.
Putting everything together, we get our final bound on the expectation of the average error probability:
- <math>
1-\mathbb{E}_{X^{n}}\left\{ \frac{1}{M}\sum_{m}\text{Tr}\left\{ \Pi _{\rho_{X^{n}\left( m\right) },\delta}\hat{\Pi}_{\rho_{X^{n}\left( m-1\right) },\delta}\cdots\hat{\Pi}_{\rho_{X^{n}\left( 1\right) },\delta }\ \Pi_{\rho,\delta}^{n}\ \rho_{X^{n}\left( m\right) }\ \Pi_{\rho,\delta }^{n}\ \hat{\Pi}_{\rho_{X^{n}\left( 1\right) },\delta}\cdots\hat{\Pi} _{\rho_{X^{n}\left( m-1\right) },\delta}\Pi_{\rho_{X^{n}\left( m\right) },\delta}\right\} \right\} </math>
- <math>\leq\epsilon+2\left[ \left( \epsilon+2\sqrt{\epsilon}\right)
+M\ 2^{-n\left[ I\left( X;B\right) -2\delta\right] }\right] ^{1/2}. </math> Thus, as long as we choose <math>M=2^{n\left[ I\left( X;B\right) -3\delta \right] }</math>, there exists a code whose error probability vanishes as <math>n\rightarrow\infty</math>.
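For a concrete sense of scale, the achievable rate <math>I(X;B)</math> of a simple cq channel can be computed directly. The NumPy sketch below uses an illustrative ensemble of our own choosing (equiprobable pure states <math>\vert 0\rangle</math> and <math>\vert +\rangle</math>); for pure-state outputs the Holevo information reduces to the entropy of the average state:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Illustrative ensemble: equiprobable pure states |0> and |+>.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
probs = [0.5, 0.5]
states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
rho = sum(p * s for p, s in zip(probs, states))

# Holevo information I(X;B) = S(rho) - sum_x p(x) S(rho_x).
chi = entropy(rho) - sum(p * entropy(s) for p, s in zip(probs, states))
print(chi)   # about 0.6009 bits per channel use
```

At block length <math>n</math>, the decoder above would therefore support roughly <math>M\approx2^{0.6n}</math> codewords for this ensemble.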
See also
- Entanglement-assisted classical capacity
- Quantum capacity
- Quantum information theory
- Typical subspace