
All Questions

1 vote
1 answer
60 views

Compute the Expected Codeword Length when using a Huffman Code for a Source that Generates the German Alphabet

Lemma we have to use: Let $ E(p_1, \dots, p_n) $ be the average codeword length of a Huffman code for an alphabet with $ n $ letters and probabilities $ p_1, \dots, p_n $. Let $ p_1 $ and $ p_2 $ be ...
Sgt. Slothrop
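A quick way to sanity-check such a computation is the identity that the expected Huffman codeword length equals the sum, over all merge steps, of the merged probabilities. A minimal Python sketch, using made-up probabilities rather than real German letter frequencies:

```python
import heapq

def huffman_expected_length(probs):
    """Expected codeword length of a binary Huffman code for the given
    probabilities: every merge of two subtrees with masses p and q pushes
    both one level deeper, so it contributes (p + q) to the total."""
    heap = [(p, i) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        p, _ = heapq.heappop(heap)
        q, j = heapq.heappop(heap)
        total += p + q
        heapq.heappush(heap, (p + q, j))
    return total

# Toy probabilities (NOT real German letter frequencies):
print(huffman_expected_length([0.4, 0.2, 0.2, 0.1, 0.1]))  # 2.2 bits per symbol
```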
0 votes
0 answers
36 views

Number of samples from an urn whose fraction of black balls is $f$ needed to have probability $p$ of selecting one

I have two ways of interpreting a 'cost' for some event that occurs a fraction $f$ of the time. The first way is the coding length under the Shannon-Fano encoding, giving $-\log{f}$. The second way is ...
ludog
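For concreteness, one reading of the second interpretation (an assumption, since the excerpt is cut off) is the number of independent draws with replacement needed to see at least one black ball with probability at least $p$. A small Python sketch comparing that count with the Shannon code length $-\log_2 f$:

```python
import math

def samples_needed(f, p):
    """Smallest n with 1 - (1 - f)**n >= p: draws are assumed independent,
    with replacement, from an urn whose fraction of black balls is f."""
    return math.ceil(math.log(1 - p) / math.log(1 - f))

f = 0.1
print(samples_needed(f, p=0.5))   # 7 draws to see a black ball with prob >= 1/2
print(-math.log2(f))              # Shannon coding length -log2(f) ~ 3.32 bits
```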
2 votes
1 answer
86 views

Volume of Hamming ball w.r.t. power-law distribution on hypercube

Let $n$ be a large positive integer and let $X$ be a random element of the hypercube $\{0,1\}^n$ such that $\mathbb P(|X|= \ell) \propto (\ell + 1)^{-\beta}$ for all $\ell \in \{0,1,\ldots,n\}$. Here ...
dohmatob
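If by "volume" one means the probability mass that this law puts on a Hamming ball, then for a ball centred at the all-zeros vector the mass depends only on the radius and can be computed directly (the centring is an assumption; balls centred elsewhere need a hypergeometric sum over intersections). A short sketch:

```python
import numpy as np

def ball_mass_at_zero(n, beta, r):
    """Mass of the radius-r Hamming ball centred at the all-zeros vector,
    when P(|X| = l) is proportional to (l + 1)**(-beta):
    P(|X| <= r) = sum_{l<=r} (l+1)^-beta / sum_{l<=n} (l+1)^-beta."""
    weights = (np.arange(n + 1) + 1.0) ** (-beta)
    return weights[: r + 1].sum() / weights.sum()

print(ball_mass_at_zero(n=1000, beta=2.0, r=10))  # ~0.95: most mass at small |X|
```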
2 votes
1 answer
381 views

Random codes in coding theory [closed]

What do people mean when they say that they choose a code at random for a probabilistic proof in coding theory? What is random in the code? This is for example used in Shannon's proof for capacity ...
yosh
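For what it's worth, the phrase usually refers to drawing the whole codebook at random and analysing its average performance. A minimal sketch of the construction, assuming a binary alphabet and a uniform input distribution purely for illustration:

```python
import numpy as np

# "Choosing a code at random" in Shannon-style proofs usually means: fix a
# blocklength n and rate R, then draw each of the 2^(nR) codewords
# independently, with every symbol i.i.d. from some input distribution p(x).
rng = np.random.default_rng(0)
n, R = 10, 0.5
M = int(2 ** (n * R))                        # number of codewords
codebook = rng.integers(0, 2, size=(M, n))   # M random length-n binary codewords
print(codebook.shape)                        # (32, 10)
```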
0 votes
1 answer
374 views

Justification for uniform distribution in Channel coding theorem

The noisy-channel coding theorem, or Shannon's theorem, states that for a discrete memoryless channel any code rate below the channel capacity is achievable. However, in the standard proof that I have ...
ImbalanceDream
0 votes
1 answer
78 views

How can we choose the input distribution for the codewords?

I am now studying the channel coding theorem in information theory. The proof of the theorem in the textbook (by Cover & Thomas, Theorem 7.7.1) picks a fixed $p(x)$ for the codewords first, and ...
ImbalanceDream
2 votes
1 answer
65 views

Why does the entropy of a $(K-1)$th-order Markov process converge to the entropy of the process itself as $K\to\infty$?

I am working through a book and I just can't seem to see why the following is true: Let $\mu$ be an ergodic process with process entropy $H$. Let $\epsilon>0$. Let $H_{K-1}=H\left(X_{K} \mid X_{1}^{K-1}\right)$ ...
J.R.
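The step that is usually missing is the Cesàro-mean argument. A sketch in the notation of the excerpt, assuming the process is stationary and that the process entropy is defined as $\lim_K \frac{1}{K}H(X_1^K)$, as in Cover & Thomas:

```latex
% H_{K-1} = H(X_K | X_1^{K-1}) is nonincreasing in K: conditioning reduces
% entropy and stationarity gives
\[
  H\bigl(X_{K+1} \mid X_1^{K}\bigr) \;\le\; H\bigl(X_{K+1} \mid X_2^{K}\bigr)
  \;=\; H\bigl(X_{K} \mid X_1^{K-1}\bigr),
\]
% so the sequence has a limit.  By the chain rule and the Cesaro-mean theorem,
% that limit equals the process entropy:
\[
  H \;=\; \lim_{K\to\infty}\frac{1}{K}\,H\bigl(X_1^{K}\bigr)
    \;=\; \lim_{K\to\infty}\frac{1}{K}\sum_{k=1}^{K} H\bigl(X_k \mid X_1^{k-1}\bigr)
    \;=\; \lim_{K\to\infty} H\bigl(X_K \mid X_1^{K-1}\bigr).
\]
```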
1 vote
1 answer
94 views

Pairwise codeword error probability on a binary symmetric channel (BSC)

I have a question regarding pairwise codeword error probability in coding theory. This coding theory class is offered for electrical engineering majors, and I'm self-studying it through the lecture ...
Joshua Woo
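Whatever the precise statement in the lecture notes, the usual setup assumes two codewords at Hamming distance $d$, a BSC with crossover probability $p \le 1/2$, and ML decoding; the pairwise error probability then depends only on the $d$ positions where the codewords differ. A sketch of the standard bound:

```latex
% An error requires at least half of the d differing positions to be flipped:
\[
  P(c \to c') \;\le\; \sum_{i \ge \lceil d/2 \rceil} \binom{d}{i}\, p^{i}(1-p)^{d-i}
              \;\le\; \bigl(2\sqrt{p(1-p)}\bigr)^{d}.
\]
% The first expression is exact for odd d; for even d the boundary term
% i = d/2 is a tie and contributes only with weight 1/2.  The last inequality
% is the Bhattacharyya bound: p^i (1-p)^{d-i} <= (p(1-p))^{d/2} for i >= d/2
% and p <= 1/2, together with sum_i binom(d,i) = 2^d.
```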
2 votes
1 answer
236 views

Proof that a random linear code is "good"

As we know, the Hadamard code is an extremely useful code in computer science and maths. We want to say something interesting about random linear codes. Let $A \in \{0,1\}^{n\times k}$ be a random matrix. Let's ...
yellowcard123
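A small Python sketch of the object in question: generate the code $\{Ax \bmod 2\}$ for a uniformly random $A$ and compute its minimum distance by enumeration (parameters chosen arbitrarily for illustration):

```python
import numpy as np
from itertools import product

# Draw A in {0,1}^(n x k) uniformly at random and take the code to be
# {A x mod 2 : x in {0,1}^k}. For small k we can enumerate all codewords and
# compute the minimum distance, which for a linear code equals the minimum
# nonzero Hamming weight.
rng = np.random.default_rng(1)
n, k = 20, 5
A = rng.integers(0, 2, size=(n, k))
codewords = [(A @ np.array(x)) % 2 for x in product((0, 1), repeat=k)]
d_min = min(int(c.sum()) for c in codewords if c.any())
print(d_min)   # minimum distance of this particular random code
```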
1 vote
1 answer
120 views

Can someone explain this theorem about the probability of correcting a codeword during standard array decoding?

Hello, I'm working on a presentation about linear codes and I don't really understand this theorem. Can someone give more detail on it, please? The theorem is from a book called "A first course in ...
shekh
4 votes
3 answers
190 views

Maximum number of $\pm 1$ valued vectors with pairwise negative inner product

Let $S$ be a subset of $\{\pm 1\}^n$ such that $\forall x,y\in S$ ($x\neq y$), $x\cdot y<0$. Determine the upper bound of $|S|$ as precisely as possible. (Thanks to the example from @kodlu, the ...
Drinzjeng Triang
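For building intuition, small cases can be settled by brute force. A sketch (exhaustive search, so only feasible for tiny $n$; it is an exploration aid, not a proof of the general bound):

```python
from itertools import combinations, product

def max_negative_family(n):
    """Largest |S|, S a subset of {+1,-1}^n with x.y < 0 for all distinct x, y,
    found by exhaustive search."""
    vecs = list(product((1, -1), repeat=n))
    best = 1
    for size in range(2, len(vecs) + 1):
        hit = False
        for S in combinations(vecs, size):
            if all(sum(a * b for a, b in zip(x, y)) < 0
                   for x, y in combinations(S, 2)):
                best, hit = size, True
                break
        if not hit:          # sizes are monotone, so we can stop here
            break
    return best

for n in (2, 3, 4):
    print(n, max_negative_family(n))   # prints: 2 2, 3 4, 4 2
```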
1 vote
1 answer
64 views

Rewriting channel transition probability as equations.

I read in this paper (Equations 2 & 3) that for a given channel $p_{Y|Z}(y|z)$, we can write $Y$ as a function of $Z$ and another random variable $W$. For example, for additive white Gaussian ...
Resu
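For a channel with finitely many outputs, one concrete version of such a representation (an illustration, not necessarily the construction used in the paper) is to invert the conditional CDF against an independent uniform $W$; the AWGN case the excerpt mentions reduces to $Y = Z + W$ with $W \sim \mathcal N(0,\sigma^2)$ independent of $Z$. A sketch:

```python
import numpy as np

# Draw W ~ Uniform(0,1) independent of Z and set Y = f(Z, W) = the smallest y
# whose conditional CDF value reaches W (inverse-CDF construction).
def channel_as_function(P):            # P[z, y] = p(y | z); rows sum to 1
    cdf = np.cumsum(P, axis=1)
    return lambda z, w: int(np.searchsorted(cdf[z], w))

# A BSC with crossover 0.1 written as Y = f(Z, W):
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
f = channel_as_function(P)
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=100_000)
y = np.array([f(zi, wi) for zi, wi in zip(z, rng.random(100_000))])
print(np.mean(y != z))                 # ~0.1, the crossover probability
```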
0 votes
1 answer
348 views

Show nearest-neighbor decoding is the same as maximum likelihood decoding in the following case

I don't see what's wrong with my solution. The question is as below: Show that for every code $C$ over an alphabet of size $q$, nearest-neighbor decoding is the same as maximum likelihood ...
Stack_Underflow
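Assuming the intended channel is the $q$-ary symmetric channel with symbol-error probability $p < (q-1)/q$ (the usual setting for this exercise; the excerpt does not state it), the key computation is:

```latex
% Each wrong symbol is equally likely, so the likelihood of receiving y when
% c was sent depends on y only through the Hamming distance d(y,c):
\[
  P(y \mid c) \;=\; \Bigl(\frac{p}{q-1}\Bigr)^{d(y,c)} (1-p)^{\,n-d(y,c)}
             \;=\; (1-p)^{n}\Bigl(\frac{p}{(q-1)(1-p)}\Bigr)^{d(y,c)},
\]
% and p < (q-1)/q is exactly the condition p/((q-1)(1-p)) < 1, so the
% likelihood is strictly decreasing in d(y,c): maximizing P(y|c) over c in C
% is the same as minimizing the Hamming distance d(y,c).
```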
2 votes
2 answers
161 views

The entropy of the quantization symbols and the smallest number of bits required to represent the source symbols

Consider $N_s$ source symbols $v$ with PDF given as $p(v)= e^{-v}I_{[0,\infty)}(v)$. The $k$-bit quantizer $Q$ maps the $N_s$ source symbols $v$ to symbols $s \in \{0,\ldots, 2^k-1\}$. The ...
BL Lov
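The excerpt does not say how $Q$ partitions $[0,\infty)$, so purely as an illustration here is the entropy $H(S)$ for an assumed uniform quantizer with step $\Delta$ whose last cell absorbs the tail; for $p(v)=e^{-v}$ the cell probabilities have a closed form:

```python
import numpy as np

def quantizer_entropy(k, delta):
    """Entropy H(S) in bits for a uniform quantizer with step delta applied to
    an Exp(1) source p(v) = e^{-v} on [0, inf): cell s = [s*delta, (s+1)*delta)
    for s < 2^k - 1, and the last cell absorbs the whole tail."""
    s = np.arange(2 ** k)
    p = np.exp(-s * delta) - np.exp(-(s + 1) * delta)   # interior cell masses
    p[-1] = np.exp(-s[-1] * delta)                      # last cell takes the tail
    return -np.sum(p * np.log2(p))

print(quantizer_entropy(k=3, delta=0.5))   # ~2.4 bits: less than the raw 3 bits
```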
0 votes
1 answer
400 views

Binary Symmetric Channels: Crossover probability and reliability

I am having difficulty understanding how it is possible to create, given one binary symmetric channel (BSC) with crossover probability $p<\frac{1}{2}$, an equivalent BSC with crossover ...
Louie_the_unsolver
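One standard construction, which may be what the truncated question is about: the receiver deterministically flips every received bit, turning a BSC with crossover $p$ into one with crossover $1-p$; since the flip is invertible, both channels have the same capacity $1 - H(p)$. A small simulation sketch:

```python
import numpy as np

# Simulate a BSC(p) and the channel obtained by flipping every output bit.
rng = np.random.default_rng(0)
p, n = 0.1, 200_000
x = rng.integers(0, 2, size=n)
y = x ^ (rng.random(n) < p)        # output of the original BSC(p)
y_flipped = 1 - y                  # deterministically flip every output bit
print(np.mean(y != x))             # ~0.1
print(np.mean(y_flipped != x))     # ~0.9, i.e. a BSC with crossover 1 - p
```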
