Error detection is one of the fundamental technologies in communications, and such algorithms appear in many standards. Their purpose: when data is transmitted, how do we judge whether it arrived correctly, and if not, should we ask the sender to retransmit? In the past, every packet was simply transmitted several times regardless of correctness, on the assumption that at least one copy would probably arrive intact, and the application layer then made the judgment. With an error detection algorithm, however, we can tell immediately whether the data was corrupted. The advantage of error detection is that it works at very low cost (overhead), far lower than that of error correction. For this reason, error detection algorithms are generally used on relatively stable channels. The algorithms in common use today are CRC, parity check and Hamming check.
Following industry convention, a new algorithm is usually named after its inventor, so this one is called the Jielin code, just like the Shannon-Fano code, the Huffman code and the Reed-Solomon (RS) code.
The error detection algorithm of the Jielin code is extremely simple. Specifically, there are two steps:
1. Add a rule to the sequence, for example "insert a 0 after every symbol 1" in a binary sequence. There are many possible rules; each inserts a different number of symbols and yields a different probability coefficient r. Here is a formula:
If a symbol is inserted after every run of c consecutive symbols 1 in an equiprobable binary sequence, the actual ratio (probability) of inserted symbols is:
p(c) = 1/(2^(c+1) - 2)
Interested readers may wish to verify this formula against pseudo-random numbers. When c = 1, p(c) = 1/2; when c = 2, p(c) = 1/6. Here is a bonus: whether a sequence has reached the maximally disordered (equal-entropy) state after coding can be judged with the above formula (this method has also been patented).
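The formula can indeed be checked empirically. The sketch below assumes one reading of the rule, namely that a marker symbol is inserted after every non-overlapping run of c consecutive 1s (the run counter restarting after each insertion); the function name `insertion_ratio` is my own, not from the text.

```python
import random

def insertion_ratio(c, n_bits=1_000_000, seed=42):
    """Estimate the fraction of marker symbols inserted when a marker is
    added after every non-overlapping run of c consecutive 1s in an
    equiprobable random bit sequence."""
    rng = random.Random(seed)
    run = 0        # length of the current run of 1s
    inserted = 0   # number of marker symbols inserted
    for _ in range(n_bits):
        bit = rng.getrandbits(1)
        run = run + 1 if bit == 1 else 0
        if run == c:       # a complete run of c ones: insert and restart
            inserted += 1
            run = 0
    return inserted / n_bits

# Compare the simulation with the formula p(c) = 1/(2^(c+1) - 2)
for c in (1, 2, 3):
    print(c, insertion_ratio(c), 1 / (2 ** (c + 1) - 2))
```

For c = 1 the estimate comes out near 1/2 and for c = 2 near 1/6, matching the values quoted above.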
2. Encode based on a non-normalized probability model. This kind of coding not only largely absorbs the inserted symbols (the bit length after coding is very close to the length before insertion), but also preserves the artificially added rule through decoding. Both theory and practice show that when the coded result is equal in length to the original sequence, the added rule becomes ineffective.
Error detection basis:
When decoding the sequence in order, if two symbols 1 are decoded consecutively, the data was corrupted in transmission.
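The detection rule can be sketched for the simplest case, c = 1. The entropy-coding stage of step 2 is omitted here; this only shows the insertion rule itself and how its violation reveals a transmission error. The function names are illustrative, not from the original.

```python
def encode(bits):
    """Step 1 with c = 1: insert a 0 after every symbol 1."""
    out = []
    for b in bits:
        out.append(b)
        if b == 1:
            out.append(0)   # inserted marker symbol
    return out

def decode(bits):
    """Remove the marker 0 after each 1. In an uncorrupted stream the rule
    makes '11' impossible, so two consecutive 1s signal an error."""
    out = []
    i = 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == 1:
            if i + 1 < len(bits) and bits[i + 1] == 1:
                raise ValueError("transmission error: two consecutive 1s")
            i += 2          # skip the inserted 0
        else:
            i += 1
    return out

data = [1, 0, 1, 1, 0]
coded = encode(data)        # [1, 0, 0, 1, 0, 1, 0, 0]
assert decode(coded) == data

corrupted = coded[:]
corrupted[1] = 1            # a single bit flip produces "11"
```

Decoding `corrupted` raises the error, which is exactly the detection basis stated above.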
Advantages:
1. The code rate can be made arbitrarily close to 1 (any code rate in (0, 1) is supported), which greatly reduces the overhead of communication error detection;
2. There is no code-length limit: the minimum supported length is 1 bit and the maximum is unbounded;
3. When the symbol probabilities in the binary sequence are unequal, lossless compression is achieved as well; that is, the algorithm has the dual functions of error detection and compression, and the compression function can of course be disabled.
4. It is proved theoretically that as the code length tends to infinity, the error detection probability approaches 1, i.e. 100% error detection can be realized. Taking a 32-bit decode as an example, the probability of misjudgment is 0.164126, and the longer the bit length, the smaller the probability of misjudgment.
5. The logic is extremely simple; only a single int-sized cache is needed.
6. Without increasing the amount of code, functions can be combined freely: encryption with error detection, compression with error detection, compression with encryption, or compression, encryption and error detection together.
7. The intellectual property is protected; all patents and rights belong to Hunan Ruilide Information Technology Co., Ltd.
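One simple way to reason about the misjudgment claim in advantage 4 is to assume a corrupted stream behaves like uniformly random bits: the decoder then accepts it only if it contains no "11" anywhere, and the number of such n-bit strings satisfies a Fibonacci-style recurrence. This is only one possible model, stated here as an assumption; the figure quoted above may come from a different one.

```python
def accept_count(n):
    """Number of n-bit strings with no two consecutive 1s.
    Recurrence: a(1) = 2, a(2) = 3, a(n) = a(n-1) + a(n-2)."""
    a, b = 2, 3
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, a + b
    return b

# Under this model, the chance that 32 uniformly random bits pass the
# "no consecutive 1s" check is accept_count(32) / 2**32.
p_accept = accept_count(32) / 2 ** 32
```

As the recurrence grows much more slowly than 2^n, the acceptance probability shrinks rapidly with length, consistent with the claim that longer blocks are misjudged less often.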
Some experts may say that any other entropy coder can do the same. Then by all means try! I believe you will not appreciate the originality of this theory until you have tried. The precondition is that the coding result must be as close to the entropy limit as possible while still preserving the added rule.
Likewise, any expert is welcome to check whether similar theories or methods exist for any of the advantages above!