Project Nayuki


Reed–Solomon error-correcting code decoder

Introduction

Reed–Solomon codes allow an arbitrary message to be expanded with redundant information, transmitted over a noisy channel, and decoded such that if the received message has fewer errors than a predefined number, then the original message can be recovered perfectly. This makes RS codes useful for protecting information integrity on noisy media such as radio waves, telephone lines, magnetic disks, flash memory, etc.; all of the data can be reconstructed even if there are a few errors.

On this page I present math and code to implement both Reed–Solomon ECC encoding and decoding. Although the code is not terribly long, the math behind it is not obvious, so all of the major derivations will be explained here to justify the code. The math prerequisites are elementary algebra, polynomial arithmetic, linear algebra, and finite field arithmetic.


Preliminaries

  1. The Reed–Solomon procedures take place within the framework of a user-chosen field F. The field is usually GF(2^8) for convenient byte-oriented processing on computers, but it could instead be GF(2^4), GF(2^12), Z_73, etc. We need a primitive element / generator α for the field. The generator must be such that the values {α^0, α^1, α^2, ..., α^(|F|−2)} are all unique and non-zero; hence the powers of α generate all the non-zero elements of the field F. (|F| is the size of the field, i.e. the total number of distinct elements/values.)

  2. We choose k to be the message length. k is an integer such that 1 ≤ k < |F|. Each message to be encoded is a sequence/block of k values from the field F. For example we might choose k=25 so that each codeword conveys 25 payload (non-redundant) values. (The case k=0 is obviously degenerate because it means there is no useful information to convey.)

  3. We choose m to be the number of error correction values by which to expand the message. m is an integer such that 1 ≤ m < |F| − k. (The case m=0 is degenerate because it means no RS ECC is added, so the entire message has no protection whatsoever.) Note that when a message is expanded with m error correction values, the expanded message (a.k.a. codeword) can tolerate up to ⌊m/2⌋ errors and still be decoded perfectly. For example, adding 6 EC values will allow us to fix any codeword with up to 3 erroneous values.

  4. We define n = k + m to be the block size / codeword length after encoding. Note that n is a positive integer that satisfies 2 ≤ n < |F|. So if we want big blocks with a lot of error-correcting capability, then we need a sufficiently large field as the foundation.
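As a concrete illustration of these choices, here is a tiny Python sketch. The specific field Z_11 (integers mod 11, rather than the more common GF(2^8)), the generator α = 2, and the parameters k = m = 4 are all assumptions made for demonstration only:

```python
# Field F = Z_11 (integers mod 11): arithmetic is just "mod P".
P = 11        # |F|, the field size
ALPHA = 2     # candidate generator of the non-zero field elements

# alpha^0 .. alpha^(|F|-2) must be unique and non-zero.
powers = [pow(ALPHA, i, P) for i in range(P - 1)]
assert len(set(powers)) == P - 1 and 0 not in powers
print(powers)   # -> [1, 2, 4, 8, 5, 10, 9, 7, 3, 6]: all 10 non-zero elements

K = 4           # message length k, with 1 <= k < |F|
M = 4           # error-correction values m, with 1 <= m < |F| - k
N = K + M       # codeword length n = k + m, so 2 <= n < |F|
assert 1 <= K < P and 1 <= M < P - K and 2 <= N < P
# m = 4 means this code can repair up to floor(m/2) = 2 erroneous values.
```

These same toy parameters are reused in the later sketches on this page.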

Systematic encoder

  1. Reed–Solomon error-correcting codes come in a number of flavors, of equivalent error-correcting power but different pragmatic handling. The variant that we use is the BCH view with systematic encoding, which means that the original message is treated as a sequence of coefficients for a polynomial, and the codeword after encoding is equal to the message with some error-correcting data appended to it.

  2. Define the generator polynomial based on m and α:

    g(x) = ∏_{i=0}^{m−1} (x − α^i) = (x − α^0)(x − α^1)⋯(x − α^(m−1)).

    This polynomial has degree m, and its leading coefficient is equal to 1.

  3. Suppose the original message is the sequence of k values (M_0, M_1, ..., M_(k−1)), where each M_i is an element of field F. Define the original message polynomial by simply using the values as monomial coefficients:

    M(x) = ∑_{i=0}^{k−1} M_i x^i = M_0 x^0 + M_1 x^1 + ⋯ + M_(k−1) x^(k−1).

  4. Define and calculate the Reed–Solomon codeword being sent as the message shifted up minus the message polynomial modulo the generator polynomial:

    s(x) = M(x)·x^m − [(M(x)·x^m) mod g(x)].

    Note that the remainder polynomial [(M(x)·x^m) mod g(x)] has a degree of m−1 or less, so the monomial terms of the remainder don’t interact with the terms of M(x)·x^m. Overall, s(x) has degree n−1 or less.

    By construction, the sent codeword polynomial s(x) has the property that s(x) ≡ 0 mod g(x); this will be useful shortly in the decoder.

  5. We encode s(x) into a sequence of values in the straightforward way by breaking it up into n monomial coefficients:

    s(x) = ∑_{i=0}^{n−1} s_i x^i = s_0 x^0 + s_1 x^1 + ⋯ + s_(n−1) x^(n−1).

    The codeword we transmit is simply the sequence of n values (s_0, s_1, ..., s_(n−1)), where each value s_i is an element of the field F.
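The encoding steps above can be sketched in a few lines of Python. This is not the article’s downloadable implementation; it is a minimal illustration over the small prime field Z_11 with generator α = 2 (illustrative assumptions), storing polynomials as coefficient lists with the lowest degree first:

```python
P, ALPHA = 11, 2   # field Z_11 with generator alpha = 2 (illustrative choices)

def poly_mul(a, b):
    # Product of two polynomials over Z_P, coefficients lowest-degree first.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def poly_rem(a, b):
    # Remainder of a(x) divided by the monic polynomial b(x) over Z_P.
    a = list(a)
    for i in reversed(range(len(a) - len(b) + 1)):
        coef = a[i + len(b) - 1]
        for j in range(len(b)):
            a[i + j] = (a[i + j] - coef * b[j]) % P
    return a[:len(b) - 1]

def generator_poly(m):
    # g(x) = (x - alpha^0)(x - alpha^1) ... (x - alpha^(m-1)).
    g = [1]
    for i in range(m):
        g = poly_mul(g, [(-pow(ALPHA, i, P)) % P, 1])
    return g

def encode(msg, m):
    # s(x) = M(x)*x^m - [(M(x)*x^m) mod g(x)]; systematic, so the message
    # occupies the top k coefficients and the remainder fills the bottom m.
    shifted = [0] * m + list(msg)
    rem = poly_rem(shifted, generator_poly(m))
    return [(-c) % P for c in rem] + list(msg)

codeword = encode([1, 2, 3, 4], 4)   # k = 4 message values, m = 4 EC values
print(codeword)                      # n = 8 values; the last 4 are the message
```

Because the encoding is systematic, the original message can be read directly off the top coefficients of the codeword.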

Peterson–Gorenstein–Zierler decoder

Calculating syndromes

  1. Suppose the codeword we received is (r_0, r_1, ..., r_(n−1)), where each value is an element of the field F. This is known. We define the received codeword polynomial straightforwardly:

    r(x) = ∑_{i=0}^{n−1} r_i x^i = r_0 x^0 + r_1 x^1 + ⋯ + r_(n−1) x^(n−1).

  2. On the receiving side here, we don’t know the sent values (s_0, s_1, ..., s_(n−1)), but will go ahead and define the error values anyway. Let e_0 = r_0 − s_0, e_1 = r_1 − s_1, ..., e_(n−1) = r_(n−1) − s_(n−1). Define the error polynomial straightforwardly (again, we don’t know its value right now):

    e(x) = r(x) − s(x) = ∑_{i=0}^{n−1} e_i x^i = e_0 x^0 + e_1 x^1 + ⋯ + e_(n−1) x^(n−1).

  3. Now for some actual math: Define the m syndrome values S_i, for 0 ≤ i < m, by evaluating the received codeword polynomial at various powers of the generator:

    S_i = r(α^i) = s(α^i) + e(α^i) = 0 + e(α^i) = e(α^i) = e_0 α^(0i) + e_1 α^(1i) + ⋯ + e_(n−1) α^((n−1)i).

    This works because by construction, s(α^i) = 0 for 0 ≤ i < m, which is because s(x) is divisible by the generator polynomial g(x) = (x − α^0)(x − α^1)⋯(x − α^(m−1)). Thus we see that the syndromes only depend on the errors that were added to the sent codeword, and don’t depend at all on the value of the sent codeword or the original message.

    If all the syndrome values are zero, then the codeword is already correct, there is nothing to fix, and we are done.

  4. We can show all of these m syndrome equations explicitly:

    S_0 = e_0 α^(0×0) + e_1 α^(1×0) + ⋯ + e_(n−1) α^((n−1)×0).
    S_1 = e_0 α^(0×1) + e_1 α^(1×1) + ⋯ + e_(n−1) α^((n−1)×1).
    ⋮
    S_(m−1) = e_0 α^(0(m−1)) + e_1 α^(1(m−1)) + ⋯ + e_(n−1) α^((n−1)(m−1)).

  5. And rewrite this linear system as a matrix:

    [ e_0 α^(0×0) + e_1 α^(1×0) + ⋯ + e_(n−1) α^((n−1)×0)         ]   [ S_0     ]
    [ e_0 α^(0×1) + e_1 α^(1×1) + ⋯ + e_(n−1) α^((n−1)×1)         ] = [ S_1     ]
    [ ⋮                                                            ]   [ ⋮       ]
    [ e_0 α^(0(m−1)) + e_1 α^(1(m−1)) + ⋯ + e_(n−1) α^((n−1)(m−1)) ]   [ S_(m−1) ]

  6. And factorize the matrix:

    [ α^(0×0)    α^(1×0)    ⋯  α^((n−1)×0)    ] [ e_0     ]   [ S_0     ]
    [ α^(0×1)    α^(1×1)    ⋯  α^((n−1)×1)    ] [ e_1     ] = [ S_1     ]
    [ ⋮          ⋮          ⋱  ⋮              ] [ ⋮       ]   [ ⋮       ]
    [ α^(0(m−1)) α^(1(m−1)) ⋯  α^((n−1)(m−1)) ] [ e_(n−1) ]   [ S_(m−1) ]
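To make the syndrome property concrete, the following Python sketch (again over the illustrative field Z_11 with α = 2 and m = 4, not the article’s downloadable code) builds a valid codeword as an arbitrary multiple of g(x), corrupts it, and shows that the syndromes see only the error:

```python
P, ALPHA, M = 11, 2, 4   # illustrative field Z_11, generator 2, m = 4

def poly_mul(a, b):
    # Product of two polynomials over Z_P, coefficients lowest-degree first.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def generator_poly(m):
    # g(x) = (x - alpha^0)(x - alpha^1) ... (x - alpha^(m-1)).
    g = [1]
    for i in range(m):
        g = poly_mul(g, [(-pow(ALPHA, i, P)) % P, 1])
    return g

def syndromes(r, m):
    # S_i = r(alpha^i): evaluate the received polynomial at powers of alpha.
    return [sum(c * pow(ALPHA, i * j, P) for j, c in enumerate(r)) % P
            for i in range(m)]

s = poly_mul(generator_poly(M), [5, 0, 3, 7])   # an arbitrary multiple of g(x)
assert syndromes(s, M) == [0] * M               # valid codeword: zero syndromes

e = [0] * len(s)
e[2] = 9                                        # inject one error at index 2
r = [(a + b) % P for a, b in zip(s, e)]
print(syndromes(r, M))                          # -> [9, 3, 1, 4], same as e alone
assert syndromes(r, M) == syndromes(e, M)
```

The final assertion is exactly the statement above: the syndromes of r = s + e depend only on the error polynomial e.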

Finding error locations

  1. Choose ν (Greek lowercase nu) as the number of errors to try to find. We require 1 ≤ ν ≤ ⌊m/2⌋. Unless there are time or space constraints, it is best to set ν as large as possible to catch as many errors as the error-correcting code allows.

  2. Let’s pretend we know the ν error locations as I_0, I_1, ..., I_(ν−1). This is an orderless set of unique indexes into the received codeword of n values, so each element satisfies 0 ≤ I_i < n.

    The significance of this set/sequence of indexes I_i is that the error values at these indexes may be non-zero, but all other error values must be zero. In other words, e_(I_0), e_(I_1), ..., e_(I_(ν−1)) can each be any value (possibly zero), but the e_i values at other indexes must be zero.

  3. Define some new variables for old values, but based on the error location indexes I_i, for 0 ≤ i < ν:

    X_i = α^(I_i).
    Y_i = e_(I_i).

  4. Because we know all other e_i values are zero, we can substitute the new variables and rewrite the system of syndrome equations as follows:

    [ X_0^0     X_1^0     ⋯  X_(ν−1)^0     ] [ Y_0     ]   [ S_0     ]
    [ X_0^1     X_1^1     ⋯  X_(ν−1)^1     ] [ Y_1     ] = [ S_1     ]
    [ ⋮         ⋮         ⋱  ⋮             ] [ ⋮       ]   [ ⋮       ]
    [ X_0^(m−1) X_1^(m−1) ⋯  X_(ν−1)^(m−1) ] [ Y_(ν−1) ]   [ S_(m−1) ]

    (At this point we still don’t know any of the X_i or Y_i values. However there is a clever multi-step procedure that will reveal them.)

  5. Define the error locator polynomial based on the unknown X_i variables:

    Λ(x) = ∏_{i=0}^{ν−1} (1 − X_i x) = 1 + Λ_1 x + Λ_2 x^2 + ⋯ + Λ_ν x^ν.

    (In other words, after all the factors are multiplied and expanded, the polynomial Λ(x) has the sequence of ν+1 monomial coefficients (1, Λ_1, Λ_2, ..., Λ_ν).)

  6. By construction we know that for each 0 ≤ i < ν, we have:

    0 = Λ(X_i^(−1)) = 1 + Λ_1 X_i^(−1) + Λ_2 X_i^(−2) + ⋯ + Λ_ν X_i^(−ν).

    The polynomial is zero at these points because the product contains the factor (1 − X_i X_i^(−1)) = 1 − 1 = 0.

  7. For 0 ≤ i < ν and arbitrary j ∈ ℤ, let’s multiply all sides of the equation by Y_i X_i^(j+ν):

    (Y_i X_i^(j+ν)) · 0 = (Y_i X_i^(j+ν)) · Λ(X_i^(−1)) = (Y_i X_i^(j+ν)) (1 + Λ_1 X_i^(−1) + Λ_2 X_i^(−2) + ⋯ + Λ_ν X_i^(−ν)).
    0 = Y_i X_i^(j+ν) Λ(X_i^(−1)) = Y_i X_i^(j+ν) + Λ_1 Y_i X_i^(j+ν−1) + Λ_2 Y_i X_i^(j+ν−2) + ⋯ + Λ_ν Y_i X_i^j.

  8. Now sum this equation over our full range of i values:

    0 = ∑_{i=0}^{ν−1} Y_i X_i^(j+ν) Λ(X_i^(−1))
      = ∑_{i=0}^{ν−1} (Y_i X_i^(j+ν) + Λ_1 Y_i X_i^(j+ν−1) + Λ_2 Y_i X_i^(j+ν−2) + ⋯ + Λ_ν Y_i X_i^j)
      = (∑_{i=0}^{ν−1} Y_i X_i^(j+ν)) + Λ_1 (∑_{i=0}^{ν−1} Y_i X_i^(j+ν−1)) + Λ_2 (∑_{i=0}^{ν−1} Y_i X_i^(j+ν−2)) + ⋯ + Λ_ν (∑_{i=0}^{ν−1} Y_i X_i^j)
      = S_(j+ν) + Λ_1 S_(j+ν−1) + Λ_2 S_(j+ν−2) + ⋯ + Λ_ν S_j.

  9. By rearranging terms, this implies the following, which is valid for 0 ≤ j < ν:

    Λ_ν S_j + Λ_(ν−1) S_(j+1) + ⋯ + Λ_1 S_(j+ν−1) = −S_(j+ν).

  10. Substituting all valid values of j, we can form a system of ν linear equations:

    Λ_ν S_0 + Λ_(ν−1) S_1 + ⋯ + Λ_1 S_(ν−1) = −S_ν.
    Λ_ν S_1 + Λ_(ν−1) S_2 + ⋯ + Λ_1 S_ν = −S_(ν+1).
    ⋮
    Λ_ν S_(ν−1) + Λ_(ν−1) S_ν + ⋯ + Λ_1 S_(2ν−2) = −S_(2ν−1).

  11. We can rewrite and factorize this system into matrices and vectors:

    [ S_0     S_1 ⋯ S_(ν−1)  ] [ Λ_ν     ]   [ −S_ν      ]
    [ S_1     S_2 ⋯ S_ν      ] [ Λ_(ν−1) ] = [ −S_(ν+1)  ]
    [ ⋮       ⋮   ⋱ ⋮        ] [ ⋮       ]   [ ⋮         ]
    [ S_(ν−1) S_ν ⋯ S_(2ν−2) ] [ Λ_1     ]   [ −S_(2ν−1) ]

  12. Now we finally have something that can be solved. Put the above coefficients into a ν×(ν+1) augmented matrix and run it through Gauss–Jordan elimination. If the system of linear equations is inconsistent, then there are too many errors in the codeword and they cannot be repaired at the moment. If ν is not at the maximum possible value, then it might be possible to fix the codeword by re-running the procedure with a larger ν. Otherwise the codeword cannot be fixed in any way at all.

    We need to take special care if the linear system is consistent but under-determined. This happens when the codeword contains fewer than ν errors, because any of the locations where the error value is zero could be selected as a virtual “error”. Each different set of error location indexes {I_i | 0 ≤ i < ν} (unknown right now) would produce a different error locator polynomial Λ(x). For example if the set of true error locations is {2,5} and we want to find exactly 3 error locations, then {0,2,5}, {2,4,5}, etc. are all equally valid solutions.

    The key insight is that we only need some particular solution to the linear system – we don’t care about parametrizing the space of all solutions. One approach is to treat all the free variables as zero. When we scan each row of the matrix in reduced row echelon form (RREF), we look at the column of the leftmost non-zero coefficient (the pivot). If that column is the rightmost one (the augmented column), then the linear system is inconsistent. Otherwise the i-th column corresponds to the i-th variable in the linear system, which corresponds to the coefficient Λ_(ν−i). By setting the free variables (the columns without pivots) to zero, we don’t need to adjust the values of any other variables.

  13. Once we obtain the coefficients Λ_1, Λ_2, ..., Λ_ν, we can evaluate the polynomial Λ(x) at any point we want. Plug in the values x = α^(−i) for 0 ≤ i < n and check to see if Λ(α^(−i)) = 0. For the solutions found, put the i values into the variables I_0, I_1, etc.

    Note that we may find any number of solutions, anywhere between 0 and n inclusive. If the number of solutions found is more than ν, then it is generally impossible to recover from the errors. If the number of solutions found is less than ν, then we simply redefine ν to this lower number for the remaining part of the decoder algorithm, and delete the higher-numbered I_i variables. The solutions can be found and saved into the I_i variables in any order.

    It’s unnecessary to test values of i that are at least n, because errors cannot occur outside of valid indexes into the codeword. However it’s possible to find solutions of Λ(α^(−i)) = 0 where i ≥ n. These solutions cannot simply be ignored – they imply that the error-correcting capability has been exceeded, because we can’t derive self-consistent information about where these errors are located. Thus this situation counts as an early failure.
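The whole location-finding procedure can be sketched end to end in Python. This continues the toy setting from the earlier sketches (Z_11, α = 2, m = 4, n = 8 – illustrative assumptions, not the article’s code), injects two known errors, and assumes the linear system has a unique solution so the Gauss–Jordan step needs no free-variable handling:

```python
P, ALPHA, M, N = 11, 2, 4, 8
NU = M // 2

# Since the syndromes depend only on the error polynomial, we can work with
# the injected errors directly instead of a full received codeword.
e = [0] * N
e[2], e[5] = 9, 4

S = [sum(c * pow(ALPHA, i * j, P) for j, c in enumerate(e)) % P
     for i in range(M)]

# Augmented nu x (nu+1) matrix: row j is [S_j, ..., S_(j+nu-1) | -S_(j+nu)],
# with the unknowns ordered (Lambda_nu, ..., Lambda_1) as in the text.
mat = [[S[j + c] for c in range(NU)] + [(-S[j + NU]) % P] for j in range(NU)]

# Gauss-Jordan elimination over Z_P (this sketch assumes a unique solution).
for col in range(NU):
    piv = next(rr for rr in range(col, NU) if mat[rr][col] != 0)
    mat[col], mat[piv] = mat[piv], mat[col]
    inv = pow(mat[col][col], P - 2, P)   # inverse via Fermat's little theorem
    mat[col] = [x * inv % P for x in mat[col]]
    for rr in range(NU):
        if rr != col and mat[rr][col] != 0:
            f = mat[rr][col]
            mat[rr] = [(x - f * y) % P for x, y in zip(mat[rr], mat[col])]

# Column c holds Lambda_(nu-c); assemble Lambda = [1, Lambda_1, ..., Lambda_nu].
lam = [1] + [mat[NU - 1 - i][NU] for i in range(NU)]

def eval_poly(poly, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

# Error locations: indexes i where Lambda(alpha^(-i)) == 0.
locs = [i for i in range(N)
        if eval_poly(lam, pow(pow(ALPHA, i, P), P - 2, P)) == 0]
print(locs)   # -> [2, 5], the injected error positions
```

The root scan recovers exactly the two indexes where errors were injected.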

Calculating error values

  1. At this point ν might have been redefined as a smaller number, and we know the error location indexes I_0, I_1, ..., I_(ν−1). Because of this, we know all the values X_i = α^(I_i) for 0 ≤ i < ν.

  2. Since we know the X_i values and S_i values, we can solve one of the earlier equations for the vector of values Y_i = e_(I_i) to obtain the error values/magnitudes:

    [ X_0^0     X_1^0     ⋯  X_(ν−1)^0     ] [ Y_0     ]   [ S_0     ]
    [ X_0^1     X_1^1     ⋯  X_(ν−1)^1     ] [ Y_1     ] = [ S_1     ]
    [ ⋮         ⋮         ⋱  ⋮             ] [ ⋮       ]   [ ⋮       ]
    [ X_0^(m−1) X_1^(m−1) ⋯  X_(ν−1)^(m−1) ] [ Y_(ν−1) ]   [ S_(m−1) ]

    We simply run a Gauss–Jordan elimination algorithm here. If the linear system is consistent and independent, then a unique solution exists and we will finish successfully. Otherwise, either the system is inconsistent, so it is impossible to satisfy all the syndrome constraints, or the system is dependent/under-determined, so there is no unique solution (even though one is required); in either case, decoding fails.

  3. With the error locations I_i and error values Y_i known, we can attempt to fix the received codeword. Let the repaired codeword polynomial be r′(x) = r′_0 x^0 + r′_1 x^1 + ⋯ + r′_(n−1) x^(n−1). For 0 ≤ i < ν, we set the coefficient r′_(I_i) = r_(I_i) − Y_i = r_(I_i) − e_(I_i). For each index i where 0 ≤ i < n and i is not present in the set {I_j | 0 ≤ j < ν}, we simply copy the value r′_i = r_i (i.e. don’t change the codeword values at locations not identified as errors).

  4. By design, the repaired codeword polynomial r′(x) will have all-zero syndrome values – because if not, then the matrix-solving process would have identified that the linear system is inconsistent and has no solution. We can still recompute the syndromes as a paranoid sanity check, but otherwise we’re done.

    This decoded codeword is the best guess based on the received codeword. If the number of errors introduced into the codeword is at most ν, then the decoding process is guaranteed to succeed and yield the original message. Otherwise for a received codeword with too many errors, all outcomes are possible – the decoding process might explicitly indicate a failure (most likely), a valid message is decoded but it mismatches the original message (occasionally), or the correct message is recovered (very unlikely).
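Continuing the same toy example (Z_11, α = 2, m = 4, n = 8, errors of value 9 at index 2 and value 4 at index 5 – all illustrative assumptions), this sketch recovers the error values Y_i. For brevity it solves only the first ν rows of the Vandermonde system and then checks every remaining row for consistency, rather than running Gauss–Jordan on the full m×ν system as described above:

```python
P, ALPHA, M, N = 11, 2, 4, 8
locs = [2, 5]            # error locations found by the previous step
e = [0] * N
e[2], e[5] = 9, 4        # the true error values, used only to form syndromes

S = [sum(c * pow(ALPHA, i * j, P) for j, c in enumerate(e)) % P
     for i in range(M)]
X = [pow(ALPHA, I, P) for I in locs]   # X_i = alpha^(I_i)
nu = len(locs)

# Solve the first nu rows of the system sum_i X_i^row * Y_i = S_row
# by Gauss-Jordan over Z_P, then verify the remaining m - nu rows.
mat = [[pow(X[c], row, P) for c in range(nu)] + [S[row]] for row in range(nu)]
for col in range(nu):
    piv = next(rr for rr in range(col, nu) if mat[rr][col] != 0)
    mat[col], mat[piv] = mat[piv], mat[col]
    inv = pow(mat[col][col], P - 2, P)
    mat[col] = [x * inv % P for x in mat[col]]
    for rr in range(nu):
        if rr != col and mat[rr][col] != 0:
            f = mat[rr][col]
            mat[rr] = [(x - f * y) % P for x, y in zip(mat[rr], mat[col])]
Y = [mat[i][nu] for i in range(nu)]

# Every one of the m syndrome constraints must hold, not just the nu we used.
assert all(sum(Y[i] * pow(X[i], row, P) for i in range(nu)) % P == S[row]
           for row in range(M))
print(Y)   # -> [9, 4]: the recovered error values match the injected ones
```

The repair step is then just subtraction: set r′_(I_i) = r_(I_i) − Y_i at each found location and copy every other value unchanged.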


Notes

Time complexity

The encoder is short and simple, and runs in Θ(mk) time. It is unlikely that the encoder can be significantly improved in conciseness or speed.

The PGZ decoder algorithm described here runs in Θ(m^3 + mk) time, assuming we choose the maximum error-correcting capability of ν = ⌊m/2⌋. This cubic runtime is not ideal and can be improved to quadratic with other algorithms. The Berlekamp–Massey algorithm can find the error locator polynomial in Θ(m^2) time, and the Forney algorithm can find the error values also in Θ(m^2) time.

Alternatives to a generator

The algorithm described here uses powers of the generator α starting at 0, i.e. (α^0, α^1, ..., α^(m−1)). Some variants of Reed–Solomon ECC use powers starting at 1, i.e. (α^1, α^2, ..., α^m).

In fact, the algorithm doesn’t seem to need powers or a generator at all. It appears to work as long as we can choose m unique non-zero values in the field F. For one, this means we can apply RS ECC in infinite fields such as the rational numbers Q (for pedagogical but not practical purposes), because infinite fields never have a multiplicative generator.

Although I haven’t modified the math and code to show that it works, it should be possible to adjust them to accommodate a set of unique values instead of generator powers. Suppose we have a sequence of m unique non-zero values named (α_0, α_1, ..., α_(m−1)). Then going through the mathematical derivation, we would replace every instance of α^i with α_i, and it should probably all work out.
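As a quick sanity check of this claim, the following Python sketch (an illustrative experiment, not a modification of the article’s code) performs the systematic encoding over ℚ using Fraction arithmetic, with the distinct non-zero evaluation points 1, ..., m standing in for generator powers; the resulting codeword polynomial does vanish at every chosen point, so syndrome-style error detection carries over:

```python
from fractions import Fraction

M_EC = 2                                  # number of "EC" values m
points = [Fraction(i) for i in range(1, M_EC + 1)]   # unique non-zero values

def poly_mul(a, b):
    # Product of two polynomials over Q, coefficients lowest-degree first.
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_rem(a, b):
    # Remainder of a(x) divided by the monic b(x), over the rationals.
    a = list(a)
    for i in reversed(range(len(a) - len(b) + 1)):
        coef = a[i + len(b) - 1]
        for j in range(len(b)):
            a[i + j] -= coef * b[j]
    return a[:len(b) - 1]

def eval_poly(p, x):
    return sum(c * x**i for i, c in enumerate(p))

g = [Fraction(1)]                         # g(x) = (x - 1)(x - 2) ... (x - m)
for a in points:
    g = poly_mul(g, [-a, Fraction(1)])

msg = [Fraction(c) for c in [1, 2, 3]]    # k = 3 message values
rem = poly_rem([Fraction(0)] * M_EC + msg, g)
codeword = [-c for c in rem] + msg

# The codeword polynomial vanishes at every chosen point, like s(alpha^i) = 0.
assert all(eval_poly(codeword, a) == 0 for a in points)
print([str(c) for c in codeword])
```

This only checks the encoding side; verifying that the full decoder works with arbitrary unique values would require reworking the location and value steps as well.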

Source code

Java
Python

Note: The field and matrix code originally comes from my Gauss–Jordan elimination over any field page. The code has been modified to delete unnecessary classes and methods, and add new classes.

More info