In 2017 and 2018 I wrote an eighteen-part series of articles about linear feedback shift registers, or LFSRs:

- **Ex-Pralite Monks and Finite Fields**, in which we describe what an LFSR is as a digital circuit; its cyclic behavior over time; the definitions of groups, rings, and fields; the isomorphism between N-bit LFSRs and the field \( GF(2^N) \); and the reason why I wrote this series
- **libgf2 and Primitive Polynomials**, in which we describe how to use a Python library called libgf2, which nobody else actually uses, to analyze LFSRs and binary finite fields; primitive polynomials \( p(x) \) in \( GF(2)[x] \) and their relation to maximal-length LFSRs; how to find some of these primitive polynomials in Python; and how libgf2 handles arithmetic in binary finite fields
- **Multiplicative Inverse, and Blankinship’s Algorithm**, in which we describe how Blankinship’s Algorithm can be used to compute the extended GCD and thereby determine the multiplicative inverse, whether in multiplicative groups or in finite fields
- **Easy Discrete Logarithms and the Silver-Pohlig-Hellman Algorithm**, in which we describe what a discrete logarithm is; what makes discrete logarithms easy or difficult to compute; how to use the Silver-Pohlig-Hellman algorithm and the Chinese Remainder Theorem to compute a discrete logarithm; and how to use libgf2 to calculate discrete logarithms
- **Difficult Discrete Logarithms and Pollard’s Kangaroo Method**, in which we describe some simple methods for computing discrete logarithms with time complexity \( O(\sqrt{n}) \), namely Shanks’s baby-step giant-step algorithm, Pollard’s rho method, and Pollard’s kangaroo algorithm; and in which we talk a little bit about the faster index calculus methods
- **Sing Along with the Berlekamp-Massey Algorithm**, in which we describe the Berlekamp-Massey algorithm for determining the characteristic polynomial from the output of an LFSR
- **LFSR Implementations, Idiomatic C, and Compiler Explorer**, in which we describe using the XC16 compiler to create reasonably efficient LFSR implementations on a dsPIC; GCC extended assembly to improve efficiency; an idea of “idiomatic C”; the PCG random number generator; how to use Compiler Explorer to see which compilers can recognize idiomatic C for expressing bitwise rotation; and a test harness for verifying LFSR implementations in C
- **Matrix Methods and State Recovery**, in which we describe how to represent LFSRs with matrices; how to compute time-shifted output of an LFSR; how to recover LFSR state from its output bits; how to avoid matrices and perform the same computations more efficiently; and how to use libgf2 for state recovery
- **Decimation, Trace Parity, and Cyclotomic Cosets**, in which we describe what happens when you decimate the output of an LFSR; the equivalence of something called trace parity to the LFSR output; how to compute trace parity in libgf2; and how to compute an unknown decimation ratio from characteristic polynomials using calculation of roots, discrete logarithms, and cyclotomic cosets
- **Counters and Encoders**, in which we describe counters and encoders as applications of LFSRs
- **Pseudorandom Number Generation**, in which we describe some methods of pseudorandom number generation (PRNG); some simple ways of evaluating correlation between PRNG outputs; correlation properties of LFSR output bits as a pseudorandom bit sequence (PRBS); and correlation properties of LFSR state
- **Spread-Spectrum Fundamentals**, in which we describe some properties of a numerical Fourier transform and show examples of spectral visualizations; how sharp edges affect high-frequency spectral content; frequency-hopping and direct-sequence spread spectrum (DSSS); how disturbance waveforms impact bit error rates of unencoded vs. DSSS-encoded signals; bit error rate curves; and how two DSSS-encoded signals can share the same transmission bandwidth
- **System Identification**, in which we describe the use of perturbation signals for identifying a transfer function; how sine-wave perturbations behave; how a PRBS can be used to estimate the impulse response in the time domain; parameter estimation techniques; and how perturbation amplitude impacts estimation errors
- **Gold Codes**, in which we describe the importance of synchronization between direct-sequence spread-spectrum (DSSS) transmitters sharing bandwidth; how path delays present challenges in the use of DSSS; how to modify the PRBS from an LFSR using Gold codes so as to reduce the effects of interference between multiple spread-spectrum transmitters; and how Gold codes are used in the Global Positioning System for satellite-based navigation
- **Error Detection and Correction**, in which we describe bit error rate graphs; some aspects of Hamming distance; parity, checksums, and cyclic redundancy checks (CRC) for error detection; and Hamming codes for error correction
- **Reed-Solomon Error Correction**, in which we describe the theory of Reed-Solomon codes; how to encode data with Reed-Solomon in Python using unireedsolomon or libgf2; how to encode data with Reed-Solomon in C; how Hamming and Reed-Solomon encodings impact bit error rate curves; why we don’t just use Reed-Solomon encoding for error correction instead of CRCs for error detection; and how Reed-Solomon encoding is used in QR codes
- **Reverse-Engineering the CRC**, in which we describe how to determine the parameters of an unknown CRC using LFSRs, the Berlekamp-Massey algorithm, and carefully chosen test messages; and look at some real-world CRC test cases
- **Primitive Polynomial Generation**, in which we describe eleven different techniques for finding primitive polynomials of a given degree over \( GF(2) \), including the St. Ives Algorithm, Gordon’s Method, Shoup’s Algorithm, and the Falling Coyote Algorithm
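As a taste of the series’ central object, here is a minimal sketch (my own illustration, not code from the articles) of a Galois-style LFSR in Python. The tap mask `0b1001` is one common 4-bit encoding corresponding to a primitive polynomial of degree 4, so the register is maximal-length: it visits all \( 2^4 - 1 = 15 \) nonzero states before repeating.

```python
from itertools import islice

def lfsr_states(taps, state, nbits):
    """Galois-style LFSR: yield successive nonzero states.

    taps  -- tap mask derived from the feedback polynomial
    state -- nonzero starting state
    nbits -- register width in bits
    """
    mask = (1 << nbits) - 1
    while True:
        yield state
        lsb = state & 1          # bit shifted out this step
        state >>= 1
        if lsb:                  # feed it back into the taps
            state ^= taps
        state &= mask

# A 4-bit maximal-length LFSR cycles through all 15 nonzero states.
states = list(islice(lfsr_states(0b1001, 0b0001, 4), 15))
print(sorted(states))  # [1, 2, 3, ..., 15] -- every nonzero 4-bit value
```

The cyclic behavior is the whole point: because the polynomial is primitive, the state sequence is a single cycle through every nonzero value, which is what makes LFSRs useful as counters and pseudorandom bit generators later in the series.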
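And as a preview of the Berlekamp-Massey installment, here is a compact GF(2) version of the algorithm (again a sketch of my own, not the articles’ code) that recovers the shortest connection polynomial, and hence the LFSR, from a stretch of its output bits:

```python
def berlekamp_massey(bits):
    """Return (L, C): the shortest GF(2) connection polynomial
    C = [1, c1, ..., cL] with s[i] = c1*s[i-1] ^ ... ^ cL*s[i-L]."""
    n = len(bits)
    C = [1] + [0] * n        # current connection polynomial
    B = [1] + [0] * n        # copy of C before the last length change
    L, m = 0, -1
    for i in range(n):
        # discrepancy: does C correctly predict bits[i]?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= C[j] & bits[i - j]
        if d:                # fix C with a shifted copy of B
            T = C[:]
            shift = i - m
            for j in range(n + 1 - shift):
                C[j + shift] ^= B[j]
            if 2 * L <= i:   # the minimal LFSR got longer
                L, m, B = i + 1 - L, i, T
    return L, C[:L + 1]

# Sequence from the recurrence s[i] = s[i-1] ^ s[i-4]:
s = [0, 0, 0, 1]
for i in range(4, 30):
    s.append(s[i - 1] ^ s[i - 4])
L, C = berlekamp_massey(s)
print(L, C)  # 4 [1, 1, 0, 0, 1]
```

Given only 30 output bits, the algorithm recovers the degree-4 recurrence exactly, which is why LFSR output is predictable to an adversary and why the same machinery reappears in the CRC reverse-engineering and Reed-Solomon decoding articles.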

If you end up using these articles extensively to learn on your own or to teach others, I would love to hear about it.

These articles are dedicated to the memory of L. Sachs (1917 – 2019).

© 2017-2018 Jason M. Sachs, all rights reserved.