## Cooperating error-correcting codes and their decoding

### Advisors: Prof.dr J.H. van Lint and Prof.dr.ir. H.C.A. van Tilborg, Eindhoven University of Technology, Eindhoven, The Netherlands

Error-correcting codes are essential for the reliability of digital systems for storage and transmission. This thesis describes cooperating error-correcting codes. The aim is to combine simple codes into a powerful code that can be encoded and decoded with low complexity. The thesis consists of an introductory chapter, followed by seven chapters dealing with various aspects of cooperating codes.

In the second chapter, we discuss various error-correcting capabilities of the product code C$_{p}$, consisting of all matrices with all rows in a linear row code C$_{r}$ and all columns in a linear column code C$_{c}$. By expressing the number of low-weight words of C$_{p}$ in terms of the numbers of low-weight words of C$_{c}$ and C$_{r}$, we show that C$_{p}$ can correct a great many error patterns containing few errors.

By means of examples, we show that the *full* weight distribution of C$_{p}$ is *not* determined by those of C$_{c}$ and C$_{r}$. We indicate a class of error patterns, consisting of a combination of random errors and clustered errors, that can be decoded correctly with C$_{p}$.
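To make the row/column constraint concrete, here is a small sketch in which both C$_{r}$ and C$_{c}$ are taken to be the [3,2,2] binary single parity-check code — a toy choice of mine, not a construction from the thesis. Enumeration confirms the standard facts that $|C_p| = 2^{k_r k_c}$ and that the minimum distance of C$_{p}$ is the product $d_r \cdot d_c$:

```python
import itertools

# Toy product code C_p: rows and columns both from the [3,2,2] binary
# single parity-check code (all even-weight vectors of length 3).
# This is an illustrative stand-in for the general linear row and
# column codes C_r and C_c treated in the thesis.

def in_parity_code(v):
    """Membership test for the [3,2,2] single parity-check code."""
    return sum(v) % 2 == 0

def in_product_code(m):
    """m is a 3x3 matrix (tuple of rows); all rows AND all columns
    must lie in the single parity-check code."""
    rows_ok = all(in_parity_code(row) for row in m)
    cols_ok = all(in_parity_code(col) for col in zip(*m))
    return rows_ok and cols_ok

# Enumerate all 2^9 binary 3x3 matrices and keep the product-code words.
codewords = [m for m in itertools.product(
                 itertools.product((0, 1), repeat=3), repeat=3)
             if in_product_code(m)]

# dim(C_p) = 2 * 2 = 4, so |C_p| = 16; d(C_p) = 2 * 2 = 4.
weights = [sum(sum(row) for row in m) for m in codewords]
min_wt = min(w for w in weights if w > 0)
print(len(codewords), min_wt)  # 16 4
```

The same exhaustive count also yields the full weight distribution of this toy C$_{p}$, which is what the examples in the chapter compare against the distributions of C$_{r}$ and C$_{c}$ alone.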

In Chapter~3, we show that generalized minimum distance decoding (gmdd) correctly decodes a much larger class of error patterns than was previously known. We apply our results to the decoding of product codes and generalized concatenated codes, and obtain that gmdd correctly decodes the class of error patterns from Chapter~2.
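The trial structure of gmdd can be sketched in a toy setting: erase the $2j$ least reliable positions for $j = 0, 1, \ldots$ and run an errors-and-erasures decoder on each trial. Here the code is the [5,1,5] binary repetition code (so errors-and-erasures decoding is a majority vote over unerased positions), the function names are mine, and the acceptance rule — keeping the candidate closest to the received word — is a simplification of the reliability-weighted criterion used in true gmdd:

```python
# Sketch of the gmdd trial structure in a toy setting: the [5,1,5]
# binary repetition code. Names and the simplified acceptance rule
# are illustrative assumptions, not the thesis's formulation.

def eed_repetition(r, erased):
    """Errors-and-erasures decoding of the length-5 repetition code:
    majority vote over the positions not marked as erased."""
    votes = [r[i] for i in range(len(r)) if i not in erased]
    bit = 1 if sum(votes) * 2 > len(votes) else 0
    return [bit] * len(r)

def gmd_decode(r, reliability):
    """Erase the 2j least reliable symbols for j = 0, 1, ...,
    decode each trial, and keep the candidate closest to r
    (simplified acceptance criterion)."""
    order = sorted(range(len(r)), key=lambda i: reliability[i])
    best, best_dist = None, None
    for j in range(len(r) // 2 + 1):
        erased = set(order[:2 * j])
        c = eed_repetition(r, erased)
        dist = sum(a != b for a, b in zip(c, r))
        if best_dist is None or dist < best_dist:
            best, best_dist = c, dist
    return best

# Two flipped bits, both flagged as unreliable: the all-zero
# codeword is recovered.
print(gmd_decode([1, 1, 0, 0, 0], [0.1, 0.2, 0.9, 0.9, 0.9]))
# [0, 0, 0, 0, 0]
```

The point of the chapter is precisely to characterize which combinations of errors and low-reliability positions such a trial scheme is guaranteed to handle.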

In the fourth chapter, we give an upper bound on the probability that an error pattern of weight at least $d-t$ results in a vector at distance $t$ from a codeword different from the transmitted one. (Here, $d$ is the minimum Hamming distance of the applied code, and $t < \frac{1}{2}d$.) The bound was already known for Reed--Solomon codes used on a channel with independent errors, but it is valid for all codes and for a much wider class of channels. The bound can be used as a measure of the reliability of a decoding result in which $t$ errors have been observed. When decoding cooperating codes, reliability information obtained from decoding one code can be exploited in decoding the other code.

In the fifth chapter, we discuss a well-known modification for speeding up the Berlekamp--Massey algorithm for decoding BCH codes. The number of errors observed during decoding with this modification is a poorer indication of the reliability of the decoder output than with conventional decoding. This can have detrimental consequences for the decoding of cooperating codes.

In Chapter~6, we combine well-known codes to obtain codes whose minimum distance is large for their length and dimension. The results for binary codes have been published before, and some of them have since been improved upon; the results for quaternary codes are new.

In Chapter~7, we present constructions for block codes for the Partial Response (PR) channel, a popular model for certain magnetic storage channels. We derive upper and lower bounds on the maximum size of a code for the PR channel with minimum distance at least 2, and give bounds on the maximum cardinality of codes with small block length and a given minimum distance.

In the final chapter, we introduce Diamond codes. A word of a Diamond code is a strip with each column in a column code and each diagonal in a diagonal code. Diamond codes combine the error-correcting capabilities of product codes with the reduced memory requirements of the CIRC code applied in the Compact Disc system. We discuss encoding and decoding of Diamond codes, as well as their distance properties. We give several variations on Diamond codes that are suited for block-oriented applications.
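The column/diagonal constraint pattern can be illustrated with a small membership check. The assumptions here are mine: both the column code and the diagonal code are taken to be the [4,3,2] single parity-check code, and the diagonals are read with a one-step shift per column, wrapping cyclically along the strip — a simplification of the semi-infinite strip geometry, not the thesis's definition:

```python
# Illustrative sketch of the Diamond-code constraint pattern with toy
# [4,3,2] single parity-check codes for both constituent codes.
# The cyclic wrap-around of diagonals is a simplifying assumption.

def even_parity(v):
    """Membership test for a single parity-check code."""
    return sum(v) % 2 == 0

def is_diamond_word(strip, n=4):
    """strip: list of columns, each a list of n bits.
    Every column must lie in the column code, and every 'diagonal'
    (one-step shift per column, cyclic along the strip here) must
    lie in the diagonal code."""
    w = len(strip)
    cols_ok = all(even_parity(col) for col in strip)
    diags_ok = all(
        even_parity([strip[(c + r) % w][r] for r in range(n)])
        for c in range(w)
    )
    return cols_ok and diags_ok

# The all-zero strip trivially satisfies both constraint sets;
# a single 1 violates its column's parity.
print(is_diamond_word([[0] * 4 for _ in range(6)]))  # True
```

Unlike a product code, whose words are fixed rectangles, the diagonal constraint runs obliquely along the strip — which is what allows the streamed, low-memory encoding and decoding the chapter describes.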