Linear Prediction

To achieve good compression ratios for the vertex position information, one method we chose is linear prediction.  In linear prediction, ancestors (previous samples) are used to predict where subsequent values will be.  Prediction parameters are computed from a given number of ancestors; from there, one computes the error between the actual and predicted positions to obtain values of smaller magnitude, which can then be compressed.
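The idea can be sketched on a simple 1-D sequence.  The second-order coefficients and sample values below are illustrative assumptions, not taken from the text; they show how prediction turns a growing sequence into small residuals.

```python
# Minimal sketch of linear prediction on a 1-D sequence.
# The order (2 ancestors) and coefficients are illustrative.

def predict(ancestors, coeffs):
    """Predict the next value as a weighted sum of its ancestors."""
    return sum(a * c for a, c in zip(ancestors, coeffs))

samples = [10, 12, 15, 19, 24, 30]
coeffs = [2.0, -1.0]  # second-order predictor: 2*x[n-1] - x[n-2]

residuals = [samples[0], samples[1]]  # first values stored directly
for n in range(2, len(samples)):
    pred = predict([samples[n - 1], samples[n - 2]], coeffs)
    residuals.append(samples[n] - pred)

print(residuals)  # [10, 12, 1.0, 1.0, 1.0, 1.0]
```

The residuals past the stored start-up values are all 1, far smaller in magnitude than the samples themselves, which is what makes them compress well.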

As noted before, the vertex tree is a structure that references each vertex of a triangular mesh once and only once.  Within the tree, each node (vertex position) has a unique path to the root, and this path contains the ancestors we can use to predict vertex positions.  There are, however, multiple methods of determining linear prediction parameters.  The method we chose, the autocorrelation method, minimizes the sum of the squared errors between actual and predicted positions.
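For a 1-D signal, the autocorrelation method amounts to solving the normal (Yule-Walker) equations R a = r, where R and r are built from the signal's autocorrelation lags.  The sketch below is an assumption about the standard formulation, not the authors' code; the order p = 2 and the sample signal are illustrative.

```python
# Sketch of the autocorrelation method: the coefficients that
# minimize the sum of squared prediction errors satisfy R a = r.

import numpy as np

def autocorr_coeffs(x, p):
    """Solve the Yule-Walker equations for order-p coefficients."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # autocorrelation lags r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(p + 1)])
    # R is a symmetric Toeplitz matrix of those lags
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:p + 1])

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
a = autocorr_coeffs(signal, p=2)
print(a)  # used as: x_hat[n] = a[0]*x[n-1] + a[1]*x[n-2]
```

For mesh vertices the same equations are set up per coordinate, with the lags accumulated along ancestor paths in the vertex tree rather than along a single sequence.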

It is important to note that, to obtain good predictions, actual ancestors in the tree must be used.  Ancestors of a given vertex usually lie in close geometric proximity to it, and the closer vertices are to one another, the better the results obtained in most situations.

The procedure begins by determining the vertex tree.  Once the vertex tree is known, the vertex positions are quantized to 8- to 16-bit integer values and rounded; the amount of vertex position information lost is controlled by this quantization and rounding.  Next, the linear prediction parameters are computed using the autocorrelation method, a predicted position is computed for each vertex, and the error between the actual and predicted positions is found.  The errors for each vertex position (except for the root, whose actual quantized and rounded value is stored) are then stored and compressed using a lossless compression technique, such as Huffman or arithmetic encoding.  To reproduce the vertex positions, one simply reverses this process.
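The encode/decode round trip above can be sketched end to end.  To keep the example short it uses a simple first-order predictor (each vertex predicted from its parent) rather than full autocorrelation-derived parameters; the tree, coordinates, and 10-bit quantization level are illustrative assumptions.

```python
# Sketch of the quantize -> predict -> residual -> reconstruct
# pipeline, using parent-as-predictor instead of learned parameters.

def quantize(values, bits, lo, hi):
    """Map floats in [lo, hi] to rounded integers on a 2**bits grid."""
    scale = (2 ** bits - 1) / (hi - lo)
    return [round((v - lo) * scale) for v in values]

def encode(q, parent):
    """Store the root directly; for every other vertex store the
    error between its position and the parent-predicted position."""
    return [q[i] if parent[i] is None else q[i] - q[parent[i]]
            for i in range(len(q))]

def decode(res, parent):
    """Reverse the process; parents are decoded before children."""
    out = []
    for i, r in enumerate(res):
        out.append(r if parent[i] is None else r + out[parent[i]])
    return out

parent = [None, 0, 1, 1, 2]              # vertex tree as parent indices
x = [0.10, 0.12, 0.15, 0.11, 0.16]       # one coordinate per vertex

q = quantize(x, bits=10, lo=0.0, hi=1.0)
residuals = encode(q, parent)
assert decode(residuals, parent) == q    # exact after quantization
```

Note that the only loss happens in `quantize`; the prediction residuals reconstruct the quantized positions exactly, which is why the residual stream can be handed to a lossless coder such as Huffman or arithmetic encoding.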

We obtained good results using linear prediction, particularly when the vertex positions were quantized and rounded before linear prediction took place.  When the values were quantized and rounded after linear prediction, too much information was lost, especially at the 8- and 10-bit quantization levels.  To see the results obtained, simply click HERE.  If, however, you would like more information about linear prediction, click HERE.

If you have any questions or comments, email me at scottm@meru.cecs.missouri.edu