
Accuracy

When downscale-decoding a JPEG image with the implementation explained in this document, no visual differences from the current implementation of downscale-decoding can be perceived, not even on 16-bit platforms with USE_INACCURATE_IDCT defined. Therefore the deviations between the current implementation and the new variants have been measured numerically for 8-bit sample data, using the file testimg.jpg that comes with the JPEGLib. Table 1 shows the deviations for 32-bit and 16-bit platforms with USE_INACCURATE_IDCT undefined; Table 2 shows the deviations with USE_INACCURATE_IDCT defined.
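
The metrics listed in the tables can be reproduced with a comparison routine along the following lines. This is a minimal sketch, not the measurement code actually used: the function name compare_images is hypothetical, and interpreting "Mean Error" as the accumulated signed difference (rather than a normalized per-sample average) is an assumption suggested by its magnitude relative to the peak error in the tables.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: compares two decoded 8-bit images of identical
 * dimensions and prints the four deviation metrics used in the tables.
 * "reference" holds the output of the standard implementation, "test"
 * the output of the new implementation; num_samples is
 * width * height * components.
 */
static void
compare_images (const unsigned char *reference,
                const unsigned char *test,
                size_t num_samples)
{
  int peak_error = 0;
  long signed_error_sum = 0;          /* assumed meaning of "Mean Error" */
  double square_error_sum = 0.0;
  size_t different_components = 0;
  size_t i;

  for (i = 0; i < num_samples; i++) {
    int diff = (int) test[i] - (int) reference[i];
    int abs_diff = abs (diff);

    if (abs_diff > peak_error)        /* largest single deviation */
      peak_error = abs_diff;
    signed_error_sum += diff;         /* accumulated signed difference */
    square_error_sum += (double) diff * diff;
    if (diff != 0)                    /* count of differing components */
      different_components++;
  }

  printf ("Peak Error: %d\n", peak_error);
  printf ("Mean Square Error: %f\n",
          square_error_sum / (double) num_samples);
  printf ("Mean Error: %ld\n", signed_error_sum);
  printf ("Different Pixel Components: %lu\n",
          (unsigned long) different_components);
}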

 

Table 1: Deviations of the new implementation from the standard implementation with USE_INACCURATE_IDCT undefined

                             16-bit platform  32-bit platform
Peak Error:                  1                1
Mean Square Error:           0.028778         0.009541
Mean Error:                  -125             8
Different Pixel Components:  187              62

 

Table 2: Deviations of the new implementation from the standard implementation with USE_INACCURATE_IDCT defined

                             16-bit platform  32-bit platform
Peak Error:                  4                1
Mean Square Error:           0.357956         0.009541
Mean Error:                  -156             8
Different Pixel Components:  1587             62

A perhaps surprising finding is that, at least when downscale-decoding the file testimg.jpg on a 32-bit platform, the decoded image is exactly the same whether USE_INACCURATE_IDCT is defined or not. However, it is unclear whether this holds in general.