Accuracy
When downscale-decoding a JPEG image with the implementation explained in this document, no visual differences from the current implementation of downscale-decoding can be perceived, not even on 16-bit platforms with USE_INACCURATE_IDCT defined. The difference between the current implementation and the new variants has therefore been measured numerically for 8-bit sample data, using the file testimg.jpg that comes with the JPEGLib. Table 1 shows the deviations for 32-bit and 16-bit platforms with USE_INACCURATE_IDCT undefined; table 2 shows the deviations for 32-bit and 16-bit platforms with USE_INACCURATE_IDCT defined.
Surprisingly, at least when downscale-decoding the file testimg.jpg on a 32-bit platform, the decoded image is exactly the same whether USE_INACCURATE_IDCT is defined or not. However, it is unclear whether this holds in general.
Last Updated on Sunday, 28 April 2002 16:19