The DISR images in this directory have been processed to the second level (E-Images):

1) The 'Unsmoothed Images' have the least processing. Steps 1 through 12 below have been performed on them. Thus their photometric stretch has been square-root expanded to return them to 12 bits of depth, they have been flat-field corrected to eliminate the camera's photometric distortions, the dark current has been removed using the imager model at the exposure temperature, the electronic shutter effect (induced by data clocking) has been compensated for, the images were further flat-fielded to remove artifacts seen in the higher-altitude images (which viewed the relatively flat upper atmosphere), the stretch was enhanced to increase the dynamic range, and known bad pixels were replaced by their neighbors.

2) The 'E-Images' have been further processed to remove compressor-induced artifacts, account for the imagers' point spread functions (PSFs), and adjust for negative and saturated pixels. This helps reduce the annoying periodic wave pattern that occurs when observed features are near the imager's noise floor.

Following are the 23 steps for the E-Images.

Processing of DISR images
Nov 13, 2005
Erich Karkoschka

With respect to the Nov 13, 2005 version, only step 15) was changed.

1) I estimated the compressor threshold for each image based on the relative frequencies of the three lowest-amplitude bins. I tested the estimates with the compressor simulation on three images. The differences between actual and estimated thresholds were 0, 0.125, and 0.25, which is good enough.

2) I changed the square-root lookup table into a real function.

3) For adaptive schemes, I changed the slope of the function in both outer sections to a new slope, which is a quarter of the original slope plus three quarters of the slope in the middle section. This way, regions outside the limits are not as noisy as in the original images, but still noisier than in the middle region.

4) I applied the reverse of the square-rooting to the data.

5) I multiplied the data by the on-board flatfield.

6) I created a dark image according to Andy's equation (dark model) for each image and subtracted it from the data.

7) I subtracted a constant data number from the whole image: one quarter of the dark current number (which peaks at 34 DN).

8) I subtracted the expected contributions of light during the vertical transfer of data in the CCD. This is based on the assumption that each pixel sees the light at its actual position for the exposure time and the light of each pixel below it (lower row number) for 2 microseconds each.

9) I subtracted the constant of 0.9 DN from the whole image.

10) For images taken after landing, I subtracted the following data numbers: 14.5 - 11.5 * SIND(360.*((228-J)/165.-((228-J)/300.)**2)), where J is the row number from 1 to 256 (in the SLI, row 1 is above the horizon).

11) I shifted the on-board flatfields by cubic interpolation for resampling. The assumed shift of flatfield features with CCD temperature T(K), in micropixels, is:
SLI: X = 1485 (T-254.6)   Y = -569 (T-223.9)
MRI: X = 1678 (T-257.0)   Y = -563 (T-221.9)
HRI: X =  819 (T-260.8)   Y = -704 (T-228.0)
I modified the flatfields according to obvious flatfield artifacts seen in high-altitude images, typically by about 1 percent or a few percent. Most of these were adjustments next to the edge of the field of view. For the HRI, however, there was an adjustment of about 1 percent across the whole field of view. For the MRI, there was an adjustment of 2 percent over the top 50 rows. The latter two corrections are probably due to non-constant intensity of the integrating sphere. I then divided the image by the adjusted flatfield.
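As a concrete illustration of two of the corrections above, here is a minimal Python sketch of the transfer-smear subtraction of step 8) and the post-landing background of step 10). It is not part of the original processing software; the function names, the row orientation (index 0 taken as the lowest row number), and the exposure-time parameter are assumptions for illustration only.

    import numpy as np

    def remove_transfer_smear(image, exposure_us, transfer_us=2.0):
        # Step 8 sketch: during vertical transfer each pixel also collects
        # light from every pixel below it (lower row number) for about
        # 2 microseconds. Assumes the image (in DN) is proportional to the
        # scene flux and that row index 0 is the lowest row number.
        image = np.asarray(image, dtype=float)
        below = np.cumsum(image, axis=0) - image   # sum of rows below each row
        return image - below * (transfer_us / exposure_us)

    def post_landing_background(J):
        # Step 10 sketch: row-dependent background subtracted from images
        # taken after landing; J is the row number from 1 to 256, and SIND
        # is the sine of an angle given in degrees.
        x = (228.0 - J) / 165.0 - ((228.0 - J) / 300.0) ** 2
        return 14.5 - 11.5 * np.sin(np.radians(360.0 * x))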
12) I replaced the data numbers at bad pixels by the average of the data numbers of the nearest good pixels. Bad pixels were those which were replaced before the application of the on-board flatfield. I also added to the list of bad pixels about 100 pixels each in the SLI and HRI next to the edge of the field of view, which did not seem to produce consistent data numbers.

13) At this point, the image should be free of artifacts. I compared the discrete Fourier transform of the resulting data with the original transforms. For each Fourier coefficient, the original data have a minimum and maximum possible value. The standard processing adopts the mean between the minimum and maximum. Because of processing steps 1) through 10), the minimum and maximum values will be quite different. The fact that some processing steps are non-linear in the Fourier coefficients is neglected here. The amplitudes of the Fourier coefficients are changed in such a way that amplitudes are reduced. The (absolute) smaller amplitudes are reduced the most, to values close to their (absolute) minimum. Large amplitudes remain essentially unchanged at the mean between the minimum and maximum possible value. This smoothes features near the noise level which are not artifacts.

14) I increased the smoothing in 16x16-pixel blocks where the average spatial frequency of transmitted coefficients is high. Such a high value usually means that most of the data are noise, while real features typically have more low-frequency coefficients.

15) I did a similar smoothing operation for 32x32-pixel blocks, shifted by 16 pixels in x and y. Each inner 16x16 block lies in four such blocks, and the results of the four calculations are averaged with increased weighting towards the center of a 32x32-pixel block. This smoothes discontinuities at 16x16-pixel block boundaries. It especially decreases amplitudes of spatial frequencies which are non-zero in only one of the four 16x16 blocks of a 32x32-pixel block. In that case, the pattern does not stop abruptly at the boundary, but decreases smoothly in amplitude across the boundary. Because the 32x32-pixel block data do not have minimum and maximum possible values, the amount of smoothing can be set. For the current run I used a smoothing factor SF = 1.5. With respect to the previous version, I increased the smoothing of high frequencies near the noise level.

16) I increased the smoothing in 32x32-pixel blocks in areas where the variation of data number with position is much lower than the mean variation in that block. Typically, the compressor gets the strongest variations right but makes small errors where variations are much smaller than the average ("ringing"). This method somewhat decreased the ringing near strong features. All the smoothing operations together take out 88 percent of the coefficients transmitted to Earth, or reduce them to less than half of their original amplitude. Most of these coefficients are high-frequency noise or high-frequency patterns which are difficult to interpret if low spatial frequencies are missing.

17) For the last six HRI images before landing, the lamp is on and creates a bright, sloping background signal. I estimated this signal by adding the images and smoothing them. I subtracted this signal from these six images. I estimate that the subtraction of this roughly 1000 DN signal is accurate to 50 DN and may be as good as 20 DN.
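The following Python sketch illustrates only the general idea behind the block-wise smoothing of steps 13) through 16): coefficients near the noise level are attenuated strongly, while large coefficients pass through almost unchanged. It is a much-simplified stand-in that uses a discrete cosine transform per 16x16 block and a Wiener-like attenuation; the actual processing uses the compressor's per-coefficient minimum and maximum bounds, the spatial-frequency and ringing criteria, and the overlapping 32x32 blocks described above, none of which are reproduced here. The function name and the noise parameter are assumptions.

    import numpy as np
    from scipy.fft import dctn, idctn

    def shrink_block_coefficients(image, noise=1.0, block=16):
        # Simplified stand-in for steps 13)-16): in each 16x16 block,
        # attenuate transform coefficients whose amplitude is near the
        # noise level; large coefficients are left essentially unchanged.
        image = np.asarray(image, dtype=float)
        out = np.empty_like(image)
        ny, nx = image.shape
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                tile = image[y:y+block, x:x+block]
                c = dctn(tile, norm='ortho')
                gain = c**2 / (c**2 + noise**2)   # near 0 for small, near 1 for large
                gain[0, 0] = 1.0                  # keep the block mean untouched
                out[y:y+block, x:x+block] = idctn(c * gain, norm='ortho')
        return out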
18) I deconvolved the images before touch-down according to the program of April 2001. I used the following widths (FWHM) of the PSFs for the center and edge (at top center or bottom center): WIC = 1.5 (pixels) and WIE = 1.5 (SLI), 2.0 (HRI), and 2.5 (MRI). For the images taken from the surface, I changed the PSFs according to the defocus calculated from the distance, which is based on the assumption that the SLI window was 47 cm above the surface and the nose was pointing 1.7 degrees up. For these images, I took WIC = WIE, dependent only on the imager: 1.0 for the SLI, 1.5 for the MRI, and 4.5 for the HRI. The main purpose of the deconvolution is not sharpening but a transformation into consistent PSFs. Since the original PSFs have FWHM near 1.5 over parts of the field of view, the resulting images are not sharper. However, circular features become circular again. This is most noticeable near the corners of the MRI, where all small features have radial streaks in raw images.

19) I flipped the images right/left and up/down.

20) I set negative data numbers to zero.

21) I investigated saturation. Saturation in the processed images occurred at data numbers somewhat below 4000 due to flatfield division, dark current, photon accumulation during transfer, and the compressor. Saturation also essentially occurred at the point where the adaptive square-root scheme has a discontinuity in the slope. Sometimes, the slope changes by more than a factor of 50. If the signal-to-noise ratio was 200 just below that point, it may be only 3 just above that point, which is a useless measurement. Thus, all data numbers above that point were reduced to the same DN, which is listed in the file header.

22) I multiplied the data numbers by eight and rounded them to integers.

23) I wrote the image in pgm format, which is the simplest of the standard image formats such as pgm, jpeg, tiff, and postscript.
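A minimal Python sketch of the output steps 19) through 23): flipping, setting negative values to zero, scaling by eight, rounding, and writing a 16-bit pgm file. The saturation replacement of step 21) is omitted, and the function name and byte layout (binary 'P5' with big-endian samples) are assumptions; the archived files may use a different pgm variant.

    import numpy as np

    def write_e_image_pgm(path, image):
        # Steps 19)-23) sketch: flip right/left and up/down, clip negative
        # data numbers to zero, multiply by eight and round to integers,
        # then write the result as a 16-bit binary pgm (P5) image.
        img = np.asarray(image, dtype=float)[::-1, ::-1]   # step 19: flip both axes
        img = np.clip(img, 0.0, None)                      # step 20: no negative DN
        dn = np.rint(img * 8.0).astype(np.uint16)          # step 22: scale and round
        header = "P5\n{} {}\n65535\n".format(dn.shape[1], dn.shape[0])
        with open(path, "wb") as f:                        # step 23: pgm output
            f.write(header.encode("ascii"))
            f.write(dn.astype(">u2").tobytes())            # maxval > 255: 2 bytes per sample, MSB first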