The DISR images in this directory have been processed to three levels:

1) The 'Unsmoothed Images' have the least processing. Steps 1 through 12 below have been performed on them. Thus their photometric stretch has been square-root expanded to return them to 12 bits of depth, they have been flat-field corrected to eliminate the camera's photometric distortions, the dark current has been removed using the imager model at the exposure temperature, the electronic shutter effect (induced by data clocking) has been compensated for, the images were further flat-fielded to remove artifacts seen in the higher-altitude images (which viewed the relatively flat upper atmosphere), the stretch was enhanced to increase the dynamic range, and known bad pixels were replaced by their neighbors.

2) The 'E-Images' have been further processed to remove compressor-induced artifacts, account for the imagers' point spread functions (PSF), and adjust for negative and saturated pixels. This helps reduce the annoying periodic wave pattern that occurs when observed features are near the imager's noise floor.

3) The 'G-Images' were even further processed to adjust for geometric distortions and calibrated in intensity to I/F (normalized intensity).

Following are the 23 steps for the E-images and the 6 steps for the G-images.

Processing of DISR images
Nov 13, 2005
Erich Karkoschka

With respect to the Nov 13, 2005 version, only 15) was changed.

1) I estimated the compressor threshold for each image based on the relative frequency of the three lowest amplitude bins. I tested the estimates with the compressor simulation on three images. The differences between actual and estimated thresholds were 0, 0.125, and 0.25, which is good enough.

2) I changed the square-root lookup table into a real function.

3) For adaptive schemes, I changed the slope of the function in both outer sections to a new slope, which is a quarter of the original slope plus three quarters of the slope in the middle section.
This way, regions outside the limits are not as noisy as in the original images, but still noisier than in the middle region.

4) I applied the reverse of the square-rooting to the data.

5) I multiplied the data by the on-board flatfield.

6) I created a dark image according to Andy's equation (dark model) for each image and subtracted it from the data.

7) I subtracted a constant data number from the whole image: one quarter of the dark current number (which peaks at 34 DN).

8) I subtracted the expected contributions of light during the vertical transfer of data in the CCD. This is based on the assumption that each pixel sees the light at the actual position for the exposure time and the light of each pixel below (lower row number) for 2 microseconds each.

9) I subtracted the constant of 0.9 DN from the whole image.

10) For images taken after landing, I subtracted the following data numbers: 14.5 - 11.5 * SIND(360.*((228-J)/165.-((228-J)/300.)**2)), where J is the row number from 1 to 256 (in the SLI, row 1 is above the horizon).

11) I shifted the on-board flatfields by cubic interpolation for resampling. The assumed shift of flatfield features with CCD temperature T(K) in micropixels is:
SLI: X = 1485 (T-254.6)   Y = -569 (T-223.9)
MRI: X = 1678 (T-257.0)   Y = -563 (T-221.9)
HRI: X =  819 (T-260.8)   Y = -704 (T-228.0)
I modified the flatfields according to obvious flatfield artifacts seen in high-altitude images, typically by about 1 percent or a few percent. Most of them were adjustments next to the edge of the field of view. For the HRI, however, there was an adjustment of about 1 percent across the whole field of view. For the MRI, there was an adjustment of 2 percent over the top 50 rows. The latter two corrections are probably due to non-constant intensity of the integrating sphere. I then divided the image by the adjusted flatfield.

12) I replaced the data numbers at bad pixels by the average of the data numbers of the nearest good pixels.
Bad pixels were those which were replaced before the application of the on-board flatfield. I also added to the list of bad pixels about 100 pixels each in the SLI and HRI next to the edge of the field of view, which did not seem to produce consistent data numbers.

13) At this point, the image should be free of artifacts. I compared the discrete Fourier transform of the resulting data with the original transforms. For each Fourier coefficient, the original data have a minimum and maximum possible value. The standard processing adopts the mean between the minimum and maximum. Because of processing steps 1) through 10), the minimum and maximum values will be quite different. The fact that some processing steps are non-linear in the Fourier coefficients is neglected here. The amplitudes of the Fourier coefficients are changed in such a way that amplitudes are reduced. The (absolute) smaller amplitudes are reduced the most, to values close to their (absolute) minimum. Large amplitudes remain essentially unchanged at the mean between the minimum and maximum possible value. This smoothes features near the noise level which are not artifacts.

14) I increased the smoothing in 16x16 pixel blocks where the average spatial frequency of transmitted coefficients is high. Such a high value usually means that most of the data are noise, while real features typically have more low-frequency coefficients.

15) I did a similar smoothing operation for 32x32 pixel blocks, shifted by 16 pixels in x and y. Each inner 16x16 block lies in four such blocks, and the results of the four calculations are averaged with increased weighting towards the center of a 32x32 pixel block. This smoothes discontinuities at 16x16 pixel block boundaries. It especially decreases amplitudes of spatial frequencies which are non-zero in only one of the four 16x16 blocks of a 32x32 pixel block. In that case, the pattern does not abruptly stop at the boundary, but decreases smoothly in amplitude across the boundary.
Because the 32x32 pixel block data do not have minimum and maximum possible values, the amount of smoothing can be set. For the current run I used a smoothing factor SF = 1.5. With respect to the previous version, I increased the smoothing of high frequencies near the noise level.

16) I increased the smoothing in 32x32 pixel blocks in areas where the variation of data number with position is much lower than the mean variation in that block. Typically, the compressor gets the strongest variations right but makes small errors where variations are much smaller than the average ("ringing"). This method somewhat decreased the ringing near strong features. All the smoothing operations together take out 88 percent of the coefficients transmitted to Earth, or reduce them to less than half of their original amplitude. Most of these coefficients are high-frequency noise or high-frequency patterns which are difficult to interpret if low spatial frequencies are missing.

17) For the last six HRI images before landing, the lamp is on and creates a bright sloping background signal. I estimated this signal by adding the images and smoothing them. I subtracted this signal from these six images. I estimate that the subtraction of the roughly 1000 DN is accurate to 50 DN and may be as good as 20 DN.

18) I deconvolved the images before touch-down according to the program of April 2001. I used the following widths (FWHM) of the PSFs for the center and edge (at top center or bottom center): WIC = 1.5 (pixels) and WIE = 1.5 (SLI), 2.0 (HRI), and 2.5 (MRI). For the images taken from the surface, I changed the PSFs according to the defocus calculated from the distance, which is based on the assumption that the SLI window was 47 cm above the surface and the nose was pointing 1.7 degrees up. For these images, I took WIC = WIE, dependent only on the imager: 1.0 for the SLI, 1.5 for the MRI, and 4.5 for the HRI.
The main purpose of deconvolution is not sharpening but a transformation into consistent PSFs. Since the original PSFs have FWHM near 1.5 over parts of the field of view, the resulting images are not sharper. However, circular features become circular again. This is most noticeable near the corners of the MRI, where all small features have radial streaks in raw images.

19) I flipped the images right/left and up/down.

20) I set negative data numbers to zero.

21) I investigated saturation. Saturation in the processed images occurred at data numbers somewhat below 4000 due to flatfield division, dark current, photon accumulation during transfer, and the compressor. Saturation essentially also occurred at the point where the adaptive square-root scheme has a discontinuity in the slope. Sometimes, the slope changes by more than a factor of 50. If the signal-to-noise ratio was 200 just below that point, it may be only 3 just above it, which is a useless measurement. Thus, all data numbers above that point were reduced to the same DN, which is listed in the file header.

22) I multiplied the data numbers by eight and rounded them to integers.

23) I wrote the image in pgm format, which is the simplest of the standard image formats such as pgm, jpeg, tiff, and postscript.

Calibration of DISR images
Nov 13, 2005
Erich Karkoschka

Additional processing for the 'G-Images':

1) I defined a geometric solution which attaches an azimuth and nadir angle to the center of each pixel. The solution is based on laboratory images of a pattern and on laboratory images of a point source in specified directions. For the SLI and HRI, the solution was adjusted based on simultaneous descent images in overlapping areas between the SLI and MRI and between the MRI and HRI. This solution is estimated to be accurate to 0.03 degrees. It differs from the previous solution by about 1 degree or less.
The improvements over the previous solution include correcting a processing error for the SLI, avoiding a blow-up of the solution towards the corners of the field of view, adjusting laboratory measurements for a periodic error and a backlash of the mount used in the laboratory, and using descent images to adjust the relative pointing. The latter correction was only 1-2 pixels.

2) I resampled all images according to the geometric solution by cubic interpolation. The calibrated images are gnomonic projections with the zero-azimuth meridian halfway between the two central columns and the following nadir angles halfway between the two central rows: NAc = 70.3 deg (SLI), 31.3 deg (MRI), 14.5 deg (HRI). The central scale in the calibrated images is 0.22 deg/pix (SLI), 0.125 deg/pix (MRI), 0.062 deg/pix (HRI), or in radians: SC = 0.0038397/pix (SLI), 0.0021817/pix (MRI), 0.0010821/pix (HRI).

3) Equations to convert between column x, row y and azimuth AZ, nadir angle NA, where xc and yc are the center pixels (for example, if x varies between 0 and 127 and y varies between 0 and 255, then xc = 63.5 and yc = 127.5):

x = xc + sin AZ / [SC * (sin NAc cos AZ + cos NAc / tan NA)]
y = yc + (cos AZ tan NA - tan NAc) / [SC * (cos AZ tan NA tan NAc + 1)]
tan AZ = (x - xc) / [(y - yc) cos NAc + sin NAc / SC]
tan NA = SQRT{(x - xc)^2 + [(y - yc) cos NAc + sin NAc / SC]^2} / [cos NAc / SC - (y - yc) sin NAc]

(The azimuth AZ is positive towards the right.)

4) The pixel interpolation routine gives data numbers for the edge of the original field of view if one asks for a pixel outside the original field of view. Thus, a few columns and rows near the edge of the calibrated images have such data numbers, which should be ignored. Therefore, for each of the imagers, an image is given outlining the usable field of view. The edge of the original field of view is where data numbers in that image are near 1000, with a slope of about 1000/pixel.
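As a rough illustration, the conversion equations of step 3) can be written in Python. This is a sketch, not the actual calibration code; it uses the MRI constants from step 2) (NAc = 31.3 deg, SC = 0.0021817 rad/pix) and the example image center xc = 63.5, yc = 127.5.

```python
import math

# MRI constants from step 2) of the calibration description:
NAC = math.radians(31.3)   # nadir angle of the image center
SC = 0.0021817             # central scale in radians per pixel

def angles_to_pixel(az_deg, na_deg, xc=63.5, yc=127.5):
    """Forward projection: azimuth, nadir angle (deg) -> column x, row y."""
    az = math.radians(az_deg)
    na = math.radians(na_deg)
    x = xc + math.sin(az) / (SC * (math.sin(NAC) * math.cos(az)
                                   + math.cos(NAC) / math.tan(na)))
    y = yc + (math.cos(az) * math.tan(na) - math.tan(NAC)) / (
        SC * (math.cos(az) * math.tan(na) * math.tan(NAC) + 1.0))
    return x, y

def pixel_to_angles(x, y, xc=63.5, yc=127.5):
    """Inverse projection: column x, row y -> azimuth, nadir angle (deg)."""
    u = x - xc
    v = (y - yc) * math.cos(NAC) + math.sin(NAC) / SC
    az = math.atan2(u, v)                      # tan AZ = u / v
    na = math.atan2(math.hypot(u, v),          # tan NA = sqrt(u^2+v^2) / ...
                    math.cos(NAC) / SC - (y - yc) * math.sin(NAC))
    return math.degrees(az), math.degrees(na)
```

A quick round trip (pixel -> angles -> pixel) recovers the input to within numerical precision, which is a convenient consistency check on the two sets of equations.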
5) I scaled data numbers of MRI and HRI images so that simultaneous images give similar data numbers in overlapping areas. Based on my investigations of descent images, I multiplied HRI data numbers by a constant 0.424, and I divided MRI data numbers by 0.989 + 0.0000032*(T-180)^2, where T is the CCD temperature in K. Then the data numbers were converted to DN/ms by dividing by the exposure time in ms. Finally, I divided the data numbers by (742 + 0.13*T) to convert them to I/F, where pi*F is the solar flux at the top of Titan's atmosphere. These numbers are multiplied by 50000 and rounded to integers. Thus, in the calibrated images, a data number of 10000 corresponds to I/F = 0.2. For the MRI and HRI images after landing, data numbers are scaled down by a factor of 10 in order to avoid saturation. The sensitivity estimate of 742 + 0.13*T derives mostly from laboratory measurements with a small adjustment for non-constant intensity of the integrating sphere. Note that the new data numbers are pi times larger than in the May 8 version because the F of I/F was previously taken as the solar flux (curly F), while now I/F really is the reflectivity.

6) The wavelength distribution of photons received from a gray surface at the top of Titan's atmosphere is estimated for two CCD temperatures in the following table, normalized to a sum of unity. For other CCD temperatures, linear interpolation in temperature is sufficient.
Wavl   170 K    270 K   (CCD temperature)
(nm)
 610  0.00001  0.00000
 620  0.00003  0.00002
 630  0.00018  0.00006
 640  0.00224  0.00049
 650  0.01506  0.00444
 660  0.03212  0.01740
 670  0.04253  0.03223
 680  0.04800  0.04045
 690  0.05105  0.04470
 700  0.05066  0.04523
 710  0.05006  0.04521
 720  0.04919  0.04479
 730  0.05002  0.04596
 740  0.04991  0.04648
 750  0.05019  0.04763
 760  0.04795  0.04665
 770  0.04370  0.04344
 780  0.04054  0.04089
 790  0.03764  0.03846
 800  0.03588  0.03705
 810  0.03461  0.03632
 820  0.03349  0.03573
 830  0.03218  0.03509
 840  0.03021  0.03367
 850  0.02619  0.03000
 860  0.02404  0.02792
 870  0.01981  0.02394
 880  0.01729  0.02147
 890  0.01492  0.01911
 900  0.01278  0.01692
 910  0.01081  0.01475
 920  0.00908  0.01287
 930  0.00779  0.01147
 940  0.00654  0.01000
 950  0.00551  0.00877
 960  0.00459  0.00769
 970  0.00377  0.00669
 980  0.00302  0.00577
 990  0.00231  0.00485
1000  0.00167  0.00399
1010  0.00110  0.00312
1020  0.00067  0.00239
1030  0.00035  0.00176
1040  0.00015  0.00126
1050  0.00007  0.00087
1060  0.00005  0.00062
1070  0.00003  0.00044
1080  0.00001  0.00034
1090  0.00000  0.00026
1100  0.00000  0.00018
1110  0.00000  0.00011
1120  0.00000  0.00004

Mean   768.6    786.5   nm
RMS     77.0     83.9   nm
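The intensity scaling of step 5) above can be sketched in Python as follows. The function name and argument layout are illustrative, not DISR pipeline code; the constants are those quoted in step 5), and the extra factor-of-10 reduction for post-landing MRI/HRI images is omitted here.

```python
def dn_to_scaled_if(dn, exposure_ms, t_ccd_k, imager):
    """Convert a resampled data number to the stored integer scale,
    where a stored value of 10000 corresponds to I/F = 0.2."""
    # Relative scaling of the imagers (step 5):
    if imager == "HRI":
        dn = dn * 0.424
    elif imager == "MRI":
        dn = dn / (0.989 + 0.0000032 * (t_ccd_k - 180.0) ** 2)
    rate = dn / exposure_ms                       # DN per ms
    i_over_f = rate / (742.0 + 0.13 * t_ccd_k)    # sensitivity 742 + 0.13*T
    return round(50000.0 * i_over_f)              # stored as 50000 * I/F
```

For example, at T = 180 K the sensitivity is 742 + 0.13*180 = 765.4 DN/ms per unit I/F, so an SLI rate of 153.08 DN/ms corresponds to I/F = 0.2 and a stored value of 10000.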