The DATA QUALITY ID is a number which varies from 5 (best quality) to 0 (poorest quality). It is based upon the analysis of log files built during OMEGA data level-1B processing. Each log file is sequentially checked for:

1- Missing science packets.

2- Consolidation of the missing science packets: two consecutive sequences of missing packets are joined if they are separated by fewer than eight science packets, giving, for each data cube, a list of holes and their corresponding sizes.

3- Counter jumps in APID numbering.
Note: badly sequenced packets are automatically rearranged during level 1-A generation and are therefore not counted against the quality level of the delivered cube.

4- Decompression errors: previously detected errors, as well as errors internal to the decompression itself, produce side effects in the decompression process. Consequently, because of the algorithm, and possibly the on-board summation mechanism, the integrity of a larger data slice must be considered degraded. These corrupted slices are listed and their sizes recorded.

5- The final data quality value is derived from the two remaining criteria (an illustrative sketch of this computation follows the table below):

Big hole count (bigHoles): a hole is counted as a "big hole" when it contains more than 40 consecutive consolidated missing packets.

Bad slice count (badSlice): the sum of the missing packets outside "big holes" and of the data slices degraded by internal decompression errors.

Final data cube quality computation:

|---------------------------------------------------------------------|
| QUALITY              | BIG HOLES      | OP. | BAD SLICES            |
|---------------------------------------------------------------------|
| q=5: perfect         | bigHoles=0     | and | badSlice=0            |
| q=4: one data gap    | bigHoles=1     | and | badSlice=0            |
| q=3: missing data    | bigHoles=0     | and | badSlice>0            |
| ...                  | ...            | ... | ...                   |
| q=0                  | bigHoles>5     | or  | badSlice>100          |
|---------------------------------------------------------------------|
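For illustration only, the following Python sketch shows how the consolidation of missing-packet runs and the quality mapping described above could be computed. It is not the OMEGA level-1B code: the function names (consolidate_runs, data_quality), the input representation (missing packets given as runs of packet indices, degraded slices given as a count), and the treatment of the table rows not recovered above are all assumptions.

# Illustrative sketch only, not the OMEGA pipeline code.
# Thresholds follow the text above: runs separated by fewer than 8
# science packets are joined; a "big hole" exceeds 40 missing packets.

def consolidate_runs(missing_runs, gap_threshold=8):
    """Join consecutive runs of missing packets separated by fewer than
    gap_threshold science packets. Each run is (first_index, last_index)."""
    merged = []
    for start, end in sorted(missing_runs):
        if merged and start - merged[-1][1] - 1 < gap_threshold:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def data_quality(missing_runs, degraded_slices=0, big_hole_size=40):
    """Map consolidated holes and degraded slices to the quality levels
    of the table above (only the rows recoverable from that table)."""
    holes = consolidate_runs(missing_runs)
    sizes = [end - start + 1 for start, end in holes]
    big_holes = sum(1 for s in sizes if s > big_hole_size)
    # badSlice: missing packets outside big holes plus the slices
    # degraded by decompression errors (assumed given as a count here).
    bad_slice = sum(s for s in sizes if s <= big_hole_size) + degraded_slices

    if big_holes > 5 or bad_slice > 100:
        return 0            # poorest quality
    if big_holes == 0 and bad_slice == 0:
        return 5            # perfect
    if big_holes == 1 and bad_slice == 0:
        return 4            # one data gap
    if big_holes == 0 and bad_slice > 0:
        return 3            # missing data
    return None             # intermediate levels not recoverable from the table

# Example: two runs separated by 3 packets merge into one 13-packet hole;
# no big hole, badSlice=13, hence q=3.
print(data_quality([(100, 104), (108, 112)]))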