Introduction
============

This document describes measurements other than brightness that affect the decision of whether a pixel is showing water. These measurements -- solar zenith angle, sensor zenith angle, and pixel/grid-cell overlap area -- can be used to improve confidence that a dim or dark pixel is, in fact, dark due to water and not some other factor. These measurements are also handled differently depending on whether the Product Generation Executable (PGE) is run in the Near Real Time (NRT) system or the Science Production (OPS) system.

Using Solar Zenith Angle
========================

The "solar zenith angle" is the angle between the sun's rays and the vertical direction. It gives an indication of shadow: the larger the solar zenith angle, the nearer the sun is to the horizon, and the longer the shadow cast by any vertical object. When detecting water, it is important to distinguish between water and shadow because they have very similar reflectance characteristics. Both appear very "dark" -- neither reflects much light back to the satellite sensor: water because it absorbs light, and shadow because the light is blocked by vertical features. The result is that shadow can easily be mistaken for water.

The basic idea is to use the solar zenith angle as a filter for deciding how likely it is that a dark pixel is actually water. If the solar zenith angle at a given pixel is large and the pixel is dark, the darkness is more likely due to shadow (especially if there are tall vertical structures like mountains or clouds between the pixel and the sun).

How it Could Work
=================

SOLUTION #1:
------------

Using solar zenith angle as a filter can lead to rejecting all dark pixels at high latitudes if it is applied just once per day by itself. Therefore, we make use of multiple sensor passes from two satellites over each high-latitude pixel, at different times of day (which changes the solar zenith angle as the sun's position changes), to improve the odds that we really are rejecting a shadow pixel instead of a water pixel. We count all the times the pixel was dark while the solar zenith angle was large, and compare that count to the total number of times the pixel was measured. If the pixel was dark at a large solar zenith angle for more than 50% of the (dark?) measurements, then we declare the dark pixel as likely due to shadow. (This counting rule is sketched in code after SOLUTION #2 below.)

OR

SOLUTION #2:
------------

Given a set of measurements from the same pixel, we choose the measurement with the smallest solar zenith angle as representative of that pixel's brightness.
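As a rough illustration of SOLUTION #1, the sketch below assumes each pixel's history is available as a list of (brightness, solar zenith angle) tuples gathered from multiple passes. The function name, the darkness and zenith thresholds, and the choice of denominator are assumptions of the sketch, not part of any PGE.

```python
# Sketch of SOLUTION #1: vote across multiple passes over the same pixel.
# All names and threshold values below are illustrative assumptions.

DARK_THRESHOLD = 0.05   # reflectance below this counts as "dark" (assumed)
LARGE_SZA_DEG = 70.0    # solar zenith angle above this counts as "large" (assumed)

def likely_shadow(observations, dark_thresh=DARK_THRESHOLD, large_sza=LARGE_SZA_DEG):
    """Decide whether a pixel's darkness is likely shadow rather than water.

    observations: list of (brightness, solar_zenith_deg) tuples collected
    from multiple passes of both satellites over the same pixel.
    Returns True if the pixel was dark at a large solar zenith angle in
    more than 50% of the measurements. (The document leaves open whether
    the denominator should be all measurements or only the dark ones;
    this sketch uses all of them.)
    """
    if not observations:
        return False
    dark_at_large_sza = sum(
        1 for brightness, sza in observations
        if brightness < dark_thresh and sza > large_sza
    )
    return dark_at_large_sza / len(observations) > 0.5
```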
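SOLUTION #2 is a simple selection rule; a minimal sketch, reusing the same observation tuples:

```python
def representative_by_min_zenith(observations):
    """SOLUTION #2: pick the measurement taken at the smallest zenith
    angle as the representative brightness for the pixel.

    observations: non-empty list of (brightness, zenith_deg) tuples.
    Because the rule is stated identically for sensor zenith angle later
    in this document, the same function can be reused by passing sensor
    zenith angles instead (an assumption of this sketch).
    """
    brightness, _zenith = min(observations, key=lambda obs: obs[1])
    return brightness
```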
Using Sensor Zenith Angle
=========================

The "sensor zenith angle" is the angle between the vertical direction and a ray of light traveling from earth to the satellite's sensor. It gives an indication of light interference: the larger the sensor zenith angle, the farther the ray's origin is from nadir (the point directly beneath the satellite), the farther the photons must travel through the atmosphere to reach the sensor, and the more likely they are to be interfered with (absorbed or deflected) by atmospheric effects such as clouds. Also, within a single scan, each pixel covers a different amount of ground due to its sensor zenith angle: pixels at the edges of a scan cover more area on earth than pixels directly beneath the satellite. As a result, when these pixels are mapped onto a rectangular grid, pixels at the edges of scans cover more grid cells than pixels beneath the satellite, causing cells at the edges of a scan to appear dimmer than those near the center of the scan, since the same photons are spread over more area.

Similar to solar zenith angle, a large sensor zenith angle can be used to filter out dark pixels as more likely due to atmospheric disturbance or measurement dimness than to water.

How it Could Work
=================

SOLUTION #1:
------------

Using sensor zenith angle as a filter can lead to rejecting all dark pixels at the edges of scans if it is applied just once per day by itself. Therefore, we make use of multiple sensor passes from two satellites over each pixel, at different times of day (which changes the sensor zenith angle as the satellite's position changes), to improve the odds that we really are rejecting a pixel that is dark due to interference instead of a water pixel. We count all the times the pixel was dark while the sensor zenith angle was large, and compare that count to the total number of times the pixel was measured. If the pixel was dark at a large sensor zenith angle for more than 50% of the (dark?) measurements, then we declare the dark pixel as likely due to atmospheric interference.

OR

SOLUTION #2:
------------

Given a set of measurements from the same pixel, we choose the measurement with the smallest sensor zenith angle as representative of that pixel's brightness. (Since these two solutions mirror the solar zenith angle solutions, the sketches above apply unchanged with sensor zenith angles substituted for solar zenith angles.)

Using Pixel/Grid-Cell Overlap Area
==================================

When the sensor's variably sized, roughly oval pixel footprints are mapped onto a fixed rectangular grid representing the earth, they overlap the grid cells by different amounts. It stands to reason that if two pixels cover the same grid cell by different amounts, the one that covers more of the cell should have a larger effect on any decision based on the cell's measured value than the one that covers less. Knowing this, we can use the area of the pixel overlapping the grid cell to give pixels that contribute little to a cell's brightness less weight in the decision to declare the grid cell as water. This weighting needs to be handled carefully, though, because dim pixels at the edge of scans are more likely to completely cover a grid cell (due to their much larger size in general) than pixels near the center of the scan, so sensor zenith angle also needs to be considered when deciding the weight of a pixel for a given grid cell.

How it Could Work
=================

SOLUTION #1
-----------

For any pixel/grid-cell mapping, multiply the pixel's measured value by the fraction of the grid cell it covers. Then apply one of the sensor zenith angle solutions to the results in order to choose the value that will represent that grid cell. (This is sketched in code after SOLUTION #2 below.)

OR

SOLUTION #2
-----------

For any pixel/grid-cell mapping, multiply the pixel's measured value by the fraction of the grid cell it covers, and multiply that by 1/N * cos(sensor zenith angle), where N is the number of pixels covering that grid cell; this gives more weight to pixels near the center of the scan. Then add all the weighted values together to produce the value that will represent that grid cell. This solution would be used instead of the sensor zenith angle solutions proposed above.
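A minimal sketch of overlap SOLUTION #1, assuming each grid cell's contributors arrive as (value, coverage fraction, sensor zenith angle) records, and assuming the "smallest sensor zenith angle" rule (sensor zenith SOLUTION #2) is the one applied to the coverage-scaled values; the record layout and names are assumptions of the sketch.

```python
def cell_value_coverage_then_zenith(records):
    """Overlap SOLUTION #1: scale each pixel's value by the fraction of
    the grid cell it covers, then keep the scaled value from the pixel
    with the smallest sensor zenith angle.

    records: non-empty list of (value, coverage_fraction, sensor_zenith_deg),
    where coverage_fraction is in [0, 1].
    """
    scaled = [(value * coverage, zenith) for value, coverage, zenith in records]
    best_value, _zenith = min(scaled, key=lambda rec: rec[1])
    return best_value
```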
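Overlap SOLUTION #2 is a weighted sum, cell_value = sum over i of value_i * coverage_i * (1/N) * cos(zenith_i); a sketch under the same assumed record layout:

```python
import math

def cell_value_weighted_sum(records):
    """Overlap SOLUTION #2: weighted sum over all N pixels covering the cell,

        cell_value = sum_i value_i * coverage_i * (1/N) * cos(sensor_zenith_i)

    Near-nadir pixels (cos near 1) count more than edge-of-scan pixels
    (cos near 0). records: non-empty list of
    (value, coverage_fraction, sensor_zenith_deg).
    """
    n = len(records)
    return sum(
        value * coverage * math.cos(math.radians(zenith)) / n
        for value, coverage, zenith in records
    )
```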
Differences Between NRT and OPS Processing
==========================================

On all MODAPS systems, the system receives a package, or "granule", of pixels from each satellite at regularly spaced time intervals. The pixels vary in size and distance from the satellite according to their sensor zenith angles, as discussed above. The system runs a set of Product Generation Executables (PGEs) that process each pixel, converting it to measurements of various types and mapping those measurements onto equally spaced rectangular grids for ease of visualization and calculation. However, the way these PGEs do their calculations can be influenced by the type of system where the PGE is run. Near Real Time (NRT) systems have a goal of performing the calculations as near to real time as is reasonable, while Science Production (OPS) systems have a goal of performing the calculations as accurately as is reasonable. These different goals lead to different rules that decide how and when pixels get processed.

On NRT systems, all the flood PGEs run on each granule as it is received from the satellite. PGE152 calculates the likelihood that each granule pixel is water, then sends these calculations and a quality flag to PGE155, which maps each water pixel and quality-flag pixel onto a cartesian (longitude/latitude) coordinate grid. PGE155 also creates "pointer" files that record the count and size of each granule pixel that overlaps each grid cell, based on sensor zenith angle and pixel area. PGE155 also includes the solar and sensor zenith angles for each pixel at each grid cell so that these are available for later processing if needed. The result is then passed to PGE159, which decides whether to classify water pixels as flood. Since PGE155 and PGE159 are dealing with only one granule at a time, the number of calculations per pixel is small because not many pixels overlap each grid cell. PGE159 then creates as its output "update" files that record an aggregate of the flood pixels from the current granule and all previous granules over the past two days that overlap that part of the grid. The aggregate in the "update" file is updated each time new pixel information comes in, until the end of the day, when it is declared the final output for the day and no more updates happen.

OPS systems, however, wait to receive all granules for the day before beginning to process that day. PGE152 still processes each granule individually, but it processes every granule for the day before it sends its results on to PGE155. PGE155 now has multiple granules (from different orbits), and therefore multiple pixels, that can overlap the same set of grid cells. PGE155 still does the same thing, mapping pixels to grid cells, but the pointer files and layers can contain many more pixels per grid cell than in NRT processing. These outputs are then passed to PGE159, which must evaluate each pixel in each layer to make its final classification. It does not create "update" files because it does not need to save the aggregated pixels for later.

So the main difference between these two scenarios is how aggregation takes place. In the NRT scenario, aggregation happens a little at a time, as each granule is received, using the PGE155 outputs and the PGE159 update file. In the OPS scenario, aggregation happens all at once, using only the multiple pixel layers in the PGE155 outputs. In each case, NRT and OPS, the same information is (or can be made) available to PGE159, but the processing differs due to the way aggregation is handled.
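To make the aggregation difference concrete, the sketch below contrasts NRT-style incremental folding with OPS-style end-of-day batch processing. The per-cell running counts are a stand-in for whatever the real pointer and update files record; the class and function names are assumptions of the sketch.

```python
# Sketch contrasting NRT-style incremental aggregation with OPS-style
# batch aggregation. A "granule" here is an iterable of
# (cell_id, is_flood) classifications; this is an assumed representation.

class NrtAggregator:
    """NRT: fold each granule into a running aggregate as it arrives,
    analogous to PGE159 updating its "update" files per granule."""

    def __init__(self):
        self.flood_counts = {}  # grid cell -> flood detections so far
        self.obs_counts = {}    # grid cell -> total observations so far

    def add_granule(self, granule):
        # Update the running per-cell counts with one granule's pixels.
        for cell, is_flood in granule:
            self.obs_counts[cell] = self.obs_counts.get(cell, 0) + 1
            if is_flood:
                self.flood_counts[cell] = self.flood_counts.get(cell, 0) + 1

def ops_aggregate(granules):
    """OPS: wait for all granules for the day, then aggregate in one pass,
    analogous to PGE159 evaluating every pixel layer at once."""
    agg = NrtAggregator()
    for granule in granules:
        agg.add_granule(granule)
    return agg.flood_counts, agg.obs_counts
```

Both paths arrive at the same per-cell totals; only the timing of the folding differs, which matches the point of the final paragraph above.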