# EXTENDED RAIM (ERAIM) ION-GPS 1992

Almost two decades ago an idea struck me: the way GPS measurements are validated could be changed in a straightforward way, providing more information without affecting the navigation solutions at all.

First, some background: The Receiver Autonomous Integrity Monitoring (RAIM) concept began with a detection scheme whereby an extra satellite (five instead of four, to solve for three spatial dimensions plus time) enables consistency checks.  Five different navigation solutions are available, each obtained with one of the satellites omitted.  A wide dispersion among those solutions indicates an error somewhere, without identifying which of the five has a problem.  Alternatively, a single solution formed with one satellite omitted can be used to calculate what the unused measurement should theoretically be; comparison with the observed value then provides a basis for detection.
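That prediction-and-comparison check can be sketched in a few lines of NumPy.  Everything numerical here is hypothetical — the line-of-sight vectors, the state perturbation, and the noise level are stand-ins chosen for illustration, not real satellite geometry:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical unit line-of-sight vectors to five satellites (illustrative only)
los = np.array([[ 0.3,  0.4, 0.866],
                [-0.5,  0.2, 0.843],
                [ 0.1, -0.7, 0.707],
                [ 0.6,  0.1, 0.794],
                [-0.2, -0.3, 0.933]])
H = np.hstack([los, np.ones((5, 1))])   # 5x4 geometry matrix (position + clock)

x_true = np.array([3.0, -1.0, 2.0, 0.5])           # true state perturbation
z = H @ x_true + 0.001 * rng.standard_normal(5)    # measured residuals, small noise

# Solve with the fifth satellite omitted, then predict its measurement
x_hat, *_ = np.linalg.lstsq(H[:4], z[:4], rcond=None)
predicted = H[4] @ x_hat

print(abs(z[4] - predicted))   # small when all five measurements are consistent
```

A bias injected into any one measurement inflates the discrepancy, which is the basis for detection — though, as the text notes, not yet for identifying which satellite is at fault.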

A choice then emerged between two candidate detection methods: chi-squared and parity.  The latter produced five equations in four unknowns, expressed with a 5×4 matrix rewritten as the product of two matrices (one orthogonal and the other upper triangular).  That subdivision separated the navigation solution from a scalar containing the information needed to assess the degree of inconsistency.  Systematic testing of that scalar according to probability theory was quickly developed and extended to add a sixth satellite, enabling selection of which satellite to leave out of the navigation solution.
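The orthogonal/upper-triangular subdivision is just a QR factorization; a minimal sketch follows, with a random 5×4 matrix standing in for the geometry matrix (an assumption for illustration only).  The fifth column of the orthogonal factor is orthogonal to the whole measurement model, so projecting the measurements onto it yields the parity scalar:

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((5, 4))            # stand-in for a 5x4 geometry matrix
Q, R = np.linalg.qr(H, mode='complete')    # Q orthogonal (5x5), R upper triangular

x_true = rng.standard_normal(4)
z = H @ x_true                              # noise-free, fully consistent measurements

# The first four rows of Q.T @ z drive the navigation solution;
# the fifth row is the parity scalar used for the consistency test.
p = Q[:, 4] @ z
print(abs(p))                               # ~0: consistent data leave no parity residue

z_faulty = z.copy()
z_faulty[2] += 3.0                          # inject a bias on one measurement
print(abs(Q[:, 4] @ z_faulty))              # parity scalar now exposes the inconsistency
```

With measurement noise present, the parity scalar becomes a zero-mean random variable under fault-free conditions, which is what makes systematic probabilistic testing of it tractable.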

The modified strategy I devised can now be described.  First, instead of a 5×4 matrix for detection, let each solution contain five unknowns – the usual four plus one measurement bias.  Five solutions are obtained, with each satellite taking its turn as the suspect.  My 1992 manuscript (publication #53) shows why the resulting navigation solutions are identical to those produced by the original RAIM approach (i.e., with each satellite taking its turn “sitting out this dance”).
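The equivalence can be verified numerically in a short sketch.  Augmenting the 5×4 matrix with a unit column for the suspect satellite makes the system square; the four navigation unknowns then come out identical to the subset solution with that satellite omitted, while the fifth unknown is the estimated bias.  The matrix and measurements below are random stand-ins, not real data:

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((5, 4))          # stand-in 5x4 geometry matrix
z = rng.standard_normal(5)               # stand-in measurements

i = 2                                    # satellite taking its turn as the suspect
e = np.zeros(5); e[i] = 1.0
H_aug = np.hstack([H, e[:, None]])       # 5x5: four nav unknowns plus one bias

x_aug = np.linalg.solve(H_aug, z)        # augmented solution (nav states + bias)
x_sub, *_ = np.linalg.lstsq(np.delete(H, i, axis=0),
                            np.delete(z, i), rcond=None)

print(np.allclose(x_aug[:4], x_sub))     # True: nav states match the subset solution
```

The reason is visible in the structure: the four non-suspect rows of the augmented system involve only the navigation unknowns, so they reproduce the subset solution exactly, and the suspect row then serves only to determine the bias estimate.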

The real power of this strategy comes in expansion to six satellites.  A set of 6×5 matrices results, and the subdivision into orthogonal and upper-triangular factors now produces six parity scalars rather than 2×1 parity vectors (everyone understands a zero-mean random scalar).  Not only are estimates obtained for measurement biases, but the interpretation is crystal clear: with one biased measurement, every parity scalar has nonzero mean except the one with the faulty satellite in the suspect role.  Again all navigation solutions match those produced by the original RAIM method, and fully developed probability calculations are immediately applicable.  Content of the 1992 manuscript was then included, with further adaptation to the common practice of using measurement differences for error cancellation, in Chapter 6 of GNSS Aided Navigation and Tracking.  Additional extensions include rigorous adaptation to each individual observation independent of all others (“single-measurement RAIM”) and to multiple flawed satellites.
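The six-satellite identification property can likewise be demonstrated in a small numerical sketch.  For each suspect choice, a 6×5 augmented matrix is factored and the last column of the orthogonal factor projects the measurements onto a parity scalar.  The geometry matrix, fault index, and bias magnitude below are hypothetical, and noise is omitted so the means are visible directly:

```python
import numpy as np

rng = np.random.default_rng(4)
H = rng.standard_normal((6, 4))           # stand-in 6x4 geometry matrix
x_true = rng.standard_normal(4)
k = 3                                     # the actually-faulty satellite
z = H @ x_true
z[k] += 2.0                               # one biased measurement, no other noise

scalars = []
for i in range(6):                        # each satellite takes its turn as suspect
    e = np.zeros((6, 1)); e[i] = 1.0
    Q, _ = np.linalg.qr(np.hstack([H, e]), mode='complete')
    scalars.append(Q[:, 5] @ z)           # parity scalar with satellite i suspect

# Only the scalar with the faulty satellite in the suspect role vanishes
print([round(abs(s), 6) for s in scalars])
```

The mechanism: each parity direction is orthogonal to its own suspect's unit column, so a bias on the suspect satellite cannot reach that scalar — exactly the "every parity scalar has nonzero mean except one" signature described above.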

# FUSION: MORE THAN MULTISENSOR INTEGRATION

In the early 1990s I recalled, in a manuscript (publication #49), advocacy from years earlier that probably originated within USAF.  A sharp distinction was to be drawn between “multisensor integration” for low-speed, low-volume processing versus “sensor fusion” for high-speed, high-volume processing.  Unfortunately, the separate terminology never survived; the industry uses the same vocabulary for nav update “fusion” at leisurely rates and for fusion of images (e.g., megapixels per frame with 3-byte RGB pixels at 30 frames/sec) requiring speeds expressed in GHz.
Terminology aside, a major task for the imaging field is to recognize and categorize the degradations.  Combining tracks from different sensors is a major undertaking.  For obvious reasons (including, but by no means limited to, inertial nav error propagation and time-varying errors in imaging sensors), complexity is compounded by motion.  Here I’ll step back and consider that from a perspective unlike concepts in common acceptance.  For brevity I’ll just cite some major ingredients:

• Inertial nav is used to provide a connection between frames taken in succession from sensors in motion.  It needs to be recognized that INS error propagation for this short-term operation cannot correctly be based on antiquated nmi/hr Schuler modeling.  There are literally dozens of gyro and accelerometer error contributors, most of which are motion-sensitive and not well covered by (in fact, often even excluded from) IMU specifications.  Overbounding is thus necessary for conservative design, which in effect either compromises performance or increases demands for data rates.  For further illustration see Section 4.B, pages 57-65, of GNSS Aided Navigation and Tracking.
• Often not only the sensors but also the tracked objects are in motion; masking of their signatures, already potentially an issue even when they are stationary, is further aggravated when they maneuver to thwart observation.
• It is very important not to attribute sensor stabilization errors to any tracked object’s estimated state (surprisingly many operational designs violate that principle).  Figure 9.3 on page 200 of GNSS Aided Navigation and Tracking shows a planar example for insight.
• Procedures for in-flight calibration, self-calibration, online temperature compensation, etc. are invoked.  All measurement errors are then redefined to include only the residual amounts due to imperfect calibration.
• One caveat: any large discrete corrections should occur suddenly between — not within — frames (e.g., a SAR coherent integration time).
• The association problem, producing hypothetical tracks (e.g., due to crossing paths), defies perfect solution.  Thus a sensor response from object ‘A’ might be combined with a subsequent response from object ‘B’ to produce an extraneous track characterizing neither.  Obviously this becomes more unwieldy with increasing density of objects responding to sensors.
• Ironically, many of the objects that complicate the association task are of little or no interest.  Rocks reflect radar transmissions.  Animals respond to IR sensors.  Metallic objects respond to both, which raises an opportunity for concentration on metallic objects: accept information only from pixels with both radar and IR responses.  Tracks formed after that will be far fewer and much more credible.
• Registration of image data from IR (Az/EL) and SAR (range/doppler) cells must account for big differences in pixel size, shape, and orientation.  Although by no means trivial, in principle it can be done.
• Even if all algorithm development and processing implementation issues are solved, unknown terrain slopes will degrade the results.  Also, undulations (as well as any structures present) in a swath will produce data gaps due to masking.  How long a gap is tolerable before dropping an old track and reallocating its resources for a new one will be another design decision.
• Imaging transformation (e.g., 4×4 affine group and/or thin plate spline) applicability will depend on operational specifics.
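The radar-and-IR gating idea above (accept information only from pixels with both responses) reduces, on a co-registered grid, to a boolean AND of two detection maps.  The grid size and detection probabilities below are hypothetical, chosen only to show how sharply the joint gate thins the candidate set:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical detection maps over a common registered grid (True = response)
radar = rng.random((64, 64)) < 0.10      # rocks, metal, etc. trip the radar
ir    = rng.random((64, 64)) < 0.10      # animals, metal, etc. trip the IR

both = radar & ir                        # keep only pixels with both responses
print(radar.sum(), ir.sum(), both.sum()) # joint detections are far fewer
```

With independent clutter, the joint response rate is roughly the product of the individual rates, so the tracks formed downstream are both fewer and more credible — the concentration-on-metallic-objects argument in numerical form.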
When I was more heavily involved in this area the processing requirements for image fusion while still in raster form would have been prohibitive.  Today that may no longer be true.