The number of runway incursions, as reported at an FAA URL, was nearly a thousand in FY 2011 and 1,150 in FY 2012.  A subsequent article shows renewed interest in their prevention.

A hundredfold reduction in velocity error (from meters/sec to cm/sec) was demonstrated in flight for squitter message transmission, using measurement-based message content as discussed in an accompanying blog.  A publication describing highly favorable results in the air (3-D) could readily extend to the surface (2-D).

A 1996 crash that killed U.S. Commerce Secretary Ron Brown drew attention to a problem that has caused thousands of airline fatalities.  Controlled flight into terrain (CFIT) results from an autopilot driven by erroneous information regarding an aircraft’s flight path relative to its surroundings.  This writer narrowly escaped death in early January 1981 when an errant foreign airliner very nearly collided with the World Trade Center (that time it would have been accidental); an alert air traffic controller issued a turn directive just in time.  A highly informative IEEE-AES Systems Journal article by Swihart et al. —
“Automatic Ground Collision Avoidance System Design, Integration, and Test” (May 2011, pp. 4-11)
addresses CFIT while envisioning, near the end, future extension to unmanned aircraft.  The authors correctly describe the effort as the beginning of a long-awaited development with a huge payoff in lives to be saved and, secondarily, in vehicles not destroyed.  As proof of my full concurrence — both with the intent and with the “long-awaited” characterization — I cite the following:

* a “GPS for Collision Avoidance” seminar I prepared in 2000 (hardly anyone attended — no funding, no interest — but safety shouldn’t take a back seat to economics).
* two coauthored papers (ICNS 2009 and ION-GNSS-2011) resulting from recent low-level support to Ohio Univ. by NASA.


It remains true to this day: much more needs to be done.  Without a significant increase in development effort, life will become increasingly hazardous; both heavier traffic and unmanned aircraft will contribute to the danger.


Tracking acceleration dynamics by GNSS, radar, imaging

My 2007 book on GPS and GNSS (GNSS Aided Navigation & Tracking), as its title implies, involves both navigation and tracking. This discussion describes the latter, covered in the longest chapter of the book (Chapter 9).  In addition to the flight-validated algorithms for navigation (processing of inertial sensor data, integration with GPS/GNSS, integrity, etc.), this text offers extensive coverage of tracking. Formulations are given for a variety of modes, in 2-D (e.g., for runway incursion prevention or ships) and 3-D (in-air), using GPS/GNSS and/or other sensors (e.g., radar, optical).  Position and velocity vectors are formed, in some operations joined by some or all components of acceleration.

This author was fortunate to be “at-the-right-places at-the-right-times” when a need arose to address each of the topics covered.  As a result, the words of one reviewer — that the book is

“teeming with insights that are hard to find or unavailable elsewhere”

apply to tracking as well as to navigation.  The book identifies subtleties that arise in specific applications (aircraft, ships, land vehicles, satellites, long-range or short-range projectiles, reentry vehicles, missiles, …).  In combination with a variety of possible conditions affecting sensor suite and location (air-to-air; air-to-ground; air-to-sea surface; surface-to-air, etc. — with measurements associated with distance or direction or both; shared or not shared among participants who may communicate from different positions), it is not surprising that striking contrasts can arise, influencing the characterization and approaches used.  The array of formulations offered, while fully accounting for marked differences among operations, nevertheless exploits an underlying commonality to the maximum possible extent.

Tracking dynamics of aircraft, missiles, ships, satellites, projectiles, …

Formulations described in Chapter 9 were used for tracking both aircraft and missiles concurrently with an agile-beam radar.  For another example, air-to-surface operations subdivide into air-to-ground tracking and vessel tracking from the air.  The latter case constrains tracked objects’ altitudes to mean sea level — a substantial benefit, since it obviates the need for elevation measurements, which are subject to large errors from refraction (bearing and range measurements, much less severely degraded, suffice).  Air-to-ground tracking, by contrast, further subdivides into stationary and moving targets; the former potentially involves imaging (by real or synthetic aperture) while the latter, if not being imaged by inverse SAR, separates its signature from clutter via Doppler.
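As a sketch of how the sea-level constraint substitutes for an error-prone elevation measurement, the snippet below recovers a vessel's horizontal position from slant range and bearing alone. The flat-earth geometry, function name, and numbers are illustrative assumptions, not the book's formulation.

```python
import numpy as np

def sea_surface_fix(ac_ne, ac_alt, slant_range, bearing_rad):
    """Locate a sea-surface target (altitude ~ 0) from slant range and bearing
    measured at an aircraft, with no elevation measurement required.
    Flat-earth sketch; ac_ne = aircraft (north, east) position in meters."""
    # The altitude constraint replaces the refraction-degraded elevation angle
    ground_range = np.sqrt(slant_range**2 - ac_alt**2)
    north = ac_ne[0] + ground_range * np.cos(bearing_rad)
    east = ac_ne[1] + ground_range * np.sin(bearing_rad)
    return np.array([north, east])

# Aircraft at 3000 m altitude; target on the surface, due north at 5000 m slant range
fix = sea_surface_fix(np.array([0.0, 0.0]), 3000.0, 5000.0, 0.0)
print(fix)  # ground range = 4000 m, so the fix lands at (north, east) = (4000, 0)
```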

Reentry vehicles, quite different from other track operations, present a unique set of “do’s” and “don’ts” owing to high-precision range measurements combined with much larger cross-range errors (proportional to the extreme distances involved).  Pitfalls from uncertain axial direction of “pancake”-shaped one-sigma error ellipsoids must be avoided.  A counterexample, having angle observations only (without distance measurements), is also addressed.  Orbit determination is unique in still another way, often permitting “patched-conic” modeling for its dynamics.  A program based on Lambert’s theorem provides initial trajectories from two position vectors and the time interval separating them.

Those operations and more are addressed with most observations from radar or other (e.g., infrared imaging) sensors rather than satellite measurements.  That of course applies to tracked objects carrying no squitters. Friendlies tracking one another, however, open the door for using GNSS data.  Those subjects plus numerous supporting functions are discussed at some length in Chapter 9.  Despite very different dynamics applicable to various operations, the underlying commonality (Chapter 2) connects the error propagation traits in their estimation algorithms and also — though widely unrecognized — short-term INS error propagation under cruise conditions (Chapters 2 and 5).  Support operations such as synthetic aperture radar (SAR) and transfer alignment are described in the chapter Addendum.

Book on GPS and GNSS


Check out a preview of “GNSS Aided Navigation & Tracking” (click here)

GNSS Aided Navigation & Tracking

– Inertially Augmented or Autonomous
By James L. Farrell
American Literary Press. 2007. Hardcover. 280 pages
ISBN-13: 978-1-56167-979-9

This text offers concise guidance on integrating inertial sensors with receivers for GPS and its international counterpart, the Global Navigation Satellite System (GNSS), plus other aiding sources. Primary focus is on low-cost inertial measurement units (IMUs) with frequent updates, but other functions (e.g., tracking in numerous modes) and sensors (e.g., radar) are also addressed.

Price: $100.00 plus shipping
(Sales Tax for Maryland residents only)
Click here to view purchase information

Dr. Farrell has many decades of experience in this subject area; in the words of one reviewer, the book is “teeming with insights that are hard to find or unavailable elsewhere.”

An engineer and former university instructor, Farrell has made a number of contributions to multiple facets of  navigation.  He is also the author of Integrated Aircraft Navigation (1976; five hard cover printings; now in paperback) plus over eighty journal or conference manuscripts and various columns.

Frequent aiding-source updates, in applications that require precise velocity rather than extreme precision in position, enable the integration to be simplified. All aspects of integration are covered, from raw measurement pre-processing to final 3-D position/velocity/attitude, with far more thorough backup and integrity provisions.  Extensive experimental results illustrate the attainable accuracies (cm/sec RMS velocity in three dimensions) during flight under extreme vibration.

The book on GPS and GNSS provides several flight-validated formulations and algorithms not currently in use because of their originality. Considerable opportunity is therefore offered in multiple areas including
* full use of highly intermittent ambiguous carrier phase
* rigorous integrity for separate SVs
* unprecedented robustness and situation awareness
* high performance from low cost IMUs
* “cookbook” steps
* new interoperability features
* new insights for easier implementation.

Discussion of these traits can be seen in the excerpt (over 100 pages) from the  link at the top of this page.

Schuler cycles distorted — Here’s why

A 1999 publication I coauthored took dead aim at a characteristic that received far too little attention — and still continues to be widely overlooked: mechanical mounting misalignment of inertial instruments.  To make the point as clearly as possible I focused exclusively on gyro misalignment — e.g., the sensitive axes of the roll, pitch, and yaw gyros aren’t quite perpendicular to one another.  It was easily shown that the effect in free-inertial coast (i.e., with no updates from GPS or other navaids) was serious, even if no other errors existed.

It’s important here to discuss why the message took so long to penetrate.  The main reason is historic: inertial navigation originated in the form of a gimbaled platform holding the gyros and accelerometers in a stable orientation.  When the vehicle carrying that assembly rotated, the gimbal servos would automatically receive commands from the gyros, keeping the platform oriented along its reference directions (e.g., North/East/vertical for moderate latitudes).  Since the angular rates experienced by the inertial instruments were low, gyro misalignment and scale factor errors were much more tolerable than they are with today’s strapdown systems.  I’ve been calling that the “Achilles’ heel” of strapdown for decades now; the roots go all the way back to 1966 (publication #6), when simulation clearly showed how serious it is.

Not long thereafter another necessary departure from convention became quite clear: replacement of the omnipresent nmi/hr performance criterion for numerous operations.  That criterion averages over the (approximately 84-minute) Schuler period, making it practically irrelevant for a large and growing number of applications that depend on short-term accuracy (e.g., synthetic aperture radar (SAR), inertial aiding of track loops, antenna stabilization, etc.).  Early assertions of that reality (publication #26, and mention of it in still earlier reports and publications involving SAR) were essentially lost in “that giant shouting match out there” until some realization crept in after publication #38.

Misalignment: mechanical mounting imprecision

Whenever this topic is discussed, certain points must be put to rest.  The first concerns terminology; much of the pertinent literature uses the word misalignments to describe small-angle directional uncertainty components (e.g., error in perception of downward and North, which drive errors in velocity).  To avoid misinterpretation I refer to nav-axis direction uncertainty as misorientation.  In the presence of rotations, mounting misalignment contributes to misorientation.  Those effects, taking place promptly upon rotation of the strapdown inertial instrument assembly, stand in marked contrast to the leisurely (nominal 84-minute) classical Schuler dynamics.

The second point, lab calibration, is instantly resolved by redefining each error as a residual amount remaining due to calibration imperfections plus post-cal aging and thermal effects — that amount is still (1) excessive in many cases, and (2) in any event, not covered by firm spec commitments.

A third point involves error propagation and a different kind of calibration (in-flight).  With the old (gimbal) mechanization, in-flight calibration could counteract much overall gyro drift effect.  Glib assessments in the 1990s promoted widespread belief that the same would likewise be true for  strapdown.  Changing that perspective motivated the investigation and publication mentioned at the top of this blog.

In that publication it was shown that, although the small-angle approximation is conservative for large changes in direction, it is not extremely so.  The last equation of its Appendix A shows a factor of (pi/2) for a 180-deg turn.  A more thorough discussion of that issue, and how it demands attentiveness to short-lived angular rates, appears on pages 98-99 of GNSS Aided Navigation and Tracking.  Appendix II on pages 239-258 of that same book also provides a program, with further supporting analysis, that supersedes the publication mentioned at the top of this blog.  That program can be downloaded from here.
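A minimal numerical sketch of that factor follows. This is my reconstruction of the geometry, not the Appendix A derivation itself: a mounting misalignment produces a drift direction that is fixed in the body frame but rotates in the nav frame during a turn, so contributions partially cancel; the fixed-direction small-angle approximation misses that cancellation. The misalignment magnitude is invented for illustration.

```python
import numpy as np

eps = 1e-4        # 20-arc-sec-class mounting misalignment, radians (illustrative)
theta = np.pi     # 180-degree flat turn about the vertical

# Exact nav-frame tilt accumulated over the turn: the body-fixed error
# direction rotates in the nav frame, giving the closed-form integral
# eps * (sin(theta), 1 - cos(theta)), whose magnitude is 2*eps*sin(theta/2)
exact = eps * np.hypot(np.sin(theta), 1.0 - np.cos(theta))   # 2*eps at 180 deg

# Fixed-direction approximation: the error rate treated as pointing one way
# for the whole turn, so the tilt simply scales with the turn angle
approx = eps * theta                                          # pi*eps at 180 deg

print(approx / exact)   # ratio pi/2 ~ 1.571: conservative, but not extremely so
```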

The final point concerns the statistical distribution of errors.  Especially with safety involved (e.g., trusting free-inertial coast error propagation), it is clearly not enough to specify RMS errors.  For example, 2 arc-sec is better than 20, but what are the statistics?  Furthermore, there is nothing to preclude unexpected extension of free-inertial coast duration after a missed approach followed by a large change in direction.  A recent coauthored investigation (Farrell and van Graas, ION-GNSS-2010 Proceedings) applies Extreme Value Theory (EVT) to outliers, showing unacceptably high incidences of large multiples (e.g., ten-sigma and beyond).  To substantiate that, there’s room here for an abbreviated explanation: even in linear systems, Gaussian inputs produce Gaussian outputs only under very restrictive conditions.

A more complete assessment of misalignment accounts for further imperfections in mounting: the sensitive axis of each accelerometer deviates from that of its corresponding gyro.  As explained on page 72 of Integrated Aircraft Navigation, an IMU with a gyro-accelerometer combo for each of three nominally orthogonal directions has nine total misalignment components for instruments relative to each other.

GPS Carrier Phase for Dynamics?

The practice of dead reckoning (a figurative phrase of uncertain origin) is five centuries old.   In its original form, incremental excursions were plotted on a mariner’s chart using dividers for distances, with directions obtained via compass (with corrections for magnetic variation and deviation). Those steps, based on perceived velocity over known time intervals, were accumulated until a correction became available (e.g., from a landmark or a star sighting).

Modern technology has produced more accurate means of dead reckoning, such as Doppler radar or inertial navigation systems.   Addressed here is an alternative means of dead reckoning, by exploiting sequential changes in highly accurate carrier phase. The method, successfully validated in flight with GPS, easily lends itself to operation with satellites from other GNSS constellations (GALILEO, GLONASS, etc.).  That interoperability is now one of the features attracting increased attention; sequential changes in carrier phase are far easier to mix than the phases themselves, and measurements formed that way are insensitive to ephemeris errors (even with satellite mislocation,  changes in satellite position are precise).

Even with usage of only one constellation (i.e., GPS for the flight test results reported here), changes in carrier phase over 1-second intervals provided important benefits. Advantages to be described now will be explained in terms of limitations in the way carrier phase information is used conventionally.   Phase measurements are normally expressed as a product of the L-band wavelength multiplied by a sum in the form (integer + fraction) wherein the fraction is precisely measured while the large integer must be determined. When that integer is known exactly the result is of course extremely accurate.  Even the most ingenious methods of integer extraction, however, occasionally produce a highly inaccurate result.   The outcome can be catastrophic and there can be an unacceptably long delay before correction is possible.   Elimination of that possibility provided strong motivation for the scheme described here.

Linear phase facilitates streaming velocity with GNSS interoperability

With formation of 1-sec changes, all carrier phases can be forever ambiguous, i.e., the integers can remain unknown; they cancel in forming the sequential differences. Furthermore, discontinuities can be tolerated; a reappearing signal is instantly acceptable as soon as two successive carrier phases differ by an amount satisfying the single-measurement RAIM test.   The technique is especially effective with receivers using FFT-based processing, which provides unconditional access, with no phase distortion, to all correlation cells (rather than a limited subset offered by a track loop).
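The integer cancellation can be shown in a few lines. All numbers below are invented for illustration; only the L1 wavelength is real.

```python
import numpy as np

wavelength = 0.1903  # GPS L1 wavelength, meters

# True ranges to one satellite at two epochs one second apart (illustrative)
r0, r1 = 21_456_789.37, 21_456_801.52

# Unknown integer cycle count: constant across epochs while lock is maintained
N = 112_765_430

# The receiver observes only the ambiguous phase; the fraction is precise
# but the large integer N is never determined here
phi0 = r0 / wavelength - N
phi1 = r1 / wavelength - N

# Sequential difference: N cancels, leaving a precise 1-sec delta-range
delta_range = (phi1 - phi0) * wavelength
print(delta_range)   # ~12.15 m, matching r1 - r0 without ever solving for N
```

The same cancellation is what makes a reappearing signal instantly usable: no re-acquisition of the integer is ever needed, only two successive phases.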

Another benefit is subtle but highly significant: acceptability of sub-mask carrier phase changes. Ionospheric and tropospheric timing offsets change very little over a second. Conventional systems are designed to reject measurements from low-elevation satellites. Especially in view of the improved geometric spread, retention here prevents unnecessary loss of important information.   Demonstration of that occurred in flight when a satellite dropped toward the horizon; sub-mask pseudoranges of course had to be rejected, but all of the 1-sec carrier phase changes were perfectly acceptable until the satellite was no longer detectable.

One additional (deeper) topic, requiring much more rigorous analysis, arises from sequential correlations among 1-sec phase change observables. The issue is thoroughly addressed and put to rest in the later sections of the 5th chapter of GNSS Aided Navigation and Tracking.

Dead reckoning capability without an IMU was verified in flight, producing decimeter/sec RMS velocity errors outside of turn transients (Section 8.1.2, pages 154-162 of the book just cited). With a low-cost IMU, accuracy is illustrated in the table near the bottom of a 1-page description on this site (also appearing on page 104 of that book). All 1-sec phase increment residual magnitudes were zero or 1 cm for the seven satellites (six across-SV differences) observed at the time shown. Over almost an hour of flight at altitude (i.e., excluding takeoff, when heading uncertainty caused larger lever-arm vector errors), cm/sec RMS velocity accuracy was obtained.


In January of 2005 I presented a paper “Full Integrity Test for GPS/INS” at ION NTM that later appeared in the Spring 2006 ION Journal.  I’ve adapted the method to operation (1) with and (2) without IMU, obtaining RMS velocity accuracy of a centimeter/sec and a decimeter/sec, respectively, over about an hour in flight (until the flight recorder was full).

Methods I use for processing GPS data include many sharp departures from custom.  Motivation for those departures arose primarily from the need for robustness.  In addition to the common degradations we’ve come to expect (due to various propagation effects, planned and unplanned outages, masking or other forms of obscuration and attenuation), some looming vulnerabilities have become more threatening.  Satellite aging and jamming, for example, have recently attracted increased attention.  One of the means I use to achieve enhanced robustness is acceptance-testing of every GNSS observable, regardless of what other measurements may or may not be available.

Classical (Parkinson-Axelrad) RAIM testing (see, for example, my ERAIM blog’s background discussion) imposes requirements for supporting geometry; a satellite’s measurement is validated only if enough other satellites, with enough geometric spread, enable a sufficiently conclusive test.  For many years that requirement was supported by a wealth of satellites in view, and availability was judged largely by GDOP with its various ramifications (protection limits).  Even with future prospects for a multitude of GNSS satellites, however, it is now widely acknowledged that acceptable geometries cannot be guaranteed.  Recent illustrations of that realization include
* use of subfilters to exploit incomplete data (Young & McGraw, ION Journal, 2003)
* Prof. Brad Parkinson’s observation at the ION-GNSS10 plenary — GNSS should have interoperability to the extent of interchangeability, enabling a fix composed of one satellite from each of four different constellations.

Among my previously noted departures from custom, two steps I’ve introduced  are particularly aimed toward usage of all available measurement data.  One step, dead reckoning via sequential differences in carrier phase, is addressed in another blog on this site.  Described here is a summary of validation for each individual data point — whether a sequential change in carrier phase or a pseudorange — irrespective of presence or absence of any other measurement.

While matrix decompositions were used in its derivation, only simple (in fact, intuitive) computations are needed in operation.  To emphasize that here, I’ll put “the cart before the horse”: readers can see the answer now and optionally omit the subsequent description of how I formed it.  Here’s all you need to do: from basic Kalman filter expressions it is recalled that each scalar residual has a sensitivity vector H and a scalar variance of the form

HPH’ + (measurement error variance)

The ratio of each independent scalar residual to the square root of that variance is used as a normalized dimensionless test statistic.  Every measurement can now be used, each with its individual variance.  This almost looks too good to be true and too simple to be useful, but conformance to rigor is established on pages 121-126 and 133 of GNSS Aided Navigation and Tracking.  What follows is an optional explanation, not needed for operational usage.
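A minimal sketch of that acceptance test, with invented covariance, sensitivity, and threshold values (the three-sigma gate is my illustrative choice, not a spec):

```python
import numpy as np

def accept(residual, H, P, meas_var, threshold=3.0):
    """Single-measurement screening sketch: normalize the scalar residual by
    its predicted standard deviation and compare against a threshold."""
    sigma = np.sqrt(H @ P @ H.T + meas_var)   # sqrt(HPH' + meas. error variance)
    return abs(residual) / sigma <= threshold

P = np.diag([4.0, 4.0, 9.0, 1.0])       # a priori state covariance (illustrative)
H = np.array([0.6, -0.5, 0.62, 1.0])    # unit line-of-sight plus clock sensitivity

print(accept(2.0, H, P, meas_var=1.0))    # modest residual: accepted
print(accept(25.0, H, P, meas_var=1.0))   # outlier: rejected
```

Note that no other satellite appears anywhere in the test: each observable is screened on its own, which is the whole point.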

The key to my single-measurement RAIM approach begins with a fundamental departure from the classical matrix factorization (QR=H) originally proposed for parity.  I’ll note here that, unless all data vector components are independent with equal variance, that original (QR=H) factorization will produce state estimates that won’t agree with Kalman.  Immediately we have all the motivation we need for a better approach.  I use the condition

QR = UH

where U is the inverse square root of the measurement covariance matrix.  At this point we exploit the definition of a priori state estimates as perceived characterizations of actual state immediately before a measurement — thus the perceived error state is by definition a null vector.  That provides a set of N equations in N unknowns to combine with each individual scalar measurement, where N is 4 (for the usual three unknowns in space and one in time) or 3 (when across-satellite differences produce three unknowns in space only).

In either case we have N+1 equations in N unknowns which, after factoring as noted above, enables determination of both the state solution in agreement with Kalman and the parity scalar in full correspondence to the normalized dimensionless test statistic already noted.  All further details pertinent to this development, plus extension to the ERAIM formulation, plus further extension to the correlated observations arising from differential operation, are given in the book cited earlier.  It is rigorously shown therein that this single-measurement RAIM is the final stage of the subfilter approach (Young & McGraw reference, previously cited above), carried to the limit.  A clinching argument: nothing prevents users from having both the classical approach to RAIM and this generalized method.  Nothing has been sacrificed.
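A short numerical sketch of that N+1-equations-in-N-unknowns construction (all values invented): the N a priori equations state the perceived null error state, each row is whitened by the inverse square root of its covariance, and one QR factorization then yields both the Kalman-consistent state update and the parity scalar.

```python
import numpy as np

N = 4                                    # three spatial unknowns plus time
P = np.diag([4.0, 4.0, 9.0, 1.0])        # a priori state covariance (illustrative)
H = np.array([[0.6, -0.5, 0.62, 1.0]])   # one satellite's sensitivity row
R = 1.0                                  # measurement error variance
r = 2.3                                  # scalar measurement residual

# Stack the N null-error-state equations with the single measurement,
# whitening each block by its covariance inverse square root
Lc = np.linalg.cholesky(P)
A = np.vstack([np.linalg.inv(Lc), H / np.sqrt(R)])      # (N+1) x N
b = np.concatenate([np.zeros(N), [r / np.sqrt(R)]])

Q, Rfac = np.linalg.qr(A, mode="complete")
state = np.linalg.solve(Rfac[:N], (Q.T @ b)[:N])        # least-squares update
parity = (Q.T @ b)[N]                                   # single leftover scalar

# The state matches the Kalman update, and the parity scalar magnitude matches
# the normalized test statistic r / sqrt(HPH' + R)
innov_var = float(H @ P @ H.T) + R
K = (P @ H.T).ravel() / innov_var
print(np.allclose(state, K * r))                              # True
print(np.isclose(abs(parity), abs(r) / np.sqrt(innov_var)))   # True
```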

GPS codes are chosen to produce a strong response if and only if a received signal and its anticipated pattern are closely aligned in time. Conventional designs thus use correlators to ascertain that alignment. Mechanization may take various forms (e.g., comparison of early-vs-late time-shifted replicas), but dependence on the correlation is fundamental. There is also the complicating factor of additional coding superimposed for satellite ephemeris and clock information but, again, various methods have long been known for handling both forms of modulation. Tracking of the carrier phase is likewise highly developed, with capability to provide sub-wavelength accuracies.

An alternative approach using FFT computation allows replacement of all correlators and track loops. The Wiener-Khintchine theorem is well over a half-century old (actually closer to a century), but using it in this application has become feasible only recently. To implement it for GPS a receiver input’s FFT is followed with term-by-term multiplication by the FFT of each separate anticipated pattern (again with optional insertion of fractional-millisecond time shifts for further refinement and again with various means of handling the added clock-&-ephemeris modulation). According to Wiener-Khintchine, multiplication in the frequency domain corresponds to convolution in time — so the inverse FFT of the product provides the needed correlation information.
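The procedure in the preceding paragraph can be sketched in a few lines. This is an idealized noiseless, Doppler-free snapshot with an invented ±1 code, not receiver-grade processing with superimposed data modulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A PRN-like pseudorandom +/-1 code, and a received copy delayed by 137 chips
code = rng.choice([-1.0, 1.0], size=1023)
delay = 137
received = np.roll(code, delay)

# Wiener-Khintchine in practice: multiply the FFTs (one conjugated), then
# invert; the result is the circular correlation for every cell at once
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real

print(int(np.argmax(corr)))   # 137: the alignment, with no track loop anywhere
```

Every one of the 1023 cells is available from this single computation, which is exactly the "all cells, unconditionally" property cited below.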

FFT processing instantly yields a number of significant benefits. The correlations are obtained for all cells, not just the limited few that would be seen by a track loop. Furthermore all cell responses are unconditionally available. Also, FFTs are not only unconditionally stable but, as an all-zero filter bank (as opposed to a loop with poles as well as zeros), the FFT provides linear phase in the passband. Expressed alternatively, no distortion in the phase-vs-frequency characteristic means constant group delay over the signal spectrum.

The FFT processing approach adapts equally well with or without IMU integration. With it, the method (called deep integration here) goes significantly beyond ultratight coupling, which was previously regarded as the ultimate achievement. Reasons for deep integration’s superiority are just the traits succinctly noted in the preceding paragraph.

Finally it is acknowledged that this fundamental discussion touches very lightly on receiver configuration, only scratching the surface. Highly recommended are the following sources plus references cited therein:

* A very early analytical development by D. van Nee and A. Coenen,
“New fast GPS code-acquisition technique using FFT,” Electronics Letters, vol. 27, pp. 158–160, January 1991.

* The early pioneering work in mechanization by Prof. Frank van Graas et al.,
“Comparison of two approaches for GNSS receiver algorithms: batch processing and sequential processing considerations,” ION GNSS-2005

* the book by Borre, Akos, Bertelsen, Rinder, and Jensen,
A software-defined GPS and Galileo receiver: A single-frequency approach (2007).

An early comment sent to this site raised a question as to how long I’ve been doing this kind of work.  Yes I’m an old-timer.  Some of my earlier Kalman filter studies are cited in books dating back to the 1970s — e.g., Jazwinski, Stochastic Processes and Filtering Theory, 1970 (page 267); Bryson & Ho, Applied Optimal Control, 1975 (page 374); Spilker, Digital Communication by Satellite, 1977 (page 636).  My first book, published by Academic Press, initially appeared in 1976.

In the early 1960s, not long after Kalman’s ASME breakthrough paper on optimal filtering, I was at work simulating its effectiveness for orbit determination (publication #4).  No formal recognition of EKF existed at that time, but nonlinearities in both dynamics and observables made that course of action an obvious choice.  In 1967 I applied it to attitude determination for my Ph.D. dissertation (publication #9). Shortly thereafter I wrote a program (publication #16) for application to deformations of a satellite so large (end-to-end length taller than the Empire State Building) that its flexural oscillations were too slow to allow decoupling from its rotational motion (publications #10, 11, 12, 14, 15, 27).  Within that same time period I analyzed and simulated strapdown inertial navigation (publications #6, 7, 8).

Early familiarization with Kalman filtering and inertial navigation paid huge dividends during subsequent efforts in other areas.  Those included, at first, doppler nav with a time-shared radar beam (publication #20), synthetic aperture radar (publications #21, 22, 38, 41), synchronization (publication #19), tracking (publications #23, 24, 28, 30, 32, 36, 39, 40, 48, 52, 54, 60, 61, 66, 67, 69), transfer alignment (publications #29, 41, 44), software validation (publications #34, 42), image fusion (publications #43, 49), optimal control (publication #33), plus a few others.  All these efforts made it quite clear to me — there’s much more to all this than sets of equations.

Involvement in all those fields had a side effect of delaying my entry into GPS work; I was a latecomer when the GPS pioneers were already established.  GPS/GNSS is heavily involved, however, in much of my later work (latter half of my publications) — and my work in other areas produced a major benefit:  The experience provided insights which, in the words of one reviewer quoted in the book description (click here) are either hard to find or unavailable anywhere else.  Recognizing opportunities for synergism — many still absent from today’s operational systems — enabled me to cross the line into advocacy (publications #26, 47, 55, 63, 66, 68, 73, 74, 77, 83, 84, 85, 86).  Innovations present in GNSS Aided Navigation and Tracking were either traceable to or enhanced by my earlier familiarization with techniques used in other areas.

Almost two decades ago an idea struck me: the way GPS measurements are validated could be changed in a straightforward way, to provide more information without affecting the navigation solutions at all.

First, some background: the Receiver Autonomous Integrity Monitoring (RAIM) concept began with a detection scheme whereby an extra satellite (five instead of four, for the three spatial unknowns plus time) enables consistency checks.  Five different navigation solutions are available, each obtained with one of the satellites omitted.  A wide dispersion among those solutions indicates an error somewhere, without identifying which of the five has a problem.  Alternatively, a single solution formed with one satellite omitted can be used to calculate what the unused measurement should theoretically be; comparison against the observed value then provides a basis for detection.

A choice then emerged between two candidate detection methods, i.e., chi-squared and parity.  The latter choice produced five equations in four unknowns, expressed with a 5×4 matrix rewritten as the product of two matrices (one orthogonal and the other upper triangular).  That subdivision separated the navigation solution from a scalar containing the information needed to assess the degree of inconsistency. Systematic testing of that scalar according to probability theory was quickly developed and extended to add a sixth satellite, enabling selection of which satellite to leave out of the navigation solution.
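A small numerical sketch of that subdivision, with an invented five-satellite geometry and bias (noiseless for clarity): the last column of the orthogonal factor spans the parity direction, and projecting the measurements onto it yields the inconsistency scalar while leaving the navigation solution to the triangular factor.

```python
import numpy as np

# Five unit lines of sight (one overhead, four at ~37-deg elevation) plus clock column
los = np.array([
    [ 0.0,  0.0, 1.0],
    [ 0.8,  0.0, 0.6],
    [-0.8,  0.0, 0.6],
    [ 0.0,  0.8, 0.6],
    [ 0.0, -0.8, 0.6],
])
G = np.hstack([los, np.ones((5, 1))])        # 5x4 geometry matrix

x_true = np.array([10.0, -4.0, 7.0, 2.0])    # position + clock errors (meters)
z = G @ x_true
z[1] += 30.0                                 # bias one satellite's measurement

# QR factorization: orthogonal times upper triangular; the leftover fifth
# direction carries all the inconsistency, separate from the 4-unknown solution
Q, _ = np.linalg.qr(G, mode="complete")
parity = Q[:, 4] @ z                         # projection onto the parity direction

print(round(abs(parity), 6))   # 15.0: the 30 m bias drives the parity scalar
```

With consistent (unbiased) measurements the same projection is zero to machine precision, which is exactly what the probability-based threshold test exploits.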

The modified strategy I devised can now be described.  First, instead of a 5×4 matrix for detection, let each solution contain five unknowns – the usual four plus one measurement bias.  Five solutions are obtained, now with each satellite taking its turn as the suspect.  My 1992 manuscript (publication #53 – click here) shows why the resulting navigation solutions are identical to those produced by the original RAIM approach (i.e., with each satellite taking its turn “sitting out this dance”).

The real power of this strategy comes in expansion to six satellites.  A set of 6×5 matrices results, and the subdivision into orthogonal and upper triangular factors now produces six parity scalars (rather than 2×1 parity vectors — everyone understands a zero-mean random scalar).  Not only are estimates obtained for measurement biases but the interpretation is crystal clear: with one biased measurement, every parity scalar has nonzero mean except the one with the faulty satellite in the suspect role.  Again all navigation solutions match those produced by the original RAIM method, and fully developed probability calculations are immediately applicable.  Content of the 1992 manuscript was then included, with further adaptation to the common practice of using measurement differences for error cancellation, in Chapter 6 of GNSS Aided Navigation and Tracking.  Additional extensions include rigorous adaptation to each individual observation independent of all others (“single-measurement RAIM”) and to multiple flawed satellites.
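The suspect-in-turn strategy can be sketched as follows (the six-satellite geometry, fault index, and bias are all invented; noiseless for clarity): a bias column is appended for each suspect in turn, each 6×5 matrix is factored, and only the guilty suspect's parity scalar is driven to zero.

```python
import numpy as np

# Six-satellite geometry: unit lines of sight plus a clock column
los = np.array([
    [ 0.0,  0.0, 1.0],
    [ 0.8,  0.0, 0.6],
    [-0.8,  0.0, 0.6],
    [ 0.0,  0.8, 0.6],
    [ 0.0, -0.8, 0.6],
    [ 0.6,  0.6, 0.52915],   # invented sixth direction (approximately unit norm)
])
G = np.hstack([los, np.ones((6, 1))])

x_true = np.array([10.0, -4.0, 7.0, 2.0])
faulty = 3
z = G @ x_true
z[faulty] += 30.0            # one biased measurement

# Let each satellite take its turn as the suspect: append its bias column,
# factor the resulting 6x5 matrix, and keep the single leftover parity scalar
parities = []
for j in range(6):
    A = np.hstack([G, np.eye(6)[:, [j]]])            # 5 unknowns: 4 usual + bias
    Q, _ = np.linalg.qr(A, mode="complete")
    parities.append(Q[:, 5] @ z)                     # component outside col(A)

# Only the suspect that absorbs the actual fault explains the data completely
print(int(np.argmin(np.abs(parities))))   # 3: the faulty satellite identified
```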