In January of 2005 I presented a paper “Full Integrity Test for GPS/INS” at ION NTM that later appeared in the Spring 2006 ION Journal. I’ve adapted the method to operation (1) with and (2) without IMU, obtaining RMS velocity accuracy of a centimeter/sec and a decimeter/sec, respectively, over about an hour in flight (until the flight recorder was full).

Methods I use for processing GPS data include many sharp departures from custom. Motivation for those departures arose primarily from the need for robustness. In addition to the common degradations we’ve come to expect (due to various propagation effects, planned and unplanned outages, masking or other forms of obscuration and attenuation), some looming vulnerabilities have become more threatening. Satellite aging and jamming, for example, have recently attracted increased attention. One of the means I use to achieve enhanced robustness is acceptance-testing of every GNSS observable, regardless of what other measurements may or may not be available.

Classical (Parkinson-Axelrad) RAIM testing (see, for example, my ERAIM blog's background discussion) imposes requirements for supporting geometry; measurements from each satellite could be validated only if enough other satellites, with sufficient geometric spread, enabled a conclusive test. For many years that requirement was supported by a wealth of satellites in view, and availability was judged largely by GDOP with its various ramifications (protection limits). Even with future prospects for a multitude of GNSS satellites, however, it is now widely acknowledged that acceptable geometries cannot be guaranteed. Recent illustrations of that realization include

* use of subfilters to exploit incomplete data (Young & McGraw, ION Journal, 2003)

* Prof. Brad Parkinson’s observation at the ION-GNSS10 plenary — GNSS should have interoperability to the extent of interchangeability, enabling a fix composed of one satellite from each of four different constellations.

Among my previously noted departures from custom, two steps I’ve introduced are particularly aimed toward usage of all available measurement data. One step, dead reckoning via sequential differences in carrier phase, is addressed in another blog on this site. Described here is a summary of validation for each individual data point — whether a sequential change in carrier phase or a pseudorange — irrespective of presence or absence of any other measurement.

While matrix decompositions were used in its derivation, only simple (in fact, intuitive) computations are needed *in operation*. To emphasize that here, I’ll put “the cart before the horse” — readers can see the answer now and optionally omit the subsequent description of how I formed it. Here’s all you need to do: from basic Kalman filter expressions it is recalled that each scalar residual has a sensitivity *vector* **H** and a scalar variance of the form

**HPH’** + (*measurement* error variance)

The ratio of each independent scalar residual to the square root of that variance is used as a normalized dimensionless test statistic. Every measurement can now be used, each with its individual variance. This almost looks too good to be true and too simple to be useful, but conformance to rigor is established on pages 121-126 and 133 of GNSS Aided Navigation and Tracking. What follows is an optional explanation, not needed for operational usage.
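As a minimal sketch of that operational computation (in Python with NumPy; the function name and the 3-sigma rejection threshold are illustrative assumptions, not from the original), the per-measurement acceptance test reduces to a few lines:

```python
import numpy as np

def accept_measurement(residual, H, P, R, threshold=3.0):
    """Normalized-residual acceptance test for one scalar GNSS observable.

    residual  : scalar measurement residual (predicted minus observed)
    H         : (n,) sensitivity vector of the scalar measurement
    P         : (n, n) a priori state covariance
    R         : scalar measurement error variance
    threshold : rejection bound in standard deviations (illustrative choice)
    """
    variance = H @ P @ H + R          # HPH' + measurement error variance
    t = residual / np.sqrt(variance)  # normalized, dimensionless test statistic
    return abs(t) <= threshold, t
```

Because the statistic is formed per observable, the test applies to a lone pseudorange or a lone sequential carrier-phase change, with no supporting geometry required.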

The key to my single-measurement RAIM approach begins with a fundamental departure from the classical matrix factorization (**QR**=**H**) originally proposed for parity. I’ll note here that, unless all data vector components are independent with equal variance, that original (**QR**=**H**) factorization will produce state estimates that *won’t* agree with Kalman. Immediately we have all the motivation we need for a better approach. I use the condition

**QR** = **UH**

where **U** is the inverse square root of the *measurement* covariance matrix. At this point we exploit the definition of *a priori* state estimates as perceived characterizations of actual state immediately before a measurement — thus the *perceived error state* is by definition a null vector. That provides a set of *N* equations in *N* unknowns to combine with each individual scalar measurement, where *N* is *4* (for the usual three unknowns in space and one in time) or *3* (when across-satellite differences produce three unknowns in space only).

The combination yields *N+1* equations in *N* unknowns which, after factoring as noted above, enables determination of both the state solution *in agreement with Kalman* and the parity scalar, in full correspondence to formation of the normalized dimensionless test statistic already noted. All further details pertinent to this development, plus extension to the ERAIM formulation, plus further extension to the correlated observations arising from differential operation, are given in the book cited earlier. It is rigorously shown therein that this single-measurement RAIM is the final stage of the subfilter approach (Young & McGraw, cited above), carried to the limit. A clinching argument: nothing prevents users from having *both* the classical approach to RAIM *and* this generalized method. Nothing has been sacrificed.
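A compact sketch of that *N+1*-equation formulation may help (Python with NumPy; the function name, the stacking layout, and the Cholesky-based whitening are my illustrative assumptions — the book cited above gives the rigorous development). The *N* a priori equations (perceived error state equal to the null vector, covariance **P**) are stacked with one scalar measurement, whitened by the inverse square root of the combined covariance, and QR-factored:

```python
import numpy as np

def single_measurement_parity(z, h, P, R):
    """Combine N a priori equations with one scalar measurement.

    z : scalar measurement (e.g., pseudorange or sequential phase residual)
    h : (N,) sensitivity vector of the scalar measurement
    P : (N, N) a priori state covariance
    R : scalar measurement error variance
    Returns the Kalman-consistent state estimate and the parity scalar.
    """
    n = h.size
    A = np.vstack([np.eye(n), h])            # (N+1) x N stacked design matrix
    b = np.concatenate([np.zeros(n), [z]])   # null a priori error state, then z
    # U = inverse square root of the block-diagonal covariance diag(P, R)
    U = np.zeros((n + 1, n + 1))
    U[:n, :n] = np.linalg.inv(np.linalg.cholesky(P))  # whitens a priori rows
    U[n, n] = 1.0 / np.sqrt(R)                        # whitens the measurement
    Q, Rfac = np.linalg.qr(U @ A, mode="complete")
    y = Q.T @ (U @ b)
    x_hat = np.linalg.solve(Rfac[:n, :], y[:n])  # state update, agrees with Kalman
    parity = y[n]                                # leftover (N+1)th component
    return x_hat, parity
```

The first *N* components of the rotated right-hand side give the weighted least-squares state update, which matches the Kalman update for the same prior; the magnitude of the one remaining component is the parity scalar, identical (up to sign) to the normalized test statistic formed earlier.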
