In 2013 a phone presentation was arranged for me to talk for an hour with a couple dozen engineers at Raytheon. The original plan was to scrutinize the many facets and ramifications of timing in avionics. About halfway through, the scope expanded to include topics of interest to any participant. I was gratified when others raised issues that have been of major concern to me for years (in some cases, even decades). Receiving a reminder from another professional that I’m not alone in these concerns prompts me to reiterate at least some aspects of the ongoing struggle — but this time citing a recent report of flight test verification.

The breadth of the struggle is breathtaking. The About panel of this site offers short summaries, all confirmed by authoritative sources cited therein, describing the impact on each of four areas (satnav + air safety + DoD + workforce preparation). Shortcomings in all four areas are made more severe by continuation of outdated methods, as unnecessary as they are fundamental. Not everyone wants to hear this, but it’s self-evident: conformance to custom — using decades-old design concepts (e.g., TCAS), procedures (e.g., position reports), and conventions (e.g., interface standards) — guarantees outmoded legacy systems. Again, while my writings on this site and elsewhere — advocating a different direction — go back decades, I’m clearly not alone (recall the authoritative sources just noted). Changing more minds, a few at a time, can eventually lead to correction of shortcomings in operation.

We’re not pondering minor improvements, but dramatic ones. To realize them, don’t communicate with massaged data; put raw data on the interface. Communicate in terms of measurements, not coordinates — that’s how DGPS became stunningly successful. Even while using all the best available protection against interference (including anti-spoof capability), follow through and maximize your design for robustness; expect occurrences of poor GDOP and/or less than a full set of SVs instantaneously visible. Often that occurrence doesn’t really constitute loss of satnav; when it’s accompanied by a history of 1-sec changes in carrier phase, those high-accuracy measurements prevent buildup of position error. With 1-sec carrier phase changes coming in, the dynamics don’t veer toward any one consistent direction; only location drifts during position data deficiencies (poor GDOP and/or incomplete fixes) and, even then, only within limits allowed by that continued accurate dynamic updating. Integrity checks also continue throughout.
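For a concrete picture of that dynamic updating, here is a minimal sketch (illustrative only: the wavelength constant is the approximate GPS L1 value, while the gain-based update, the names, and the numbers are assumptions, not the flight implementation) of 1-sec carrier-phase changes serving as precise delta-range observations that keep the velocity estimate honest while the position fix is degraded:

```python
# Sketch: 1-second carrier-phase changes used as precise delta-range measurements.
import numpy as np

L1_WAVELENGTH = 0.1903   # meters per cycle, approximate GPS L1 value

def delta_range(phase_now_cycles, phase_prev_cycles):
    """1-sec change in carrier phase, converted to meters of line-of-sight change."""
    return (phase_now_cycles - phase_prev_cycles) * L1_WAVELENGTH

def update_velocity(vel_est, unit_los, measured_dr, dt=1.0, gain=0.5):
    """Crude illustration: nudge the velocity estimate along one satellite's line
    of sight so the predicted delta-range matches the observed phase change."""
    vel_est = np.asarray(vel_est, dtype=float)
    unit_los = np.asarray(unit_los, dtype=float)
    predicted_dr = float(vel_est @ unit_los) * dt
    residual = measured_dr - predicted_dr
    return vel_est + gain * residual * unit_los / dt

# e.g. update_velocity([10.0, 0.0, 0.0], [0.6, 0.0, 0.8], measured_dr=6.3)
```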

So then, take into account the crucial importance of precise dynamic information when a full position fix isn’t instantaneously available. Take what’s there and stop discarding it. Redefine requirements to enable what ancient mariners did suboptimally for many centuries — and what we’ve done optimally for over a half-century. Covariances combined with monitored residuals can indicate quality in real time. Aircraft separation means maintaining a stipulated relative distance between aircraft, irrespective of their absolute positions and the errors in those positions. None of this is either mysterious or proprietary, and none of it imposes demands for huge budgets or scientific breakthroughs — not even corrections from ground stations.

A compelling case arises from the cumulative weight of all these considerations. Parts of the industry have begun to address it. Ohio University has done flight testing (mentioned in the opening paragraph here) that validates the concepts just summarized. Other investigations are likely to result from recent testing of ADS-B. No claim is intended that all questions have been answered, but — clearly — enough questions have been raised to warrant a dialogue with those making decisions affecting the long term.

CONING in STRAPDOWN SYSTEMS

Free-inertial navigation uses accelerometers and gyros alone, unaided. For that purpose pioneers of yesteryear developed a variety of techniques, ranging from Jordan’s 2-sample approach (NASA TN D-5384, 1969) to his and various others’ higher-order algorithms, all intended to reduce errors from noncommutativity of finite rotations in the presence of coning (and/or pseudoconing). The methods showed considerable insight and produced successful operation. Since it’s always good to have “another tool in the toolbox,” I’ll mention here an alternative. What I describe isn’t being used but, with today’s processing capabilities, could finally become practical. The explanation requires some background information; I’ll try to be brief.


A very old investigation (“Performance of Strapdown Inertial Attitude Reference Systems,” AIAA Journal of Spacecraft and Rockets, Sept 1966, pp 1340-1347) used the usual small-angle representation for attitude error expressed in the vehicle frame. With that frame rotating at a rate omega, the derivative of that error vector contains the cross product of the vector itself with omega. One contributor to that product is a lag effect: omega premultiplied by a diagonal matrix of delays (e.g., transport lags equated to reciprocals of gyro bandwidths). Mismatch among those diagonal elements produces drift components with nonzero average; e.g., the x-component of the cross product is easily seen to be
        (lag_y − lag_z) × omega_y × omega_z
Even with zero-average (e.g., oscillatory) angular rates, that product has nonzero average due to rectification.  I then characterized the lags as delays from computation rather than from the gyros, with the lag differences now proportional to nonuniformities among RMS angular rate components along vehicle axes, and average products proportional to cross-correlation coefficients of the angular rate components. That was easy; I had a simple model enabling me to calculate the error due to finite gyro sampling rates producing finite rotation increments that don’t commute.
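A few lines of arithmetic make that rectification effect easy to see. In the sketch below (a minimal illustration; the rate amplitudes, frequencies, phase offset, and lag values are all made-up numbers, not data from the paper), the y and z rate components are each zero-mean yet correlated, so their product has a nonzero average and any lag mismatch turns it into steady drift:

```python
# Illustrative check of rectification: zero-mean but correlated oscillatory
# rates give omega_y * omega_z a nonzero average, so mismatched lags produce
# an x-axis drift with nonzero mean.  All values below are assumptions.
import numpy as np

t = np.arange(0.0, 100.0, 0.001)                       # 100 s sampled at 1 kHz
omega_y = 0.2 * np.sin(2 * np.pi * 5 * t)              # rad/s, zero mean
omega_z = 0.2 * np.sin(2 * np.pi * 5 * t + np.pi / 4)  # correlated with omega_y

lag_y, lag_z = 1.0e-3, 1.2e-3                          # mismatched delays, seconds

drift_x = (lag_y - lag_z) * omega_y * omega_z          # cross-product x-component
print("mean of omega_y*omega_z :", np.mean(omega_y * omega_z))  # nonzero
print("mean x-drift (rad/s)    :", np.mean(drift_x))            # nonzero average
```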


A theoretical model is only that until it is validated, and I had to come up with a validation method under mid-1960s computational limitations. The solution came from a basic realization: performance doesn’t degrade from what’s happening but from belief in occurrences that aren’t happening. The first-ever report of coning (Goodman and Robinson, ASME Trans, June 1958) came from a gimballed platform that was believed to be stable while it was actually coning. If the true coning motion they described had been known and taken into account, their high drift rates never would have occurred. The reason it wasn’t taken into account then was narrow gimbal servo bandwidth; the gyros responded to the coning frequency but the platform servos didn’t. Now consider strapdown with the inverse problem, pseudoconing: a vehicle believed to experience coning when it isn’t. That case falls victim to the same departure of perception from reality. If you gave the same Goodman and Robinson coning motion to a strapdown gyro triad and sampled it every nanosecond, the effect from noncommutativity wouldn’t be noticeable.


Armed with that insight I then chose rotational dynamics with a closed-form solution. Although rotations about fixed vehicle axes produced no coning, the pseudoconing was severe, with the apparent (reported-from-gyros) rotation axis changing radically within fractions of a millisecond, too fast for the 10 kHz data rate used in that computation. The cross-product formulation was then validated by making extensive sets of runs, always comparing two time histories:

* a closed form solution for a true direction cosine matrix corresponding to a vehicle experiencing a sinusoidal omega
* an apparent direction cosine matrix, obtained by brute-force but meticulous formation from processing gyro outputs at finite rates with quantization, time lags, and a wide variety of error sources.

That “bull-by-the-horns” computation allowed extended runs (up to a million attitude iterations) to be made for a wide range of angular rate frequencies, axis directions, and combinations of gyro input errors (steady, random, motion-sensitive, etc.). Deviation of apparent attitude from closed-form truth was consistently in close conformance to the analytical model, for a host of error sources. I have to admit that this brute-force approach gave me the advantage of finding answers before I understood the reasons for them. The cross-product analytical model didn’t come from my vision; it came after much head-scratching over answers computed from dozens of runs. A breakthrough came from the sensitivity, completely unanticipated, to angular acceleration about gyro output axes — clear in retrospect but not initially. After these experiences it occurred to me: if cross-axis covariances were known, the dominant contributor to errors — including noncommutativity — could be counteracted. I noted that on page 1342 of that old AIAA paper.
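The flavor of that two-history comparison can be reproduced in a few lines. The sketch below is only illustrative (the fixed axis, sinusoidal rate, run length, and 10 kHz rate are assumed values): it builds the closed-form direction cosine matrix for a sinusoidal rate about a fixed vehicle axis, and the apparent matrix chained from sampled gyro increments. With ideal increments about a fixed axis the two agree almost exactly; deviations appear once lags, quantization, or other gyro errors are injected into the increments.

```python
# Comparison sketch: closed-form attitude truth vs. attitude built by chaining
# finite gyro increments.  Values are illustrative assumptions only.
import numpy as np

def dcm_from_rotvec(v):
    """Direction cosine matrix for rotation vector v (Rodrigues formula)."""
    ang = np.linalg.norm(v)
    if ang < 1e-12:
        return np.eye(3)
    k = v / ang
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(ang) * K + (1.0 - np.cos(ang)) * (K @ K)

axis = np.array([1.0, 0.0, 0.0])            # fixed vehicle axis
A, wf = 0.5, 2 * np.pi * 10.0               # 0.5 rad/s amplitude, 10 Hz sinusoid
dt, n_steps = 1.0e-4, 100_000               # 10 kHz processing, 10 s run
theta = lambda t: (A / wf) * (1.0 - np.cos(wf * t))   # integral of A*sin(wf*t)

C_true = dcm_from_rotvec(axis * theta(n_steps * dt))  # closed-form truth

C_app = np.eye(3)                           # apparent DCM from chained increments
for k in range(n_steps):
    dtheta = axis * (theta((k + 1) * dt) - theta(k * dt))   # ideal gyro increment
    # gyro error models (lags, quantization, etc.) would be applied to dtheta here
    C_app = C_app @ dcm_from_rotvec(dtheta)

err = np.linalg.norm(C_app @ C_true.T - np.eye(3)) / np.sqrt(2.0)
print("approximate attitude error, rad:", err)
```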


Finally I can describe the alternative means of compensating the dominant computational error, beginning with the reason why it would be useful. Earlier I mentioned that many authors developed very good algorithms to reduce errors from noncommutativity of finite rotations in the presence of coning and/or pseudoconing. All that history, plus more detailed presentation of everything discussed here, can be found in Chapters 3 and 4 of my 1976 book plus Addendum 7.A of my 2007 book. A supreme irony upstages much of the work from those brilliant authors: without accounting for gyro frequency response characteristics, the intended benefit can be lost — or the “compensation” can even become counterproductive (Mark and Tazartes, AIAA Journal of Guidance, Control, & Dynamics, Jul-Aug 2006, pp 641-647). As if those burdens weren’t enough, the adjustment’s complexity — as shown in that paper — can be extensive. So: that motivates usage of a simpler procedure.


By now I’ve put so much explanation into preparing its description that not much more is needed to define the method. Today’s signal-processing boards enable the requisite covariances to be repetitively computed. Then just form the vector cross product already described and subtract the result from the gyro increments ahead of attitude updating. So much for coning and pseudoconing — but I’m not quite finished yet. The paper just cited leads to another consideration: even if we successfully removed all of the error theoretically arising from inexact computation, significant improvement in free-inertial performance would require more. Operation in the presence of vibrations would necessitate reduction of other motion-sensitive errors. Gyro degradations from rotations, for example, would have to be compensated — and that includes a multitude of components. For that topic you can begin with the discussion of gyro mounting misalignment, following that up with the tables in Chapter 4 of my 1976 book and Addendum 4.B of my 2007 book.
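A rough sketch of that procedure, as I read it (the lag values, window handling, and function names below are assumptions for illustration, not material from either book): maintain running cross-axis covariances of the measured rates, convert them into the average cross-product term, and subtract its contribution from each gyro increment before the attitude update.

```python
# Hedged sketch of covariance-based compensation of the lag cross-product term.
# Per-axis delays and all numerical values are illustrative assumptions.
import numpy as np

lags = np.array([1.0e-3, 1.2e-3, 0.9e-3])   # assumed effective per-axis delays, s

def mean_drift_from_covariance(rate_cov):
    """Average of (Lambda*omega) x omega given the 3x3 covariance of omega:
    x-component = (lag_y - lag_z) * E[omega_y * omega_z], and cyclically."""
    return np.array([
        (lags[1] - lags[2]) * rate_cov[1, 2],
        (lags[2] - lags[0]) * rate_cov[2, 0],
        (lags[0] - lags[1]) * rate_cov[0, 1],
    ])

def compensate_increment(delta_theta, rate_cov, dt):
    """Subtract the rectified drift contribution from a raw gyro increment."""
    return np.asarray(delta_theta) - mean_drift_from_covariance(rate_cov) * dt

# rate_cov would be refreshed repetitively from a sliding window of rate samples,
# e.g. rate_cov = np.cov(np.asarray(recent_rate_samples).T)
```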

SINGLE-MEASUREMENT RAIM

In January of 2005 I presented a paper, “Full Integrity Test for GPS/INS,” at the ION NTM; it later appeared in the Spring 2006 ION Journal.  I’ve adapted the method to operation (1) with and (2) without an IMU, obtaining RMS velocity accuracy of a centimeter/sec and a decimeter/sec, respectively, over about an hour in flight (until the flight recorder was full).

Methods I use for processing GPS data include many sharp departures from custom.  Motivation for those departures arose primarily from the need for robustness.  In addition to the common degradations we’ve come to expect (due to various propagation effects, planned and unplanned outages, masking or other forms of obscuration and attenuation), some looming vulnerabilities have become more threatening.  Satellite aging and jamming, for example, have recently attracted increased attention.  One of the means I use to achieve enhanced robustness is acceptance-testing of every GNSS observable, regardless of what other measurements may or may not be available.

Classical (Parkinson-Axelrad) RAIM testing (see, for example, my ERAIM blog’s background discussion) imposes requirements for supporting geometry; measurements from each satellite are validated only if enough additional satellites, with enough geometric spread, enable a sufficiently conclusive test.  For many years that requirement was supported by a wealth of satellites in view, and availability was judged largely by GDOP with its various ramifications (protection limits).  Even with future prospects for a multitude of GNSS satellites, however, it is now widely acknowledged that acceptable geometries cannot be guaranteed.  Recent illustrations of that realization include
* use of subfilters to exploit incomplete data (Young & McGraw, ION Journal, 2003)
* Prof. Brad Parkinson’s observation at the ION-GNSS10 plenary — GNSS should have interoperability to the extent of interchangeability, enabling a fix composed of one satellite from each of four different constellations.

Among my previously noted departures from custom, two steps I’ve introduced  are particularly aimed toward usage of all available measurement data.  One step, dead reckoning via sequential differences in carrier phase, is addressed in another blog on this site.  Described here is a summary of validation for each individual data point — whether a sequential change in carrier phase or a pseudorange — irrespective of presence or absence of any other measurement.

While matrix decompositions were used in its derivation, only simple (in fact, intuitive) computations are needed in operation.  To emphasize that here, I’ll put “the cart before the horse” — readers can see the answer now and optionally omit the subsequent description of how I formed it.  Here’s all you need to do: from basic Kalman filter expressions it is recalled that each scalar residual has a sensitivity vector H and a scalar variance of the form

        HPH’ + (measurement error variance)

The ratio of each independent scalar residual to the square root of that variance is used as a normalized dimensionless test statistic.  Every measurement can now be used, each with its individual variance.  This almost looks too good to be true and too simple to be useful, but conformance to rigor is established on pages 121-126 and 133 of GNSS Aided Navigation and Tracking.  What follows is an optional explanation, not needed for operational usage.
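In code the whole acceptance test amounts to a few lines. This is a minimal sketch under my reading of the statement above; the function name and the threshold (a typical choice for a roughly unit-variance statistic) are assumptions, not values from the book.

```python
# Normalized, dimensionless acceptance test for one scalar measurement.
import numpy as np

def single_measurement_test(residual, H, P, meas_var, threshold=3.0):
    """residual : measured-minus-predicted scalar residual
    H        : length-N sensitivity vector for that measurement
    P        : NxN a priori state covariance
    meas_var : variance of the measurement error
    Returns (accepted, statistic)."""
    H = np.asarray(H, dtype=float)
    variance = H @ P @ H + meas_var           # HPH' + measurement error variance
    statistic = residual / np.sqrt(variance)  # dimensionless, roughly N(0,1) if healthy
    return abs(statistic) <= threshold, statistic
```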

The key to my single-measurement RAIM approach begins with a fundamental departure from the classical matrix factorization (QR=H) originally proposed for parity.  I’ll note here that, unless all data vector components are independent with equal variance, that original (QR=H) factorization will produce state estimates that won’t agree with Kalman.  Immediately we have all the motivation we need for a better approach.  I use the condition

        QR = UH

where U is the inverse square root of the measurement covariance matrix.  At this point we exploit the definition of a priori state estimates as perceived characterizations of actual state immediately before a measurement — thus the perceived error state is by definition a null vector.  That provides a set of N equations in N unknowns to combine with each individual scalar measurement, where N is 4 (for the usual three unknowns in space and one in time) or 3 (when across-satellite differences produce three unknowns in space only).

In either case we have N+1 equations in N unknowns which, after factoring as noted above, enables determination of both a state solution in agreement with Kalman and the parity scalar in full correspondence to the normalized dimensionless test statistic already noted.  All further details pertinent to this development, plus extension to the ERAIM formulation, plus further extension to the correlated observations arising from differential operation, are given in the book cited earlier.  It is rigorously shown therein that this single-measurement RAIM is the final stage of the subfilter approach (Young & McGraw reference, previously cited above), carried to the limit.  A clinching argument: nothing prevents users from having both the classical approach to RAIM and this generalized method.  Nothing has been sacrificed.
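A compact sketch of that construction, under my reading of it (the NumPy mechanics and names are mine, not taken from the book): the N null prior-error equations are stacked with the single scalar measurement, the stack is pre-whitened by the inverse square root of its covariance, and a QR factorization yields both the Kalman-consistent state correction and the parity scalar.

```python
# Single-measurement RAIM sketch: N+1 equations in N unknowns.
import numpy as np

def single_measurement_raim(P, h, residual, meas_var):
    """P        : NxN a priori covariance of the (null) perceived error state
    h        : length-N sensitivity vector of the scalar measurement
    residual : measured-minus-predicted scalar
    meas_var : measurement error variance
    Returns (state_correction, parity_scalar)."""
    N = P.shape[0]
    H_stack = np.vstack([np.eye(N), np.atleast_2d(h)])     # (N+1) x N
    z_stack = np.concatenate([np.zeros(N), [residual]])    # N zeros, then the residual

    # U = inverse square root of the stacked covariance blockdiag(P, meas_var)
    U = np.zeros((N + 1, N + 1))
    U[:N, :N] = np.linalg.inv(np.linalg.cholesky(P))
    U[N, N] = 1.0 / np.sqrt(meas_var)

    Q, R = np.linalg.qr(U @ H_stack, mode="complete")      # QR = U H_stack
    y = Q.T @ (U @ z_stack)
    state_correction = np.linalg.solve(R[:N, :], y[:N])    # matches the Kalman update
    parity_scalar = y[N]        # roughly N(0,1) when the measurement is fault-free
    return state_correction, parity_scalar
```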

Almost two decades ago an idea struck me: the way GPS measurements are validated could be changed in a straightforward way, to provide more information without affecting the navigation solutions at all.

First, some background: the Receiver Autonomous Integrity Monitoring (RAIM) concept began with a detection scheme whereby an extra satellite (five instead of four, for the three spatial unknowns plus time) enables consistency checks.  Five different navigation solutions are available, each obtained with one of the satellites omitted.  A wide dispersion among those solutions indicates an error somewhere, without identifying which of the five has a problem.  Alternatively, one solution formed with a satellite omitted can be used to calculate what the unused measurement should theoretically be; comparison with the observed value then provides a basis for detection.

A choice then emerged between two candidate detection methods, i.e., chi-squared and parity.  The latter choice produced five equations in four unknowns, expressed with a 5×4 matrix rewritten as the product of two matrices (one orthogonal and the other upper triangular).  That subdivision separated the navigation solution from a scalar containing the information needed to assess the degree of inconsistency. Systematic testing of that scalar according to probability theory was quickly developed and extended to add a sixth satellite, enabling selection of which satellite to leave out of the navigation solution.
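For concreteness, here is a toy version of that factorization (the 5×4 geometry matrix and the measurement vector below are made-up numbers, purely for illustration): the QR subdivision yields the least-squares navigation solution from the upper-triangular part and the inconsistency scalar from the leftover orthogonal direction.

```python
# Classical parity sketch: five measurements, four unknowns (x, y, z, clock).
import numpy as np

H = np.array([                    # made-up 5x4 geometry (unit LOS components, clock)
    [ 0.30,  0.50, -0.81, 1.0],
    [-0.60,  0.20, -0.77, 1.0],
    [ 0.10, -0.70, -0.71, 1.0],
    [ 0.80,  0.10, -0.59, 1.0],
    [-0.20, -0.40, -0.89, 1.0],
])
z = np.array([1.2, -0.4, 0.3, 0.9, -1.1])        # made-up measurement residuals

Q, R = np.linalg.qr(H, mode="complete")          # H = Q R; Q orthogonal, R upper triangular
y = Q.T @ z
nav_solution = np.linalg.solve(R[:4, :], y[:4])  # least-squares navigation solution
parity_scalar = y[4]                             # inconsistency measure to be tested
print(nav_solution, parity_scalar)
```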

The modified strategy I devised can now be described.  First, instead of a 5×4 matrix for detection, let each solution contain five unknowns – the usual four plus one measurement bias.  Five solutions are obtained, now with each satellite taking its turn as the suspect.  My 1992 manuscript (publication #53) shows why the resulting navigation solutions are identical to those produced by the original RAIM approach (i.e., with each satellite taking its turn “sitting out this dance”).

The real power of this strategy comes in expansion to six satellites.  A set of 6×5 matrices results, and the subdivision into orthogonal and upper triangular factors now produces six parity scalars (rather than 2×1 parity vectors – everyone understands a zero-mean random scalar).  Not only are estimates obtained for measurement biases, but the interpretation is crystal clear: with one biased measurement, every parity scalar has nonzero mean except the one with the faulty satellite in the suspect role.  Again all navigation solutions match those produced by the original RAIM method, and fully developed probability calculations are immediately applicable.  Content of the 1992 manuscript was then included, with further adaptation to the common practice of using measurement differences for error cancellation, in Chapter 6 of GNSS Aided Navigation and Tracking.  Additional extensions include rigorous adaptation to each individual observation independent of all others (“single-measurement RAIM”) and to multiple flawed satellites.
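A brief sketch of that six-suspect bookkeeping, as I read it (the function and variable names are illustrative assumptions): each satellite's turn as the suspect appends a bias column for its measurement to the 6×4 geometry matrix, and the component of the measurement vector orthogonal to the augmented columns is that suspect's parity scalar.

```python
# Bias-augmented parity sketch: six satellites, each taking a turn as suspect.
import numpy as np

def suspect_parity_scalars(H6, z6):
    """H6: 6x4 geometry matrix; z6: six measurements.
    Returns the six parity scalars, one per suspect satellite."""
    scalars = []
    for i in range(6):
        bias_col = np.zeros((6, 1))
        bias_col[i, 0] = 1.0                     # bias state for suspect i's measurement
        H_aug = np.hstack([H6, bias_col])        # 6x5 augmented matrix
        Q, _ = np.linalg.qr(H_aug, mode="complete")
        scalars.append(Q[:, 5] @ z6)             # component orthogonal to col(H_aug)
    return scalars
```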