Schuler cycles distorted — Here’s why

A 1999 publication I coauthored took dead aim at a characteristic that received far too little attention, and still continues to be widely overlooked: mechanical mounting misalignment of inertial instruments.  To make the point as clearly as possible I focused exclusively on gyro misalignment, e.g., the sensitive axes of the roll, pitch, and yaw gyros aren’t quite perpendicular to one another.  It was easily shown that the effect in free-inertial coast (i.e., with no updates from GPS or other navaids) was serious, even if no other errors existed.

It’s important here to discuss why the message took so long to penetrate.  The main reason is historical: inertial navigation originated in the form of a gimbaled platform holding the gyros and accelerometers in a stable orientation.  When the vehicle carrying that assembly rotated, the gimbal servos automatically received commands from the gyros, keeping the platform oriented along its reference directions (e.g., North/East/vertical for moderate latitudes).  Since angular rates experienced by the inertial instruments were low, gyro misalignment and scale factor errors were much more tolerable than they are with today’s strapdown systems.  I’ve been calling that the “Achilles’ heel” of strapdown for decades now; the roots go all the way back to 1966 (publication #6), when simulation clearly showed how serious it is.  Not long thereafter another necessary departure from convention became quite clear: replacement of the omnipresent nmi/hr performance criterion for numerous operations.  That criterion is an average over a period of roughly 84 minutes (the Schuler period), which makes it practically irrelevant for a large and growing number of applications that depend on short-term accuracy (e.g., synthetic aperture radar (SAR), inertial aiding of track loops, antenna stabilization).  Early assertions of that reality (publication #26, and mention of it in still earlier reports and publications involving SAR) were essentially lost in “that giant shouting match out there” until some realization crept in after publication #38.

Misalignment: mechanical mounting imprecision

Whenever this topic is discussed, certain points must be put to rest.  The first concerns terminology; much of the pertinent literature uses the word misalignment to describe small-angle directional uncertainty components (e.g., error in perception of downward and North, which drive errors in velocity).  To avoid misinterpretation I refer to nav-axis direction uncertainty as misorientation.  In the presence of rotations, mounting misalignment contributes to misorientation.  Those effects, taking place promptly upon rotation of the strapdown inertial instrument assembly, stand in marked contrast to the leisurely (nominal 84-minute) classical Schuler dynamics.

The second point, lab calibration, is instantly resolved by redefining each error as a residual amount remaining due to calibration imperfections plus post-cal aging and thermal effects — that amount is still (1) excessive in many cases, and (2) in any event, not covered by firm spec commitments.

A third point involves error propagation and a different kind of calibration (in-flight).  With the old (gimbal) mechanization, in-flight calibration could counteract much of the overall gyro drift effect.  Glib assessments in the 1990s promoted a widespread belief that the same would likewise be true for strapdown.  Changing that perspective motivated the investigation and publication mentioned at the top of this blog.

In that publication it was shown that, although the small-angle approximation is conservative for large changes in direction, it is not extremely so.  The last equation of its Appendix A shows a factor of (pi/2) for a 180-deg turn.  A more thorough discussion of that issue, and how it demands attentiveness to short-lived angular rates, appears on pages 98-99 of GNSS Aided Navigation and Tracking.  Appendix II on pages 239-258 of that same book also provides a program, with further supporting analysis, that supersedes the publication mentioned at the top of this blog.  That program can be downloaded from here.
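For readers who want to see that factor emerge numerically, here is a minimal sketch (not the Appendix II program) under simplifying assumptions of my own: a constant-rate 180-degree yaw turn, with a 20 arc-sec residual misalignment modeled as a small tilt of the sensed angular-rate vector.

```python
import numpy as np

def rotvec_to_dcm(phi):
    """Rodrigues formula: rotation vector (rad) -> direction cosine matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-15:
        return np.eye(3)
    k = phi / angle
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

arcsec = np.pi / (180 * 3600)
delta = 20 * arcsec            # assumed residual gyro-axis misalignment, 20 arc-sec
theta = np.pi                  # 180-degree heading change
omega_true = np.array([0.0, 0.0, 1.0])     # yaw rate (rad/s); 1 rad/s for simplicity
omega_meas = np.array([delta, 0.0, 1.0])   # misaligned axis picks up delta * (yaw rate)

steps = 3600
dt = theta / steps             # turn completes in theta seconds at 1 rad/s
C_true, C_comp = np.eye(3), np.eye(3)
for _ in range(steps):
    C_true = C_true @ rotvec_to_dcm(omega_true * dt)
    C_comp = C_comp @ rotvec_to_dcm(omega_meas * dt)

# misorientation: rotation angle between computed and true attitude
C_err = C_comp @ C_true.T
misorientation = np.arccos(np.clip((np.trace(C_err) - 1.0) / 2.0, -1.0, 1.0))

print("integrated misorientation     :", misorientation / arcsec, "arc-sec")
print("linear (small-angle) estimate :", delta * theta / arcsec, "arc-sec")
print("closed form 2*delta*sin(th/2) :", 2 * delta * np.sin(theta / 2) / arcsec, "arc-sec")
print("ratio, small-angle over exact :", (delta * theta) / (2 * delta * np.sin(theta / 2)))
```

The printed ratio comes out near pi/2, consistent with the statement above: the small-angle estimate overstates the misorientation produced by the turn, but not by a wide margin.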

The final point concerns the statistical distribution of errors.  Especially with safety involved (e.g., trusting free-inertial coast error propagation), it is clearly not enough to specify RMS errors.  For example, 2 arc-sec is better than 20, but what are the statistics?  Furthermore, there is nothing to preclude unexpected extension of the duration of free-inertial coast after a missed approach followed by a large change in direction.  A recent coauthored investigation (Farrell and vanGraas, ION-GNSS-2010 Proceedings) applies Extreme Value Theory (EVT) to outliers, showing unacceptably high incidences of large multiples (e.g., ten-sigma and beyond).  To substantiate that, there’s room here for an abbreviated explanation: even in linear systems, Gaussian inputs produce Gaussian outputs only under very restrictive conditions.

A more complete assessment of misalignment accounts for further imperfections in mounting: the sensitive axis of each accelerometer deviates from that of its corresponding gyro.  As explained on page 72 of Integrated Aircraft Navigation, an IMU with a gyro-accelerometer combo for each of three nominally orthogonal directions has nine total misalignment components for instruments relative to each other.

GPS Carrier Phase for Dynamics?

The practice of dead reckoning (a figurative phrase of uncertain origin) is five centuries old.   In its original form, incremental excursions were plotted on a mariner’s chart using dividers for distances, with directions obtained via compass (with corrections for magnetic variation and deviation). Those steps, based on perceived velocity over known time intervals, were accumulated until a correction became available (e.g., from a landmark or a star sighting).

Modern technology has produced more accurate means of dead reckoning, such as Doppler radar or inertial navigation systems.   Addressed here is an alternative means of dead reckoning, by exploiting sequential changes in highly accurate carrier phase. The method, successfully validated in flight with GPS, easily lends itself to operation with satellites from other GNSS constellations (GALILEO, GLONASS, etc.).  That interoperability is now one of the features attracting increased attention; sequential changes in carrier phase are far easier to mix than the phases themselves, and measurements formed that way are insensitive to ephemeris errors (even with satellite mislocation,  changes in satellite position are precise).

Even with usage of only one constellation (i.e., GPS for the flight test results reported here), changes in carrier phase over 1-second intervals provided important benefits. Advantages to be described now will be explained in terms of limitations in the way carrier phase information is used conventionally.   Phase measurements are normally expressed as a product of the L-band wavelength multiplied by a sum in the form (integer + fraction) wherein the fraction is precisely measured while the large integer must be determined. When that integer is known exactly the result is of course extremely accurate.  Even the most ingenious methods of integer extraction, however, occasionally produce a highly inaccurate result.   The outcome can be catastrophic and there can be an unacceptably long delay before correction is possible.   Elimination of that possibility provided strong motivation for the scheme described here.
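To make that integer-plus-fraction structure concrete, here is a toy calculation; the wavelength is rounded and the range is invented, so only the orders of magnitude matter.

```python
# Toy illustration: carrier phase = wavelength * (integer + fraction).
# The fraction is measured precisely; the large integer must be determined.
L1_WAVELENGTH = 0.1903            # GPS L1 wavelength, roughly 19.03 cm

true_range = 21_000_000.0417      # hypothetical geometric range, meters
cycles = true_range / L1_WAVELENGTH
integer_part = int(cycles)        # unknown; must come from an ambiguity-resolution scheme
fraction = cycles - integer_part  # measured directly

range_ok  = L1_WAVELENGTH * (integer_part + fraction)      # correct integer
range_bad = L1_WAVELENGTH * (integer_part + 1 + fraction)  # integer off by one cycle
print(range_ok - true_range)      # ~0: extremely accurate
print(range_bad - true_range)     # ~0.19 m from a single-cycle mistake
```

A single-cycle mistake already shifts the reconstructed range by roughly a wavelength, and a badly wrong integer is worse still; hence the motivation stated above.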

Linear phase facilitates streaming velocity with GNSS interoperability

With formation of 1-sec changes, all carrier phases can be forever ambiguous, i.e., the integers can remain unknown; they cancel in forming the sequential differences. Furthermore, discontinuities can be tolerated; a reappearing signal is instantly acceptable as soon as two successive carrier phases differ by an amount satisfying the single-measurement RAIM test.   The technique is especially effective with receivers using FFT-based processing, which provides unconditional access, with no phase distortion, to all correlation cells (rather than a limited subset offered by a track loop).
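Here is a minimal sketch of that cancellation, using the usual accumulated-phase model in which the ambiguity stays constant while lock is maintained (all numbers are hypothetical).

```python
# Minimal sketch: the unknown integer ambiguity cancels when successive
# accumulated carrier phases are differenced over one second.
L1_WAVELENGTH = 0.1902936727983649       # meters, c / 1575.42 MHz

def measured_phase(geometric_range_m, ambiguity_cycles):
    """Accumulated carrier phase in cycles: true range plus a constant,
    unknown integer (constant as long as the receiver maintains lock)."""
    return geometric_range_m / L1_WAVELENGTH + ambiguity_cycles

N = -3_141_592                                  # unknown to the user; any constant integer
phase_t0 = measured_phase(21_000_000.000, N)    # range at time t, meters
phase_t1 = measured_phase(21_000_000.750, N)    # one second later (750 m/s range rate)

delta_range = (phase_t1 - phase_t0) * L1_WAVELENGTH
print(delta_range)                              # 0.750 m: N never had to be resolved
```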

Another benefit is subtle but highly significant: acceptability of sub-mask carrier phase changes. Ionospheric and tropospheric timing offsets change very little over a second. Conventional systems are designed to reject measurements from low-elevation satellites. Especially in view of improved geometric spread, retention here prevents unnecessary loss of important information.   Demonstration of that occurred in flight when a satellite dropped toward the horizon; sub-mask pseudoranges of course had to be rejected, but all of the 1-sec carrier phase changes were perfectly acceptable until the satellite was no longer detectable.

One additional (deeper) topic, requiring much more rigorous analysis, arises from sequential correlations among 1-sec phase change observables. The issue is thoroughly addressed and put to rest in the later sections of the 5th chapter of GNSS Aided Navigation and Tracking.

Dead reckoning capability without an IMU was verified in flight, producing decimeter/sec RMS velocity errors outside of turn transients (Section 8.1.2, pages 154-162 of the book just cited). With a low-cost IMU, accuracy is illustrated in the table near the bottom of a 1-page description on this site (also appearing on page 104 of that book). All 1-sec phase increment residual magnitudes were zero or 1 cm for the seven satellites (six across-SV differences) observed at the time shown. Over almost an hour of flight at altitude (i.e., excluding takeoff, when heading uncertainty caused larger lever-arm vector errors), cm/sec RMS velocity accuracy was obtained.

SINGLE-MEASUREMENT RAIM

In January of 2005 I presented a paper “Full Integrity Test for GPS/INS” at ION NTM that later appeared in the Spring 2006 ION Journal.  I’ve adapted the method to operation (1) with and (2) without IMU, obtaining RMS velocity accuracy of a centimeter/sec and a decimeter/sec, respectively, over about an hour in flight (until the flight recorder was full).

Methods I use for processing GPS data include many sharp departures from custom.  Motivation for those departures arose primarily from the need for robustness.  In addition to the common degradations we’ve come to expect (due to various propagation effects, planned and unplanned outages, masking or other forms of obscuration and attenuation), some looming vulnerabilities have become more threatening.  Satellite aging and jamming, for example, have recently attracted increased attention.  One of the means I use to achieve enhanced robustness is acceptance-testing of every GNSS observable, regardless of what other measurements may or may not be available.

Classical (Parkinson-Axelrad) RAIM testing (see, for example, my ERAIM blog’s background discussion) imposes requirements for supporting geometry; measurements from each satellite are validated only if enough additional satellites with sufficient geometric spread enable a conclusive test.  For many years that requirement was supported by a wealth of satellites in view, and availability was judged largely by GDOP with its various ramifications (protection limits).  Even with future prospects for a multitude of GNSS satellites, however, it is now widely acknowledged that acceptable geometries cannot be guaranteed.  Recent illustrations of that realization include
* use of subfilters to exploit incomplete data (Young & McGraw, ION Journal, 2003)
* Prof. Brad Parkinson’s observation at the ION-GNSS10 plenary — GNSS should have interoperability to the extent of interchangeability, enabling a fix composed of one satellite from each of four different constellations.

Among my previously noted departures from custom, two steps I’ve introduced  are particularly aimed toward usage of all available measurement data.  One step, dead reckoning via sequential differences in carrier phase, is addressed in another blog on this site.  Described here is a summary of validation for each individual data point — whether a sequential change in carrier phase or a pseudorange — irrespective of presence or absence of any other measurement.

While matrix decompositions were used in its derivation, only simple (in fact, intuitive) computations are needed in operation.  To emphasize that here, I’ll put “the cart before the horse”: readers can see the answer now and optionally omit the subsequent description of how I formed it.  Here’s all you need to do: from basic Kalman filter expressions it is recalled that each scalar residual has a sensitivity vector H and a scalar variance of the form

HPH' + (measurement error variance)

The ratio of each independent scalar residual to the square root of that variance is used as a normalized dimensionless test statistic.  Every measurement can now be used, each with its individual variance.  This almost looks too good to be true and too simple to be useful, but conformance to rigor is established on pages 121-126 and 133 of GNSS Aided Navigation and Tracking.  What follows is an optional explanation, not needed for operational usage.
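Before that explanation, a minimal sketch of how the test itself might be coded; the 3-sigma threshold and every numeric value below are placeholders of my own, not figures from the book.

```python
import numpy as np

def single_measurement_raim(residual, H, P, meas_var, threshold=3.0):
    """Accept or reject ONE scalar measurement, with no other satellites needed.

    residual : scalar residual (measured minus predicted)
    H        : sensitivity vector for that measurement
    P        : a priori state covariance
    meas_var : measurement error variance
    """
    variance = H @ P @ H + meas_var           # HPH' + (measurement error variance)
    statistic = residual / np.sqrt(variance)  # normalized, dimensionless
    return abs(statistic) <= threshold, statistic

# Hypothetical numbers: a 2-cm residual on a 1-sec carrier phase change,
# line-of-sight components plus a clock term, decimeter-level state uncertainty.
H = np.array([0.4, -0.5, 0.77, 1.0])
P = np.diag([0.05**2, 0.05**2, 0.08**2, 0.03**2])
accept, stat = single_measurement_raim(residual=0.02, H=H, P=P, meas_var=0.01**2)
print(accept, stat)
```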

The key to my single-measurement RAIM approach begins with a fundamental departure from the classical matrix factorization (QR=H) originally proposed for parity.  I’ll note here that, unless all data vector components are independent with equal variance, that original (QR=H) factorization will produce state estimates that won’t agree with Kalman.  Immediately we have all the motivation we need for a better approach.  I use the condition

QR = UH

where U is the inverse square root of the measurement covariance matrix.  At this point we exploit the definition of a priori state estimates as perceived characterizations of actual state immediately before a measurement — thus the perceived error state is by definition a null vector.  That provides a set of N equations in N unknowns to combine with each individual scalar measurement, where N is 4 (for the usual three unknowns in space and one in time) or 3 (when across-satellite differences produce three unknowns in space only).

In either case we have N+1 equations in N unknowns which, after factoring as noted above, enables determination of both the state solution in agreement with Kalman and the parity scalar in full correspondence to the normalized dimensionless test statistic already noted.  All further details pertinent to this development, plus extension to the ERAIM formulation, plus further extension to the correlated observations arising from differential operation, are given in the book cited earlier.  It is rigorously shown therein that this single-measurement RAIM is the final stage of the subfilter approach (Young & McGraw reference, previously cited above), carried to the limit.  A clinching argument: nothing prevents users from having both the classical approach to RAIM and this generalized method.  Nothing has been sacrificed.
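For readers who want to see the mechanics, a small numerical sketch of the construction just described follows. The Cholesky-based whitening matrix, the dimensions, and the numbers are illustrative choices of mine, not the book's program; the point is simply that the factorization yields a state correction matching the scalar Kalman update, while the leftover component of the transformed data vector reproduces, up to sign, the normalized test statistic.

```python
import numpy as np

def qr_parity_update(x_prior, P, h, z, r_var):
    """Combine an a priori estimate with ONE scalar measurement via QR = UH.

    x_prior : a priori state estimate (length N)
    P       : a priori covariance (N x N)
    h       : sensitivity vector of the single measurement (length N)
    z       : scalar residual (measured minus predicted)
    r_var   : measurement error variance
    """
    N = len(x_prior)
    # N prior equations (perceived error state = null vector) plus one measurement
    A = np.vstack([np.eye(N), h.reshape(1, N)])                 # (N+1) x N
    y = np.concatenate([np.zeros(N), [z]])                      # (N+1) data vector
    cov = np.block([[P, np.zeros((N, 1))],
                    [np.zeros((1, N)), np.array([[r_var]])]])   # block-diagonal covariance
    U = np.linalg.inv(np.linalg.cholesky(cov))   # one choice of inverse square root (whitening)
    Q, R = np.linalg.qr(U @ A, mode='complete')
    t = Q.T @ (U @ y)
    dx = np.linalg.solve(R[:N, :], t[:N])        # state correction
    parity = t[N]                                # scalar parity
    return x_prior + dx, parity

# Cross-check against the ordinary scalar Kalman update (illustrative numbers)
P = np.diag([4.0, 1.0, 2.0])
h = np.array([0.6, -0.3, 0.9])
z, r_var = 1.5, 0.25
x_new, parity = qr_parity_update(np.zeros(3), P, h, z, r_var)
K = P @ h / (h @ P @ h + r_var)                  # Kalman gain for one scalar measurement
print(np.allclose(x_new, K * z))                 # True: same state estimate
print(abs(parity), abs(z) / np.sqrt(h @ P @ h + r_var))  # equal magnitudes: the test statistic
```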