Schuler cycles distorted — Here’s why

A 1999 publication I coauthored took dead aim at a characteristic that received far too little attention — and still continues to be widely overlooked: mechanical mounting misalignment of inertial instruments.  To make the point as clearly as possible I focused exclusively on gyro misalignment — e.g., the sensitive axes of the roll, pitch, and yaw gyros aren’t quite perpendicular to one another.  It was easily shown that the effect in free-inertial coast (i.e., with no updates from GPS or other navaids) was serious, even if no other errors existed.

It’s important here to discuss why the message took so long to penetrate.  The main reason is historic; inertial navigation originated in the form of a gimbaled platform holding the gyros and accelerometers in a stable orientation.  When the vehicle carrying that assembly rotated, the gimbal servos would automatically receive a command from the gyros, keeping the platform oriented along its reference directions (e.g., North/East/vertical for moderate latitudes).  Since the angular rates experienced by the inertial instruments were low, gyro misalignment and scale factor errors were much more tolerable than they are with today’s strapdown systems.  I’ve been calling that the “Achilles’ heel” of strapdown for decades now.  The roots go all the way back to 1966 (publication #6), when simulation clearly showed how serious it is.

Not long thereafter another necessary departure from convention became quite clear: replacement of the omnipresent nmi/hr performance criterion for numerous operations.  That criterion is an average over a period of roughly 84 minutes.  It is practically irrelevant for a large and growing number of applications that depend on short-term accuracy (e.g., synthetic aperture radar (SAR), inertial aiding of track loops, antenna stabilization, etc.).  Early assertions of that reality (publication #26, and mention of it in still earlier reports and publications involving SAR) were essentially lost in “that giant shouting match out there” until some realization crept in after publication #38.

Misalignment: mechanical mounting imprecision

Whenever this topic is discussed, certain points must be put to rest.  The first concerns terminology; much of the pertinent literature uses the word misalignments to describe small-angle directional uncertainty components (e.g., error in perception of downward and North, which drive errors in velocity).  To avoid misinterpretation I refer to nav-axis direction uncertainty as misorientation.  In the presence of rotations, mounting misalignment contributes to misorientation.  Those effects, taking place promptly upon rotation of the strapdown inertial instrument assembly, stand in marked contrast to leisurely (nominal 84-minute) classical Schuler dynamics.

The second point, lab calibration, is instantly resolved by redefining each error as a residual amount remaining due to calibration imperfections plus post-cal aging and thermal effects — that amount is still (1) excessive in many cases, and (2) in any event, not covered by firm spec commitments.

A third point involves error propagation and a different kind of calibration (in-flight).  With the old (gimbal) mechanization, in-flight calibration could counteract much of the overall gyro drift effect.  Glib assessments in the 1990s promoted a widespread belief that the same would likewise be true for strapdown.  Changing that perspective motivated the investigation and publication mentioned at the top of this blog.

In that publication it was shown that, although the small-angle approximation is conservative for large changes in direction, it is not extremely so.  The last equation of its Appendix A shows a factor of (pi/2) for a 180-deg turn.  A more thorough discussion of that issue, and how it demands attentiveness to short-lived angular rates, appears on pages 98-99 of GNSS Aided Navigation and Tracking.  Appendix II on pages 239-258 of that same book also provides a program, with further supporting analysis, that supersedes the publication mentioned at the top of this blog.  That program can be downloaded from here.
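For a quick plausibility check of that (pi/2) factor, here is a minimal numerical sketch (my own notation and numbers, not the publication’s derivation).  A mounting misalignment m couples yaw rate into tilt rate along a body-fixed axis; as the body turns, the incremental tilt errors rotate with heading and add vectorially, whereas the small-angle approximation simply accumulates them along a fixed direction:

    import numpy as np

    m = 1e-4                              # assumed misalignment, rad (~20 arc-sec)
    psi = np.linspace(0, np.pi, 1801)     # heading history through a 180-deg turn

    # Exact accumulation: each increment m*d(psi) lands along the instantaneous
    # heading direction, so contributions add vectorially (= 2*m*sin(psi/2)).
    tilt_exact = m * np.hypot(np.sin(psi), 1.0 - np.cos(psi))

    # Small-angle approximation: increments pile up along one direction (= m*psi).
    tilt_small_angle = m * psi

    print(tilt_small_angle[-1] / tilt_exact[-1])    # 1.5707... = pi/2 at 180 deg

The small-angle result overstates the exact one — conservative, but only by that (pi/2) factor at 180 degrees.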

The final point concerns the statistical distribution of errors.  Especially with safety involved (e.g., trusting free-inertial coast error propagation), it is clearly not enough to specify RMS errors.  For example, 2 arc-sec is better than 20 — but what are the statistics?  Furthermore, there is nothing to preclude unexpected extension of the duration of free-inertial coast after a missed approach followed by a large change in direction.  A recent coauthored investigation (Farrell and van Graas, ION-GNSS-2010 Proceedings) applies Extreme Value Theory (EVT) to outliers, showing unacceptably high incidences of large multiples (e.g., ten-sigma and beyond).  To substantiate that, there’s room here for an abbreviated explanation — even in linear systems, gaussian inputs produce gaussian outputs only under very restrictive conditions.

A more complete assessment of misalignment accounts for further imperfections in mounting: the sensitive axis of each accelerometer deviates from that of its corresponding gyro.  As explained on page 72 of Integrated Aircraft Navigation, an IMU with a gyro-accelerometer combo for each of three nominally orthogonal directions has nine total misalignment components for the instruments relative to each other (one way to count them: with the reference frame tied to the gyro axes, three angles characterize the gyro triad’s non-orthogonality and two describe each accelerometer’s deviation from its gyro).

GPS Carrier Phase for Dynamics?

The practice of dead reckoning (a figurative phrase of uncertain origin) is five centuries old.   In its original form, incremental excursions were plotted on a mariner’s chart using dividers for distances, with directions obtained via compass (with corrections for magnetic variation and deviation). Those steps, based on perceived velocity over known time intervals, were accumulated until a correction became available (e.g., from a landmark or a star sighting).

Modern technology has produced more accurate means of dead reckoning, such as Doppler radar or inertial navigation systems.   Addressed here is an alternative means of dead reckoning, by exploiting sequential changes in highly accurate carrier phase. The method, successfully validated in flight with GPS, easily lends itself to operation with satellites from other GNSS constellations (GALILEO, GLONASS, etc.).  That interoperability is now one of the features attracting increased attention; sequential changes in carrier phase are far easier to mix than the phases themselves, and measurements formed that way are insensitive to ephemeris errors (even with satellite mislocation,  changes in satellite position are precise).

Even with usage of only one constellation (i.e., GPS for the flight test results reported here), changes in carrier phase over 1-second intervals provided important benefits. Advantages to be described now will be explained in terms of limitations in the way carrier phase information is used conventionally.   Phase measurements are normally expressed as a product of the L-band wavelength multiplied by a sum in the form (integer + fraction) wherein the fraction is precisely measured while the large integer must be determined. When that integer is known exactly the result is of course extremely accurate.  Even the most ingenious methods of integer extraction, however, occasionally produce a highly inaccurate result.   The outcome can be catastrophic and there can be an unacceptably long delay before correction is possible.   Elimination of that possibility provided strong motivation for the scheme described here.
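To make that cancellation concrete, here is a toy numerical sketch (variable names and numbers are mine, purely illustrative).  The unknown integer, fixed for as long as lock is maintained, drops out of the 1-second difference while the precisely tracked part survives:

    WAVELENGTH = 0.1903                 # meters, GPS L1 (approximate)

    def carrier_phase(range_m, N):
        # Receiver output: offset by an unknown integer N (fixed since signal
        # lock) times the wavelength; cycles after lock are tracked precisely.
        return WAVELENGTH * N + range_m

    N = 117_449_336                             # unknown -- any value works
    phi_1 = carrier_phase(22_345_678.123, N)
    phi_2 = carrier_phase(22_345_702.456, N)    # one second later

    print(phi_2 - phi_1)     # 24.333 m: N cancels, precision is retained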

Linear phase facilitates streaming velocity with GNSS interoperability

With formation of 1-sec changes, all carrier phases can be forever ambiguous, i.e., the integers can remain unknown; they cancel in forming the sequential differences. Furthermore, discontinuities can be tolerated; a reappearing signal is instantly acceptable as soon as two successive carrier phases differ by an amount satisfying the single-measurement RAIM test.   The technique is especially effective with receivers using FFT-based processing, which provides unconditional access, with no phase distortion, to all correlation cells (rather than a limited subset offered by a track loop).

Another benefit is subtle but highly significant: acceptability of sub-mask carrier phase changes. Ionospheric and tropospheric timing offsets change very little over a second. Conventional systems are designed to reject measurements from low-elevation satellites. Especially in view of improved geometric spread, retention here prevents unnecessary loss of important information.   Demonstration of that occurred in flight when a satellite dropped to the horizon; sub-mask pseudoranges of course had to be rejected, but all of the 1-sec carrier phase changes were perfectly acceptable until the satellite was no longer detectable.

One additional (deeper) topic, requiring much more rigorous analysis, arises from sequential correlations among 1-sec phase change observables. The issue is thoroughly addressed and put to rest in the later sections of the 5th chapter of GNSS Aided Navigation and Tracking.

Dead reckoning capability without IMU was verified in flight, producing decimeter/sec RMS velocity errors outside of turn transients (Section 8.1.2, pages 154-162 of the book just cited). With a low-cost IMU, accuracy is illustrated in the table near the bottom of a 1-page description on this site (also appearing on page 104 of that book). All 1-sec phase increment residual magnitudes were zero or 1 cm for the seven satellites (six across-SV differences) observed at the time shown. Over almost an hour of flight at altitude (i.e., excluding takeoff, when heading uncertainty caused larger lever-arm vector errors), cm/sec RMS velocity accuracy was obtained.


In January of 2005 I presented a paper “Full Integrity Test for GPS/INS” at ION NTM that later appeared in the Spring 2006 ION Journal.  I’ve adapted the method to operation (1) with and (2) without IMU, obtaining RMS velocity accuracy of a centimeter/sec and a decimeter/sec, respectively, over about an hour in flight (until the flight recorder was full).

Methods I use for processing GPS data include many sharp departures from custom.  Motivation for those departures arose primarily from the need for robustness.  In addition to the common degradations we’ve come to expect (due to various propagation effects, planned and unplanned outages, masking or other forms of obscuration and attenuation), some looming vulnerabilities have become more threatening.  Satellite aging and jamming, for example, have recently attracted increased attention.  One of the means I use to achieve enhanced robustness is acceptance-testing of every GNSS observable, regardless of what other measurements may or may not be available.

Classical (Parkinson-Axelrad) RAIM testing (see, for example, my ERAIM blog’s background discussion) imposes requirements for supporting geometry; measurements from each satellite are validated only if enough additional satellites with sufficient geometric spread enable a conclusive test.  For many years that requirement was supported by a wealth of satellites in view, and availability was judged largely by GDOP with its various ramifications (protection limits).  Even with future prospects for a multitude of GNSS satellites, however, it is now widely acknowledged that acceptable geometries cannot be guaranteed.  Recent illustrations of that realization include
* use of subfilters to exploit incomplete data (Young & McGraw, ION Journal, 2003)
* Prof. Brad Parkinson’s observation at the ION-GNSS10 plenary — GNSS should have interoperability to the extent of interchangeability, enabling a fix composed of one satellite from each of four different constellations.

Among my previously noted departures from custom, two steps I’ve introduced are particularly aimed toward usage of all available measurement data.  One step, dead reckoning via sequential differences in carrier phase, is addressed in another blog on this site.  Described here is a summary of validation for each individual data point — whether a sequential change in carrier phase or a pseudorange — irrespective of presence or absence of any other measurement.

While matrix decompositions were used in its derivation, only simple (in fact, intuitive) computations are needed in operation.  To emphasize that here, I’ll put “the cart before the horse” — readers can see the answer now and optionally omit the subsequent description of how I formed it.  Here’s all you need to do: From basic Kalman filter expressions it is recalled that each scalar residual has a sensitivity vector H and a scalar variance of the form

        HPH' + (measurement error variance)

The ratio of each independent scalar residual to the square root of that variance is used as a normalized dimensionless test statistic.  Every measurement can now be used, each with its individual variance.  This almost looks too good to be true and too simple to be useful, but conformance to rigor is established on pages 121-126 and 133 of GNSS Aided Navigation and Tracking.  What follows is an optional explanation, not needed for operational usage.
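In code form the test is only a few lines (a sketch under assumed values; the variable names and the 3-sigma threshold are mine, with thresholds in practice set by a chosen false-alarm probability):

    import numpy as np

    def accept_measurement(residual, H, P, meas_var, threshold=3.0):
        # Normalize the scalar residual by the square root of its predicted
        # variance HPH' + (measurement error variance); accept if consistent.
        test_statistic = residual / np.sqrt(H @ P @ H + meas_var)
        return abs(test_statistic) < threshold

    # Example: one 1-sec carrier phase change, cm-level measurement sigma
    H = np.array([0.6, -0.3, 0.74, 1.0])        # sensitivity (unit LOS + clock)
    P = np.diag([0.05, 0.05, 0.08, 0.02])**2    # a priori covariance, m^2
    print(accept_measurement(0.02, H, P, meas_var=0.01**2))    # True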

The key to my single-measurement RAIM approach begins with a fundamental departure from the classical matrix factorization (QR=H) originally proposed for parity.  I’ll note here that, unless all data vector components are independent with equal variance, that original (QR=H) factorization will produce state estimates that won’t agree with Kalman.  Immediately we have all the motivation we need for a better approach.  I use the condition

        QR = UH

where U is the inverse square root of the measurement covariance matrix.  At this point we exploit the definition of a priori state estimates as perceived characterizations of actual state immediately before a measurement — thus the perceived error state is by definition a null vector.  That provides a set of N equations in N unknowns to combine with each individual scalar measurement, where N is 4 (for the usual three unknowns in space and one in time) or 3 (when across-satellite differences produce three unknowns in space only).

In either case we have N+1 equations in N unknowns which, after factoring as noted above, enables determination of both the state solution in agreement with Kalman and the parity scalar in full correspondence to the normalized dimensionless test statistic already noted.  All further details pertinent to this development, plus extension to the ERAIM formulation, plus further extension to the correlated observations arising from differential operation, are given in the book cited earlier.  It is rigorously shown therein that this single-measurement RAIM is the final stage of the subfilter approach (Young & McGraw reference, previously cited above), carried to the limit.  A clinching argument: Nothing prevents users from having both the classical approach to RAIM and this generalized method.  Nothing has been sacrificed.
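For the curious, here is a numerical confirmation of those claims (toy numbers of my own, with N = 4).  The null a priori error state is stacked with one scalar measurement, U is formed as an inverse square root of the combined covariance, and the QR factors then deliver both the Kalman-agreeing state correction and the parity scalar equal to the normalized test statistic:

    import numpy as np

    N = 4
    P = np.diag([0.05, 0.05, 0.08, 0.02])**2    # a priori state covariance
    R_meas = 0.01**2                            # scalar measurement variance
    h = np.array([0.6, -0.3, 0.74, 1.0])        # measurement sensitivity
    z = 0.02                                    # scalar residual

    # N+1 equations in N unknowns: null perceived a priori error state plus
    # one measurement, with block-diagonal covariance.
    H_aug = np.vstack([np.eye(N), h])
    y = np.concatenate([np.zeros(N), [z]])
    cov = np.zeros((N + 1, N + 1))
    cov[:N, :N] = P
    cov[N, N] = R_meas

    U = np.linalg.inv(np.linalg.cholesky(cov))        # inverse square root
    Q, Rf = np.linalg.qr(U @ H_aug, mode='complete')  # QR = UH
    w = Q.T @ (U @ y)

    x_hat = np.linalg.solve(Rf[:N, :], w[:N])         # state solution
    parity = w[N]                                     # parity scalar

    K = P @ h / (h @ P @ h + R_meas)                  # Kalman gain
    print(np.allclose(x_hat, K * z))                                      # True
    print(np.isclose(abs(parity), abs(z) / np.sqrt(h @ P @ h + R_meas)))  # True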

GPS codes are chosen to produce a strong response if and only if a received signal and its anticipated pattern are closely aligned in time. Conventional designs thus use correlators to ascertain that alignment. Mechanization may take various forms (e.g., comparison of early-vs-late time-shifted replicas), but dependence on the correlation is fundamental. There is also the complicating factor of additional coding superimposed for satellite ephemeris and clock information but, again, various methods have long been known for handling both forms of modulation. Tracking of the carrier phase is likewise highly developed, with capability to provide sub-wavelength accuracies.

An alternative approach using FFT computation allows replacement of all correlators and track loops. The Wiener-Khintchine theorem is well over a half-century old (actually closer to a century), but using it in this application has become feasible only recently. To implement it for GPS a receiver input’s FFT is followed with term-by-term multiplication by the FFT of each separate anticipated pattern (again with optional insertion of fractional-millisecond time shifts for further refinement and again with various means of handling the added clock-&-ephemeris modulation). According to Wiener-Khintchine, multiplication in the frequency domain corresponds to convolution in time — so the inverse FFT of the product provides the needed correlation information.
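A bare-bones sketch of that route follows (illustration only — real front ends add doppler search, the navigation-message handling noted above, and much else).  One pass produces the correlation for every code-phase cell at once:

    import numpy as np

    def correlate_all_cells(received, replica):
        # Multiply the input's FFT by the conjugate FFT of the anticipated
        # pattern; the inverse FFT is the circular correlation at all lags.
        return np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(replica)))

    # Toy example: a +/-1 pseudorandom code received with delay and noise
    rng = np.random.default_rng(0)
    code = rng.choice([-1.0, 1.0], size=1023)
    received = np.roll(code, 311) + 0.5 * rng.standard_normal(1023)

    cells = np.abs(correlate_all_cells(received, code))
    print(int(np.argmax(cells)))      # -> 311: the alignment cell stands out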

FFT processing instantly yields a number of significant benefits. The correlations are obtained for all cells, not just the limited few that would be seen by a track loop. Furthermore all cell responses are unconditionally available. Also, FFTs are not only unconditionally stable but, as an all-zero filter bank (as opposed to a loop with poles as well as zeros), the FFT provides linear phase in the passband. Expressed alternatively, no distortion in the phase-vs-frequency characteristic means constant group delay over the signal spectrum.

The FFT processing approach adapts equally well with or without IMU integration. With it, the method (called deep integration here) goes significantly beyond ultratight coupling, which was previously regarded as the ultimate achievement. Reasons for deep integration’s superiority are just the traits succinctly noted in the preceding paragraph.

Finally it is acknowledged that this fundamental discussion touches very lightly on receiver configuration, only scratching the surface. Highly recommended are the following sources plus references cited therein:

* A very early analytical development by D. van Nee and A. Coenen,
“New fast GPS code-acquisition technique using FFT,” Electronics Letters, vol. 27, pp. 158–160, January 1991.

* The early pioneering work in mechanization by Prof. Frank van Graas et al.,
“Comparison of two approaches for GNSS receiver algorithms: batch processing and sequential processing considerations,” ION GNSS-2005

* the book by Borre, Akos, Bertelsen, Rinder, and Jensen,
A software-defined GPS and Galileo receiver: A single-frequency approach (2007).

An early comment sent to this site raised a question as to how long I’ve been doing this kind of work.  Yes I’m an old-timer.  Some of my earlier Kalman filter studies are cited in books dating back to the 1970s — e.g., Jazwinski, Stochastic Processes and Filtering Theory, 1970 (page 267); Bryson & Ho, Applied Optimal Control, 1975 (page 374); Spilker, Digital Communication by Satellite, 1977 (page 636).  My first book, published by Academic Press, initially appeared in 1976.

In the early 1960s, not long after Kalman’s ASME breakthrough paper on optimal filtering, I was at work simulating its effectiveness for orbit determination (publication #4).  No formal recognition of EKF existed at that time, but nonlinearities in both dynamics and observables made that course of action an obvious choice.  In 1967 I applied it to attitude determination for my Ph.D. dissertation (publication #9). Shortly thereafter I wrote a program (publication #16) for application to deformations of a satellite so large (end-to-end length taller than the Empire State Building) that its flexural oscillations were too slow to allow decoupling from its rotational motion (publications #10, 11, 12, 14, 15, 27).  Within that same time period I analyzed and simulated strapdown inertial navigation (publications #6, 7, 8).

Early familiarization with Kalman filtering and inertial navigation paid huge dividends during subsequent efforts in other areas.  Those included, at first, doppler nav with a time-shared radar beam (publication #20), synthetic aperture radar (publications #21, 22, 38, 41), synchronization (publication #19), tracking (publications #23, 24, 28, 30, 32, 36, 39, 40, 48, 52, 54, 60, 61, 66, 67, 69), transfer alignment (publications #29, 41, 44), software validation (publications #34, 42), image fusion (publications #43, 49), optimal control (publication #33), plus a few others.  All these efforts made it quite clear to me — there’s much more to all this than sets of equations.

Involvement in all those fields had a side effect of delaying my entry into GPS work; I was a latecomer when the GPS pioneers were already established.  GPS/GNSS is heavily involved, however, in much of my later work (latter half of my publications) — and my work in other areas produced a major benefit:  The experience provided insights which, in the words of one reviewer quoted in the book description (click here) are either hard to find or unavailable anywhere else.  Recognizing opportunities for synergism — many still absent from today’s operational systems — enabled me to cross the line into advocacy (publications #26, 47, 55, 63, 66, 68, 73, 74, 77, 83, 84, 85, 86).  Innovations present in GNSS Aided Navigation and Tracking were either traceable to or enhanced by my earlier familiarization with techniques used in other areas.

Almost two decades ago an idea struck me: the way GPS measurements are validated could be changed in a straightforward way, to provide more information without affecting the navigation solutions at all.

First, some background: The Receiver Autonomous Integrity Monitoring (RAIM) concept began with a detection scheme whereby an extra satellite (five instead of four, for the three spatial unknowns plus time) enables consistency checks.  Five different navigation solutions are available, each obtained with one of the satellites omitted.  A wide dispersion among those solutions indicates an error somewhere, without identifying which of the five has a problem.  Alternatively, one solution formed with any satellite omitted can be used to calculate what the unused measurement should theoretically be.  Comparison vs the observed value then provides a basis for detection.

A choice then emerged between two candidate detection methods, i.e., chi-squared and parity.  The latter choice produced five equations in four unknowns, expressed with a 5×4 matrix rewritten as the product of two matrices (one orthogonal and the other upper triangular).  That subdivision separated the navigation solution from a scalar containing the information needed to assess the degree of inconsistency. Systematic testing of that scalar according to probability theory was quickly developed and extended to add a sixth satellite, enabling selection of which satellite to leave out of the navigation solution.

The modified strategy I devised can now be described.  First, instead of a 5×4 matrix for detection, let each solution contain five unknowns – the usual four plus one measurement bias.  Five solutions are obtained, now with each satellite taking its turn as the suspect.  My 1992 manuscript (publication #53 – click here) shows why the resulting navigation solutions are identical to those produced by the original RAIM approach (i.e., with each satellite taking its turn “sitting out this dance”).

The real power of this strategy comes in expansion to six satellites.  A set of 6×5 matrices results, and the subdivision into orthogonal and upper triangular factors now produces six parity scalars (rather than 2×1 parity vectors — everyone understands a zero-mean random scalar).  Not only are estimates obtained for measurement biases but the interpretation is crystal clear: With one biased measurement, every parity scalar has nonzero mean except the one with the faulty satellite in the suspect role.  Again all navigation solutions match those produced by the original RAIM method, and fully developed probability calculations are immediately applicable.  Content of the 1992 manuscript was then included, with further adaptation to the common practice of using measurement differences for error cancellation, in Chapter 6 of GNSS Aided Navigation and Tracking.  Additional extensions include rigorous adaptation to each individual observation independent of all others (“single-measurement RAIM”) and to multiple flawed satellites.
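A small numerical check of that identity is easy to set up (toy geometry of my own, unweighted least squares as in the classical formulation).  Augmenting the 6×4 matrix with a bias column for the suspect satellite reproduces, exactly, the navigation solution obtained by leaving that satellite out:

    import numpy as np

    rng = np.random.default_rng(2)
    H = np.hstack([rng.standard_normal((6, 3)), np.ones((6, 1))])  # 6 SVs, 4 unknowns
    z = rng.standard_normal(6)                                     # measurement vector

    for suspect in range(6):
        # Suspect's measurement carries an extra unknown bias
        e = np.zeros((6, 1)); e[suspect] = 1.0
        x_bias = np.linalg.lstsq(np.hstack([H, e]), z, rcond=None)[0][:4]

        # Original RAIM approach: the same satellite simply "sits out"
        keep = [i for i in range(6) if i != suspect]
        x_omit = np.linalg.lstsq(H[keep], z[keep], rcond=None)[0]

        assert np.allclose(x_bias, x_omit)    # identical navigation solutions
    print("all six bias-augmented solutions match their omit-one counterparts")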

In the early 1990s I recalled, in a manuscript (publication #49 – click here), advocacy from years ago that probably originated from within USAF.  A sharp distinction was to be drawn between “multisensor integration” for low-speed, low-volume processing versus “sensor fusion” for high-speed, high-volume processing.  Unfortunately, the separate terminology never survived; the industry uses the same vocabulary for nav update “fusion” at leisurely rates and for fusion of images (e.g., megapixels/frame with 3-byte RGB pixels at 30 frames/sec) requiring speeds expressed in GHz.

Terminology aside, a major task for the imaging field is to recognize and categorize the degradations.  Combining tracks from different sensors is a major undertaking.  For obvious reasons (including but by no means limited to inertial nav error propagation and time-varying errors in imaging sensors), complexity is compounded by motion.  Immediately I’ll step back and consider that from a perspective unlike concepts in common acceptance.  For brevity I’ll just cite some major ingredients here:

  • Inertial nav is used to provide a connection between frames taken in succession from sensors in motion.  It needs to be recognized that INS error propagation for this short-term operation cannot correctly be based on antiquated nmi/hr Schuler modeling.  There are literally dozens of gyro and accelerometer error contributors, most of which are motion-sensitive and not well covered by (in fact, often even excluded from) IMU specifications.  Overbounding is thus necessary for conservative design — which in effect either compromises performance or increases demands for data rates.  For further illustration see Section 4.B, pages 57-65 of GNSS Aided Navigation and Tracking.
  • Often there is motion of not only the sensors but also the tracked objects, whose signatures — masking of which is already a potential issue even when stationary — will be further obscured if motion is driven to thwart observation.
  • It is very important not to attribute sensor stabilization errors to any tracked object’s estimated state (surprisingly many operational designs violate that principle).  Figure 9.3 on page 200 of GNSS Aided Navigation and Tracking shows a planar example for insight.
  • Procedures for in-flight calibration, self-calibration, online temperature compensation, etc. are invoked.  Immediately all measurement errors are then redefined to include only residual amounts due to imperfect calibration.
  • One caveat: any large discrete corrections should occur suddenly between — not within — frames (e.g., a SAR coherent integration time).
  • The association problem, producing hypothetical tracks (e.g., due to crossing paths), defies perfect solution.  Thus a sensor response from object `A’ might be combined with subsequent response from object `B’ to produce an extraneous track characterizing neither.  Obviously this becomes more unwieldy with increasing density of objects responding to sensors.
  • Ironically, many of the objects that complicate the association task are of little or no interest.  Rocks reflect radar transmissions.  Animals respond to IR sensors.  Metallic objects respond to both, which raises an opportunity for concentration on metallic objects: accept information only from pixels with both radar and IR responses (see the sketch just after this list).  Tracks formed after that will be far fewer and much more credible.
  • Registration of image data from IR (Az/EL) and SAR (range/doppler) cells must account for big differences in pixel size, shape, and orientation.  Although by no means trivial, in principle it can be done.
  • Even if all algorithm development and processing implementation issues are solved, unknown terrain slopes will degrade the results.  Also, undulations (as well as any structures present) in a swath will produce data gaps due to masking.  How long a gap is tolerable before dropping an old track and reallocating its resources for a new one will be another design decision.
  • Imaging transformation (e.g., 4×4 affine group and/or thin plate spline) applicability will depend on operational specifics.
When I was more heavily involved in this area the processing requirements for image fusion while still in raster form would have been prohibitive.  Today that may no longer be true.
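A schematic of the dual-sensor gating idea mentioned in the list above, after registration has mapped both sensors onto a common pixel grid (array names, sizes, and thresholds are mine, purely illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    radar = rng.random((64, 64))     # registered radar response map (normalized)
    ir = rng.random((64, 64))        # registered IR response map (normalized)

    RADAR_THRESH = IR_THRESH = 0.9

    # Keep only pixels exceeding detection thresholds in BOTH sensors; rocks
    # (radar-only) and animals (IR-only) drop out before track formation.
    candidates = (radar > RADAR_THRESH) & (ir > IR_THRESH)
    print(candidates.sum(), "joint-response pixels out of", candidates.size)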


This set of blogs will not be considered complete until at least seventy (or possibly a hundred) are available for visitors to download and/or print.  Each individual blog, with links to references (which in some cases can also be downloaded and printed from this site), summarizes a specific aspect from a chosen set of topics.  A smaller number of these “one-pagers” will address topics from my earlier, more fundamental, book Integrated Aircraft Navigation.  An additional few (very few) will deal with topics not covered in either of those two books.  An example of the latter publicizes some useful facets of the ultra-familiar classical low-pass filter which (believe it or not – after all these years) have remained obscure.

Over time, dozens more will be added from a wide span of topics (all firmly supported by experience as well as theory, ranging from elementary to advanced, in some cases relatively new and therefore largely unknown) that will include

  • Modern estimation in both block (weighted least squares) and sequential (Kalman filtering, with Battin’s derivation – much easier to follow than Kalman’s) form, with their interrelationship developed quite far, enabling “plant noise” levels to be prescribed in closed-form, also providing highly unusual insight into sequentially correlated measurement errors; chi-squared residuals; implications of optimality during transients; need for conservatism in modeling; sensitivity of matrix-vs-vector extrapolation (“do’s and don’ts”); application-dependence of commonality and uniqueness features; quantification of observability and effects of augmentation on it; duality among a wide scope of navigation modes; commonly overlooked duality between tracking and short-term inertial nav error propagation; when “correction-to-the adjustment” terms can and can’t be omitted; suboptimal (equal-eigenvalues) estimation with steady-state performance indistinguishable from optimal; all fully supported by theory and experience
  • Basic building-blocks for attitude expressions: superiority of quaternions and direction cosines over Euler angles, due to singularities (“gimbal lock”) at 90 deg for the x-y-z sequence and at 0 deg for the z-x-z sequences used for orbits
  • GPS issues related to the top-priority goal of robustness: beyond elementary (4-state and 8-state) formulations; duality of pseudorange and phase ambiguity; exploitation of modern processing capabilities in GPS/GNSS receivers; carrier phase as integrated doppler vs frequency data; 1-sec sequential phase changes (much easier to mix across constellations, negligible sequential changes in IONO/TROPO propagation, ambiguity resolution not needed, instant reacquisition, no mask angle needed); streaming velocity for dead reckoning with segmentation of position fixes; differential operation – differencing across satellites, receivers, and time; handling correlations from differencing; orthogonalization for simple QR factorization; measurement relocation in time and lever-arm adjustment; E(Extended)RAIM; D(Differential)RAIM; necessity of weighting in single-measurement RAIM with pseudoranges and carrier phases, concurrently; sample flight test results showing state-of-the-art accuracies in dynamics (e.g., cm/sec RMS velocity error and tenths-mrad leveling) with a low-cost IMU; revisit of the same flight segment, achieving decimeter/sec RMS velocity error without any IMU
  • Tracking (with subdivision into over a dozen topics including a littoral environment operation with hundreds of ships present; orbit determination; usage of Lambert’s laws; surface-to-air (subdivided into ground-to-air and tracking from ships), air-to-surface and surface-to-surface (again with the same subdivision), air-to-air; reentry vehicles; usage of stable coordinate frames; linearity in both dynamics and measurements; Mode-S squitters for mutual surveillance and collision avoidance in crowded airspace; multiple track output usage (placement of gates, antenna steering, file maintenance); crucial importance of transmitting measurements rather than coordinates (publication #66); extension to noncooperative objects; critical distinction (often blurred) between errors in tracking and stabilization; successfully accomplished concurrent track of multiple objects with electronically steered beams; bistatic and multistatic operation; postprocessing to form familiar parameters from estimator outputs; short-range projectiles over “flat-earth” – plus many more)
  • Processing of inertial data – incrementing of position, velocity, attitude; straightforward state-of-the-art algorithms for complete metamorphosis from raw gyro and accelerometer samples into final 3-D position, velocity, and attitude; motion-sensitive inertial instrument errors; coning; sculling; critical distinction between misalignment (imperfect mechanical mounting) vs misorientation; adaptive accommodation of gyro scale factor and misalignment errors; instability of unaided vertical channel; azimuth pseudomeasurement; near-universal misconceptions connected to free-inertial coast
  • Support functions (transfer alignment; SAR motion compensation; stabilization of images; sensor control mechanizations; synchronization; determination of retention probability)
  • Vision-for-the-future with maximum situation awareness for all cooperating participants in a scenario; critical role of interfaces (implications of singularities, RAIM, Differential GPS, etc.); software modularity, reuse, and coordination.  Full validation in GNSS Aided Navigation and Tracking.

For steady state, a suboptimal estimator can be designed with near-optimal performance.  A Kalman filter, though, optimizes accuracy during transients too – provided that the model is known and linear.  Immediately we’ll invoke the “almost/most/if” qualification: an extended Kalman filter (EKF) is almost optimum, throughout most of its operation, if the model is almost linear and modeling errors are held in check via process noise.  Rather than presenting justification here I’ll cite a set of “do’s and don’ts” – validated by long experience – from Section 2.9 of GNSS Aided Navigation and Tracking, with GPS/INS flight test data included.  Eqs. (9.9)-(9.19) of that same reference provide simple design equations for alpha-beta and alpha-beta-gamma trackers that have consistently produced success in operation.

First we’ll note that suboptimal is not equated to “constant gain” – if for no other reason, the time between measurements will vary in many systems.  That’s quite easily accommodated by the alpha-beta[-gamma] designs just mentioned.  There are additional reasons, though, that can be illustrated by addressing a taxing situation: initiating a radar track file in close-range air-to-air encounters between two fighter jets.  The target’s (i.e., tracked object’s) cross-range velocity at lock-on time is unknown.  It could be 800 ft/sec, for example, in which case the tracker’s initial velocity error has at least that 800-ft/sec component.  With any additional unknown component of along-range velocity the target may have at that instant (doppler, if observed, might not yet be trusted to represent range rate dynamics), the tracker’s initial velocity error will exceed 800 ft/sec.  The transient at acquisition could easily be further complicated by acceleration.  Anyone familiar with servo pull-in dynamics will immediately see how the transient can reach significantly beyond the initial error – very fast – in multiple directions (e.g., East/West and North/South).  Since we’re not at all comfortable with velocity errors on the order of 1000 ft/sec, the task is to wash that out ASAP.

A Kalman filter having accurate knowledge of the initial [P] matrix would breeze through this challenge; this transient behavior provides an excellent example of that matrix’s role.  Knowledge of that matrix is tantamount to knowing whether – at initiation time – the tracker’s North velocity error is positively or negatively correlated (i.e., likely to have the same or opposite sign) with its North position error, likewise for East velocity error with same-vs-opposite sign of North acceleration error, and likewise for all combinations – not only the signs but also the RMS amounts.  Of course that’s completely unrealistic.  So now what?

Suboptimal gain sets phased in at the right times can handle this.  For a simple illustration, let a 3-dimensional tracker, divided into three separable 3-state (position/speed/acceleration) single-direction channels, have a 20-Hz update rate.  If the first few updates have gains of 0.5 or more for position only, even a huge position error can be quickly brought down near sensor-error levels before accompanying errors in dynamics have much time to propagate. Then after that many (“K1”) position corrections, a position-&-speed update phase can be initiated, using the alpha-beta tracker gains related as shown in Eq. (9.12) of the reference cited above.  Duration of that phase is devised to last only as long as necessary to reduce speed error to design levels (which will be proportional to measurement error divided by that duration).  After the total number of corrections has reached that intended design value (“K2”), the alpha-beta-gamma phase can start with gains related according to Eqs. (9.18-19) of that same reference.  That phase continues until the total corrections count reaches “K3” at which time acceleration error is reduced to an amount inversely proportional to the square of (K3-K2).  Gains thereafter may conform to Kalman filter weighting.
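A single-channel sketch of that schedule follows (the gain relations shown are common alpha-beta[-gamma] forms standing in for the book’s Eqs. (9.12) and (9.18-19); the counts, noise levels, and initial conditions are illustrative only):

    import numpy as np

    DT, K1, K2, K3 = 0.05, 10, 60, 200     # 20-Hz updates; phase boundaries
    SIGMA = 10.0                           # position measurement sigma, ft

    def gains(k):
        # Phased schedule: position-only, then alpha-beta, then alpha-beta-gamma
        if k < K1:
            return 0.5, 0.0, 0.0           # crush the big position error first
        if k < K2:
            a = 2.0 / (k - K1 + 2)         # decaying gain within the phase
            return a, a * a / (2.0 - a), 0.0
        a = 2.0 / (k - K2 + 2)
        b = a * a / (2.0 - a)
        return a, b, b * b / (2.0 * a)

    rng = np.random.default_rng(4)
    F = np.array([[1.0, DT, DT * DT / 2], [0.0, 1.0, DT], [0.0, 0.0, 1.0]])
    x_true = np.array([5000.0, 800.0, 20.0])    # position, speed, acceleration
    x_est = np.zeros(3)                         # tracker starts cold

    for k in range(K3):
        x_true, x_est = F @ x_true, F @ x_est   # propagate one interval
        z = x_true[0] + SIGMA * rng.standard_normal()
        a, b, g = gains(k)
        resid = z - x_est[0]
        x_est += np.array([a, b / DT, g / (DT * DT)]) * resid

    print("final velocity error (ft/sec):", round(x_est[1] - x_true[1], 1))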

This example is not intended to advocate substituting suboptimal for optimal designs just anywhere.  Separation of 3-dimensional trackers into 3-state single-direction channels is often permissible (and sometimes even highly advisable), but – as shown in the cited reference – sometimes inappropriate.  Where it is permitted, use it; solving the unknown-P-zero problem is especially important in applications of this type.  A word to the wise: Do not (repeat: do not) make the update counts K1, K2, etc. programmable.  If you do, someone unfamiliar with the reasoning above will experiment, allowing resets to values producing very prolonged back-and-forth transfer of errors between position and dynamics (one gets worse as another improves; then vice-versa).  When that spectacle is seen by nontechnical administrators, your image in their minds will be indelibly painted with that long drawn-out transient veering back and forth between plus and minus extreme levels.

Another slice-of-advice: Even if inputs are extremely erratic, your tracker must maintain high responsiveness (for sensor sightline stabilization at short range and for range[/doppler] gate placement at any range) – but – the outside world doesn’t have to witness the results of that “hitchy-hatchy” from wildly erratic inputs.  So: don’t change the tracker but do low-pass filter what goes outside.  If “hiding the system’s warts” thereby produces criticism, the justification is: the filtered output, even with the resulting delay (and possibly an accompanying distortion), is easier to interpret.

As an alternative to TCAS in air and ASDE on ground, all facets of collision avoidance (see 9-minute video) can be supplanted with vast improvement:

  • INTEGRATION – one system for both 2-D (runway incursions) and 3-D (in-air)
  • AUTONOMY – no ground station corrections required
  • COMMUNICATION – interrogation/response replaced by Mode-S squitter operation
  • COORDINATION – coordinated squitter scheduling eliminates garble
  • TRACKING – all tracks maintained with GPS pseudoranges in data packets
  • DYNAMICS – tracks provide optimally estimated velocity as well as position
  • TIMELINESS – history of dynamics with position counteracts latency
  • MULTITARGET HANDLING – every participant can track every other participant
  • CONTROL – collisions avoided by deceleration rather than climb/dive

My previous investigations (publications #61 and #66, combined with publication #85 as well as Chapter 9 of GNSS Aided Navigation and Tracking) provided in-depth analyses for all but the last of these items.  The control aspect of the problem is addressed here.  This introductory discussion involves only two participants, initially on a coaltitude collision course.  One (the “intruder”) continues with his path unchanged (so that the method could remain applicable for encounters between a participant and a non-participant tracked by radar or optical sensors).  The other (the “evader”) decelerates to change projected miss distance to a chosen design value.  This simplest-of-all scenarios can readily be extended to encounters at different altitudes and, by reapplying the method to all users wherever projected miss distance falls below a designated threshold, to multiple-participant cases.

Considered here are simple scenarios with aircraft initially on a collision course at angles from 30 to 130 degrees between their velocity vectors.  Those limits can of course be changed, but the closer the paths are to collinear the more deceleration is required to prevent a collision (in the limit – direct head-on – no amount of deceleration can suffice; turns are required instead).  Turns can be addressed in the future; here we briefly discuss the 30-to-130 degree span.

In Coordinates Magazine and again as applied to UAVs it was shown that, over a wide combination of intruder speed, evader speed, and angles (within the 30-to-130 degree span just noted), the required amount of evader speed reduction is modest.  A linearized approximation can be derived intuitively from scenario parameter values.  The speeds and the angle determine a closing range rate, while closest approach time is near the initial time-to-go (ratio of initial distance to closing rate), though deceleration produces a difference.  The projection of evader speed reduction along the relative velocity vector direction has approximately that much time to build up 500 to 1000 meters of accumulated horizontal separation.  Initiation of the speed change that far in advance allows the dynamics to be gradual, in marked contrast to the sudden TCAS maneuver.  To avoid a wake problem, the evader’s aim point can be directed a few hundred feet above the original coaltitude.  Continuous tracking of the intruder allows the evader to perform repetitive trim adjustments.
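For a feel of the magnitudes, the linearized approximation can be exercised directly (numbers purely illustrative):

    import numpy as np

    v_intruder, v_evader = 250.0, 200.0     # speeds, m/sec
    angle = np.radians(90.0)                # between the velocity vectors
    initial_range = 50_000.0                # m; tracks established well ahead
    desired_miss = 750.0                    # m, chosen design separation

    # On a collision course the closing rate is the relative speed
    closing_rate = np.sqrt(v_intruder**2 + v_evader**2
                           - 2.0 * v_intruder * v_evader * np.cos(angle))
    t_go = initial_range / closing_rate     # approximate time to closest approach

    # Speed-reduction component (projected along the relative velocity
    # direction) that accumulates the desired miss distance over t_go
    delta_v = desired_miss / t_go
    print(f"t_go = {t_go:.0f} sec; projected speed reduction ~ {delta_v:.1f} m/sec")

With roughly two and a half minutes available in this example, the required change is indeed modest — consistent with the gradual dynamics described above.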

A program with results illustrating this scheme will not fit on a one-page summary, but it comes as no surprise that, with accurate tracks established well in advance (a minute or two prior to closest approach time), a modest deceleration can successfully avert collisions.