A comment challenged my video. I’m glad it included an acknowledgment that some points might have been missed. To be frank, that happened a bunch; bear with me while I explain.

First, there’s the accuracy issue: doppler and/or deltarange information provided by many receivers is far less accurate than carrier phase (sometimes due to cutting corners in implementation; recall that carrier phase, as the integral of doppler, will be smoother if processing is done carefully).

Next, a preference for 20-msec intervals will backfire badly. If phase noise at L-band gives a respectable 7 mm = 0.7 cm, the doppler velocity error [(current phase) – (previous phase)] / 1 sec is (1.414)(0.7) ≈ 1 cm/sec RMS for a 1-sec sequential differencing interval. Now use 20 msec: FIFTY times as much doppler error! Alternatively, if the division is implicit rather than overt, the degradation is more complicated: sequential phase differences are highly correlated (with a correlation coefficient of -1/2, to be precise). That’s because the difference (current phase) – (previous phase) and the difference (next phase) – (current phase) both contain the common value of current phase. In a modern estimation algorithm, observations with sequentially correlated errors are far more difficult to process optimally. That topic is a very deep one; Section 5.6 and Addendum 5.B of my 2007 book address it thoroughly. I’m not expecting everyone to go through all that but, to offer fortification for its credibility, let me cite a few items (a numerical check of the arithmetic appears just below the list):

* agreement from other designers who abandoned efforts to use short intervals
* the table near the bottom of a page on this site
* phase residual plots from Chapter 8 of my 2007 book.

The latter two, it is recalled, came from flight test of extended duration (until the flight recorder was full), under severe test-aircraft (DC-3) vibration.
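
For readers who would rather check the arithmetic than take my word for it, here is a minimal sketch in Python. All values are assumed for illustration (0.7-cm white phase noise, 1-sec and 20-msec intervals); nothing here comes from the flight test. It reproduces both the fifty-fold degradation and the -1/2 correlation:

```python
# Numerical check of the differencing arithmetic above. All values assumed:
# white carrier-phase noise of 0.7 cm RMS, differenced at 1 sec and 20 msec.
import numpy as np

rng = np.random.default_rng(1)
sigma_cm = 0.7                                 # RMS phase noise, cm
phase = rng.normal(0.0, sigma_cm, 100_000)     # white phase-noise samples, cm

diffs = np.diff(phase)                         # sequential phase differences

for dt in (1.0, 0.02):                         # differencing interval, sec
    vel_err = diffs.std() / dt                 # velocity error = diff / dt
    print(f"dt = {1000*dt:5.0f} msec: {vel_err:6.1f} cm/sec RMS velocity error")

# Successive differences share the common middle phase, so their lag-1
# correlation coefficient should come out near -1/2.
rho = np.corrcoef(diffs[:-1], diffs[1:])[0, 1]
print(f"lag-1 correlation of successive differences: {rho:+.3f}")
```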

For doppler updating from sources other than satnav, my point is stronger still. Doppler from radar (which lacks the advantage of passive operation) won’t get velocity error much below a meter/sec, and even that is an improvement over unaided inertial nav (we won’t see INS velocity specs expressed in cm/sec within our lifetime).

Additional advantages of what the video offers include (1) no requirement for a mask angle, (2) GNSS interoperability, and (3) robustness. A brief explanation:

(1) Virtually the whole world discards all measurements from low-elevation satellites because of propagation errors. But ionospheric and tropospheric effects change very little over a second; 1-sec phase differences are great for velocity information. Furthermore, they offer a major geometry advantage, while any occurrence of multipath would stick out like a sore thumb and could easily be edited out.
(2) 1-sec differences from various constellations are much easier to mix than the phases themselves. 
(3) For receivers exploiting FFT capability, even short fragments of data, not sufficiently continuous for conventional mechanizations (track loops), are made available for discrete updates.
The whole “big picture” is a major improvement in robustness of operation.

The challenger isn’t the only one who missed these points; much of our industry, in fact, is missing the boat in crucial areas. Again I understand skepticism, but consider the “conventional wisdom” regarding ADSB: velocity errors expressed in meters per second, with speculative values quoted as high as ten. GRADE SCHOOL ARITHMETIC shows how scary that is; collision avoidance extrapolates ahead. Consider the vast error volume resulting from doing that 90 seconds ahead of closest-approach time with several meters per second of velocity error. So, rely on see-and-avoid? There are beaucoup videos that show how futile that is (and many more videos that show how often near misses occur; in addition there are about a thousand runway incursions each year). That justifies the effort for dramatic reduction of errors in tracking dynamics, to cm/sec relative velocity accuracy.
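
To spell out that grade-school arithmetic (with illustrative numbers of my own choosing, not ADSB specifications):

```python
# Extrapolated position error = (velocity error) x (look-ahead time), and the
# corresponding spherical error volume. All numbers here are illustrative.
from math import pi

t_ahead = 90.0                             # sec before closest approach
for vel_err in (1.0, 5.0, 10.0):           # m/sec velocity error
    radius = vel_err * t_ahead             # meters of position uncertainty
    volume = (4.0 / 3.0) * pi * radius**3  # cubic meters of error volume
    print(f"{vel_err:5.1f} m/s -> {radius:6.0f} m radius, "
          f"{volume:12.3e} m^3 error volume")
```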

It’s perfectly logical for people to question my claims if they seem too good to be true. All I ask is follow-through, with visits to the URLs cited here.

LORAN REVISITED

Now that a few years have passed since the LORAN-C budget was killed, it might be a good time to revisit that decision. Unlike other decisions, this one might conceivably be undone; there hasn’t yet been widespread demolition of resources (e.g., towers, transmitters) followed by restoration of the sites to other uses. Something else, though, did occur: recent success achieved by a cooperative effort between the Coast Guard and UrsaNav Inc.

For brevity here it suffices to make a few surface-scratching notes. The vast majority of us in the navigation community recognized the potential benefit of LORAN (and its extended form, eLORAN) as a crucial backup, at extremely low cost, to be used when GPS is unavailable. Many of us, furthermore, anxiously pressed for sanity (e.g., my “2-cents worth” written, to no avail, in 2009).

What’s different now, conceivably, is a combined effect of multiple factors:
* The USCG/UrsaNav success surpassed goals that had been stated earlier.
* Awareness of GPS vulnerability (therefore need for backup) has increased.
* Delay in follow-through (site restoration) offers the chance for a remedy.

A statement appearing in Coordinates Magazine’s March 2012 cover story arose from a different context, but its importance prompted me to cite it in the April 2012 cover story, and to repeat it here: “Do we really need to wait for a catastrophe before taking action against GNSS vulnerabilities?”

Once again I’m adding my voice to the chorus of those speaking out before it’s too late.

CHECK LIST for DESIGNERS

Questions submitted by members of various forums, understandably, frequently involve one or more of the following topics:
 * some or all facets of inertial navigation
 * means of updating and reinitializing the drifting inertial solution
 * satellite navigation (GPS/GNSS) for providing the updates
 * other means of updating (radar, laser, optics, VOR, DME, hyperbolic, … )
 * best ways to use what’s available for various applications.
 
The pool of literature that might be offered is vast, partly due to the enormous array of operations, each with application-dependent requirements.  Finding just the relevant information in a mountain of available references can be a daunting task, especially for young designers.  I’ll try to make their search easier by offering a list of questions they can ask themselves early in the design process:
 * do you need a lat/lon/altitude Earth reference or just a designated point?
 * is the path determined from provisions onboard (nav) or remote (track)?
 * what’s your required accuracy for “absolute” (geolocation) position?
 * what’s your required accuracy for relative position (e.g., from a runway)?
 * do you need precise incremental position history (SAR motion compensation)?
 * do you need precise angular orientation (e.g., laser pointing)?
 * do you need precise angular rates (for image or antenna stabilization)?
 * for direction do you use a North reference or just along-track/cross-track?
 * will you have dependable access to updating information (GPS, radar, …)?
 * if not, how irregular will dynamics be over active parts of your mission?
 * if so, how irregular will the dynamics be during inter-update periods?
 * also if so, what data rate? Longest expected “blind period” between updates?
 * also if so, will measurements need averaging to meet your required accuracy?
 * also if so, how accurate are your measurements AND their time stamps?
 * also if so, can you use postprocessing or do you need everything real-time?
 * are you willing to accept partial updates (some but not all directions)?
 * do you need just position or derivatives too (velocity, acceleration)?
 * if so, how long can your dynamics be trusted to conform to model fidelity?
 * are you doing INS update (e.g., replacing acceleration with tilt states)?
 * if so, will you need to deduce drift rates – and how long will those hold?
 * do your sensors measure distances, angles, doppler, differences of those?
 * for how long does your sensor information content provide observability?
 * how’s your sensor integrity (bad readings at least detectable if present)?
 * for safety-critical operations — what are your backup provisions?
 * are you accommodating multiple modes with time-shared sensing resources?
 * do you need to perform image registration with different imaging sensors?
Etc., etc.; the list goes on.  I won’t even try to claim thoroughness; you get the idea.  Designers with new tasks dumped in their laps can understandably feel overwhelmed.  Searching for references can become a trip through a maze of half-relevant sources.
 
A first step, then, is to separate the relevant (what you need) from the irrelevant (what you don’t need), instantly dismiss any thought of the latter, and do the opposite with the former (nail it).
 
Brief examples, the first two items from the above list:
 * If you just need to know your location relative to a designated point, irrespective of its latitude and longitude, this might help.
 * If you’re tracking instead of navigating, check these out.
And one from the last item on that list:
Again, you get the idea; volumes have been written on all facets.  Many won’t apply to your immediate task; disregard those.
 
The good news is that paths to logical solutions are known and documented.  To avoid abandoning you to an enormous maze of references, I’ll point out some fundamental and advanced (state-of-the-art) tracts that address all the issues just cited and more.  Several blogs and short “1-pagers” will help individual designers choose, based on their specific tasks, passages from the available references.
 
Before GPS we struggled hard for accurate measurements in enough places.  That actually produced a benefit — we had to be resourceful.  My biggest challenge was to understand subjects (Kalman filtering, strapdown inertial navigation) then considered exotic.  Again a benefit; pulling information from 1950s books and papers forced me to understand, focus, and reduce concepts to whatever level became necessary.  The experience prompted me to write the first of my two books on navigation.
 
That first book has been used in myriad courses, including one currently taught by Prof. Hablani, who wrote the most recent testimonial shown at that URL.
 
Some topics that earlier book explained in detail recently came up in another discussion: http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=44646633&gid=160643&commentID=68798460&trk=view_disc&ut=0XsoCju0nA5B81
For example, slow oscillations at “W” radian/sec, with “W” corresponding to the Schuler period (about 84 minutes). In that case position error from accelerometer bias, propagating as (1 – cos Wt), rises much sooner than that from gyro drift, propagating as (t – sin Wt / W). Page 80 of that book sketches an example of the behavior over a cycle.  The development offered beyond that point extends as far as most analysts would wish to go (other natural frequencies of error propagation, rectification of vibration-sensitive errors, etc.).
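
To see those two propagation laws side by side, here is a minimal single-channel sketch; the instrument grades (100 micro-g accelerometer bias, 0.01 deg/hr gyro drift) are assumed for illustration and are not taken from the book:

```python
# Single-channel INS error propagation over one Schuler cycle.
# Accelerometer bias b: position error (b/W^2)(1 - cos Wt), quadratic early.
# Gyro drift e: position error (e*g/W^2)(t - sin Wt / W), cubic early,
# which is why the accelerometer-bias term rises much sooner.
import numpy as np

g = 9.81                                  # m/s^2
R = 6.371e6                               # Earth radius, m
W = np.sqrt(g / R)                        # Schuler frequency (~84-min period)

b = 1e-4 * g                              # accelerometer bias, m/s^2 (assumed)
e = np.deg2rad(0.01) / 3600.0             # gyro drift, rad/sec (assumed)

for t in np.linspace(0.0, 2 * np.pi / W, 7):
    p_accel = (b / W**2) * (1.0 - np.cos(W * t))
    p_gyro = (e * g / W**2) * (t - np.sin(W * t) / W)
    print(f"t = {t/60:5.1f} min: accel-bias {p_accel:8.1f} m, "
          f"gyro-drift {p_gyro:8.1f} m")
```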
 
Not long after that first book appeared, GPS became operational — and I was a newcomer to that.  By the time I understood it there were many experts.  Once again I had to catch up, and the process was gradual.  With an exceptionally strong client interested in my inertial background, a synergism was formed. That led to a flight test producing state-of-the-art accuracy in dynamics; see the table describing several innovations also resulting from the work just described.
 
That second book, after a review chapter, begins where the first (pre-GPS) one left off.  It also (1) is used in tutorials and (2) has received testimonials from other instructors, as the URL shows.  The sources cited here, plus an online 1.5-hr tutorial free to Inst-of-Navigation members, plus a “try-before-you-buy” 100-page excerpt available from this site, should be helpful to many.

GPS Carrier Phase for Dynamics?

The practice of dead reckoning (a figurative phrase of uncertain origin) is five centuries old.   In its original form, incremental excursions were plotted on a mariner’s chart using dividers for distances, with directions obtained via compass (with corrections for magnetic variation and deviation). Those steps, based on perceived velocity over known time intervals, were accumulated until a correction became available (e.g., from a landmark or a star sighting).

Modern technology has produced more accurate means of dead reckoning, such as Doppler radar or inertial navigation systems.   Addressed here is an alternative means of dead reckoning, by exploiting sequential changes in highly accurate carrier phase. The method, successfully validated in flight with GPS, easily lends itself to operation with satellites from other GNSS constellations (GALILEO, GLONASS, etc.).  That interoperability is now one of the features attracting increased attention; sequential changes in carrier phase are far easier to mix than the phases themselves, and measurements formed that way are insensitive to ephemeris errors (even with satellite mislocation,  changes in satellite position are precise).

Even with usage of only one constellation (i.e., GPS for the flight test results reported here), changes in carrier phase over 1-second intervals provided important benefits. The advantages to be described now will be explained in terms of limitations in the way carrier phase information is used conventionally.   Phase measurements are normally expressed as the L-band wavelength multiplied by a sum of the form (integer + fraction), wherein the fraction is precisely measured while the large integer must be determined. When that integer is known exactly the result is of course extremely accurate.  Even the most ingenious methods of integer extraction, however, occasionally produce a highly inaccurate result.   The outcome can be catastrophic, and there can be an unacceptably long delay before correction is possible.   Elimination of that possibility provided strong motivation for the scheme described here.

Linear phase facilitates streaming velocity with GNSS interoperability

With formation of 1-sec changes, all carrier phases can remain forever ambiguous, i.e., the integers can remain unknown; they cancel in forming the sequential differences. Furthermore, discontinuities can be tolerated; a reappearing signal is instantly acceptable as soon as two successive carrier phases differ by an amount satisfying the single-measurement RAIM test.   The technique is especially effective with receivers using FFT-based processing, which provides unconditional access, with no phase distortion, to all correlation cells (rather than the limited subset offered by a track loop).
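
A toy numerical illustration of that cancellation (every number below is invented, and a real receiver reports accumulated phase rather than true range): since the same unknown integer N appears at both epochs, it drops out of the 1-second difference.

```python
# Why 1-sec phase differences never need the integer: N cancels.
L1_WAVELENGTH = 0.1903          # meters (GPS L1)

N = 123_456_789                 # unknown integer cycles, same at both epochs
range_t0 = 21_345_678.237       # true range at t0, meters (invented)
range_t1 = 21_345_001.481       # true range at t0 + 1 sec, meters (invented)

# Accumulated phase as measured: total cycles offset by the unknown N.
phase_t0 = range_t0 / L1_WAVELENGTH - N
phase_t1 = range_t1 / L1_WAVELENGTH - N

delta_range = (phase_t1 - phase_t0) * L1_WAVELENGTH
print(f"range change over 1 sec: {delta_range:+.3f} m (N was never needed)")
```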

Another benefit is subtle but highly significant: acceptability of sub-mask carrier phase changes. Ionospheric and tropospheric timing offsets change very little over a second, yet conventional systems are designed to reject measurements from low-elevation satellites. Especially in view of the improved geometric spread, retention here prevents unnecessary loss of important information.   Demonstration of that occurred in flight when a satellite dropped to the horizon; sub-mask pseudoranges of course had to be rejected, but all of the 1-sec carrier phase changes were perfectly acceptable until the satellite was no longer detectable.

One additional (deeper) topic, requiring much more rigorous analysis, arises from sequential correlations among 1-sec phase change observables. The issue is thoroughly addressed and put to rest in the later sections of the 5th chapter of GNSS Aided Navigation and Tracking.
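
Without attempting that rigor here, a small sketch shows the structure involved (7-mm phase noise assumed): the error covariance of successive differences is tridiagonal, and whitening with its Cholesky factor is one standard way to restore independence before optimal processing.

```python
# Covariance of successive 1-sec phase-difference errors, for white phase
# noise of variance s^2: diagonal 2s^2, first off-diagonals -s^2 (hence the
# -1/2 correlation coefficient noted earlier).
import numpy as np

s = 0.007                                   # 7 mm phase noise (assumed)
K = 8                                       # number of successive differences
C = s**2 * (2.0 * np.eye(K) - np.eye(K, k=1) - np.eye(K, k=-1))

print("correlation of neighboring differences:", C[0, 1] / C[0, 0])  # -0.5

# Whitening with the Cholesky factor restores an identity covariance,
# after which standard (independent-measurement) processing applies.
L = np.linalg.cholesky(C)
whitened_cov = np.linalg.solve(L, np.linalg.solve(L, C).T)
print("whitened covariance is identity:", np.allclose(whitened_cov, np.eye(K)))
```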

Dead reckoning capability without an IMU was verified in flight, producing decimeter/sec RMS velocity errors outside of turn transients (Section 8.1.2, pages 154-162 of the book just cited). With a low-cost IMU, accuracy is illustrated in the table near the bottom of a 1-page description on this site (also appearing on page 104 of that book). All 1-sec phase increment residual magnitudes were zero or 1 cm for the seven satellites (six across-SV differences) observed at the time shown. Over almost an hour of flight at altitude (i.e., excluding takeoff, when heading uncertainty caused larger lever-arm vector errors), cm/sec RMS velocity accuracy was obtained.

SINGLE-MEASUREMENT RAIM

In January of 2005 I presented a paper, “Full Integrity Test for GPS/INS,” at ION NTM; it later appeared in the Spring 2006 ION Journal.  I’ve adapted the method to operation (1) with and (2) without an IMU, obtaining RMS velocity accuracies of a centimeter/sec and a decimeter/sec, respectively, over about an hour in flight (until the flight recorder was full).

Methods I use for processing GPS data include many sharp departures from custom.  Motivation for those departures arose primarily from the need for robustness.  In addition to the common degradations we’ve come to expect (due to various propagation effects, planned and unplanned outages, masking or other forms of obscuration and attenuation), some looming vulnerabilities have become more threatening.  Satellite aging and jamming, for example, have recently attracted increased attention.  One of the means I use to achieve enhanced robustness is acceptance-testing of every GNSS observable, regardless of what other measurements may or may not be available.

Classical (Parkinson-Axelrad) RAIM testing (see, for example, my ERAIM blog’s background discussion) imposes requirements for supporting geometry; measurements from each satellite are validated only if enough other satellites, with enough geometric spread, enable a sufficiently conclusive test.  For many years that requirement was supported by a wealth of satellites in view, and availability was judged largely by GDOP with its various ramifications (protection limits).  Even with future prospects for a multitude of GNSS satellites, however, it is now widely acknowledged that acceptable geometries cannot be guaranteed.  Recent illustrations of that realization include
* use of subfilters to exploit incomplete data (Young & McGraw, ION Journal, 2003)
* Prof. Brad Parkinson’s observation at the ION-GNSS10 plenary — GNSS should have interoperability to the extent of interchangeability, enabling a fix composed of one satellite from each of four different constellations.

Among my previously noted departures from custom, two steps I’ve introduced  are particularly aimed toward usage of all available measurement data.  One step, dead reckoning via sequential differences in carrier phase, is addressed in another blog on this site.  Described here is a summary of validation for each individual data point — whether a sequential change in carrier phase or a pseudorange — irrespective of presence or absence of any other measurement.

While matrix decompositions were used in its derivation, only simple (in fact, intuitive) computations are needed in operation.  To emphasize that here, I’ll put “the cart before the horse”; readers can see the answer now and optionally omit the subsequent description of how I formed it.  Here’s all you need to do: from basic Kalman filter expressions it is recalled that each scalar residual has a sensitivity vector H and a scalar variance of the form

HPH’ + (measurement error variance)

The ratio of each independent scalar residual to the square root of that variance is used as a normalized dimensionless test statistic.  Every measurement can now be used, each with its individual variance.  This almost looks too good to be true and too simple to be useful, but conformance to rigor is established on pages 121-126 and 133 of GNSS Aided Navigation and Tracking.  What follows is an optional explanation, not needed for operational usage.
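
As a concrete (and deliberately simple) sketch of that recipe, with every number assumed for illustration:

```python
# Single-measurement acceptance test: normalize the scalar residual by
# sqrt(HPH' + measurement error variance), then screen the ratio.
# All numbers are assumed for illustration.
import numpy as np

P = np.diag([4.0, 4.0, 9.0, 1.0])       # a priori covariance, m^2 (assumed)
H = np.array([0.6, -0.3, 0.74, 1.0])    # sensitivity: unit LOS + clock term
meas_var = 0.02**2                      # (2 cm)^2 measurement variance

residual = 0.035                        # predicted minus measured, meters
variance = H @ P @ H + meas_var         # scalar HPH' + R
statistic = residual / np.sqrt(variance)

print(f"normalized test statistic: {statistic:+.2f}")
print("accept" if abs(statistic) < 3.0 else "reject")   # e.g., 3-sigma gate
```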

The key to my single-measurement RAIM approach begins with a fundamental departure from the classical matrix factorization (QR=H) originally proposed for parity.  I’ll note here that, unless all data vector components are independent with equal variance, that original (QR=H) factorization will produce state estimates that won’t agree with Kalman.  Immediately we have all the motivation we need for a better approach.  I use the condition

QR = UH

where U is the inverse square root of the measurement covariance matrix.  At this point we exploit the definition of a priori state estimates as perceived characterizations of actual state immediately before a measurement — thus the perceived error state is by definition a null vector.  That provides a set of N equations in N unknowns to combine with each individual scalar measurement, where N is 4 (for the usual three unknowns in space and one in time) or 3 (when across-satellite differences produce three unknowns in space only).

In either case we have N+1 equations in N unknowns which, after factoring as noted above, enables determination of both the state solution in agreement with Kalman and the parity scalar in full correspondence to the normalized dimensionless test statistic already noted.  All further details pertinent to this development, plus extension to the ERAIM formulation, plus further extension to the correlated observations arising from differential operation, are given in the book cited earlier.  It is rigorously shown therein that this single-measurement RAIM is the final stage of the subfilter approach (Young & McGraw reference, previously cited above), carried to the limit.  A clinching argument: nothing prevents users from having both the classical approach to RAIM and this generalized method.  Nothing has been sacrificed.
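
Here is one way the preceding construction can be sketched numerically (all values assumed; U is obtained from a Cholesky factor, one valid inverse square root): the a priori state supplies N equations, the scalar measurement supplies one more, and a single QR factorization yields both the Kalman-consistent state and the parity scalar.

```python
# Sketch of QR = UH for single-measurement RAIM, N = 4 unknowns.
# Stack N pseudo-measurements of the (null) a priori error state with one
# actual scalar measurement; whiten; factor; read off state and parity.
import numpy as np

N = 4
P = np.diag([4.0, 4.0, 9.0, 1.0])            # a priori covariance (assumed)
h = np.array([0.6, -0.3, 0.74, 1.0])         # measurement sensitivity row
r_var = 0.02**2                               # measurement variance (assumed)

H_stack = np.vstack([np.eye(N), h])           # (N+1) x N
z = np.concatenate([np.zeros(N), [0.035]])    # null prior errors + residual

R_stack = np.zeros((N + 1, N + 1))            # stacked error covariance
R_stack[:N, :N] = P
R_stack[N, N] = r_var
U = np.linalg.inv(np.linalg.cholesky(R_stack))  # whitening transformation

Q, R_tri = np.linalg.qr(U @ H_stack, mode="complete")
y = Q.T @ (U @ z)
state = np.linalg.solve(R_tri[:N, :N], y[:N])   # matches the Kalman update
parity = y[N]                                    # scalar for the RAIM test

print("state update:", np.round(state, 4))
print(f"parity scalar: {parity:+.3f}")
```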

Almost two decades ago an idea struck me: the way GPS measurements are validated could be changed in a straightforward way, to provide more information without affecting the navigation solutions at all.

First, some background: the Receiver Autonomous Integrity Monitoring (RAIM) concept began with a detection scheme whereby an extra satellite (five instead of four, for the three spatial dimensions plus time) enables consistency checks.  Five different navigation solutions are available, each obtained with one of the satellites omitted.  A wide dispersion among those solutions indicates an error somewhere, without identifying which of the five has a problem.  Alternatively, a single solution formed with one satellite omitted can be used to calculate what the unused measurement theoretically should be; comparison with the observed value then provides a basis for detection.

A choice then emerged between two candidate detection methods, i.e., chi-squared and parity.  The latter choice produced five equations in four unknowns, expressed with a 5×4 matrix rewritten as the product of two matrices (one orthogonal and the other upper triangular).  That subdivision separated the navigation solution from a scalar containing the information needed to assess the degree of inconsistency. Systematic testing of that scalar according to probability theory was quickly developed and extended to add a sixth satellite, enabling selection of which satellite to leave out of the navigation solution.
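
For readers who want to see the parity mechanics in miniature, here is a sketch with an invented 5×4 geometry and an injected fault (neither taken from any real constellation):

```python
# Classical parity illustration: five pseudorange rows, four unknowns.
# QR factorization splits the navigation solution from one parity scalar
# that carries the inconsistency information.
import numpy as np

rng = np.random.default_rng(7)
G = np.hstack([rng.normal(size=(5, 3)), np.ones((5, 1))])  # 5x4 geometry
x_true = np.array([10.0, -5.0, 3.0, 1.0])
z = G @ x_true
z[2] += 30.0                       # inject a 30 m fault on satellite 3

Q, R = np.linalg.qr(G, mode="complete")
y = Q.T @ z
x_hat = np.linalg.solve(R[:4, :4], y[:4])   # navigation solution
parity = y[4]                                # zero when all five agree

print("solution error:", np.round(x_hat - x_true, 2))
print(f"parity scalar: {parity:+.2f}  (nonzero signals inconsistency)")
```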

The modified strategy I devised can now be described.  First, instead of a 5×4 matrix for detection, let each solution contain five unknowns – the usual four plus one measurement bias.  Five solutions are obtained, now with each satellite taking its turn as the suspect.  My 1992 manuscript (publication #53 – click here) shows why the resulting navigation solutions are identical to those produced by the original RAIM approach (i.e., with each satellite taking its turn “sitting out this dance”).

The real power of this strategy comes in expansion to six satellites.  A set of 6×5 matrices results, and the subdivision into orthogonal and upper-triangular factors now produces six parity scalars (rather than 2×1 parity vectors; everyone understands a zero-mean random scalar).  Not only are estimates obtained for measurement biases, but the interpretation is crystal clear: with one biased measurement, every parity scalar has nonzero mean except the one with the faulty satellite in the suspect role.  Again all navigation solutions match those produced by the original RAIM method, and fully developed probability calculations are immediately applicable.  Content of the 1992 manuscript was then included, with further adaptation to the common practice of using measurement differences for error cancellation, in Chapter 6 of GNSS Aided Navigation and Tracking.  Additional extensions include rigorous adaptation to each individual observation independent of all others (“single-measurement RAIM”) and to multiple flawed satellites.
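
A small sketch of that six-satellite scheme as just described (geometry and fault size invented): each satellite takes its turn as the suspect, a bias column is appended to the 6×4 geometry, and the QR factorization of each 6×5 system leaves one parity scalar. Only the correct suspect drives its scalar to zero.

```python
# Six satellites, invented geometry; fault injected on satellite index 2.
import numpy as np

rng = np.random.default_rng(3)
G = np.hstack([rng.normal(size=(6, 3)), np.ones((6, 1))])   # 6x4 geometry
x_true = np.array([10.0, -5.0, 3.0, 1.0])
z = G @ x_true
z[2] += 25.0                             # 25 m fault on satellite 2

for suspect in range(6):
    bias_col = np.zeros((6, 1))
    bias_col[suspect] = 1.0
    A = np.hstack([G, bias_col])         # 6x5: four unknowns plus one bias
    Q, _ = np.linalg.qr(A, mode="complete")
    parity = Q[:, 5] @ z                 # scalar in the left null space of A
    print(f"suspect satellite {suspect}: parity {parity:+8.2f}")
```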

TONS OF INFORMATION

This set of blogs will not be considered complete until at least seventy (or possibly a hundred) are available for visitors to download and/or print.  Each individual blog, with links to references (which in some cases can also be downloaded and printed from this site), summarizes a specific aspect from a chosen set of topics.  A smaller number of these “one-pagers” will address topics from my earlier, more fundamental, book Integrated Aircraft Navigation.  An additional few (very few) will deal with topics not covered in either of those two books.  An example of the latter publicizes some useful facets of the ultra-familiar classical low-pass filter which (believe it or not – after all these years) have remained obscure.

Over time, dozens more will be added from a wide span of topics (all firmly supported by experience as well as theory, ranging from elementary to advanced, in some cases relatively new and therefore largely unknown) that will include

  • Modern estimation in both block (weighted least squares) and sequential (Kalman filtering, with Battin’s derivation – much easier to follow than Kalman’s) form, with their interrelationship developed quite far, enabling “plant noise” levels to be prescribed in closed-form, also providing highly unusual insight into sequentially correlated measurement errors; chi-squared residuals; implications of optimality during transients; need for conservatism in modeling; sensitivity of matrix-vs-vector extrapolation (“do’s and don’ts”); application-dependence of commonality and uniqueness features; quantification of observability and effects of augmentation on it; duality among a wide scope of navigation modes; commonly overlooked duality between tracking and short-term inertial nav error propagation; when “correction-to-the adjustment” terms can and can’t be omitted; suboptimal (equal-eigenvalues) estimation with steady-state performance indistinguishable from optimal; all fully supported by theory and experience
  • Basic building-blocks for attitude expressions: superiority of quaternions and direction cosines over Euler angles, due to singularities (“gimbal lock” at 90 deg for the x-y-z sequence, and at 0 deg for the z-x-z sequences used for orbits)
  • GPS issues related to the top-priority goal of robustness: beyond elementary (4-state and 8-state) formulations; duality of pseudorange and phase ambiguity; exploitation of modern processing capabilities in GPS/GNSS receivers; carrier phase as integrated doppler vs frequency data; 1-sec sequential phase changes (much easier to mix across constellations, negligible sequential changes in IONO/TROPO propagation, ambiguity resolution not needed, instant reacquisition, no mask angle needed); streaming velocity for dead reckoning with segmentation of position fixes; differential operation – differencing across satellites, receivers, and time; handling correlations from differencing; orthogonalization for simple QR factorization; measurement relocation in time and lever-arm adjustment; E(Extended)RAIM; D(Differential)RAIM; necessity of weighting in single-measurement RAIM with pseudoranges and carrier phases, concurrently; sample flight test results showing state-of-the-art accuracies in dynamics (e.g., cm/sec RMS velocity error and tenths-mrad leveling) with a low-cost IMU; revisit of the same flight segment, achieving decimeter/sec RMS velocity error without any IMU
  • Tracking (with subdivision into over a dozen topics including a littoral environment operation with hundreds of ships present; orbit determination; usage of Lambert’s laws; surface-to-air (subdivided into ground-to-air and tracking from ships), air-to-surface and surface-to-surface (again with the same subdivision), air-to-air; reentry vehicles; usage of stable coordinate frames; linearity in both dynamics and measurements; Mode-S squitters for mutual surveillance and collision avoidance in crowded airspace; multiple track output usage (placement of gates, antenna steering, file maintenance); crucial importance of transmitting measurements rather than coordinates (publication #66); extension to noncooperative objects; critical distinction (often blurred) between errors in tracking and stabilization; successfully accomplished concurrent track of multiple objects with electronically steered beams; bistatic and multistatic operation; postprocessing to form familiar parameters from estimator outputs; short-range projectiles over “flat-earth” – plus many more)
  • Processing of inertial data – incrementing of position, velocity, attitude; straightforward state-of-the-art algorithms for complete metamorphosis from raw gyro and accelerometer samples into final 3-D position, velocity, and attitude; motion-sensitive inertial instrument errors; coning; sculling; critical distinction between misalignment (imperfect mechanical mounting) vs misorientation; adaptive accommodation of gyro scale factor and misalignment errors; instability of unaided vertical channel; azimuth pseudomeasurement; near-universal misconceptions connected to free-inertial coast
  • Support functions (transfer alignment; SAR motion compensation; stabilization of images; sensor control mechanizations; synchronization; determination of retention probability)
  • Vision-for-the-future with maximum situation awareness for all cooperating participants in a scenario; critical role of interfaces (implications of singularities, RAIM, Differential GPS, etc.), software modularity, reuse, and coordination.  Full validation in GNSS Aided Navigation and Tracking.