Once again I am privileged to work with Ohio University Prof. Frank van Graas in presenting tutorial sessions at the Institute of Navigation’s GNSS-17 conference. For 2017, as in several consecutive previous years, two sessions will cover integrated navigation with Kalman filtering.  Descriptions of the part 1 session and part 2 session are now available online.

By way of background: The first session is introductory.  Each attendee will be given a book with a development aimed at those learning inertial navigation and/or Kalman filtering for the first time.  Prior to the course, my free-to-members online tutorial is recommended.  Also my three-part matrix tutorial video will be made freely available to attendees.

Prof. van Graas sponsored, and provided the flight data that enabled, the successful validation of 1-cm/sec RMS velocity vector accuracy obtained from 1-second sequential changes in carrier phase.  Those results, covering almost an hour in the air, are provided along with the algorithms used to obtain them in a more recent book given to those attending the second session.

A recent video describes a pair of long-awaited developments that promise dramatic benefits in achievable navigation and tracking performance.  Marked improvements will occur not only in accuracy but also in availability; over four decades this topic has arisen in connection with myriad operations, many of them documented in material cited from other blogs here.

A comment challenged my video.  I’m glad it included an acknowledgment that some points might have been missed. To be frank, that happened quite a few times; bear with me while I explain. First, there’s the accuracy issue: doppler and/or deltarange information provided by many receivers is far less accurate than carrier phase (sometimes due to cutting corners in implementation — recall that carrier phase, as the integral of doppler, will be smoother if processing is done carefully). Next, a preference for 20-msec intervals will backfire badly. If phase noise at L-band gives a respectable 7 mm = 0.7 cm, the doppler velocity error [(current phase) – (previous phase)] / 1 sec is (1.414)(0.7) = 1 cm/sec RMS for a 1-sec sequential differencing interval.  Now use 20 msec: FIFTY times as much doppler error! Alternatively, if the division is implicit instead of overt, degradation is more complicated: sequential phase differences are highly correlated (with a correlation coefficient of -1/2, to be precise). That’s because the difference (current phase) – (previous phase) and the difference (next phase) – (current phase) both contain the common value of current phase. In a modern estimation algorithm, observations with sequentially correlated errors are far more difficult to process optimally.  That topic is a very deep one; Section 5.6 and Addendum 5.B of my 2007 book address it thoroughly. I’m not expecting everyone to go through all that but, to offer fortification for its credibility, let me cite a few items (a numerical sketch appears below, after the list):

* agreement from other designers who abandoned efforts to use short intervals
* the table near the bottom of a page on this site
* phase residual plots from Chapter 8 of my 2007 book.

The latter two, it will be recalled, came from a flight test of extended duration (until the flight recorder was full) under severe test-aircraft (DC-3) vibration.
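
To make the arithmetic above concrete, here is a minimal numerical sketch (my own illustration, not taken from any book or reference cited here) of both effects: the fifty-fold error growth at 20-msec differencing intervals and the -1/2 correlation between adjacent sequential differences.

```python
# Minimal sketch: differencing-interval arithmetic and the -1/2 correlation.
# All values are illustrative assumptions (0.7 cm RMS phase noise at L-band).
import numpy as np

sigma_phase = 0.007                  # 7 mm = 0.7 cm RMS carrier-phase noise

for dt in (1.0, 0.02):               # 1-sec vs 20-msec differencing intervals
    sigma_vel = np.sqrt(2.0) * sigma_phase / dt
    print(f"dt = {dt:4.2f} s -> doppler velocity error {100*sigma_vel:5.1f} cm/sec RMS")

# Adjacent differences share the common 'current phase' term, so their
# errors correlate with coefficient -1/2 exactly as stated above:
rng = np.random.default_rng(0)
p = rng.normal(0.0, sigma_phase, 200_000)     # white phase-noise samples
d = np.diff(p)                                # sequential phase differences
rho = np.corrcoef(d[:-1], d[1:])[0, 1]
print(f"sample correlation of adjacent differences: {rho:+.3f} (theory: -0.5)")
```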

For doppler updating from sources other than satnav, my point is stronger still. Doppler from radar (which lacks the advantage of passive operation) won’t get velocity error much below a meter/sec — and even that is an improvement over unaided inertial nav (we won’t see INS velocity specs expressed in cm/sec within our lifetime).

Additional advantages of what the video offers include (1) no requirement for a mask angle, (2) GNSS interoperability, and (3) robustness. A brief explanation:

(1) Virtually the whole world discards all measurements from low-elevation satellites because of propagation errors. But ionospheric and tropospheric effects change very little over a second; 1-sec phase differences are great for velocity information. Furthermore, they offer a major geometry advantage, while any occurrence of multipath would stick out like a sore thumb, easily edited out.
(2) 1-sec differences from various constellations are much easier to mix than the phases themselves.
(3) For receivers exploiting FFT capability, even short fragments of data, not sufficiently continuous for conventional mechanizations (track loops), are made available for discrete updates.
The whole “big picture” is a major improvement in robust operation.

The challenger isn’t the only one who missed these points; much of our industry, in fact, is missing the boat in crucial areas. Again I understand skepticism, but consider the “conventional wisdom” regarding ADSB: velocity errors expressed in meters per second — you can hear speculative values as high as ten. GRADE SCHOOL ARITHMETIC shows how scary that is; collision avoidance extrapolates ahead. Consider the vast error volume resulting from doing that 90 seconds ahead of closest-approach time with several meters per second of velocity error. So — rely on see-and-avoid? There are beaucoup videos showing how futile that is (and many more showing how often near misses occur; in addition there are about a thousand runway incursions each year). All of that justifies the effort for dramatic reduction of errors in tracking dynamics — to cm/sec relative velocity accuracy.
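
The grade-school arithmetic deserves to be spelled out. A hypothetical sketch, assuming nothing more than linear extrapolation of the velocity error over the 90 seconds:

```python
# Hypothetical sketch: position uncertainty from extrapolating a track
# 90 seconds ahead of closest approach, per axis, for assumed velocity errors.
t_ahead = 90.0                       # seconds of extrapolation

for v_err in (10.0, 1.0, 0.01):      # velocity error in meters/sec
    print(f"{v_err:5.2f} m/s -> {v_err * t_ahead:6.1f} m after {t_ahead:.0f} s")
# 10 m/s yields 900 m of uncertainty per axis (a vast error volume in 3-D);
# 0.01 m/s (1 cm/sec) yields less than a meter.
```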

It’s perfectly logical for people to question my claims if they seem too good to be true. All I ask is follow-through, with visits to the URLs cited here.

Avionics Commonality

A LinkedIn discussion centered on the Future Airborne Capability Environment (FACE) standard contained an important observation concerning certification.  Granted — requirements for validation, with acceptance by governing agencies, definitely are essential for safety. What follows here is advocacy for a proposed way to realize the common avionics benefits offered by FACE while retaining (and in fact, improving) the process of certification. Reasoning is based on three major items:

* CHANGE. In many respects this has necessitated improved standards. 

* HISTORY. Spectacular failures in what we have now are widely documented.

* COST. The status quo is (and, for a long time, has been) unaffordable.

In regard to the first item: the pace of change in so many areas (hardware, software, operating systems, data communication, etc., etc., etc.) — and the effects on procurement cycles — are well known. How can certification remain unchanged when nothing else does? That argument would be undercut if the process had a rock-solid track record — but that theme would not be supported by the second item — history:

Myriad shortcomings of existing operational systems are so pervasive that no one is considered a “loose cannon” for openly discussing them. Any of my horror stories — too strange and too numerous to be revisited here — would be trumped anyway by a document from the government itself. GAO-08-467SP, in 2008, described outlandish cost overruns, schedule delays, and deficient technical performance in the defense industry. That 3-way combination speaks for itself. Now a significant addition: the certification process has not been at all immune to serious flaws. The first-ever certified GPS receiver is now well known to have failed spectacularly in multiple facets of integrity testing by another manufacturer. It is readily acknowledged that correction of those early problems is quite credible, but one issue is inescapable: Historical proof of flightworthiness improperly bestowed — with proprietary rights accepted for algorithms and tests — happened, and that was not widely known until much later.

There is still more, including integrity failure probability limits missed by orders of magnitude in certified GPS receivers, severe limitations of GO/NO-GO testing, and failed attempts to gain approval to set requirements for correcting those plus other deficiencies. For brevity here, those issues are covered by citing the fifth page of another related reference.

The final item is, after years of fruitless talk about cost reduction, being acknowledged — we can’t do what we’ve been doing any more.  With dollars being the ultimate driver of so many decisions, we might finally see the necessary break from ingrained habits. FACE already addresses the issues and the requisite justifications. To make it all happen, two essential ingredients are

* raw-data-across-the-board, and

* nonproprietary software, with standardization under government control.

Flight-validated algorithms already in existence can be converted (e.g., from proof-of-concept to in-flight real-time form) according to government specification, by small groups more interested in engineering than in dollars (yes, that does exist). The payoff in cost savings can be huge.

Significant momentum is evolving toward a role for Open System Architecture (OSA) applied to radar. My observations in connection with that, voiced in a short LinkedIn discussion, seem worth repeating here.

One step could add major impact to this development: Rather than position (or relative position) outputs, provide measured range, azimuth, elevation (doppler could optionally be added if applicable) on the output interface. That simple step would vastly improve effectiveness of track file maintenance. Before attempting to describe all reasons for improved performance, two obvious benefits can be identified first:
* ability to use partial information (e.g., range-only or, for passive operation, angle-only)
* proper weighting of data for updating track state estimates.
The first item is self-evident. The second arises from common-sense attachment of greater value to the most accurate information. An explanation:

One-sigma error ellipsoids for individual radar fixes are not spherical (not a beachball shape but more like a flattened beachball), even at close range. At longer distances the shape progresses from a frisbee to a pancake to a DVD. Kalman filtering has enabled us to capitalize on that feature for over a half-century. Without exploiting it, we effectively treat separate radar-derived “coordinates” by intersecting volumes in space that are common to overlapping spheres. The resulting uncertainty volume is enormously larger than it should be.
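
A rough numerical illustration (assumed values for a generic radar, mine rather than the text's): range accuracy stays nearly constant with distance, while cross-range error, fixed in angle, grows linearly with range.

```python
# Sketch of radar-fix error-ellipsoid proportions vs range. Assumed values:
# 5 m range accuracy, 2 mrad angular accuracy (generic illustration only).
sigma_range = 5.0            # meters, roughly independent of distance
sigma_angle = 0.002          # radians, azimuth and elevation alike

for R in (1e3, 1e4, 1e5):    # 1 km, 10 km, 100 km
    sigma_cross = R * sigma_angle        # cross-range error grows with range
    aspect = sigma_cross / sigma_range   # ellipsoid aspect ratio
    print(f"range {R/1e3:5.0f} km: along-range 5.0 m, "
          f"cross-range {sigma_cross:6.1f} m (aspect {aspect:4.0f}:1)")
# The 'beachball' at close range becomes a frisbee, then a pancake/DVD.
```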

The feature just noted shows up dramatically when mixing data among multiple platforms. Consider cooperative engagement whereby participants, all tracking each other via network-transmitted GPS observations, share radar observations from an unknown non-participant. Share measurements or coordinates? No contest; multiple lines crossing from different directions can offer best (i.e., along-range) accuracies applicable in 3-D.

That fact (i.e., combining data from different sensors and different platforms further dramatizes available improvements) doesn’t diminish the basic issue; even with a time history of data from only one radar, a designer with direct measurements available — instead of, not in addition to, coordinates — can provide incomparably superior performance.
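
To dramatize the “no contest” claim, here is a toy two-dimensional sketch (my own illustrative numbers, and assuming, as is typical, that coordinate reports arrive without their directional covariance):

```python
# Toy comparison: fusing direct measurements vs fusing preformed coordinates.
# Two sensors view a target from orthogonal directions, each with 1 m
# along-range and 100 m cross-range accuracy (assumed values).
import numpy as np

sig_r, sig_c = 1.0, 100.0

# Measurement-level fusion: each sensor's covariance keeps its orientation,
# and information (inverse covariance) adds.
C1 = np.diag([sig_r**2, sig_c**2])       # sensor 1 looks along x
C2 = np.diag([sig_c**2, sig_r**2])       # sensor 2 looks along y
C_meas = np.linalg.inv(np.linalg.inv(C1) + np.linalg.inv(C2))

# Coordinate-level fusion: directional content discarded, each fix treated
# as spherical with the worst-axis sigma.
C_s = np.diag([sig_c**2, sig_c**2])
C_coord = np.linalg.inv(2 * np.linalg.inv(C_s))

print("per-axis sigma, measurement fusion:", np.sqrt(np.diag(C_meas)))  # ~1 m
print("per-axis sigma, coordinate fusion: ", np.sqrt(np.diag(C_coord))) # ~71 m
# Crossing lines of sight recover along-range accuracy in every direction;
# coordinates throw that advantage away.
```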

“Send Measurements not Coordinates” (1999; #66 from the “Published Articles” panel, opening with eight rock-solid reasons) was aimed at GPS rather than radar. Many of the principles are the same when mixing data with information from other platforms — and from other sensors such as GPS. There is no reason, in fact, why data from satellite navigation and radar can’t be combined in the same estimation algorithm. That practice hasn’t evolved, but the historical separation of operations (e.g., navigation and surveillance), arising from old equipment limitations, should no longer be a constraint. Moreover, with focus shifted from tracking to navigation, integration with additional (e.g., inertial) data offers still more reasons for using direct measurements. Rather than loose integration, superior benefits are widely known to result as the sophistication progresses (tight, ultratight, and deep integration).

Further elaborations on “casting off our old habits” appear from different perspectives in various blogs, one-pagers, and a few manuscripts available at this site. If your library has a copy of GNSS Aided Navigation & Tracking, pages 203-204 show a way to implement the cooperative sharing of radar data obtained from a non-participant, among participants tracking each other via the mutual surveillance and tracking approach defined earlier in that same chapter.

Because so many operational systems (in fact, a vast majority) use reports in the form of coordinates, reiteration is warranted. The central issue is the content, not the amount, of data. Rather than coordinates, provide accurately time-stamped direct measurements with links connecting each observation to whichever platform observed it (e.g., for satnav — pseudoranges; for radar — range, azimuth, elevation). Those links are automatically attached when Mode-S extended squitter (e.g., chosen for ADSB) is the means for conveying the data.  For message content, strictly disallow “massaging the data beyond the light of day” (e.g., by unknown processes, with uncertain timing, … ), which invariably results in the enormous loss of performance so common today.
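
As a concrete illustration of the message content being advocated, here is a minimal sketch; the field names and layout are purely hypothetical (mine, not from Mode-S, ADSB, or any other standard):

```python
# Hypothetical report format: accurately time-stamped direct measurements,
# linked to the observing platform -- never preprocessed coordinates.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RawMeasurementReport:
    source_id: str             # link to observing platform (e.g., a Mode-S address)
    time_of_validity: float    # accurate time stamp, seconds of week (assumed)
    sensor_type: str           # "satnav" or "radar"
    # satnav content: one pseudorange per tracked satellite
    satellite_id: Optional[str] = None
    pseudorange_m: Optional[float] = None
    # radar content: direct measurements
    range_m: Optional[float] = None
    azimuth_rad: Optional[float] = None
    elevation_rad: Optional[float] = None

report = RawMeasurementReport(source_id="A1B2C3", time_of_validity=414720.05,
                              sensor_type="satnav", satellite_id="G07",
                              pseudorange_m=21456789.3)
```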

LORAN REVISITED

Now that a few years have passed since the LORAN-C budget was killed, it might be a good time to revisit that decision. Unlike other decisions, this one might conceivably be undone; there hasn’t yet been the widespread demolition of resources (e.g., towers, transmitters) followed by restoration of the sites. Something else, though, did occur: recent success achieved by a cooperative effort between the Coast Guard and UrsaNav Inc.

For brevity here it suffices to make a few surface-scratching notes. The vast majority of us in the navigation community recognized the potential benefit of LORAN (and its extended form, eLORAN) as a crucial backup — at extremely low cost — to be used when GPS is unavailable.  Many of us, furthermore, anxiously pressed for sanity (e.g., my “2-cents worth,” written, to no avail, in 2009).

What’s different now, conceivably, is a combined effect of multiple factors:
* The USCG/UrsaNav success surpassed goals that had been stated earlier.
* Awareness of GPS vulnerability (therefore need for backup) has increased.
* Delay in follow-through (site restoration) offers the chance for a remedy.

An utterance appearing in Coordinates Magazine’s March 2012 cover story arose from a different context, but its importance prompted me to cite it in the April 2012 cover story — and to repeat it here: “Do we really need to wait for a catastrophe before taking action against GNSS vulnerabilities?”

Once again I’m adding my voice to the chorus of those speaking out before it’s too late.

CHECKLIST for DESIGNERS

Questions submitted by members of various forums, understandably, frequently involve one or more of the following topics:
 * some or all facets of inertial navigation
 * means of updating and reinitializing the drifting inertial solution
 * satellite navigation (GPS/GNSS) for providing the updates
 * other means of updating (radar, laser, optics, VOR, DME, hyperbolic, … )
 * best ways to use what’s available for various applications.
 
The pool of literature that might be offered can be vast, partly due to a vast array of operations – each with application-dependent requirements.  Finding just the relevant information from a mountain of available references can be a daunting task, especially for young designers.  I’ll try to make their search easier by offering a list of questions they can ask themselves early in the design process:
 * do you need a lat/lon/altitude Earth reference or just a designated point?
 * is the path determined from provisions onboard (nav) or remote (track)?
 * what’s your required accuracy for “absolute” (geolocation) position?
 * what’s your required accuracy for relative position (e.g., from a runway)?
 * do you need precise incremental position history (SAR motion compensation)?
 * do you need precise angular orientation (e.g., laser pointing)?
 * do you need precise angular rates (for image or antenna stabilization)?
 * for direction do you use a North reference or just along-track/cross-track?
 * will you have dependable access to updating information (GPS, radar, …)?
 * if not, how irregular will dynamics be over active parts of your mission?
 * if so, how irregular will the dynamics be during inter-update periods?
 * also if so, what data rate? Longest expected “blind period” between updates?
 * also if so, will measurements need averaging to meet your required accuracy?
 * also if so, how accurate are your measurements AND their time stamps?
 * also if so, can you use postprocessing or do you need everything real-time?
 * are you willing to accept partial updates (some but not all directions)?
 * do you need just position or derivatives too (velocity, acceleration)?
 * if so, how long can your dynamics be trusted to conform to model fidelity?
 * are you doing INS update (e.g., replacing acceleration with tilt states)?
 * if so, will you need to deduce drift rates – and how long will those hold?
 * do your sensors measure distances, angles, doppler, differences of those?
 * for how long does your sensor information content provide observability?
 * how’s your sensor integrity (bad readings at least detectable if present)?
 * for safety-critical operations — what are your backup provisions?
 * are you accommodating multiple modes with time-shared sensing resources?
 * do you need to perform image registration with different imaging sensors?
etc., etc. — the list goes on.  I won’t even try to claim thoroughness; you get the idea.  Designers with new tasks dumped in their laps can understandably feel overwhelmed.  Searching for references can become a trip through a maze of half-relevant sources.
 
A first step, then, is to separate the relevant (what you need) from the irrelevant (what you don’t need), instantly dismiss any thought of the latter, and do the opposite with the former (nail it).
 
Brief examples — the first two items from the above list —
 * If you just need to know your location relative to a designated point, irrespective of its latitude and longitude — this might help.
 * If you’re tracking instead of navigating — check these out —
and one from the last item from that list —
Again, you get the idea — volumes have been written on all facets.  Many won’t apply to your immediate task; disregard those.
 
The good news is — paths to logical solutions are known and documented.  To avoid abandoning you to an enormous maze of references I’ll point out some fundamental and advanced (state-of-the-art) tracts that address all issues just cited and more.  Several blogs and short “1-pagers” will help individual designers to choose, based on their specific tasks, passages from available references.
 
Before GPS we struggled hard for accurate measurements in enough places.  That actually produced a benefit — we had to be resourceful.  My biggest challenge was to understand subjects (Kalman filtering, strapdown inertial navigation) then considered exotic.  Again a benefit; pulling information from 1950s books and papers forced me to understand, focus, and reduce concepts to whatever level became necessary.  The experience prompted me to write the first of my two books on navigation.
 
That first book has been used in myriad courses, including one currently taught by Prof. Hablani, who wrote the most recent testimonial shown at that URL.
 
Some topics that earlier book explained in detail recently came up in another discussion — http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=44646633&gid=160643&commentID=68798460&trk=view_disc&ut=0XsoCju0nA5B81
For example, slow oscillations at a frequency “W” rad/sec, with “W” corresponding to the Schuler period (approximately 84 minutes). In that case position error from accelerometer bias, propagating as (1 – cos Wt), rises much sooner than position error from gyro drift, propagating as (t – sin Wt/W). Page 80 of that book sketches an example of behavior over a cycle.  Development offered beyond there expands as far as many analysts wish to go (other natural frequencies of error propagation, rectification of vibration-sensitive errors, etc.).
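
A short numerical sketch of those two propagation formulas (the instrument error values are my own assumptions, typical of a navigation-grade system, and not taken from the book):

```python
# Schuler-frequency error propagation: position error from accelerometer
# bias ~ (1 - cos Wt); from gyro drift ~ (t - sin(Wt)/W). Standard formulas;
# assumed instrument values: 100 micro-g bias, 0.01 deg/hr drift.
import numpy as np

g, R = 9.81, 6.371e6
W = np.sqrt(g / R)                       # Schuler frequency, rad/sec
print(f"Schuler period: {2*np.pi/W/60:.1f} min")   # ~84.4 minutes

b = 1e-4 * g                             # accelerometer bias, m/s^2
d = np.deg2rad(0.01) / 3600.0            # gyro drift, rad/sec

for t_min in (10, 42, 84):
    t = 60.0 * t_min
    p_accel = (b / W**2) * (1.0 - np.cos(W * t))
    p_gyro = (g * d / W**2) * (t - np.sin(W * t) / W)
    print(f"t = {t_min:2d} min: accel-bias error {p_accel:7.1f} m, "
          f"gyro-drift error {p_gyro:7.1f} m")
# Early in the cycle the accelerometer-bias term dominates -- exactly the
# "rises much sooner" behavior described above.
```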
 
Not long after that first book appeared, GPS became operational — and I was a newcomer to that.  By the time I understood it there were many experts.  Once again I had to catch up, and the process was gradual.  With an exceptionally strong client interested in my inertial background, a synergism was formed. That led to a flight test producing state-of-the-art accuracy in dynamics; see the table describing several innovations also resulting from the work just described.
 
That second book, after a review chapter, begins where the first (pre-GPS) one left off.  It also (1) is used in tutorials and (2) has received testimonials from other instructors, as the URL shows.  Sources cited here, plus an online 1.5-hr tutorial free to Inst-of-Navigation members, plus a “try-before-you-buy” 100-page excerpt available from this site, should be helpful to many.

Life before GPS

Before GPS took so many operations by storm (e.g., navigation, tracking, timing, surveying, etc.), designers had access to other — far less capable — provisions.  That condition forced our hands; to derive maximum benefit from what was available, we had to extract the full information content of those provisions.  Now that GPS is subjected to challenges (aging, jamming, spoofing, etc.), some of those older methods are receiving increased scrutiny.  Recently I’ve encountered renewed interest in areas I analyzed decades ago.  Old publications from two of those areas are discussed here: 1) attitude determination and 2) nav integration.

“Attitude Determination by Kalman Filtering” is the title of three documents I had published.  In reverse sequence they are:
1) Automatica (IFAC Journal), v6 1970, pp. 419-430,
2) my Ph.D. dissertation (Univ. of Maryland, 1967),
3) NASA CR-598, Sept., 1966.
As indicated by the last reference, the work was the result of a contractual study sponsored by NASA (specifically Goddard Space Flight Center – GSFC – in Greenbelt, Maryland).  I was working for Westinghouse Defense and Space Center at the time.  The proposal I had written to win this contract cited my work prior to then, in both modern estimation (“Simulation of a Minimum Variance Orbital Navigation System,” AIAA JSR, v. 3, Jan. 1966, pp. 91-98) and attitude computation (“Performance of Strapdown Inertial Attitude Reference Systems,” AIAA JSR, v. 3, Sept. 1966, pp. 1340-1347).  Let me hasten to explain the dates of those journal publications: each followed its inclusion at an AIAA-sponsored conference about a year earlier.

By the mid-1960s there was an appreciable amount of validation for Kalman filtering applied to determination of orbits (that track record was convincing) but not yet for attitude.  A GSFC-sponsored investigation was then planned — the very first one for attitude using modern estimation methods.  GSFC management understandably wanted that contractual investigation to be performed by someone with demonstrable experience in both Kalman filtering and rotational dynamics.  In those days that combination was rare; the Westinghouse proposal was chosen as the winner.  At the time of that study, provisions realistically available for attitude updating consisted of mediocre-accuracy items such as magnetometers and horizon scanners — not bad but not spectacular either.
All that was of course before GPS weighed in, with its opportunity to reveal attitude from phase differences between antennas spaced at known distances apart.  That vastly superior capability effectively reduced earlier crude measurements to relative obscurity.  A directly parallel situation occurred in connection with navigation; the book that first tied together several facets of advancement in that field (integration, strapdown inertial, modern estimation with acceptance of all data sources, multimode operation, extension to tracking, clear exposition of all commonly used representations of attitude, etc.) was “pre-GPS” (1976), and consequently regarded as less relevant. Timing can be decisive — that’s no one’s fault.

The item just noted — attitude representation — is worth further discussion here.  Unlike many other sources, the 1976 book offered an opportunity to use quaternion properties without any need to learn a specialized quaternion algebra.  A literature search, however, will point primarily to various sources (of necessity, later than 1976) that benefit from the superior performance offered through GPS usage. Again, in view of GPS as a game-changer, that is not necessarily improper.  Most publications on attitude determination don’t cite the first-ever investigation, sponsored by GSFC, for that innocent reason.
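
For readers who want to see the matrix form in action, here is a standard quaternion attitude-propagation sketch (ordinary matrix arithmetic, no specialized quaternion algebra; my own generic example, not code from any publication cited here):

```python
# Quaternion kinematics via plain matrix arithmetic: q_dot = 0.5 * Omega(w) @ q
# for body rates w = [wx, wy, wz] and quaternion q = [qx, qy, qz, qw].
import numpy as np

def omega_matrix(w):
    wx, wy, wz = w
    return np.array([[ 0.0,   wz,  -wy,  wx],
                     [-wz,   0.0,   wx,  wy],
                     [ wy,   -wx,  0.0,  wz],
                     [-wx,   -wy,  -wz, 0.0]])

q = np.array([0.0, 0.0, 0.0, 1.0])          # identity attitude (scalar last)
w = np.array([0.0, 0.0, np.deg2rad(10.0)])  # constant 10 deg/sec yaw rate
dt = 0.01
for _ in range(100):                        # propagate for one second
    q = q + 0.5 * omega_matrix(w) @ q * dt
    q /= np.linalg.norm(q)                  # enforce the unit-norm constraint
print(q)   # approx [0, 0, sin(5 deg), cos(5 deg)]: a 10-degree yaw rotation
```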

The word beginning that last sentence (“Most”) has an exception.  One author, widely quoted as an authority (especially on quaternions), did cite the original work — dismissing it as “ad-hoc” — while using an exact copy of the sensitivity matrix elements published in my original investigation (the three references cited at the start of this blog).
While I obviously didn’t invent either quaternions or the Kalman filter, there was another thing I didn’t do: fail to credit, in my publications, pre-existing sources that contributed to my findings. Publication of the material cited here, I’ve been told, paved the way to understanding and insight for many who followed. No one owes me anything for that; an analyst’s work, truthfully and realistically presented, is what the analyst has to offer.

It is worth pointing out that both the attitude determination study and the 1976 book cover another facet of rotational analysis absent from many other related publications: dynamics — in the sense of physics.  Whereas modern estimation lumps time-variations of the state together into one all-encompassing “dynamic” model, classical physics makes a separation: Kinematics defines the relation between position, rates, and accelerations.  Dynamics determines translational accelerations resulting from forces or rotational accelerations resulting from torques.
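
For concreteness, that separation can be written out in standard textbook form (the notation here is mine):

```latex
% Kinematics: attitude rate determined by angular velocity (quaternion form)
\dot{q} \;=\; \tfrac{1}{2}\, q \otimes \begin{bmatrix} \boldsymbol{\omega} \\ 0 \end{bmatrix}
% Dynamics: rotational acceleration determined by applied torque (Euler)
\mathbf{J}\,\dot{\boldsymbol{\omega}} \;=\; \boldsymbol{\tau} \;-\; \boldsymbol{\omega} \times \mathbf{J}\,\boldsymbol{\omega}
```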

Despite the absence of GPS from my early (1960s/70s) investigations, one feature that can still make them useful for today’s analysts is the detailed characterization of torques acting — in very different ways — on spinning and gravity-gradient satellites, plus their effects on rotational motion. Many of the later studies focused on the rotational kinematics, irrespective of those torques and their consequences. Similarly, the “minimal-math” approach to explaining integrated navigation has enabled many to grasp the concepts.  Printed testimony to that effect, from courses I taught decades ago, is augmented by a more recent source noted near the end of another page shown on this site.

GPS Carrier Phase for Dynamics?

The practice of dead reckoning (a figurative phrase of uncertain origin) is five centuries old.   In its original form, incremental excursions were plotted on a mariner’s chart using dividers for distances, with directions obtained via compass (with corrections for magnetic variation and deviation). Those steps, based on perceived velocity over known time intervals, were accumulated until a correction became available (e.g., from a landmark or a star sighting).

Modern technology has produced more accurate means of dead reckoning, such as Doppler radar or inertial navigation systems.  Addressed here is an alternative means of dead reckoning: exploiting sequential changes in highly accurate carrier phase. The method, successfully validated in flight with GPS, easily lends itself to operation with satellites from other GNSS constellations (GALILEO, GLONASS, etc.).  That interoperability is now one of the features attracting increased attention; sequential changes in carrier phase are far easier to mix than the phases themselves, and measurements formed that way are insensitive to ephemeris errors (even with satellite mislocation, changes in satellite position are precise).

Even with usage of only one constellation (i.e., GPS for the flight test results reported here), changes in carrier phase over 1-second intervals provided important benefits. Advantages to be described now will be explained in terms of limitations in the way carrier phase information is used conventionally.   Phase measurements are normally expressed as a product of the L-band wavelength multiplied by a sum in the form (integer + fraction) wherein the fraction is precisely measured while the large integer must be determined. When that integer is known exactly the result is of course extremely accurate.  Even the most ingenious methods of integer extraction, however, occasionally produce a highly inaccurate result.   The outcome can be catastrophic and there can be an unacceptably long delay before correction is possible.   Elimination of that possibility provided strong motivation for the scheme described here.

Linear phase facilitates streaming velocity with GNSS interoperability

With formation of 1-sec changes, all carrier phases can be forever ambiguous, i.e., the integers can remain unknown; they cancel in forming the sequential differences. Furthermore, discontinuities can be tolerated; a reappearing signal is instantly acceptable as soon as two successive carrier phases differ by an amount satisfying the single-measurement RAIM test.   The technique is especially effective with receivers using FFT-based processing, which provides unconditional access, with no phase distortion, to all correlation cells (rather than a limited subset offered by a track loop).
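
Written out, the cancellation is immediate (notation mine):

```latex
% Carrier phase at epoch k: wavelength times (integer + measured fraction);
% the integer N stays constant between epochs while lock is maintained:
\phi_k \;=\; \lambda\,\bigl(N + f_k\bigr)
% The 1-sec sequential difference is independent of N:
\Delta\phi_k \;=\; \phi_k - \phi_{k-1} \;=\; \lambda\,\bigl(f_k - f_{k-1}\bigr)
```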

Another benefit is subtle but highly significant: acceptability of sub-mask carrier phase changes. Ionospheric and tropospheric timing offsets change very little over a second. Conventional systems are designed to reject measurements from low-elevation satellites. Especially in view of improved geometric spread, retention here prevents unnecessary loss of important information.   Demonstration of that occurred in flight when a satellite dropped to the horizon; sub-mask pseudoranges of course had to be rejected, but all of the 1-sec carrier phase changes were perfectly acceptable until the satellite was no longer detectable.

One additional (deeper) topic, requiring much more rigorous analysis, arises from sequential correlations among 1-sec phase change observables. The issue is thoroughly addressed and put to rest in the later sections of the 5th chapter of GNSS Aided Navigation and Tracking.

Dead reckoning capability without an IMU was verified in flight, producing decimeter/sec RMS velocity errors outside of turn transients (Section 8.1.2, pages 154-162 of the book just cited). With a low-cost IMU, accuracy is illustrated in the table near the bottom of a 1-page description on this site (also appearing on page 104 of that book). All 1-sec phase increment residual magnitudes were zero or 1 cm for the seven satellites (six across-SV differences) observed at the time shown. Over almost an hour of flight at altitude (i.e., excluding takeoff, when heading uncertainty caused larger lever-arm vector errors), cm/sec RMS velocity accuracy was obtained.

GPS codes are chosen to produce a strong response if and only if a received signal and its anticipated pattern are closely aligned in time. Conventional designs thus use correlators to ascertain that alignment. Mechanization may take various forms (e.g., comparison of early-vs-late time-shifted replicas), but dependence on the correlation is fundamental. There is also the complicating factor of additional coding superimposed for satellite ephemeris and clock information but, again, various methods have long been known for handling both forms of modulation. Tracking of the carrier phase is likewise highly developed, with capability to provide sub-wavelength accuracies.

An alternative approach using FFT computation allows replacement of all correlators and track loops. The Wiener-Khintchine theorem is well over a half-century old (actually closer to a century), but using it in this application has become feasible only recently. To implement it for GPS a receiver input’s FFT is followed with term-by-term multiplication by the FFT of each separate anticipated pattern (again with optional insertion of fractional-millisecond time shifts for further refinement and again with various means of handling the added clock-&-ephemeris modulation). According to Wiener-Khintchine, multiplication in the frequency domain corresponds to convolution in time — so the inverse FFT of the product provides the needed correlation information.
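
A toy illustration of that processing chain (a generic sketch of my own, with a random stand-in for the PRN code, one code period, and no doppler search or data modulation):

```python
# FFT-based correlation over ALL code-phase cells at once: multiply the input
# FFT by the conjugate FFT of the anticipated pattern; the inverse FFT of the
# product is the circular correlation (convolution-theorem form).
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1023)   # stand-in for a 1023-chip PRN code
true_shift = 357                            # unknown alignment to be recovered
rx = np.roll(code, true_shift) + 0.5 * rng.normal(size=code.size)  # noisy input

corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code)))
print("estimated code phase:", int(np.argmax(np.abs(corr))))       # -> 357
```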

FFT processing instantly yields a number of significant benefits. The correlations are obtained for all cells, not just the limited few that would be seen by a track loop. Furthermore, all cell responses are unconditionally available. Also, the FFT is not only unconditionally stable but, as an all-zero filter bank (as opposed to a loop with poles as well as zeros), it provides linear phase in the passband. Expressed alternatively, no distortion in the phase-vs-frequency characteristic means constant group delay over the signal spectrum.

The FFT processing approach adapts equally well with or without IMU integration. With it, the method (called deep integration here) goes significantly beyond ultratight coupling, which was previously regarded as the ultimate achievement. Reasons for deep integration’s superiority are just the traits succinctly noted in the preceding paragraph.

Finally it is acknowledged that this fundamental discussion touches very lightly on receiver configuration, only scratching the surface. Highly recommended are the following sources plus references cited therein:

* A very early analytical development by D. van Nee and A. Coenen,
“New fast GPS code-acquisition technique using FFT,” Electronics Letters, vol. 27, pp. 158–160, January 1991.

* The early pioneering work in mechanization by Prof. Frank van Graas et al.,
“Comparison of two approaches for GNSS receiver algorithms: batch processing and sequential processing considerations,” ION GNSS-2005

* the book by Borre, Akos, Bertelsen, Rinder, and Jensen,
A software-defined GPS and Galileo receiver: A single-frequency approach (2007).