A recent video describes a pair of long-awaited developments that promise dramatic benefits in achievable navigation and tracking performance. Marked improvements will occur in accuracy, availability, and beyond; over four decades this topic has arisen in connection with myriad operations, many documented in material cited in other blogs here.
A video completed recently provides just the matrix theory needed for Kalman filtering. It’s available (1) for purchase or 72-hour rental at low cost, or (2) free to those attending courses I teach in 2014 or after (because the short course durations don’t allow time to cover it). The one-hour presentation is divided into three sections, each with a freely viewable preview.
The first section, with almost NO math, begins by explaining why matrices are needed — and then immediately emphasizes that MATH ALONE IS NOT ENOUGH. To drive home that point, a dramatic illustration was chosen: complex motions of a satellite that, though represented in a MATHEMATICALLY correct way, were not fully understood by its designers nor by the first team of analysts contracted to characterize them. From those motions, shown with amplitudes enlarged (e.g., doubled or more) for easy visualization, it becomes clear why insight is every bit as important as the math.
For some viewers the importance of insight alone will be of sufficient interest, with no need for the latter two sections. Others, particularly novices aspiring to be designers, will find the math presentation extremely helpful. Straight to the point at each step where matrices are applied, it is just the type of information I was earnestly seeking years ago, while “pulling teeth” to extract clarification of ONLY NECESSARY theory without OVERsimplification.
The presentation supplies the matrix theory prerequisites that will assist aspiring designers in formulating linear(ized) estimation algorithms in block (weighted least squares) or sequential (recursive Kalman/EKF) form. Familiar matrix types (e.g., orthogonal, symmetric) — their properties, how they are used, and why they are useful, with interpretation of physical examples — enable operations both powerful and versatile. An enormous variety of applications involving systems of any order can be solved in terms of familiar expressions we saw as teenagers in college.
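For readers who want a concrete taste of the block (weighted least squares) form just mentioned, here is a minimal sketch; the two-state model and all numbers are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical example: estimate a 2-state parameter vector x from
# scalar measurements z = H x + v, with unequal measurement accuracies
# captured in the weight matrix W = R^-1.
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])          # sensitivity (design) matrix
R = np.diag([0.01, 0.04, 0.01])     # measurement error covariance
x_true = np.array([2.0, -1.0])
z = H @ x_true                      # noise-free here, for illustration

# Weighted least squares: x_hat = (H' W H)^-1 H' W z
W = np.linalg.inv(R)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
```

With unequal variances in R the weighting matters; with equal variances this reduces to ordinary least squares.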
Useful for either introduction or review, there is no better way to summarize this material than to repeat one word that matters beyond all else — INSIGHT.
The age-old Interrogation/Response method for air surveillance was aptly summarized in an important 1996 GPS World article by Garth van Sickle: a response from an unidentified IFF transponder is useful only to the interrogator that triggered it. That author, who served as Arabian Gulf Battle Force operations officer during Desert Storm, described transponders flooding the air with signals. Hundreds of interrogations per minute in that crowded environment produced a glut of r-f energy – but still no adequate friendly air assessment.
The first step toward solving that problem is a no-brainer: Allocate a brief transmit duration to every participant, each separate from all others. Replace the Interrogation/Response approach with spontaneous transmissions. Immediately, then, one user’s information is no longer everyone else’s interference; quite the opposite: each participant can receive every other participant’s transmissions. In the limit (with no interrogations at all), literally hundreds of participants could be accommodated. Garble nonexistent. Bingo.
Sometimes there’s a catch to an improvement that dramatic. Fortunately that isn’t true of this one. A successful demo was performed at Logan Airport in the early 1990s by Lincoln Labs, using existing transponders with accepted data formatting (extended squitter). I then made two presentations (first in January 1998), one for military operation (publication #60 – click here) and another for commercial aviation (publication #61 – click here), advocating adoption of that method with one important change: transmitting GPS pseudoranges rather than coordinates would enable an enormous increase in performance. Reasons include cancellation of major errors – which happens when two users subtract scalar measurements from the same satellite, but not when they subtract coordinates formed from different sets of satellites. That, however, only begins to describe the benefit of using measurements (publication #66); continue below:
With each participant receiving every other participant’s transmissions, each has the ability to track all others. That is easily done because
(1) every extended squitter message includes unique source identification, and (2) multiple trackers maintained in tandem have been feasible for years; hundreds of tracks would not tax today’s computing capability at all. Tracks can be formed by ad hoc stitching together of coordinate differences, but accuracy will not be impressive. A Kalman tracker fed by those coordinate differences would not only contain the uncancelled errors just noted; nonuniform sensitivities, unequal accuracies, and cross-axis correlations among the coordinate pseudomeasurement errors would also go unaccounted for. Furthermore, the dynamics (velocity and acceleration) – as derivatives – would degrade even more – and dynamic accuracy is absolutely crucial for the ability to anticipate near-future position (e.g., for collision avoidance).
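A toy numerical sketch (all values hypothetical) of the cancellation point made above: subtracting two users’ scalar pseudoranges to the same satellite removes the common satellite clock/ephemeris error, leaving only the small independent receiver noise:

```python
import numpy as np

# Two users measure pseudorange to the SAME satellite. Each measurement
# contains a common satellite clock/ephemeris error that cancels exactly
# in the across-user difference.
true_range_a = 20_200_000.0   # m, user A to satellite (assumed)
true_range_b = 20_200_450.0   # m, user B to the same satellite
sat_common_error = 37.5       # m, satellite clock + ephemeris (common)
noise_a, noise_b = 0.4, -0.3  # m, independent receiver noise

pr_a = true_range_a + sat_common_error + noise_a
pr_b = true_range_b + sat_common_error + noise_b

single_diff = pr_a - pr_b     # common satellite error cancels
residual = single_diff - (true_range_a - true_range_b)
# residual now contains only the independent receiver noise terms
```

Coordinates formed from two *different* satellite sets enjoy no such cancellation, which is the crux of the argument for transmitting measurements.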
The sheer weight of all the considerations just noted should be more than enough to motivate the industry towards preparing to exploit this capability. But, wait – there’s more. Much more, in fact. For how many years have we been talking about consolidating various systems, so that we wouldn’t need so many different ones? Well, here’s a chance to provide both 2-dimensional (runway incursion) and 3-dimensional (in-air) collision avoidance with the same system. The performance benefits alone are substantial but that plan would also overcome a fundamental limitation for each –
* Ground: ASDE won’t be available at smaller airports
* In-air: TCAS doesn’t provide adequate bearing information; conflict resolution is performed with climb/dive.
The latter item doesn’t make passengers happy, especially since that absence of timely and accurate azimuth information prompts some unnecessary “just-in-case” maneuvers.
No criticism is aimed here toward the designers of TCAS; they made use of what was available to them, pre-GPS. Today we have not just GPS but differential GPS. Double differencing, which revolutionized surveying two decades ago, could do the same for this 2-D and 3-D tracking. The only difference would be absence of any requirement for a stationary reference. All positions and velocities are relative – exactly what the doctor ordered for this application.
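The double-differencing idea can be sketched numerically (hypothetical values): the first, across-user difference cancels per-satellite errors, and the second, across-satellite difference cancels each receiver’s clock bias:

```python
import numpy as np

# Double difference across two users (A, B) and two satellites (i, j):
#   DD = (pr_A^i - pr_B^i) - (pr_A^j - pr_B^j)
rng_Ai, rng_Bi = 20_000_100.0, 20_000_300.0   # true ranges, sat i (assumed)
rng_Aj, rng_Bj = 21_500_050.0, 21_500_400.0   # true ranges, sat j
sat_err_i, sat_err_j = 12.0, -8.0             # per-satellite errors, m
clk_A, clk_B = 150.0, -90.0                   # per-receiver clock biases, m

def pr(rng, sat_err, clk):                    # simple pseudorange model
    return rng + sat_err + clk

sd_i = pr(rng_Ai, sat_err_i, clk_A) - pr(rng_Bi, sat_err_i, clk_B)
sd_j = pr(rng_Aj, sat_err_j, clk_A) - pr(rng_Bj, sat_err_j, clk_B)
dd = sd_i - sd_j                              # both error classes cancel
dd_true = (rng_Ai - rng_Bi) - (rng_Aj - rng_Bj)
```

Note that no stationary reference enters anywhere; everything is relative, as the application requires.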
OK, I promised – not just more but MUCH more. Now consider what happens when there aren’t enough satellites instantaneously available to provide a full position fix meeting all demands (geometry, integrity validation): partial data that cannot yield an instantaneous position is wasted (no place to go). But ancient mariners used partial information centuries ago. If we’re willing to do the same, I’ve shown a rigorously derived but easily used means to validate each separate measurement according to individual circumstances. A specific satellite might give an acceptable measurement to one user but a multipath-degraded measurement to another. At each instant of time, any user could choose to reject some data without being forced to reject it all. My methods are applicable to any frequency from any constellation (GPS, GLONASS, GALILEO, COMPASS, QZSS, … ).
While we’re at it, once we open our minds to sharing and comparing scalar observations, we can go beyond satellite data and include whatever our sensors provide. Since for a half-century we’ve known how to account for all the nonuniform sensitivities, unequal accuracies, and cross-axis correlations previously mentioned, all incoming data common to multiple participants (TOA, DME, etc.) would be welcome.
So we can derive accurate cross-range as well as along-range relative dynamics, as well as position, with altitude significantly improved to boot. Many scenarios (those with appreciable crossing geometry) will allow conflict resolution in a horizontal plane via deceleration – well ahead of time rather than requiring a sudden maneuver. GPS and Mode-S require no breakthrough inventions, and track algorithms already in the public domain carry no proprietary claims. Obviously, all this aircraft-to-aircraft tracking (with participants in air or on the ground) can be accomplished without data transmitted from any ground station. All these benefits can be had just by using Mode-S squitter messages with the right content.
There’s still more. Suppose one participant uses a different datum than the others. Admittedly that’s unlikely but, for prevention of a calamity, we need to err on the side of caution; “unlikely” isn’t good enough. With each participant operating in his own world-view, comparing scalar measurements would be safe in any coordinate reference. Comparing vectors with an unknown mismatch in the reference frame, though, would be a prescription for disaster. Finally, in Chapter 9 of GNSS Aided Navigation & Tracking I extend the approach to enable sharing observations of nonparticipants.
In the About panel of this site I pledged to substantiate a claim of dramatic improvements afforded by methods to be presented. This operation is submitted as one example satisfying that claim. Many would agree (and many have agreed) that the combined reasons given for the above plan are compelling. Despite that, there is no commitment by the industry to pursue it. ADS-B is moving inexorably in a direction that was set years ago. That’s a reality – but it isn’t the only reality. The world has its own model; it doesn’t depend on how we characterize it. It’s up to us to pattern our plans in conformance to the real world, not the other way around. Given the stakes, I feel compelled to advocate moving forward with a pilot program of modest size – call it “Post-NextGen” – having the robustness to recover from severe adversity. Let’s get prepared.
I perform functional formulations and algorithm generation plus validation for both simulation and operational purposes in system integration. Specific areas include navigation, communication, data integrity, and tracking for aerospace, applying modern estimation to data from various sources (COMM, gyros, accelerometers, GPS/GNSS, radar, optical, etc.).
Before GPS took so many operations by storm (navigation, tracking, timing, surveying, etc.), designers had access to other — far less capable — provisions. That condition forced our hands; to derive maximum benefit from what was available, we had to extract the full information content of those provisions. Now that GPS is subjected to challenges (aging, jamming, spoofing, etc.), some of those older methods are receiving increased scrutiny. Recently I’ve received renewed interest in areas I analyzed decades ago. Old publications from two of those areas are discussed here: 1) attitude determination and 2) nav integration.
“Attitude Determination by Kalman Filtering” is the title of three documents I had published. In reverse sequence they are:
1) Automatica (IFAC Journal), v6 1970, pp. 419-430,
2) my Ph.D. dissertation (Univ. of Maryland, 1967),
3) NASA CR-598, Sept., 1966.
As indicated by the last reference, the work was the result of a contractual study sponsored by NASA (specifically Goddard Space Flight Center – GSFC – in Greenbelt, Maryland). I was working for Westinghouse Defense and Space Center at the time. The proposal I had written to win this contract cited my prior work in both modern estimation (“Simulation of a Minimum Variance Orbital Navigation System,” AIAA JSR v. 3, Jan. 1966, pp. 91-98) and attitude computation (“Performance of Strapdown Inertial Attitude Reference Systems,” AIAA JSR v. 3, Sept. 1966, pp. 1340-1347). Let me hasten to explain the dates of those journal publications: each followed its inclusion at an AIAA-sponsored conference about a year earlier.
By the mid-1960s there was an appreciable amount of validation for Kalman filtering applied to determination of orbits (that track record was convincing) but not yet for attitude. A GSFC-sponsored investigation was then planned — the very first for attitude using modern estimation methods. GSFC management understandably wanted that contractual investigation to be performed by someone with demonstrable experience in both Kalman filtering and rotational dynamics. In those days that combination was rare; the Westinghouse proposal was chosen as the winner. At the time of that study, provisions realistically available for attitude updating consisted of mediocre-accuracy items such as magnetometers and horizon scanners – not bad but not spectacular either.
All that was of course before GPS weighed in, with its opportunity to reveal attitude from phase differences between antennas spaced at known distances apart. That vastly superior capability effectively reduced the earlier crude measurements to relative obscurity. A directly parallel situation occurred in connection with navigation; the book that first tied together several facets of advancement in that field (integration, strapdown inertial, modern estimation with acceptance of all data sources, multimode operation, extension to tracking, clear exposition of all commonly used representations of attitude, etc.) was “pre-GPS” (1976), and consequently regarded as less relevant. Timing can be decisive — that’s no one’s fault.
The item just noted — attitude representation — is worth further discussion here. Unlike many other sources, the 1976 book offered an opportunity to use quaternion properties without any need to learn a specialized quaternion algebra. A literature search, however, will point primarily to various sources (of necessity later than 1976) that benefit from the superior performance offered through GPS usage. Again, in view of GPS as a game-changer, that is not necessarily improper. Most publications on attitude determination don’t cite the first-ever investigation, sponsored by GSFC, for that innocent reason.
The word beginning that last sentence (“Most”) has an exception. One author, widely quoted as an authority (especially on quaternions), did cite the original work — dismissing it as “ad-hoc” — while using an exact copy of the sensitivity matrix elements published in my original investigation (the three references cited at the start of this blog).
While I obviously didn’t invent either quaternions or the Kalman filter, there was another thing I didn’t do: fail to credit, in my publications, pre-existing sources that contributed to my findings. Publication of the material cited here, I’ve been told, paved the way for understanding and insight to many who followed. No one owes me anything for that; an analyst’s work, truthfully and realistically presented, is what the analyst has to offer.
It is worth pointing out that both the attitude determination study and the 1976 book cover another facet of rotational analysis absent from many other related publications: dynamics — in the sense of physics. Whereas modern estimation lumps time-variations of the state together into one all-encompassing “dynamic” model, classical physics makes a separation: Kinematics defines the relation between position, rates, and accelerations. Dynamics determines translational accelerations resulting from forces or rotational accelerations resulting from torques.
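The separation can be illustrated in a few lines via Euler’s rotational equations (the inertia matrix, rate, and torque below are assumed, for illustration only): dynamics produces angular acceleration from torque, after which kinematics propagates the rate:

```python
import numpy as np

I = np.diag([10.0, 15.0, 20.0])        # body inertia matrix, kg m^2 (assumed)
w = np.array([0.10, -0.05, 0.02])      # body angular rate, rad/s
torque = np.array([0.0, 0.3, 0.0])     # applied torque, N m

# Dynamics (physics): Euler's equations, w_dot = I^-1 (torque - w x (I w))
w_dot = np.linalg.solve(I, torque - np.cross(w, I @ w))

# Kinematics: coarse Euler step propagating the rate forward in time
dt = 0.01
w_next = w + w_dot * dt
```

A full attitude propagation would of course carry the rate into a quaternion or direction-cosine update; this fragment shows only where dynamics ends and kinematics begins.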
Despite the absence of GPS from my early (1960s/70s) investigations, one feature that can still make them useful for today’s analysts is the detailed characterization of torques acting — in very different ways — on spinning and gravity-gradient satellites, plus their effects on rotational motion. Many of the later studies focused on the rotational kinematics, irrespective of those torques and their consequences. Similarly, the “minimal-math” approach to explaining integrated navigation has enabled many to grasp the concepts. Printed testimony to that effect, from courses I taught decades ago, is augmented by a more recent source noted near the end of another page on this site.
GPS and GNSS
Check out a preview of “GNSS Aided Navigation & Tracking” (click here)
GNSS Aided Navigation & Tracking
– Inertially Augmented or Autonomous
By James L. Farrell
American Literary Press. 2007. Hardcover. 280 pages
This text offers concise guidance on integrating inertial sensors with receivers for GPS — and its international generalization, GNSS (global navigation satellite systems) — plus other aiding sources. Primary focus is on low-cost inertial measurement units (IMUs) with frequent updates, but other functions (e.g., tracking in numerous modes) and sensors (e.g., radar) are also addressed.
Price: $100.00 plus shipping
(Sales Tax for Maryland residents only)
Dr. Farrell has many decades of experience in this subject area; in the words of one reviewer, the book is “teeming with insights that are hard to find or unavailable elsewhere.”
An engineer and former university instructor, Farrell has made a number of contributions to multiple facets of navigation. He is also the author of Integrated Aircraft Navigation (1976; five hard cover printings; now in paperback) plus over eighty journal or conference manuscripts and various columns.
Frequent aiding-source updates, in applications that require precise velocity rather than extreme precision in position, enable the integration to be simplified. All aspects of integration are covered, all the way from raw measurement pre-processing to final 3-D position/velocity/attitude, with far more thorough backup and integrity provisions. Extensive experimental results illustrate the attainable accuracies (cm/s RMS velocity in three dimensions) during flight under extreme vibration.
The book on GPS and GNSS provides several flight-validated formulations and algorithms not currently in use because of their originality. Considerable opportunity is therefore offered in multiple areas including
* full use of highly intermittent ambiguous carrier phase
* rigorous integrity for separate SVs
* unprecedented robustness and situation awareness
* high performance from low cost IMUs
* “cookbook” steps
* new interoperability features
* new insights for easier implementation.
Discussion of these traits can be seen in the excerpt (over 100 pages) from the link at the top of this page.
In January of 2005 I presented a paper “Full Integrity Test for GPS/INS” at ION NTM that later appeared in the Spring 2006 ION Journal. I’ve adapted the method to operation (1) with and (2) without IMU, obtaining RMS velocity accuracy of a centimeter/sec and a decimeter/sec, respectively, over about an hour in flight (until the flight recorder was full).
Methods I use for processing GPS data include many sharp departures from custom. Motivation for those departures arose primarily from the need for robustness. In addition to the common degradations we’ve come to expect (due to various propagation effects, planned and unplanned outages, masking or other forms of obscuration and attenuation), some looming vulnerabilities have become more threatening. Satellite aging and jamming, for example, have recently attracted increased attention. One of the means I use to achieve enhanced robustness is acceptance-testing of every GNSS observable, regardless of what other measurements may or may not be available.
Classical (Parkinson-Axelrad) RAIM testing (see, for example, my ERAIM blog‘s background discussion) imposes requirements for supporting geometry; measurements from each satellite can be validated only if enough additional satellites with sufficient geometric spread enable a conclusive test. For many years that requirement was supported by a wealth of satellites in view, and availability was judged largely by GDOP with its various ramifications (protection limits). Even with future prospects for a multitude of GNSS satellites, however, it is now widely acknowledged that acceptable geometries cannot be guaranteed. Recent illustrations of that realization include
* use of subfilters to exploit incomplete data (Young & McGraw, ION Journal, 2003)
* Prof. Brad Parkinson’s observation at the ION-GNSS10 plenary — GNSS should have interoperability to the extent of interchangeability, enabling a fix composed of one satellite from each of four different constellations.
Among my previously noted departures from custom, two steps I’ve introduced are particularly aimed toward usage of all available measurement data. One step, dead reckoning via sequential differences in carrier phase, is addressed in another blog on this site. Described here is a summary of validation for each individual data point — whether a sequential change in carrier phase or a pseudorange — irrespective of presence or absence of any other measurement.
While matrix decompositions were used in its derivation, only simple (in fact, intuitive) computations are needed in operation. To emphasize that here, I’ll put “the cart before the horse” — readers can see the answer now and optionally omit the subsequent description of how I formed it. Here’s all you need to do: from basic Kalman filter expressions it is recalled that each scalar residual has a sensitivity vector H and a scalar variance of the form HPH^T + R, where P is the a priori state covariance and R the measurement error variance.
The ratio of each independent scalar residual to the square root of that variance is used as a normalized dimensionless test statistic. Every measurement can now be used, each with its individual variance. This almost looks too good to be true and too simple to be useful, but conformance to rigor is established on pages 121-126 and 133 of GNSS Aided Navigation and Tracking. What follows is an optional explanation, not needed for operational usage.
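As a minimal numerical sketch of that acceptance test (all values assumed, for illustration only), the scalar residual variance HPH^T + R and the normalized statistic are computed directly:

```python
import numpy as np

# Assumed illustrative values for one scalar pseudorange residual.
P = np.diag([4.0, 4.0, 9.0, 1.0])      # a priori state covariance (3 pos + clock)
H = np.array([0.6, -0.3, 0.74, 1.0])   # line-of-sight components + clock term
R = 2.25                               # measurement error variance, m^2
residual = 3.1                         # measured-minus-predicted, m

var = H @ P @ H + R                    # scalar residual variance HPH' + R
t = residual / np.sqrt(var)            # normalized, dimensionless statistic
accept = abs(t) < 3.0                  # e.g., a 3-sigma acceptance gate
```

Each measurement gets its own H, R, and therefore its own gate, which is exactly what permits accepting some data while rejecting the rest.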
The key to my single-measurement RAIM approach begins with a fundamental departure from the classical matrix factorization (QR = H) originally proposed for parity. I’ll note here that, unless all data vector components are independent with equal variance, that original (QR = H) factorization will produce state estimates that won’t agree with Kalman. Immediately we have all the motivation we need for a better approach. I use the condition QR = UH, where U is the inverse square root of the measurement covariance matrix. At this point we exploit the definition of a priori state estimates as perceived characterizations of the actual state immediately before a measurement — thus the perceived error state is by definition a null vector. That provides a set of N equations in N unknowns to combine with each individual scalar measurement, where N is 4 (for the usual three unknowns in space and one in time) or 3 (when across-satellite differences produce three unknowns in space only).
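A short sketch (assumed values) of why the whitening by U matters: factoring QR = UH reproduces the weighted least squares estimate, which plain QR = H would not when the measurement variances are unequal:

```python
import numpy as np

H = np.array([[1.0, 0.2, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.4, 1.0],
              [0.3, 0.0, 0.7]])                # sensitivity matrix (assumed)
R_meas = np.diag([1.0, 4.0, 0.25, 2.0])        # unequal measurement variances
U = np.diag(1.0 / np.sqrt(np.diag(R_meas)))    # inverse square root of R_meas
z = np.array([0.9, 1.6, 1.1, 0.8])             # measurement vector (assumed)

# Factor the whitened sensitivity matrix: QR = U H
Q, Rfac = np.linalg.qr(U @ H)
x_qr = np.linalg.solve(Rfac, Q.T @ (U @ z))

# Same estimate as explicit weighted least squares
W = np.linalg.inv(R_meas)
x_wls = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
```

With a diagonal R_meas, U is just the reciprocal standard deviations on the diagonal; the general case uses a matrix square root of the inverse covariance.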
GPS codes are chosen to produce a strong response if and only if a received signal and its anticipated pattern are closely aligned in time. Conventional designs thus use correlators to ascertain that alignment. Mechanization may take various forms (e.g., comparison of early-vs-late time-shifted replicas), but dependence on the correlation is fundamental. There is also the complicating factor of additional coding superimposed for satellite ephemeris and clock information but, again, various methods have long been known for handling both forms of modulation. Tracking of the carrier phase is likewise highly developed, with capability to provide sub-wavelength accuracies.
An alternative approach using FFT computation allows replacement of all correlators and track loops. The Wiener-Khintchine theorem is well over a half-century old (actually closer to a century), but using it in this application has become feasible only recently. To implement it for GPS, a receiver input’s FFT is followed by term-by-term multiplication with the FFT of each separate anticipated pattern (again with optional insertion of fractional-millisecond time shifts for further refinement, and again with various means of handling the added clock-&-ephemeris modulation). According to Wiener-Khintchine, multiplication in the frequency domain corresponds to convolution in time — so the inverse FFT of the product provides the needed correlation information.
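The frequency-domain step can be sketched in a few lines (a random ±1 sequence stands in for a real PRN code; no noise, Doppler, or data modulation, for illustration only):

```python
import numpy as np

# FFT-based correlation: multiply the FFT of the received samples by the
# conjugate FFT of the replica code (conjugation turns the convolution of
# Wiener-Khintchine into correlation); the inverse FFT then yields the
# circular correlation at every lag at once.
N = 1023
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=N)     # stand-in for a PRN code
shift = 200
received = np.roll(code, shift)            # delayed copy of the code

corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
peak = int(np.argmax(corr))                # recovered code delay, in samples
```

One pair of FFTs and one inverse FFT replace an entire bank of lag-by-lag correlators, which is the source of the benefits listed next.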
FFT processing instantly yields a number of significant benefits. The correlations are obtained for all cells, not just the limited few that would be seen by a track loop. Furthermore all cell responses are unconditionally available. Also, FFTs are not only unconditionally stable but, as an all-zero filter bank (as opposed to a loop with poles as well as zeros), the FFT provides linear phase in the passband. Expressed alternatively, no distortion in the phase-vs-frequency characteristic means constant group delay over the signal spectrum.
The FFT processing approach adapts equally well with or without IMU integration. With it, the method (called deep integration here) goes significantly beyond ultratight coupling, which was previously regarded as the ultimate achievement. Reasons for deep integration’s superiority are just the traits succinctly noted in the preceding paragraph.
Finally it is acknowledged that this fundamental discussion touches very lightly on receiver configuration, only scratching the surface. Highly recommended are the following sources plus references cited therein:
* A very early analytical development by D. van Nee and A. Coenen,
“New fast GPS code-acquisition technique using FFT,” Electronics Letters, vol. 27, pp. 158–160, January 1991.
* The early pioneering work in mechanization by Prof. Frank van Graas et al.,
“Comparison of two approaches for GNSS receiver algorithms: batch processing and sequential processing considerations,” ION GNSS-2005
* the book by Borre, Akos, Bertelsen, Rinder, and Jensen,
A software-defined GPS and Galileo receiver: A single-frequency approach (2007).