In 2013 a phone presentation was arranged for me to talk for an hour with a couple dozen engineers at Raytheon. The original plan was to scrutinize the many facets and ramifications of timing in avionics. The scope expanded about halfway through, to include topics of interest to any participant. I was gratified when others raised issues that have been of major concern to me for years (in some cases, even decades).  Receiving a reminder from another professional that I’m not alone in these concerns prompts me to reiterate at least some aspects of the ongoing struggle — but this time citing a recent report of flight test verification.

The breadth of the struggle is breathtaking. The About panel of this site offers short summaries, all confirmed by authoritative sources cited therein, describing the impact on each of four areas (satnav + air safety + DoD + workforce preparation). Shortcomings in all four areas are made more severe by continuation of outdated methods, as unnecessary as they are fundamental. Not everyone wants to hear this but it’s self-evident: conformance to custom — using decades-old design concepts (e.g., TCAS) plus procedures (e.g., position reports) and conventions (e.g., interface standards) — guarantees outmoded legacy systems. Again, while my writings on this site and elsewhere — advocating a different direction — go back decades, I’m clearly not alone (e.g., recall those authoritative sources just noted). Changing more minds, a few at a time, can eventually lead to correction of shortcomings in operation.

We’re not pondering minor improvements, but dramatic ones. To realize them, don’t communicate with massaged data; put raw data on the interface. Communicate in terms of measurements, not coordinates — that’s how DGPS became stunningly successful. Even while using all the best available protection against interference (including anti-spoof capability), follow through and maximize your design for robustness; expect occurrences of poor GDOP &/or less than a full set of SVs instantaneously visible. Often that occurrence doesn’t really constitute loss of satnav; when it’s accompanied by a history of 1-sec changes in carrier phase, those high-accuracy measurements prevent buildup of position error. With 1-sec carrier phase changes coming in, the dynamics don’t veer toward any one consistent direction; only location veers during position data deficiencies (poor GDOP &/or incomplete fixes) and, even then, only within limits allowed by that continued accurate dynamic updating. Integrity checks also continue throughout.
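
To make that concrete, here is a minimal one-dimensional sketch (a two-state filter with all numbers assumed purely for illustration, not taken from any flight test): 1-second carrier-phase changes are treated, for simplicity, as precise delta-range (average-velocity) measurements and continue to be used after full position fixes stop, so position error grows only within the limits allowed by that continued accurate dynamic updating.

```python
# Minimal 1-D sketch (hypothetical numbers): a two-state Kalman filter
# [position, velocity] keeps using 1-second carrier-phase changes -- treated
# here, for simplicity, as precise average-velocity measurements -- after
# full position fixes become unavailable.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
Q = np.diag([1e-4, 1e-3])                  # assumed process noise
H_pos = np.array([[1.0, 0.0]])             # position fix (pseudorange level)
H_vel = np.array([[0.0, 1.0]])             # carrier-phase change / dt
R_pos, R_vel = 3.0**2, 0.005**2            # assumed: 3 m fix, 5 mm/s delta-phase

def update(x, P, z, H, R):
    y = z - H @ x                          # scalar residual
    S = H @ P @ H.T + R                    # its variance
    K = P @ H.T / S                        # Kalman gain
    return x + (K * y).ravel(), (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.diag([100.0, 1.0])
truth = np.array([0.0, 10.0])              # true state: 10 m/s constant velocity
for k in range(600):                       # 10 minutes at 1 Hz
    truth = F @ truth
    x, P = F @ x, F @ P @ F.T + Q
    # the carrier-phase change is always accepted, even with poor GDOP / few SVs
    x, P = update(x, P, truth[1] + rng.normal(0, 0.005), H_vel, R_vel)
    if k < 120:                            # full position fixes stop after 2 minutes
        x, P = update(x, P, truth[0] + rng.normal(0, 3.0), H_pos, R_pos)
print("position error after 8 min without fixes: %.2f m" % (x[0] - truth[0]))
```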

So then, take into account the crucial importance of precise dynamic information when a full position fix isn’t instantaneously available. Take what’s there and stop discarding it. Redefine requirements to enable what ancient mariners did suboptimally for many centuries — and we’ve done optimally for over a half-century.  Covariances combined with monitored residuals can indicate quality in real time. Aircraft separation means maintaining a stipulated relative distance between them, irrespective of their absolute positions and errors in their absolute positions. None of this is either mysterious or proprietary, and none of this imposes demands for huge budgets or scientific breakthroughs — not even corrections from ground stations.

A compelling case arises from the cumulative weight of all these considerations. Parts of the industry have begun to address it. Ohio University has done flight testing (mentioned in the opening paragraph here) that validates the concepts just summarized. Other investigations are likely to result from recent testing of ADS-B. No claim is intended that all questions have been answered, but — clearly — enough has been raised to warrant a dialogue with those making decisions affecting the long term.

Avionics Commonality

A LinkedIn discussion centered on the Future Airborne Capability Environment (FACE) standard contained an important observation concerning certification.  Granted — requirements for validation, with acceptance by governing agencies, definitely are essential for safety. What follows here is advocacy for a proposed way to realize the common avionics benefits offered by FACE while retaining (and in fact, improving) the process of certification. Reasoning is based on three major items:

* CHANGE. In many respects this has necessitated improved standards. 

* HISTORY. Spectacular failures in what we have now are widely documented.

* COST. The status quo is (and, for a long time, has been) unaffordable.

In regard to the first item: the pace of change in so many areas (hardware, software, operating systems, data communication, etc., etc., etc.) — and the effects on procurement cycles — are well known. How can certification remain unchanged when nothing else does? That argument would be undercut if the process had a rock-solid track record — but that theme is not supported by the second item, history:

Myriad shortcomings of existing operational systems are so pervasive that no one is considered a “loose cannon” for openly discussing them. Any of my horror stories — too strange and too numerous to be revisited here — would be trumped anyway by a document from the government itself. GAO-08-467SP, in 2008, described outlandish cost overruns, schedule delays, and deficient technical performance in the defense industry. That 3-way combination speaks for itself. Now a significant addition: the certification process has not been at all immune to serious flaws. The first-ever certified GPS receiver is now well known to have failed spectacularly in multiple facets of integrity testing by another manufacturer. It is readily acknowledged that correction of those early problems is quite credible, but one issue is inescapable: historical proof of flightworthiness improperly bestowed — with proprietary rights accepted for algorithms and tests — happened, and that was not widely known until much later.

There is still more, including integrity failure probability limits missed by orders-of-magnitude in certified GPS receivers, severe limitations of GO/NO-GO testing, and failed attempts to gain approval to set requirements for correcting those plus other deficiencies. For brevity here, those issues are covered by citing the fifth page from another related reference.

The final item, after years of fruitless talk about cost reduction, is at last being acknowledged — we can’t keep doing what we’ve been doing.  With dollars being the ultimate driver of so many decisions, we might finally see the necessary break from ingrained habits. FACE already addresses the issues and the requisite justifications. To make it all happen, two essential ingredients are

* raw-data-across-the-board, and

* nonproprietary software, with standardization under government control.

Flight-validated algorithms already in existence can be converted (e.g., from proof-of-concept to in-flight real-time form) according to government specification, by small groups more interested in engineering than in dollars (yes, that does exist). The payoff in cost savings can be huge.

Significant momentum is evolving toward a role for Open System Architecture (OSA) applied to radar. My observations in connection with that, voiced in a short LinkedIn discussion, seem worth repeating here.

One step could add major impact to this development: Rather than position (or relative position) outputs, provide measured range, azimuth, elevation (doppler could optionally be added if applicable) on the output interface. That simple step would vastly improve effectiveness of track file maintenance. Before attempting to describe all reasons for improved performance, two obvious benefits can be identified first:
* ability to use partial information (e.g., range-only or, for passive operation, angle-only)
* proper weighting of data for updating track state estimates.
The first item is self-evident. The second arises from common-sense attachment of greater value to the most accurate information. An explanation:

One-sigma error ellipsoids for individual radar fixes are not spherical (not a beachball shape but more like a flattened beachball), even at close range. At longer distances the shape progresses from a frisbee to a pancake to a DVD. Kalman filtering has enabled us to capitalize on that feature for over a half-century. Without exploiting it, we effectively treat separate radar-derived “coordinates” as intersections of overlapping spheres, and the resulting uncertainty volume is enormously larger than it should be.
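
As a numerical illustration of that shape (sensor accuracies assumed here, not tied to any particular radar), mapping range/azimuth/elevation sigmas through the measurement geometry shows the along-range axis of the one-sigma ellipsoid staying tight while the cross-range axes grow with distance:

```python
# Map assumed radar measurement sigmas (range, azimuth, elevation) into a
# Cartesian position covariance and list the principal 1-sigma axes.
import numpy as np

sig_r, sig_az, sig_el = 5.0, 1e-3, 1e-3        # assumed: 5 m, 1 mrad, 1 mrad

def cartesian_cov(r, az, el):
    # x = r cos(el) cos(az), y = r cos(el) sin(az), z = r sin(el)
    ce, se, ca, sa = np.cos(el), np.sin(el), np.cos(az), np.sin(az)
    J = np.array([[ce*ca, -r*ce*sa, -r*se*ca],
                  [ce*sa,  r*ce*ca, -r*se*sa],
                  [se,     0.0,      r*ce   ]])
    return J @ np.diag([sig_r**2, sig_az**2, sig_el**2]) @ J.T

for r in (10e3, 100e3, 300e3):
    axes = np.sqrt(np.linalg.eigvalsh(cartesian_cov(r, 0.3, 0.1)))
    print(f"range {r/1e3:5.0f} km: 1-sigma axes {np.round(axes, 1)} m")
# The along-range axis stays near 5 m while cross-range axes grow with range,
# so the ellipsoid flattens (frisbee -> pancake -> DVD); collapsing it into
# "coordinates with one sigma" discards exactly that information.
```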

The feature just noted shows up dramatically when mixing data among multiple platforms. Consider cooperative engagement whereby participants, all tracking each other via network-transmitted GPS observations, share radar observations from an unknown non-participant. Share measurements or coordinates? No contest; multiple lines crossing from different directions can offer best (i.e., along-range) accuracies applicable in 3-D.

That fact (i.e., that combining data from different sensors and different platforms further dramatizes the available improvements) doesn’t diminish the basic issue: even with a time history of data from only one radar, a designer with direct measurements available — instead of, not merely in addition to, coordinates — can provide incomparably superior performance.

“Send Measurements not Coordinates” (1999; #66 from the “Published Articles” panel, opening with eight rock-solid reasons) was aimed at GPS rather than radar. Many of the principles are the same when mixing data with information from other platforms — and from other sensors such as GPS. There is no reason, in fact, why data from satellite navigation and radar can’t be combined in the same estimation algorithm. That practice hasn’t evolved, but the historical separation of operations (e.g., navigation and surveillance), arising from old equipment limitations, should no longer be a constraint. Moreover, with focus shifted from tracking to navigation, integration with additional (e.g., inertial) data offers still more reasons for using direct measurements. Rather than loose integration, superior benefits are widely known to result as the sophistication progresses (tight, ultratight, and deep integration).

Further elaborations on “casting off our old habits” appear from different perspectives in various blogs, one-pagers, and a few manuscripts available at this site. If your library has a copy of GNSS Aided Navigation & Tracking, pages 203-204 show a way to implement the cooperative sharing of radar data obtained from a non-participant, among participants tracking each other via the mutual surveillance and tracking approach defined earlier in that same chapter.

Because so many operational systems (in fact, a vast majority) use reports in the form of coordinates, reiteration is warranted. The central issue is the content, not the amount, of data. Rather than coordinates, provide accurately time-stamped direct measurements with links connecting each observation to whichever platform observed it (e.g., for satnav — pseudoranges; for radar — range, azimuth, elevation). Those links are automatically attached when the Mode-S extended squitter (e.g., as chosen for ADS-B) is the means for conveying the data.  For message content, strictly disallow “massaging the data beyond the light of day” (e.g., by unknown processes, with uncertain timing, … ), which invariably results in the enormous loss of performance so common today.
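
For illustration only (every field name below is invented, not taken from any interface standard), a message conveying such content might look like this:

```python
# Hypothetical report layout: time-stamped raw measurements, linked to the
# observing platform, with no "massaged" coordinates anywhere in the message.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RawMeasurementReport:
    platform_id: str              # link to whichever platform observed the data
    time_of_validity: float       # accurate time stamp (e.g., seconds of week)
    sensor: str                   # "satnav" or "radar"
    # satnav content: one pseudorange (and carrier phase, if available) per SV
    sv_id: Optional[int] = None
    pseudorange_m: Optional[float] = None
    carrier_phase_cycles: Optional[float] = None
    # radar content: direct measurements, not derived coordinates
    range_m: Optional[float] = None
    azimuth_rad: Optional[float] = None
    elevation_rad: Optional[float] = None

# Example: a single pseudorange, reported exactly as observed
report = RawMeasurementReport(platform_id="N123AB", time_of_validity=345601.0,
                              sensor="satnav", sv_id=17, pseudorange_m=2.23e7)
```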

LORAN REVISITED

Now that a few years have passed since the LORAN-C budget was killed, it might be a good time to revisit that decision. Unlike other decisions, this one might conceivably be undone; there hasn’t yet been widespread demolition of resources (e.g., towers, transmitters) followed by restoration of the sites. Something else, though, did occur: recent success achieved by a cooperative effort between the Coast Guard and UrsaNav Inc.

For brevity here it suffices to make a few surface-scratching notes. The vast majority of us in the navigation community recognized the potential benefit of LORAN (and its extended form, eLORAN) as a crucial backup — at extremely low cost — to be used when GPS is unavailable.  Many of us, furthermore, anxiously pressed for sanity (e.g., my “2-cents worth” written, to no avail, in 2009).

What’s different now, conceivably, is a combined effect of multiple factors:
* The USCG/UrsaNav success surpassed goals that had been stated earlier.
* Awareness of GPS vulnerability (therefore need for backup) has increased.
* Delay in follow-through (site restoration) offers the chance for a remedy.

A remark in Coordinates Magazine’s March 2012 cover story arose in a different context, but its importance prompted me to cite it in the April 2012 cover story — and to repeat it here: “Do we really need to wait for a catastrophe before taking action against GNSS vulnerabilities?”

Once again I’m adding my voice to the chorus of those speaking out before it’s too late.

CHECK LIST for DESIGNERS

Questions submitted by members of various forums, understandably, frequently involve one or more of the following topics:
 * some or all facets of inertial navigation
 * means of updating and reinitializing the drifting inertial solution
 * satellite navigation (GPS/GNSS) for providing the updates
 * other means of updating (radar, laser, optics, VOR, DME, hyperbolic, … )
 * best ways to use what’s available for various applications.
 
The pool of literature that might be offered can be vast, partly due to a vast array of operations – each with application-dependent requirements.  Finding just the relevant information from a mountain of available references can be a daunting task, especially for young designers.  I’ll try to make their search easier by offering a list of questions they can ask themselves early in the design process:
 * do you need a lat/lon/altitude Earth reference or just a designated point?
 * is the path determined from provisions onboard (nav) or remote (track)?
 * what’s your required accuracy for “absolute” (geolocation) position?
 * what’s your required accuracy for relative position (e.g., from a runway)?
 * do you need precise incremental position history (SAR motion compensation)?
 * do you need precise angular orientation (e.g., laser pointing)?
 * do you need precise angular rates (for image or antenna stabilization)?
 * for direction do you use a North reference or just along-track/cross-track?
 * will you have dependable access to updating information (GPS, radar, …)?
 * if not, how irregular will dynamics be over active parts of your mission?
 * if so, how irregular will the dynamics be during inter-update periods?
 * also if so, what data rate? Longest expected “blind period” between updates?
 * also if so, will measurements need averaging to meet your required accuracy?
 * also if so, how accurate are your measurements AND their time stamps?
 * also if so, can you use postprocessing or do you need everything real-time?
 * are you willing to accept partial updates (some but not all directions)?
 * do you need just position or derivatives too (velocity, acceleration)?
 * if so, how long can your dynamics be trusted to conform to model fidelity?
 * are you doing INS update (e.g., replacing acceleration with tilt states)?
 * if so, will you need to deduce drift rates – and how long will those hold?
 * do your sensors measure distances, angles, doppler, differences of those?
 * for how long does your sensor information content provide observability?
 * how’s your sensor integrity (bad readings at least detectable if present)?
 * for safety-critical operations — what are your backup provisions?
 * are you accommodating multiple modes with time-shared sensing resources?
 * do you need to perform image registration with different imaging sensors?
etc., etc. — the list goes on.  I won’t even try to claim thoroughness; you get the idea.  Designers with new tasks dumped in their laps can understandably feel overwhelmed.  Searching for references can become a trip through a maze of half-relevant sources.
 
A first step, then, is to separate the relevant (what you need) from the irrelevant (what you don’t need), instantly dismiss any thought of the latter, and do the opposite with the former (nail it).
 
Brief examples — the first two items from the above list —
 * If you just need to know your location relative to a designated point, irrespective of its latitude and longitude — this might help.
 * If you’re tracking instead of navigating — check these out —
and one from the last item on that list —
Again, you get the idea — volumes have been written on all facets.  Many won’t apply to your immediate task; disregard those.
 
The good news is — paths to logical solutions are known and documented.  To avoid abandoning you to an enormous maze of references I’ll point out some fundamental and advanced (state-of-the-art) tracts that address all issues just cited and more.  Several blogs and short “1-pagers” will help individual designers to choose, based on their specific tasks, passages from available references.
 
Before GPS we struggled hard for accurate measurements in enough places.  That actually produced a benefit — we had to be resourceful.  My biggest challenge was to understand subjects (Kalman filtering, strapdown inertial navigation) then considered exotic.  Again a benefit; pulling information from 1950s books and papers forced me to understand, focus, and reduce concepts to whatever level became necessary.  The experience prompted me to write the first of my two books on navigation.
 
That first book has been used in myriad courses, including one currently taught by Prof. Hablani, who wrote the most recent testimonial shown at that URL.
 
Some topics that earlier book explained in detail recently came up in another discussion — http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=44646633&gid=160643&commentID=68798460&trk=view_disc&ut=0XsoCju0nA5B81
For example, slow oscillations at “W” radians/sec, with “W” corresponding to the Schuler period (between 83 and 84 minutes). In that case position error from accelerometer bias, propagating as (1 – cos Wt), rises much sooner than position error from gyro drift, propagating as (t – sin Wt/W). Page 80 of that book sketches an example of behavior over a cycle.  Development offered beyond there expands as far as many analysts wish to go (other natural frequencies of error propagation, rectification of vibration-sensitive errors, etc.).
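
A quick numerical sketch of those two propagation laws (error magnitudes assumed purely for illustration) shows the accelerometer-bias term dominating early in the cycle:

```python
# Single-channel Schuler error propagation, as described above:
#   accelerometer bias b  ->  (b / W^2) * (1 - cos Wt)
#   gyro drift rate eps   ->  (g * eps / W^2) * (t - sin(Wt) / W)
import numpy as np

g, R_earth = 9.80665, 6.371e6
W = np.sqrt(g / R_earth)                      # Schuler frequency, rad/sec
print("Schuler period: %.1f minutes" % (2 * np.pi / W / 60))   # roughly 84 min

b = 1e-4 * g                                  # assumed 100 micro-g accelerometer bias
eps = 0.01 * np.pi / 180 / 3600               # assumed 0.01 deg/hr gyro drift

for minutes in (5, 20, 42, 84):
    t = minutes * 60.0
    dp_accel = (b / W**2) * (1 - np.cos(W * t))
    dp_gyro = (g * eps / W**2) * (t - np.sin(W * t) / W)
    print(f"t = {minutes:2d} min: accel-bias error {dp_accel:7.1f} m, "
          f"gyro-drift error {dp_gyro:7.1f} m")
```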
 
Not long after that first book appeared, GPS became operational — and I was a newcomer to that.  By the time I understood it there were many experts.  Once again I had to catch up, and the process was gradual.  With an exceptionally strong client interested in my inertial background, a synergism was formed. That led to a flight test producing state-of-the-art accuracy in dynamics; see the table describing several innovations also resulting from the work just described.
 
That second book, after a review chapter, begins where the first (pre-GPS) one left off.  It also (1) is used in tutorials and (2) has received testimonials from other instructors, as the URL shows.  Sources cited here, plus an online 1.5-hr tutorial free to Institute of Navigation members, plus a “try-before-you-buy” 100-page excerpt available from this site, should be helpful to many.

Life before GPS

Before GPS took over so many operations by storm (e.g., navigation, tracking, timing, surveying, etc.), designers had access to other — far less capable — provisions.  That condition forced our hands; to derive maximum benefit from what was available, we had to extract full information content from those provisions.  Now that GPS is subjected to challenges (aging, jamming, spoofing, etc.), some of those older methods are receiving increased scrutiny.  Recently I’ve received renewed interest in areas I analyzed decades ago.  Old publications from two of those areas are discussed here: 1) attitude determination and 2) nav integration.

“Attitude Determination by Kalman Filtering” is the title of three documents I had published.  In reverse sequence they are:
1) Automatica (IFAC Journal), v6 1970, pp. 419-430,
2) my Ph.D. dissertation (Univ. of Maryland, 1967),
3) NASA CR-598, Sept., 1966.
As indicated by the last reference, the work was the result of a contractual study sponsored by NASA (specifically Goddard Space Flight Center – GSFC – in Greenbelt, Maryland).  I was working for Westinghouse Defense and Space Center at the time.  The proposal I had written to win this contract cited my work prior to then, in both modern estimation (“Simulation of a Minimum Variance Orbital Navigation System,” AIAA JSR v 3 Jan 1966 pp. 91-98) and attitude computation (“Performance of Strapdown Inertial Attitude Reference Systems,” AIAA JSR v 3 Sept 1966, pp. 1340-1347).  Let me hasten to explain the dates of those Journal publications: each followed its inclusion at an AIAA-sponsored conference, about a year earlier.

By the mid-1960s there was an appreciable amount of validation for Kalman filtering applied to determination of orbits (that track record was convincing) but not yet for attitude.  A GSFC-sponsored investigation was then planned — the very first one for attitude using modern estimation methods.  GSFC management understandably wanted that contractual investigation to be performed by someone with demonstrable experience in both Kalman filtering and rotational dynamics.  In those days that combination was rare; the Westinghouse proposal was chosen as the winner.  At the time of that study, provisions realistically available for attitude updating consisted of mediocre-accuracy items such as magnetometers and horizon scanners — not bad but not spectacular either.
All that was of course before GPS weighed in, with its opportunity to reveal attitude from phase differences between antennas spaced at known distances apart.  That vastly superior capability effectively reduced earlier crude measurements to relative obscurity.  A directly parallel situation occurred in connection with navigation; the book that first tied together several facets of advancement in that field (integration, strapdown inertial, modern estimation with acceptance of all data sources, multimode operation, extension to tracking, clear exposition of all commonly used representations of attitude, etc.) was “pre-GPS” (1976), and consequently regarded as less relevant. Timing can be decisive — that’s no one’s fault.

The item just noted — attitude representation — is worth further discussion here.  Unlike many other sources, the 1976 book offered an opportunity to use quaternion properties without any need to learn a specialized quaternion algebra.  A literature search, however, will point primarily to various sources (of necessity, later than 1976) that benefit from the superior performance offered through GPS usage. Again, in view of GPS as a game-changer, that is not necessarily improper.  Most publications on attitude determination don’t cite the first-ever investigation, sponsored by GSFC, for that innocent reason.

The word beginning that last sentence (“Most”) has an exception.  One author, widely quoted as an authority (especially on quaternions), did cite the original work — dismissing it as “ad-hoc” — while using an exact copy of the sensitivity matrix elements published in my original investigation (the three references cited at the start of this blog).
While I obviously didn’t invent either quaternions or the Kalman filter, there was another thing I didn’t do: fail to credit, in my publications, pre-existing sources that contributed to my findings. Publication of the material cited here, I’ve been told, paved the way for understanding and insight to many who followed. No one owes me anything for that; an analyst’s work, truthfully and realistically presented, is what the analyst has to offer.

It is worth pointing out that both the attitude determination study and the 1976 book cover another facet of rotational analysis absent from many other related publications: dynamics — in the sense of physics.  Whereas modern estimation lumps time-variations of the state together into one all-encompassing “dynamic” model, classical physics makes a separation: Kinematics defines the relation between position, rates, and accelerations.  Dynamics determines translational accelerations resulting from forces or rotational accelerations resulting from torques.
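
In compact form (standard notation, not reproduced from either publication), that separation reads:

```latex
\underbrace{\dot{\mathbf q} \;=\; \tfrac{1}{2}\,\mathbf q \otimes
  \begin{bmatrix} 0 \\ \boldsymbol{\omega} \end{bmatrix}}_{\text{kinematics: attitude rate from angular rate}}
\qquad
\underbrace{\mathbf J\,\dot{\boldsymbol{\omega}} + \boldsymbol{\omega}\times\mathbf J\,\boldsymbol{\omega} \;=\; \mathbf T,
  \quad m\,\ddot{\mathbf r} \;=\; \mathbf F}_{\text{dynamics: accelerations from torques and forces}}
```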

Despite the absence of GPS from my early (1960s/70s) investigations, one feature that can still make them useful for today’s analysts is the detailed characterization of torques acting — in very different ways — on spinning and gravity-gradient satellites, plus their effects on rotational motion. Many of the later studies focused on the rotational kinematics, irrespective of those torques and their consequences. Similarly, the “minimal-math” approach to explaining integrated navigation has enabled many to grasp the concepts.  Printed testimony to that effect, from courses I taught decades ago, is augmented by a more recent source noted near the end of another page shown on this site.

SINGLE-MEASUREMENT RAIM

In January of 2005 I presented a paper “Full Integrity Test for GPS/INS” at ION NTM that later appeared in the Spring 2006 ION Journal.  I’ve adapted the method to operation (1) with and (2) without IMU, obtaining RMS velocity accuracy of a centimeter/sec and a decimeter/sec, respectively, over about an hour in flight (until the flight recorder was full).

Methods I use for processing GPS data include many sharp departures from custom.  Motivation for those departures arose primarily from the need for robustness.  In addition to the common degradations we’ve come to expect (due to various propagation effects, planned and unplanned outages, masking or other forms of obscuration and attenuation), some looming vulnerabilities have become more threatening.  Satellite aging and jamming, for example, have recently attracted increased attention.  One of the means I use to achieve enhanced robustness is acceptance-testing of every GNSS observable, regardless of what other measurements may or may not be available.

Classical (Parkinson-Axelrad) RAIM testing (see, for example, my ERAIM blog’s background discussion) imposes requirements for supporting geometry; measurements from each satellite could be validated only if enough additional satellites, with enough geometric spread, enabled a sufficiently conclusive test.  For many years that requirement was supported by a wealth of satellites in view, and availability was judged largely by GDOP with its various ramifications (protection limits).  Even with future prospects for a multitude of GNSS satellites, however, it is now widely acknowledged that acceptable geometries cannot be guaranteed.  Recent illustrations of that realization include
* use of subfilters to exploit incomplete data (Young & McGraw, ION Journal, 2003)
* Prof. Brad Parkinson’s observation at the ION-GNSS10 plenary — GNSS should have interoperability to the extent of interchangeability, enabling a fix composed of one satellite from each of four different constellations.

Among my previously noted departures from custom, two steps I’ve introduced are particularly aimed toward usage of all available measurement data.  One step, dead reckoning via sequential differences in carrier phase, is addressed in another blog on this site.  Described here is a summary of validation for each individual data point — whether a sequential change in carrier phase or a pseudorange — irrespective of presence or absence of any other measurement.

While matrix decompositions were used in its derivation, only simple (in fact, intuitive) computations are needed in operation.  To emphasize that here, I’ll put “the cart before the horse” — readers can see the answer now and optionally omit the subsequent description of how I formed it.  Here’s all you need to do: From basic Kalman filter expressions it is recalled that each scalar residual has a sensitivity vector H and a scalar variance of the form

        HPH’ + (measurement error variance)

The ratio of each independent scalar residual to the square root of that variance is used as a normalized dimensionless test statistic.  Every measurement can now be used, each with its individual variance.  This almost looks too good to be true and too simple to be useful, but conformance to rigor is established on pages 121-126 and 133 of GNSS Aided Navigation and Tracking.  What follows is an optional explanation, not needed for operational usage.
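
For concreteness, a minimal sketch of that screening (threshold and numbers assumed here, not prescribed by the book):

```python
# Per-measurement acceptance test: normalize each scalar residual by the
# square root of HPH' + R and compare against a chosen multiple of sigma.
import numpy as np

def accept(residual, H, P, R, n_sigma=3.0):
    variance = float(H @ P @ H.T) + R          # HPH' + measurement error variance
    statistic = residual / np.sqrt(variance)   # dimensionless; ~N(0,1) if healthy
    return abs(statistic) < n_sigma, statistic

P = np.diag([25.0, 25.0, 36.0, 9.0])           # assumed a priori covariance (m^2)
H = np.array([[0.4, -0.5, 0.77, 1.0]])         # unit LOS components + clock term
ok, stat = accept(residual=4.2, H=H, P=P, R=2.0**2)
print(f"normalized test statistic {stat:.2f} -> {'accept' if ok else 'reject'}")
```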

The key to my single-measurement RAIM approach begins with a fundamental departure from the classical matrix factorization (QR=H) originally proposed for parity.  I’ll note here that, unless all data vector components are independent with equal variance, that original (QR=H) factorization will produce state estimates that won’t agree with Kalman.  Immediately we have all the motivation we need for a better approach.  I use the condition

        QR = UH

where U is the inverse square root of the measurement covariance matrix.  At this point we exploit the definition of a priori state estimates as perceived characterizations of actual state immediately before a measurement — thus the perceived error state is by definition a null vector.  That provides a set of N equations in N unknowns to combine with each individual scalar measurement, where N is 4 (for the usual three unknowns in space and one in time) or 3 (when across-satellite differences produce three unknowns in space only).

In either case we have N+1 equations in N unknowns which, after factoring as noted above, enables determination of both the state solution in agreement with Kalman and the parity scalar in full correspondence to formation of the normalized dimensionless test statistic already noted.  All further details pertinent to this development, plus extension to the ERAIM formulation, plus further extension to the correlated observations arising from differential operation, are given in the book cited earlier.  It is rigorously shown therein that this single-measurement RAIM is the final stage of the subfilter approach (Young & McGraw reference, previously cited above), carried to the limit.  A clinching argument: Nothing prevents users from having both the classical approach to RAIM and this generalized method.  Nothing has been sacrificed.
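
As a numerical check of the equivalence just described (all numbers assumed; a Cholesky factor is used here as one convenient inverse square root), stacking the N null prior-error equations with one scalar measurement, whitening, and factoring reproduces both the Kalman state update and the normalized test statistic:

```python
# Stack N prior "null error-state" equations with one scalar measurement,
# whiten by an inverse square root of the stacked covariance, factor QR,
# and compare against the ordinary Kalman update and the HPH'+R statistic.
import numpy as np

N = 4
P = np.diag([25.0, 25.0, 36.0, 9.0])            # assumed a priori covariance
H = np.array([[0.4, -0.5, 0.77, 1.0]])          # one scalar measurement
Rm, y = 4.0, 3.7                                 # assumed variance and residual

A = np.vstack([np.eye(N), H])                    # (N+1) x N stacked sensitivity
C = np.block([[P, np.zeros((N, 1))],
              [np.zeros((1, N)), [[Rm]]]])       # stacked covariance
U = np.linalg.inv(np.linalg.cholesky(C))         # one inverse square root of C
b = np.concatenate([np.zeros(N), [y]])           # prior error = 0, plus residual

Q, R_ = np.linalg.qr(U @ A, mode="complete")
dx_qr = np.linalg.solve(R_[:N, :], Q[:, :N].T @ (U @ b))   # state from the factorization
parity = Q[:, N] @ (U @ b)                                  # parity scalar

K = P @ H.T / (H @ P @ H.T + Rm)                 # Kalman gain for the same measurement
dx_kalman = (K * y).ravel()
print(np.allclose(dx_qr, dx_kalman))                            # True
print(abs(parity), abs(y) / np.sqrt(float(H @ P @ H.T) + Rm))   # equal magnitudes
```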

An early comment sent to this site raised a question as to how long I’ve been doing this kind of work.  Yes I’m an old-timer.  Some of my earlier Kalman filter studies are cited in books dating back to the 1970s — e.g., Jazwinski, Stochastic Processes and Filtering Theory, 1970 (page 267); Bryson & Ho, Applied Optimal Control, 1975 (page 374); Spilker, Digital Communication by Satellite, 1977 (page 636).  My first book, published by Academic Press, initially appeared in 1976.

In the early 1960s, not long after Kalman’s ASME breakthrough paper on optimal filtering, I was at work simulating its effectiveness for orbit determination (publication #4).  No formal recognition of EKF existed at that time, but nonlinearities in both dynamics and observables made that course of action an obvious choice.  In 1967 I applied it to attitude determination for my Ph.D. dissertation (publication #9). Shortly thereafter I wrote a program (publication #16) for application to deformations of a satellite so large (end-to-end length taller than the Empire State Building) that its flexural oscillations were too slow to allow decoupling from its rotational motion (publications #10, 11, 12, 14, 15, 27).  Within that same time period I analyzed and simulated strapdown inertial navigation (publications #6, 7, 8).

Early familiarization with Kalman filtering and inertial navigation paid huge dividends during subsequent efforts in other areas.  Those included, at first, doppler nav with a time-shared radar beam (publication #20), synthetic aperture radar (publications #21, 22, 38, 41), synchronization (publication #19), tracking (publications #23, 24, 28, 30, 32, 36, 39, 40, 48, 52, 54, 60, 61, 66, 67, 69), transfer alignment (publications #29, 41, 44), software validation (publications #34, 42), image fusion (publications #43, 49), optimal control (publication #33), plus a few others.  All these efforts made it quite clear to me — there’s much more to all this than sets of equations.

Involvement in all those fields had a side effect of delaying my entry into GPS work; I was a latecomer when the GPS pioneers were already established.  GPS/GNSS is heavily involved, however, in much of my later work (latter half of my publications) — and my work in other areas produced a major benefit:  The experience provided insights which, in the words of one reviewer quoted in the book description (click here) are either hard to find or unavailable anywhere else.  Recognizing opportunities for synergism — many still absent from today’s operational systems — enabled me to cross the line into advocacy (publications #26, 47, 55, 63, 66, 68, 73, 74, 77, 83, 84, 85, 86).  Innovations present in GNSS Aided Navigation and Tracking were either traceable to or enhanced by my earlier familiarization with techniques used in other areas.

Almost two decades ago an idea struck me: the way GPS measurements are validated could be changed, in a straightforward way, to provide more information without affecting the navigation solutions at all.

First, some background: The Receiver Autonomous Integrity Monitoring (RAIM) concept began with a detection scheme whereby an extra satellite (five instead of four, for the three spatial dimensions plus time) enables consistency checks.  Five different navigation solutions are available, each obtained with one of the satellites omitted.  A wide dispersion among those solutions indicates an error somewhere, without identifying which of the five has a problem.  Alternatively, a single solution formed with any one satellite omitted can be used to calculate what the unused measurement should theoretically be.  Comparison vs the observed value then provides a basis for detection.

A choice then emerged between two candidate detection methods, i.e., chi-squared and parity.  The latter choice produced five equations in four unknowns, expressed with a 5×4 matrix rewritten as the product of two matrices (one orthogonal and the other upper triangular).  That subdivision separated the navigation solution from a scalar containing the information needed to assess the degree of inconsistency. Systematic testing of that scalar according to probability theory was quickly developed and extended to add a sixth satellite, enabling selection of which satellite to leave out of the navigation solution.
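
A sketch of that classical parity computation, with geometry assumed (five unit line-of-sight vectors generated at random purely for illustration); the orthogonal/upper-triangular factorization separates the navigation solution from a single parity scalar:

```python
# Five satellites: H is 5x4 (unit LOS vectors plus a clock column); the last
# column of the orthogonal factor spans the parity direction.
import numpy as np

rng = np.random.default_rng(1)
los = rng.normal(size=(5, 3))
los /= np.linalg.norm(los, axis=1, keepdims=True)   # assumed unit LOS vectors
H = np.hstack([los, np.ones((5, 1))])               # 5x4: position + clock

Q, R = np.linalg.qr(H, mode="complete")             # Q: 5x5 orthogonal, R: 5x4
parity_axis = Q[:, 4]                               # orthogonal to range(H)

x_true = np.array([10.0, -20.0, 5.0, 3.0])
y = H @ x_true + rng.normal(0, 1.0, size=5)         # consistent measurements
y_fault = y + np.array([0.0, 0.0, 30.0, 0.0, 0.0])  # 30 m fault on satellite 3

for label, z in (("healthy", y), ("faulted", y_fault)):
    x_ls, *_ = np.linalg.lstsq(H, z, rcond=None)    # navigation solution
    p = parity_axis @ z                             # scalar carrying the inconsistency
    print(f"{label}: parity scalar {p:6.1f}, solution error "
          f"{np.linalg.norm(x_ls - x_true):5.1f} m")
```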

The modified strategy I devised can now be described.  First, instead of a 5×4 matrix for detection, let each solution contain five unknowns – the usual four plus one measurement bias.  Five solutions are obtained, now with each satellite taking its turn as the suspect.  My 1992 manuscript (publication #53 – click here) shows why the resulting navigation solutions are identical to those produced by the original RAIM approach (i.e., with each satellite taking its turn “sitting out this dance”).

The real power of this strategy comes in expansion to six satellites.  A set of 6×5 matrices results, and the subdivision into orthogonal and upper triangular factors now produces six parity scalars (rather than 2×1 parity vectors – everyone understands a zero-mean random scalar).  Not only are estimates obtained for measurement biases but the interpretation is crystal clear: With one biased measurement, every parity scalar has nonzero mean except the one with the faulty satellite in the suspect role.  Again all navigation solutions match those produced by the original RAIM method, and fully developed probability calculations are immediately applicable.  Content of the 1992 manuscript was then included, with further adaptation to the common practice of using measurement differences for error cancellation, in Chapter 6 of GNSS Aided Navigation and Tracking.  Additional extensions include rigorous adaptation to each individual observation independent of all others (“single-measurement RAIM”) and to multiple flawed satellites.
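
A sketch of the six-satellite strategy just described (geometry and a 50-m fault assumed purely for illustration): each candidate solution carries five unknowns, the usual four plus a bias on the suspect satellite, and the factorization of each 6×5 matrix yields one parity scalar per suspect.

```python
# Six satellites, one biased measurement: augment H with a bias column for the
# suspect satellite, factor the 6x5 matrix, and read off one parity scalar per
# suspect.  Only the correct suspect absorbs the fault.
import numpy as np

rng = np.random.default_rng(2)
los = rng.normal(size=(6, 3))
los /= np.linalg.norm(los, axis=1, keepdims=True)   # assumed unit LOS vectors
H = np.hstack([los, np.ones((6, 1))])               # 6x4: position + clock

x_true = np.array([0.0, 0.0, 0.0, 1.0])
y = H @ x_true + rng.normal(0, 1.0, size=6)
y[2] += 50.0                                        # 50 m bias on satellite index 2

for suspect in range(6):
    bias_col = np.eye(6)[:, [suspect]]              # suspect's measurement-bias column
    Q, _ = np.linalg.qr(np.hstack([H, bias_col]), mode="complete")
    p = Q[:, 5] @ y                                 # parity scalar for this suspect
    print(f"suspect satellite {suspect}: parity scalar {p:7.2f}")
# Expect a small (zero-mean) scalar only for suspect 2, the faulted satellite.
```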

In the early 1990s I recalled, in a manuscript (publication #49 – click here), advocacy from years ago that probably originated from within USAF.  A sharp distinction was to be drawn between “multisensor integration” for low-speed, low-volume processing versus “sensor fusion” for high-speed, high-volume processing.  Unfortunately, the separate terminology never survived; the industry uses the same vocabulary for nav update “fusion” at leisurely rates and for fusion of images (e.g., megapixels per frame, with 3-byte RGB pixels, at 30 frames/sec) requiring speeds expressed in GHz.
Terminology aside, a major task for the imaging field is to recognize and categorize the degradations. Combining tracks from different sensors is a major undertaking.  For obvious reasons (including but by no means limited to inertial nav error propagation and time-varying errors in imaging sensors), complexity is compounded by motion.  I’ll step back and consider that from a perspective unlike the concepts in common acceptance.  For brevity I’ll just cite some major ingredients here:

  • Inertial nav is used to provide a connection between frames taken in succession from sensors in motion.  It needs to be recognized that INS error propagation for this short-term operation cannot correctly be based on antiquated nmi/hr Schuler modeling.  There are literally dozens of gyro and accelerometer error contributors, most of which are motion-sensitive and not well covered by (in fact, often even excluded from) IMU specifications.  Overbounding is thus necessary for conservative design — which in effect either compromises performance or increases demands for data rates.  For further illustration see Section 4.B, pages 57-65 of GNSS Aided Navigation and Tracking.
  • Often there is motion of not only the sensors but also the tracked objects, whose signatures — already potentially masked even when stationary — will be further obscured if the motion is driven to thwart observation.
  • It is very important not to attribute sensor stabilization errors to any tracked object’s estimated state (surprisingly many operational designs violate that principle).  Figure 9.3 on page 200 of GNSS Aided Navigation and Tracking shows a planar example for insight.
  • Procedures for in-flight calibration, self-calibration, online temperature compensation, etc. are invoked.  Immediately all measurement errors are then redefined to include only residual amounts due to imperfect calibration.
  • One caveat: any large discrete corrections should occur suddenly between — not within — frames (e.g., a SAR coherent integration time).
  • The association problem, producing hypothetical tracks (e.g., due to crossing paths), defies perfect solution.  Thus a sensor response from object ‘A’ might be combined with a subsequent response from object ‘B’ to produce an extraneous track characterizing neither.  Obviously this becomes more unwieldy with increasing density of objects responding to sensors.
  • Ironically, many of the objects that complicate the association task are of little or no interest.  Rocks reflect radar transmissions.  Animals respond to IR sensors.  Metallic objects respond to both, which raises an opportunity for concentration on metallic objects: accept information only from pixels with both radar and IR responses.  Tracks formed after that will be far fewer and much more credible.
  • Registration of image data from IR (Az/EL) and SAR (range/doppler) cells must account for big differences in pixel size, shape, and orientation.  Although by no means trivial, in principle it can be done.
  • Even if all algorithm development and processing implementation issues are solved, unknown terrain slopes will degrade the results.  Also, undulations (as well as any structures present) in a swath will produce data gaps due to masking.  How long a gap is tolerable before dropping an old track and reallocating its resources for a new one will be another design decision.
  • Imaging transformation (e.g., 4×4 affine group and/or thin plate spline) applicability will depend on operational specifics.
When I was more heavily involved in this area the processing requirements for image fusion while still in raster form would have been prohibitive.  Today that may no longer be true.