Once again I am privileged to work with Ohio University Prof. Frank vanGraas, in presenting tutorial sessions at the Institute of Navigation’s GNSS-17 conference. For 2017, as in several consecutive previous years, two sessions will cover integrated navigation with Kalman filtering.  Descriptions of the part 1 session and part 2 session are now available online.

By way of background: The first session is introductory.  Each attendee will be given a book with a development aimed at those learning inertial navigation and/or Kalman filtering for the first time.  Prior to the course, my free-to-members online tutorial is recommended.  Also my three-part matrix tutorial video will be made freely available to attendees.

Prof. vanGraas sponsored, and provided the flight data that enabled, the successful validation of 1-cm/sec RMS velocity vector accuracy obtained from 1-second sequential changes in carrier phase.  Those results, covering almost an hour in the air, are provided, with the algorithms used to obtain them, in a more recent book that is given to those attending the second session.

Changes in coordinates at stations affected by earthquakes have been monitored with precision, for years, using satellite navigation.  Results of interest have then been produced by processing the outcome, e.g., investigating the history of triangles formed by the stations.  The first application to earthquakes of entirely different criteria (affine deformation states) has produced results with encouraging prospects for prediction, both in time (more than two weeks prior to the 2011 Tohoku quake) and spatially (departures from the affine model at the station nearest the epicenter).

The fifteen independent states of a standard 3-D 4×4 affine transformation can be categorized in five sets of three, each set having x-, y-, and z-components: translation, rotation, perspective, scaling, and shear.  The three degrees of freedom associated with perspective are immediately irrelevant for present purposes.  In addition, both translation and rotation, clearly having no effect on shape, can be analyzed separately; the same is true of uniform scaling.  That leaves the five “shape states” involved in 3-D affine deformation: three for shear and two for nonuniform scaling.  One way to describe shape states is to note their effects in 2-D, where there is only one for nonuniform scaling (which deforms a square into a rectangle) and one for shear (which deforms a rectangle into a parallelogram).  Added insight into earthquake investigation can therefore be obtained by analyzing affine features, with specific attention given to their individual traits (degrees of freedom).
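
To make the 2-D picture concrete, here is a minimal sketch (not from the earthquake manuscript; the numbers are arbitrary) of the two 2-D shape states acting on a unit square: nonuniform scaling turns the square into a rectangle, and shear turns that rectangle into a parallelogram.

```python
# Illustrative only: the two 2-D "shape states" named above, applied to a unit square.
import numpy as np

square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])          # corner columns (x over y)

def nonuniform_scale(sx, sy):
    # deforms a square into a rectangle (one shape degree of freedom in 2-D)
    return np.diag([sx, sy])

def shear(k):
    # deforms a rectangle into a parallelogram (the other 2-D shape state)
    return np.array([[1.0, k],
                     [0.0, 1.0]])

rectangle = nonuniform_scale(2.0, 0.5) @ square
parallelogram = shear(0.3) @ rectangle

print("rectangle corners:\n", rectangle.T)
print("parallelogram corners:\n", parallelogram.T)
```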

For investigating earthquakes from affine degrees of freedom, the methodology of a very different field — anatomy — is highly relevant but, ironically, lacking a crucial feature.  As currently practiced, physiological studies of affine deformations concentrate heavily on two-dimensional representations.  While full affine representation is very old, its inversion — i.e., optimal estimation of shape states from a given overdetermined coordinate set — had previously been limited to 2-D.

Extension to 3-D was therefore required for this adaptation; the fundamentals, however, remain applicable.  Instead of designated landmark sets coming from a group of patients, here they are associated with a series of days (e.g., from several days before to several days after a quake).  Each landmark set is then subjected to a series of procedures (centroiding, normalization, rotation) for fitting landmark sets from one day to another.

The Procrustes representation resulting from the procedural steps just described provides sequences of centroid shifts in each direction, rotations about each axis, and the amount of uniform scaling needed for each separate day.  In addition to those seven time histories, there are five more that offer potential for greater insight (again, the shape states — three shear and two nonuniform scaling).  All were obtained for landmark coordinate sets reported before and after the 2011 Tohoku quake.  Sample recorded coordinates, provided along with shape state values, enable readers of the manuscript to verify the results.
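
For readers who want to see the mechanics, here is a minimal sketch of the ordinary Procrustes steps just named (centroiding, normalization, rotation via SVD), applied to hypothetical N-by-3 station coordinate sets from two days. It is an illustrative construction, not the manuscript's code, and the five shape states (which come from a further affine fit) are not extracted here.

```python
# Minimal ordinary-Procrustes sketch: centroiding, normalization, rotation.
import numpy as np

def procrustes_fit(day_a, day_b):
    """Return centroid shift, uniform scale, and rotation taking day_a onto day_b."""
    ca, cb = day_a.mean(axis=0), day_b.mean(axis=0)      # centroiding
    A, B = day_a - ca, day_b - cb
    na, nb = np.linalg.norm(A), np.linalg.norm(B)        # normalization
    A, B = A / na, B / nb
    U, S, Vt = np.linalg.svd(B.T @ A)                    # optimal rotation (via SVD)
    R = U @ Vt
    if np.linalg.det(R) < 0:                             # keep a proper rotation
        U[:, -1] *= -1.0
        R = U @ Vt
    scale = S.sum() * nb / na                            # uniform scaling
    return cb - ca, scale, R

# usage with made-up coordinates (12 stations, 3-D):
rng = np.random.default_rng(0)
day1 = rng.normal(size=(12, 3))
day2 = 1.001 * day1 + np.array([0.02, -0.01, 0.005])     # toy shift plus scale change
shift, scale, R = procrustes_fit(day1, day2)
print(shift, scale, np.round(R, 6))
```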

In 2013 a phone presentation was arranged for me to talk for an hour with a couple dozen engineers at Raytheon. The original plan was to scrutinize the many facets and ramifications of timing in avionics. The scope expanded about halfway through, to include topics of interest to any participant. I was gratified when others raised issues that have been of major concern to me for years (in some cases, even decades).  Receiving a reminder from another professional that I’m not alone in these concerns prompts me to reiterate at least some aspects of the ongoing struggle — but this time citing a recent report of flight test verification.

The breadth of the struggle is breathtaking. The About panel of this site offers short summaries, all confirmed by authoritative sources cited therein, describing the impact on each of four areas (satnav + air safety + DoD + workforce preparation). Shortcomings in all four areas are made more severe by continuation of outdated methods, as unnecessary as they are fundamental. Not everyone wants to hear this but it’s self-evident: conformance to custom — using decades-old design concepts (e.g., TCAS) plus procedures (e.g., position reports) and conventions (e.g., interface standards) — guarantees outmoded legacy systems. Again, while my writings on this site and elsewhere — advocating a different direction — go back decades, I’m clearly not alone (e.g., recall those authoritative sources just noted). Changing more minds, a few at a time, can eventually lead to correction of shortcomings in operation.

We’re not pondering minor improvements, but dramatic ones. To realize them, don’t communicate with massaged data; put raw data on the interface. Communicate in terms of measurements, not coordinates — that’s how DGPS became stunningly successful. Even while using all the best available protection against interference (including anti-spoof capability), follow through and maximize your design for robustness; expect occurrences of poor GDOP &/or less than a full set of SVs instantaneously visible. Often that occurrence doesn’t really constitute loss of satnav; when it’s accompanied by a history of 1-sec changes in carrier phase, those high-accuracy measurements prevent buildup of position error. With 1-sec carrier phase changes coming in, the dynamics don’t drift off in any one consistent direction; only position degrades during data deficiencies (poor GDOP &/or incomplete fixes) and, even then, only within limits allowed by that continued accurate dynamic updating. Integrity checks also continue throughout.
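
A minimal sketch of that dynamic updating idea follows (assumed geometry and noise values; not the flight-validated algorithms cited earlier): each 1-sec carrier-phase change, divided by the interval, is processed as a measurement of velocity along the line of sight, so even a single satellite tightens the velocity estimate in that direction while the residual doubles as an integrity check.

```python
# Sketch: a 1-s carrier-phase change used as a line-of-sight velocity measurement.
import numpy as np

def delta_phase_velocity_update(v_est, P, los_unit, delta_phase_m, dt=1.0,
                                sigma_phase=0.007):
    """One scalar Kalman update of a 3-state velocity estimate.

    v_est         : current velocity estimate (m/s), shape (3,)
    P             : 3x3 velocity error covariance
    los_unit      : unit line-of-sight vector (sign convention glossed over here)
    delta_phase_m : carrier-phase change over dt, converted to meters
    sigma_phase   : per-sample phase noise in meters (0.7 cm, as assumed above)
    """
    z = delta_phase_m / dt                        # observed range rate, m/s
    H = los_unit.reshape(1, 3)                    # measurement sensitivity
    R = 2.0 * (sigma_phase / dt) ** 2             # variance of a phase difference
    y = z - (H @ v_est)[0]                        # residual (also an integrity check)
    S = (H @ P @ H.T)[0, 0] + R
    K = (P @ H.T) / S                             # 3x1 Kalman gain
    return v_est + (K * y).ravel(), (np.eye(3) - K @ H) @ P

# even one satellite tightens velocity along its line of sight:
v, P = np.zeros(3), np.eye(3)
v, P = delta_phase_velocity_update(v, P, np.array([0.6, 0.0, 0.8]), 0.012)
print(v, np.diag(P))
```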

So then, take into account the crucial importance of precise dynamic information when a full position fix isn’t instantaneously available. Take what’s there and stop discarding it. Redefine requirements to enable what ancient mariners did suboptimally for many centuries — and we’ve done optimally for over a half-century.  Covariances combined with monitored residuals can indicate quality in real time. Aircraft separation means maintaining a stipulated relative distance between them, irrespective of their absolute positions and errors in their absolute positions. None of this is either mysterious or proprietary, and none of this imposes demands for huge budgets or scientific breakthroughs — not even corrections from ground stations.

A compelling case arises from cumulative weight of all these considerations. Parts of the industry have begun to address it. Ohio University has done flight testing (mentioned in the opening paragraph here) that validates the concepts just summarized. Other investigations are likely to result from recent testing of ADSB. No claim is intended that all questions have been answered, but — clearly — enough has been raised to warrant a dialogue with those making decisions affecting the long term.

John Bortz

In February of this year the navigation community lost a major contributor — John Bortz. To many his name is best known in connection with “the Bortz equation,” which easily deserves a note here to highlight its significance in the development of strapdown inertial nav. Before his work in the early 1970s, strapdown was widely considered something with possible promise “maybe, if only it could ever come out of the lab-&-theory realm” and into operation. Technological capabilities we take for granted today were far less advanced then; among the many state-of-the-art limitations of that time, processing speed is a glaringly obvious example. To make a long story short, John Bortz made it all happen anyway. Applying the previously mentioned equation (an outgrowth of an early investigation by Draper Lab’s Dr. J.H. Laning) was only part of his achievement. Working with 1960s hardware and those old computers, he made a historic mark in the annals of strapdown.  Still, the importance of that accomplishment should not obscure his other credentials. For example, he also made significant contributions to radio navigation — and he spent the last two decades of his life as a deacon.

A comment challenged my video.  I’m glad it included an acknowledgment that some points might have been missed. To be frank, that happened a bunch; bear with me while I explain. First, there’s the accuracy issue: doppler &/or deltarange info provided by many receivers is far less accurate than carrier phase (sometimes due to cutting corners in implementation — recall that carrier phase, as the integral of doppler, will be smoother if processing is done carefully). Next, a preference for 20-msec intervals will backfire badly. If phase noise at L-band gives a respectable 7 mm = 0.7 cm, the doppler velocity error [(current phase) – (previous phase)] / 1 sec is (1.414)(0.7) ≈ 1 cm/sec RMS for a 1-sec sequential differencing interval.  Now use 20 msec: FIFTY times as much doppler error! Alternatively, if division is implicit instead of overt, the degradation is more complicated: sequential phase differences are highly correlated (with a correlation coefficient of -1/2, to be precise). That’s because the difference (current phase) – (previous phase) and the difference (next phase) – (current phase) both contain the common value of current phase. In a modern estimation algorithm, observations with sequentially correlated errors are far more difficult to process optimally.  That topic is a very deep one; Section 5.6 and Addendum 5.B of my 2007 book address it thoroughly. I’m not expecting everyone to go through all that but, to offer fortification for its credibility, let me cite a few items:

* agreement from other designers who abandoned efforts to use short intervals
* the table near the bottom of a page on this site
* phase residual plots from Chapter 8 of my 2007 book

The latter two, it is recalled, came from flight test of extended duration (until the flight recorder was full), under severe test-aircraft (DC-3) vibration.
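
The interval-scaling arithmetic above, spelled out numerically (assuming independent 0.7-cm phase noise per sample):

```python
# Numeric restatement of the scaling argument: differenced phase noise divided by dt.
import math

sigma_phase_cm = 0.7
for dt in (1.0, 0.020):                      # seconds
    sigma_vel = math.sqrt(2.0) * sigma_phase_cm / dt
    print(f"dt = {dt*1000:6.0f} ms -> {sigma_vel:6.1f} cm/s RMS velocity error")
# 1 s gives about 1 cm/s; 20 ms gives about 49.5 cm/s, fifty times larger.
# Adjacent differences also share the middle phase sample, so their errors are
# correlated with coefficient -1/2, as noted above.
```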

For doppler updating from sources other than satnav, my point is stronger still. Doppler from radar (which lacks the advantage of passive operation) won’t get velocity error much below a meter/sec — and even that is an improvement over unaided inertial nav (we won’t see INS velocity specs expressed in cm/sec within our lifetime).

Additional advantages of what the video offers include (1) no requirement for a mask angle, (2) GNSS interoperability, and (3) robustness. A brief explanation:

(1) Virtually the whole world discards all measurements from low-elevation satellites because of propagation errors. But ionospheric and tropospheric effects change very little over a second; 1-sec phase differences are great for velocity information. Furthermore, they offer a major geometry advantage, while any occurrence of multipath would stick out like a sore thumb and be easily edited out.
(2) 1-sec differences from various constellations are much easier to mix than the phases themselves.
(3) For receivers exploiting FFT capability, even short fragments of data, not sufficiently continuous for conventional mechanizations (track loops), are made available for discrete updates.
The whole “big picture” is a major improvement in robustness of operation.

The challenger isn’t the only one who missed these points; much of our industry, in fact, is missing the boat in crucial areas. Again I understand skepticism, but consider the “conventional wisdom” regarding ADSB: Velocity errors expressed in meters per second — you can hear speculative values as high as ten. GRADE SCHOOL ARITHMETIC shows how scary that is; collision avoidance extrapolates ahead. Consider the vast error volume resulting from doing that 90 seconds ahead of closest approach time with several meters per second of velocity error. So — rely on see-and-avoid? There are beaucoup videos that show how futile that is (and many more videos that show how often near misses occur — in addition there are about a thousand runway incursions each year). That justifies the effort for dramatic reduction of errors in tracking dynamics — to cm/sec relative velocity accuracy.
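
That grade school arithmetic, spelled out (the lookahead time and velocity-error values are the ones mentioned above, plus the cm/sec figure advocated here):

```python
# Position uncertainty from extrapolating a track 90 s ahead with a given velocity error.
lookahead_s = 90.0
for vel_err_mps in (10.0, 5.0, 0.01):        # m/s: speculative ADSB values vs. 1 cm/s
    print(f"{vel_err_mps:6.2f} m/s velocity error -> "
          f"{vel_err_mps * lookahead_s:7.1f} m position uncertainty at 90 s")
```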

It’s perfectly logical for people to question my claims if they seem too good to be true. All I ask is follow through, with visits to URLs cited here.

Avionics Commonality

A LinkedIn discussion centered on the Future Airborne Capability Environment (FACE) standard contained an important observation concerning certification.  Granted — requirements for validation, with acceptance by governing agencies, definitely are essential for safety. What follows here is advocacy for a proposed way to realize the common avionics benefits offered by FACE while retaining (and in fact, improving) the process of certification. Reasoning is based on three major items:

* CHANGE. In many respects this has necessitated improved standards. 

* HISTORY. Spectacular failures in what we have now are widely documented.

* COST. The status quo is (and, for a long time, has been) unaffordable.

In regard to the first item: the pace of change in so many areas (hardware, software, operating systems, data communication, etc., etc., etc.) — and the effects on procurement cycles — are well known. How can certification remain unchanged when nothing else does? That argument would be undercut if the process had a rock solid track record — but that theme would not be supported by the second item — history:

Myriad shortcomings of existing operational systems are so pervasive that no one is considered a “loose cannon” for openly discussing them. Any of my horror stories — too strange and too numerous to be revisited here — would be trumped anyway by a document from the government itself. GAO-08-467SP, in 2008, described outlandish cost overruns, schedule delays, and deficient technical performance in the defense industry. That 3-way combination speaks for itself. Now a significant addition: the certification process has not been at all immune to serious flaws. The first-ever certified GPS receiver is now well known to have failed spectacularly in multiple facets of integrity testing by another manufacturer. It is readily acknowledged that correction of those early problems is quite credible, but one issue is inescapable: historical proof of flightworthiness improperly bestowed — with proprietary rights accepted for algorithms and tests — happened, and that was not widely known until much later.

There is still more, including integrity failure probability limits missed by orders-of-magnitude in certified GPS receivers, severe limitations of GO/NO-GO testing, and failed attempts to gain approval to set requirements for correcting those plus other deficiencies. For brevity here, those issues are covered by citing the fifth page from another related reference.

The final item is, after years of fruitless talk about cost reduction, being acknowledged — we can’t do what we’ve been doing any more.  With dollars being the ultimate driver of so many decisions, we might finally see the necessary break from ingrained habits. FACE already addresses the issues and the requisite justifications. To make it all happen, two essential ingredients are

* raw-data-across-the-board, and

* nonproprietary software, with standardization under government control.

Flight-validated algorithms already in existence can be converted (e.g., from proof-of-concept to in-flight real-time form) according to government specification, by small groups more interested in engineering than in dollars (yes, that does exist). The payoff in cost savings can be huge.

Significant momentum is evolving toward a role for Open System Architecture (OSA) applied to radar. My observations in connection with that, voiced in a short LinkedIn discussion, seem worth repeating here.

One step could add major impact to this development: Rather than position (or relative position) outputs, provide measured range, azimuth, elevation (doppler could optionally be added if applicable) on the output interface. That simple step would vastly improve effectiveness of track file maintenance. Before attempting to describe all reasons for improved performance, two obvious benefits can be identified first:
* ability to use partial information (e.g., range-only or, for passive operation, angle-only)
* proper weighting of data for updating track state estimates.

The first item is self-evident. The second arises from common-sense attachment of greater value to the most accurate information. An explanation:

One-sigma error ellipsoids for individual radar fixes are not spherical (not a beachball shape but more like a flattened beachball), even at close range. At longer distances the shape progresses from a frisbee to a pancake to a DVD. Kalman filtering has enabled us to capitalize on that feature for over a half-century. Without exploiting it, we effectively treat separate radar-derived “coordinates” as intersections of volumes in space common to overlapping spheres. The resulting uncertainty volume is enormously larger than it should be.
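
A small numerical illustration of that flattening (the range and angle sigmas are assumed values, not from any particular radar): mapping one-sigma range and angle errors into local Cartesian axes at increasing distance shows the along-range axis staying put while the cross-range axes grow in proportion to range.

```python
# Illustrative "flattened beachball": along-range vs. cross-range one-sigma axes.
import numpy as np

sigma_range = 5.0          # meters, assumed
sigma_angle = 1.0e-3       # radians, assumed (roughly milliradian-class angle accuracy)

for range_m in (2.0e3, 20.0e3, 200.0e3):
    # local axes: first component along range, the other two cross-range
    axes = np.array([sigma_range, range_m * sigma_angle, range_m * sigma_angle])
    print(f"range {range_m/1e3:6.0f} km: one-sigma axes (m) = {np.round(axes, 1)}")

# Treating such a fix as a sphere of radius max(axes), i.e., working with
# coordinates instead of the measurements, inflates the uncertainty volume by
# roughly (max/min)**2 for each fix.
```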

The feature just noted shows up dramatically when mixing data among multiple platforms. Consider cooperative engagement whereby participants, all tracking each other via network-transmitted GPS observations, share radar observations from an unknown non-participant. Share measurements or coordinates? No contest; multiple lines crossing from different directions can offer best (i.e., along-range) accuracies applicable in 3-D.

That fact (i.e., combining data from different sensors and different platforms further dramatizes available improvements) doesn’t diminish the basic issue; even with a time history of data from only one radar, a designer with direct measurements available — instead of, not in addition to, coordinates — can provide incomparably superior performance.

“Send Measurements not Coordinates” (1999; #66 from the “Published Articles” panel, opening with eight rock-solid reasons) was aimed at GPS rather than radar. Many of the principles are the same when mixing radar data with information from other platforms — and from other sensors such as GPS. There is no reason, in fact, why data from satellite navigation and radar can’t be combined in the same estimation algorithm. That practice hasn’t evolved, but the historical separation of operations (e.g., navigation and surveillance), arising from old equipment limitations, should no longer be a constraint. Moreover, with focus shifted from tracking to navigation, integration with additional (e.g., inertial) data offers still more reasons for using direct measurements. Rather than loose integration, superior benefits are widely known to result as the sophistication progresses (tight, ultratight, and deep integration).

Further elaborations on “casting off our old habits” appear from different perspectives in various blogs, one-pagers, and a few manuscripts available at this site. If your library has a copy of GNSS Aided Navigation & Tracking, pages 203-4 show a way to implement the cooperative sharing of radar data obtained from a non-participant, among participants tracking each other via the mutual surveillance and tracking approach defined earlier in that same chapter.

Because so many operational systems (in fact, a vast majority) use reports in the form of coordinates, reiteration is warranted. The central issue is the content, not the amount, of data. Rather than coordinates, provide accurately time-stamped direct measurements with links identifying whichever platform observed the data (e.g., for satnav — pseudoranges; for radar — range, azimuth, elevation). Those links are automatically attached when Mode-S extended squitter (e.g., as chosen for ADSB) is the means for conveying the data.  For message content, strictly disallow “massaging the data beyond the light of day” (e.g., by unknown processes, with uncertain timing, … ), which invariably results in the enormous loss of performance commonly occurring today.
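
As one way to picture such a report, here is a hedged sketch of a raw-measurement message: an accurate time stamp, a link to the observing platform, and the direct observables named above. The field names are purely illustrative; they are not drawn from Mode-S, ADSB, or any other standard.

```python
# Illustrative raw-measurement report: time-stamped direct observables, tied to the observer.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SatnavMeasurement:
    sv_id: int
    pseudorange_m: float
    carrier_phase_cycles: Optional[float] = None

@dataclass
class RadarMeasurement:
    range_m: float
    azimuth_rad: float
    elevation_rad: float
    doppler_mps: Optional[float] = None        # optional, as noted for radar above

@dataclass
class RawMeasurementReport:
    platform_id: str                           # link to whoever observed the data
    time_of_validity_s: float                  # accurate time stamp, not massaged
    satnav: List[SatnavMeasurement] = field(default_factory=list)
    radar: List[RadarMeasurement] = field(default_factory=list)

report = RawMeasurementReport("N123AB", 123456.789,
                              satnav=[SatnavMeasurement(7, 21_456_789.3)])
print(report)
```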

CONING in STRAPDOWN SYSTEMS

Free-inertial navigation uses accelerometers and gyros alone, unaided. For that purpose pioneers of yesteryear developed a variety of techniques, ranging from a 2-sample approach (NASA TND-5384, 1969) by Jordan to his and various others’ higher-order algorithms to reduce errors from noncommutativity of finite rotations in the presence of coning (and/or pseudoconing). The methods showed considerable insight and produced successful operation. Since it’s always good to have “another tool in the toolbox” I’ll mention here an alternative. What I describe here isn’t being used but, with today’s processing capabilities, could finally become practical. The explanation will require some background information; I’ll try to be brief.

A very old investigation (“Performance of Strapdown Inertial Attitude Reference Systems,” AIAA Journal of Spacecraft and Rockets, Sept 1966, pp 1340-1347) used the usual small-angle representation for attitude error expressed in the vehicle frame. With that frame rotating at a rate omega, the derivative of that error vector contains a cross product of the vector with omega.  One contributor to that product is a lag effect: omega premultiplied by a diagonal matrix of delays (e.g., transport lags equated to reciprocals of gyro bandwidths). Mismatch among those diagonal elements produces drift components with nonzero average; e.g., the x-component of the cross product is easily seen to be

     (difference between y and z lags) × (omega_y) × (omega_z)

Even with zero-average (e.g., oscillatory) angular rates, that product has a nonzero average due to rectification.  I then characterized the lags as delays from computation rather than from the gyros, with the lag differences now proportional to nonuniformities among RMS angular rate components along vehicle axes, and average products proportional to cross-correlation coefficients of the angular rate components. That was easy; I had a simple model enabling me to calculate the error due to finite gyro sampling rates producing finite rotation increments that don’t commute.
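
A quick numerical illustration of that rectification (lag and rate values are made up): zero-average oscillatory rates about y and z, when correlated, leave the x-component of the cross product with a nonzero average.

```python
# Rectification of zero-mean oscillatory rates through a lag mismatch.
import numpy as np

t = np.arange(0.0, 100.0, 1.0e-3)                    # seconds
omega_y = 0.05 * np.sin(2*np.pi*2.0*t)               # rad/s, zero average
omega_z = 0.05 * np.sin(2*np.pi*2.0*t + np.pi/4)     # correlated with omega_y

tau_y, tau_z = 1.0e-3, 1.5e-3                        # seconds: mismatched lags
drift_x = (tau_y - tau_z) * omega_y * omega_z        # x-component of the cross product

print("average omega_y, omega_z:", omega_y.mean(), omega_z.mean())   # both near zero
print("average x drift (rad/s): ", drift_x.mean())                   # nonzero
print("               (deg/hr): ", np.degrees(drift_x.mean()) * 3600.0)
```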

A theoretical model is only that until it is validated. I had to come up with a validation method within mid-1960s computational limitations. The solution came from a basic realization: performance doesn’t degrade from what’s happening, but from belief in occurrences that aren’t happening. The first-ever report of coning (Goodman and Robinson, ASME Trans, June 1958) came from a gimballed platform that was believed to be stable while it was actually coning. If the true coning motion they described had been known and taken into account, their high drift rates never would have occurred. The reason it wasn’t taken into account then was narrow gimbal servo bandwidth; the gyros responded to the coning frequency but the platform servos didn’t. Now consider strapdown with the inverse problem: pseudoconing — a vehicle believed to experience coning when it isn’t. That will fall victim to the same departure of perception from reality. If you subjected a strapdown gyro triad to the same Goodman and Robinson coning motion and sampled it every nanosecond, the effect from noncommutativity wouldn’t be noticeable.

Armed with that insight I then chose rotational dynamics with a closed-form solution. Although rotations about fixed vehicle axes produced no coning, the pseudoconing was severe, with the apparent (reported-from-gyros) rotation axis changing radically within fractions of a millisecond, too fast for the 10-kHz data rate used in that computation.  The cross-product formulation was then validated by making extensive sets of runs, always comparing two time histories:

* a closed form solution for a true direction cosine matrix corresponding to a vehicle experiencing a sinusoidal omega
* an apparent direction cosine matrix, obtained by brute-force but meticulous formation from processing gyro outputs at finite rates with quantization, time lags, and a wide variety of error sources.

That “bull-by-the-horns” computation allowed extended runs (up to a million attitude iterations) to be made for a wide range of angular rate frequencies, axis directions, and combinations of gyro input errors (steady, random, motion-sensitive, etc.). Deviation of apparent attitude from closed-form truth was consistently in close conformance to the analytical model, for a host of error sources. I have to admit that this “bull-by-the-horns” approach gave me the advantage of finding out answers before I understood the reasons for them. The cross-product analytical model didn’t come from my vision; it came after much head-scratching over answers computed from dozens of runs. A breakthrough came from the sensitivity, completely unanticipated, to angular acceleration about gyro output axes — clear in retrospect but not initially. After these experiences it occurred to me: if cross-axis covariances were known, the dominant contributor to errors — including noncommutativity — could be counteracted. I noted that on page 1342 of that old AIAA paper.
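
A present-day miniature of that comparison is sketched below: a closed-form attitude history for a coning motion serves as truth, ideal gyro angle increments are derived from it at a high sample rate, and an "apparent" attitude is formed by naively summing those increments over a coarser update interval. Cone angle, frequencies, and rates are invented; this illustrates the comparison idea, not the original 1960s computation or its full list of error sources.

```python
# Closed-form truth vs. brute-force attitude built from sampled gyro increments.
import numpy as np

def qmult(p, q):                                     # Hamilton product, [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def quat_from_rotvec(v):
    ang = np.linalg.norm(v)
    if ang < 1.0e-12:
        return np.array([1.0, 0.5*v[0], 0.5*v[1], 0.5*v[2]])
    return np.concatenate(([np.cos(ang/2.0)], np.sin(ang/2.0) * v / ang))

def rotvec_from_quat(q):
    q = q / np.linalg.norm(q)
    s = np.linalg.norm(q[1:])
    return np.zeros(3) if s < 1.0e-12 else (2.0 * np.arctan2(s, q[0]) / s) * q[1:]

def coning_quat(t, cone=0.02, freq_hz=10.0):         # closed-form truth attitude
    w = 2.0 * np.pi * freq_hz * t
    return np.array([np.cos(cone/2.0),
                     np.sin(cone/2.0)*np.cos(w),
                     np.sin(cone/2.0)*np.sin(w), 0.0])

dt_gyro, n_per_update, n_updates = 1.0e-4, 50, 400   # 10 kHz samples, 200 Hz attitude
q_app, t = coning_quat(0.0), 0.0
for _ in range(n_updates):
    dtheta_sum = np.zeros(3)
    for _ in range(n_per_update):
        dq = qmult(qconj(coning_quat(t)), coning_quat(t + dt_gyro))  # ideal increment
        dtheta_sum += rotvec_from_quat(dq)           # naive summation within interval
        t += dt_gyro
    q_app = qmult(q_app, quat_from_rotvec(dtheta_sum))
q_err = qmult(qconj(coning_quat(t)), q_app)
print("attitude error after %.1f s (rad):" % t, rotvec_from_quat(q_err))
# set n_per_update = 1 (update every gyro sample) and the error essentially vanishes
```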

Finally I can describe the alternative means of compensating the dominant computational error. The description begins with the reason why it would be useful. Earlier I mentioned that many authors developed very good algorithms to reduce errors from noncommutativity of finite rotations in the presence of coning and/or pseudoconing. All that history, plus more detailed presentation of everything discussed here, can be found in Chapters 3 and 4 of my 1976 book plus Addendum 7.A of my 2007 book. A supreme irony upstages much of the work from those brilliant authors: without accounting for gyro frequency response characteristics, the intended benefit can be lost — or the “compensation” can even become counterproductive (Mark and Tazartes, AIAA Journal of Guidance, Control, & Dynamics, Jul-Aug 2006, pp 641-647). As if those burdens weren’t enough, the adjustment’s complexity — as shown in that paper — can be extensive. That motivates usage of a simpler procedure.

By now I’ve put so much explanation into preparing its description that not much more is needed to define the method. Today’s signal-processing boards enable the requisite covariances to be repetitively computed. Then just form the vector cross product already described and subtract the result from the gyro increments ahead of attitude updating. So much for coning and pseudoconing — but I’m not quite finished yet. The paper just cited leads to another consideration: even if we successfully removed all of the error theoretically arising from inexact computation, significant improvement in free-inertial performance would require more. Operation in the presence of vibrations would necessitate reduction of other motion-sensitive errors. Gyro degradations from rotations, for example, would have to be compensated — and that includes a multitude of components. For that topic you can begin with the discussion of gyro mounting misalignment, following that up with the tables in Chapter 4 of my 1976 book and Addendum 4.B of my 2007 book.
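
In schematic form (placeholder lag values; the actual characterization, via the covariances mentioned above, is in the cited chapters), the mechanics look like this: form the cross product from the current window of rate samples and subtract its accumulated effect from the summed gyro increments before the attitude update.

```python
# Schematic only: subtract the cross-product correction from summed gyro increments.
import numpy as np

def compensated_increment(omega_window, dt, lags):
    """omega_window: (N, 3) gyro rate samples over one attitude update interval;
    dt: gyro sample period (s); lags: assumed per-axis effective delays (s)."""
    dtheta = omega_window.sum(axis=0) * dt                     # raw summed increment
    lagged = omega_window * np.asarray(lags)                   # per-axis delayed rates
    drift_rate = np.cross(omega_window, lagged).mean(axis=0)   # omega x (L * omega)
    return dtheta - drift_rate * (len(omega_window) * dt)      # subtract before update

# made-up oscillatory rates and lags; one 5-ms attitude update at 10-kHz sampling:
t = np.arange(50) * 1.0e-4
w = np.column_stack([0.05 * np.sin(2*np.pi*20.0*t),
                     0.05 * np.sin(2*np.pi*20.0*t + 0.8),
                     np.zeros_like(t)])
print(compensated_increment(w, 1.0e-4, [1.0e-3, 1.5e-3, 1.2e-3]))
```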

Life before GPS

Before GPS took so many operations by storm (e.g., navigation, tracking, timing, surveying, etc.), designers had access to other — far less capable — provisions.  That condition forced our hands; to derive maximum benefit from what was available, we had to extract the full information content of those provisions.  Now that GPS is subjected to challenges (aging, jamming, spoofing, etc.), some of those older methods are receiving increased scrutiny.  Recently I’ve seen renewed interest in areas I analyzed decades ago.  Old publications from two of those areas are discussed here: 1) attitude determination and 2) nav integration.

“Attitude Determination by Kalman Filtering” is the title of three documents I had published.  In reverse sequence they are:
1) Automatica (IFAC Journal), v6 1970, pp. 419-430,
2) my Ph.D. dissertation (Univ. of Maryland, 1967),
3) NASA CR-598, Sept., 1966.
As indicated by the last reference, the work was the result of a contractual study sponsored by NASA (specifically Goddard Space Flight Center – GSFC – in Greenbelt, Maryland).  I was working for Westinghouse Defense and Space Center at the time.  The proposal I had written to win this contract cited my work prior to then, in both modern estimation (“Simulation of a Minimum Variance Orbital Navigation System,” AIAA JSR v 3 Jan 1966 pp. 91-98) and attitude computation (“Performance of Strapdown Inertial Attitude Reference Systems,” AIAA JSR v 3 Sept 1966, pp. 1340-1347).  Let me hasten to explain the dates of those Journal publications: each followed its inclusion at an AIAA-sponsored conference, about a year earlier.

By the mid-1960s there was an appreciable amount of validation for Kalman filtering applied to determination of orbits (that track record was convincing) but not yet for attitude.  A GSFC-sponsored investigation was then planned — the very first one for attitude using modern estimation methods.  GSFC management understandably wanted that contractual investigation to be performed by someone with demonstrable experience in both Kalman filtering and rotational dynamics.  In those days that combination was rare; the Westinghouse proposal was chosen as the winner.  At the time of that study, provisions realistically available for attitude updating consisted of mediocre-accuracy items such as magnetometers and horizon scanners — not bad but not spectacular either.
All that was of course before GPS weighed in, with its opportunity to reveal attitude from phase differences between antennas spaced at known distances apart.  That vastly superior capability effectively reduced earlier crude measurements to relative obscurity.  A directly parallel situation occurred in connection with navigation; the book that first tied together several facets of advancement in that field (integration, strapdown inertial, modern estimation with acceptance of all data sources, multimode operation, extension to tracking, clear exposition of all commonly used representations of attitude, etc.) was “pre-GPS” (1976), and consequently regarded as less relevant. Timing can be decisive — that’s no one’s fault.
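
To illustrate the phase-difference idea mentioned at the start of that paragraph (geometry and noise values invented; ambiguity resolution and lever-arm details ignored): between-antenna phase differences, converted to meters, measure the projection of the known-length baseline onto each satellite's line of sight, so a handful of satellites determine the baseline direction, hence heading and pitch.

```python
# Baseline direction (hence heading/pitch) from between-antenna phase differences.
import numpy as np

baseline_len = 1.0                                     # meters between the two antennas
los = np.array([[ 0.3,  0.5, 0.81],                    # assumed unit LOS vectors to SVs
                [-0.6,  0.2, 0.77],
                [ 0.1, -0.7, 0.70],
                [ 0.7, -0.1, 0.70]])
los /= np.linalg.norm(los, axis=1, keepdims=True)

true_baseline = baseline_len * np.array([np.cos(0.3), np.sin(0.3), 0.0])      # 0.3-rad heading
meas = los @ true_baseline + np.random.default_rng(1).normal(0.0, 0.003, 4)   # ~3 mm noise

b_est, *_ = np.linalg.lstsq(los, meas, rcond=None)     # least-squares baseline estimate
print("estimated heading (rad):", np.arctan2(b_est[1], b_est[0]))
print("estimated pitch   (rad):", np.arcsin(np.clip(b_est[2] / baseline_len, -1.0, 1.0)))
```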

The item just noted — attitude representation — is worth further discussion here.  Unlike many other sources, the 1976 book offered an opportunity to use quaternion properties without any need to learn a specialized quaternion algebra.  A literature search, however, will point primarily to various sources (of necessity, later than 1976) that benefit from the superior performance offered through GPS usage. Again, in view of GPS as a game-changer, that is not necessarily improper.  Most publications on attitude determination don’t cite the first-ever investigation, sponsored by GSFC, for that innocent reason.

The word beginning that last sentence (“Most”) has an exception.  One author, widely quoted as an authority (especially on quaternions), did cite the original work — dismissing it as “ad-hoc” — while using an exact copy of the sensitivity matrix elements published in my original investigation (the three references cited at the start of this blog).
While I obviously didn’t invent either quaternions or the Kalman filter, there was another thing I didn’t do: fail to credit, in my publications, pre-existing sources that contributed to my findings. Publication of the material cited here, I’ve been told, paved the way for understanding and insight to many who followed. No one owes me anything for that; an analyst’s work, truthfully and realistically presented, is what the analyst has to offer.

It is worth pointing out that both the attitude determination study and the 1976 book cover another facet of rotational analysis absent from many other related publications: dynamics — in the sense of physics.  Whereas modern estimation lumps time-variations of the state together into one all-encompassing “dynamic” model, classical physics makes a separation: Kinematics defines the relation between position, rates, and accelerations.  Dynamics determines translational accelerations resulting from forces or rotational accelerations resulting from torques.
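
A small rotational example of that separation (inertia, rates, and step sizes arbitrary): the dynamics step turns torque into angular acceleration through Euler's rigid-body equation, while the kinematics step merely propagates attitude from the resulting rate.

```python
# Dynamics (torque -> angular acceleration) vs. kinematics (rate -> attitude).
import numpy as np

I_body = np.diag([10.0, 12.0, 8.0])                   # kg*m^2, arbitrary inertia

def dynamics(omega, torque):
    # Euler's equation: I * omega_dot = torque - omega x (I * omega)
    return np.linalg.solve(I_body, torque - np.cross(omega, I_body @ omega))

def kinematics(attitude_rotvec, omega, dt):
    # small-angle attitude propagation from the rate (kinematics only)
    return attitude_rotvec + omega * dt

omega, att = np.array([0.01, 0.02, -0.005]), np.zeros(3)   # rad/s, rad
for _ in range(1000):                                      # one second at 1 kHz
    omega = omega + dynamics(omega, np.zeros(3)) * 1.0e-3  # torque-free here
    att = kinematics(att, omega, 1.0e-3)
print("rate:", omega, "attitude (rotation vector):", att)
```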

Despite the absence of GPS from my early (1960s/70s) investigations, one feature that can still make them useful for today’s analysts is the detailed characterization of torques acting — in very different ways — on spinning and gravity-gradient satellites, plus their effects on rotational motion. Many of the later studies focused on the rotational kinematics, irrespective of those torques and their consequences. Similarly, the “minimal-math” approach to explaining integrated navigation has enabled many to grasp the concepts.  Printed testimony to that effect, from courses I taught decades ago, is augmented by a more recent source noted near the end of another page shown on this site.

BOOK on TRACKING

Tracking acceleration dynamics by GNSS, radar, imaging

My 2007 book on GPS and GNSS (GNSS Aided Navigation & Tracking), as its title implies, involves both navigation and tracking. This discussion describes the latter, covered in the longest chapter of the book (Chapter 9).  In addition to the flight-validated algorithms for navigation (processing of inertial sensor data, integration with GPS/GNSS, integrity, etc.), this text offers extensive coverage of tracking. Formulations are given for a variety of modes, in 2-D (e.g., for runway incursion prevention or ships) and 3-D (in-air), using GPS/GNSS and/or other sensors (e.g., radar, optical).  Position and velocity vectors are formed, in some operations joined by some or all components of acceleration.

This author was fortunate to be “at-the-right-places at-the-right-times” when a need arose to address each of the topics covered.  As a result, the words of one reviewer — that the book is

          “teeming with insights that are hard to find or unavailable elsewhere”

apply to tracking as well as to navigation.  The book identifies subtleties that arise in specific applications (aircraft, ships, land vehicles, satellites, long-range or short-range projectiles, reentry vehicles, missiles, … ). In combination with a variety of possible conditions affecting sensor suite and location (air-to-air; air-to-ground; air-to-sea surface; surface-to-air, etc. — with measurements associated with distance or direction or both; shared or not shared among participants who may communicate from different positions), it is not surprising that striking contrasts can arise, influencing the characterization and approaches used.  The array of formulations offered, while fully accounting for marked differences among operations, nevertheless exploits an underlying commonality to the maximum possible extent.

Tracking dynamics of aircraft, missiles, ships, satellites, projectiles, …

Formulations described in Chapter 9 were used for tracking of both aircraft and missiles, concurrently, through usage of an agile beam radar.  For another example, air-to-surface operations subdivide into air-to-ground and vessel tracking from the air.  That latter case constrains tracked objects’ altitudes to mean sea level — a substantial benefit since it obviates the need for elevation measurements, which are subject to large errors from refraction (bearing and range measurements, much less severely degraded, suffice). Air-to-ground tracking, by contrast, further subdivides into stationary and moving targets; the former potentially involves imaging possibilities (by real or synthetic aperture) while the latter — if not being imaged by inverse SAR — separates its signature from clutter via doppler.

Reentry vehicles, quite different from other track operations, present a unique set of “do’s” and “don’ts” owing to high-precision range measurements combined with much larger cross-range errors (because of proportionality to extreme distances involved).  Pitfalls from uncertain axial direction of “pancake” shaped one-sigma error ellipsoids must be avoided.  A counterexample, having angle observations only (without distance measurements), is also addressed.  Orbit determination is unique in still another way, often permitting “patched-conic” modeling for its dynamics.  A program based on Lambert’s theorem provides initial trajectories from two position vectors with the time interval separating them.
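
Not the Lambert program itself, but a minimal two-body propagation of the kind such patched-conic dynamics modeling rests on; a sketch like this can be used to check an initial trajectory between two position fixes (units are km and seconds; mu is Earth's gravitational parameter).

```python
# Minimal two-body (patched-conic style) propagation of [position; velocity].
import numpy as np

MU = 398600.4418                                     # km^3/s^2, Earth

def two_body_deriv(state):
    r, v = state[:3], state[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r)**3])

def propagate(state, dt, steps):
    """Fixed-step RK4 propagation of the 6-state [r; v]."""
    h = dt / steps
    for _ in range(steps):
        k1 = two_body_deriv(state)
        k2 = two_body_deriv(state + 0.5*h*k1)
        k3 = two_body_deriv(state + 0.5*h*k2)
        k4 = two_body_deriv(state + h*k3)
        state = state + (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return state

# example: near-circular LEO state propagated a quarter hour
r0 = np.array([7000.0, 0.0, 0.0])
v0 = np.array([0.0, 7.546, 0.0])
print(propagate(np.concatenate([r0, v0]), 900.0, 900))
```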

Those operations and more are addressed with most observations from radar or other (e.g., infrared imaging) sensors rather than satellite measurements.  That of course applies to tracked objects carrying no squitters. Friendlies tracking one another, however, open the door for using GNSS data.  Those subjects plus numerous supporting functions are discussed at some length in Chapter 9.  Despite very different dynamics applicable to various operations, the underlying commonality (Chapter 2) connects the error propagation traits in their estimation algorithms and also — though widely unrecognized — short-term INS error propagation under cruise conditions (Chapters 2 and 5).  Support operations such as synthetic aperture radar (SAR) and transfer alignment are described in the chapter Addendum.