Once again I am privileged to work with Ohio University Prof. Frank vanGraas in presenting tutorial sessions at the Institute of Navigation’s GNSS-17 conference. For 2017, as in several consecutive previous years, two sessions will cover integrated navigation with Kalman filtering.  Descriptions of the part 1 session and part 2 session are now available online.

By way of background: the first session is introductory.  Each attendee will be given a book with a development aimed at those learning inertial navigation and/or Kalman filtering for the first time.  Prior to the course, my free-to-members online tutorial is recommended.  Also, my three-part matrix tutorial video will be made freely available to attendees.

Prof. vanGraas sponsored, and provided the flight data that enabled, the successful validation of 1-cm/sec RMS velocity vector accuracy obtained from 1-second sequential changes in carrier phase.  Those results for almost an hour in air are provided, with the algorithms used to obtain them, in a more recent book that is given to those attending the second session.

A recent video describes a pair of long-awaited developments that promise dramatic benefits in achievable navigation and tracking performance.  Marked improvements will occur not only in accuracy but also in availability; over four decades this topic has arisen in connection with myriad operations, many documented in material cited from other blogs here.

Surveillance with GPS/GNSS

The age-old Interrogation/Response method for air surveillance was aptly summarized in an important 1996 GPS World article by Garth van Sickle: response from an unidentified IFF transponder is useful only to the interrogator that triggered it.  That author, who served as Arabian Gulf Battle Force operations officer during Desert Storm, described transponders flooding the air with signals.  Hundreds of interrogations per minute in that crowded environment produced a glut of r-f energy – but still no adequate friendly air assessment.

The first step toward solving that problem is a no-brainer: Allocate a brief transmit duration to every participant, each separate from all others.  Replace the Interrogation/Response approach with spontaneous transmissions.  Immediately, then, one user’s information is no longer everyone else’s interference; quite the opposite: each participant can receive every other participant’s transmissions.  In the limit (with no interrogations at all), literally hundreds of participants could be accommodated.  Garble nonexistent.   Bingo.


Sometimes there’s a catch to an improvement that dramatic.  Fortunately that isn’t true of this one.  A successful demo was performed at Logan Airport in the early 1990s by Lincoln Labs, using existing transponders with accepted data formatting (extended squitter).  I then (first in January 1998) made two presentations, one for military operation (publication #60 – click here) and another for commercial aviation (publication #61 – click here), advocating adoption of that method with one important change: transmitting GPS pseudoranges rather than coordinates would enable an enormous increase in performance.  Reasons include cancellation of major errors – which happens when two users subtract scalar measurements from the same satellite, but not when they compare coordinates formed from different sets of satellites.   That, however, only begins to describe the benefit of using measurements (publication #66); continue below:
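
To make the error-cancellation point concrete, here is a minimal numerical sketch (hypothetical values, one satellite, simplified to scalars): when two users difference pseudoranges from the same satellite, the large common-mode terms drop out, leaving only receiver-level noise.

    # Hypothetical illustration: common-error cancellation in differenced
    # pseudoranges from the SAME satellite (all values in meters).
    sat_clock_err = 2.5            # satellite clock error - common to both users
    iono_delay    = 4.0            # ionospheric delay - nearly common over short baselines

    true_range_a = 20_200_000.0    # user A to satellite
    true_range_b = 20_200_150.0    # user B to satellite

    # Each pseudorange carries the common errors plus small receiver noise.
    pr_a = true_range_a + sat_clock_err + iono_delay + 0.3
    pr_b = true_range_b + sat_clock_err + iono_delay - 0.2

    single_diff = pr_b - pr_a      # common terms subtract out
    true_diff   = true_range_b - true_range_a
    print(f"error in differenced measurement: {single_diff - true_diff:+.2f} m")

Coordinates computed independently by each user from different satellite sets contain no such common term to cancel, so their difference retains the full error of both position solutions.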


With each participant receiving every other participant’s transmissions, each has the ability to track all others.  That is easily done because (1) every extended squitter message includes unique source identification, and (2) multiple trackers maintained in tandem have been feasible for years; hundreds of tracks would not tax today’s computing capability at all.  Tracks can be formed by ad hoc stitching together of coordinate differences, but accuracy will not be impressive.  A Kalman tracker fed by those coordinate differences would not only contain the uncancelled errors just noted; nonuniform sensitivities, unequal accuracies, and cross-axis correlations among the coordinate pseudomeasurement errors would also go unaccounted for.  Furthermore, the dynamics (velocity and acceleration) – as derivatives – would degrade even more, and dynamic accuracy is absolutely crucial for the ability to anticipate near-future position (e.g., for collision avoidance).  A sketch of the multi-track bookkeeping appears below.
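
The bookkeeping for tandem trackers really is that simple, as this skeletal sketch suggests (names and message format are illustrative assumptions, not an actual ADSB parser; the filter itself is left as a placeholder):

    # Skeleton: one tracker per participant, keyed by the unique source ID
    # carried in every extended squitter message.
    from collections import defaultdict

    class Tracker:
        """Placeholder for any per-participant filter (e.g., a Kalman tracker)."""
        def __init__(self):
            self.history = []
        def update(self, t, measurement):
            self.history.append((t, measurement))   # real filter correction goes here

    tracks = defaultdict(Tracker)

    def on_squitter(source_id, t, measurement):
        # Unique source identification routes each message to its own filter,
        # so hundreds of simultaneous tracks are just dictionary entries.
        tracks[source_id].update(t, measurement)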


The sheer weight of all the considerations just noted should be more than enough to motivate the industry towards preparing to exploit this capability.  But, wait – there’s more.  Much more, in fact.  For how many years have we been talking about consolidating various systems, so that we wouldn’t need so many different ones?  Well, here’s a chance to provide both 2-dimensional (runway incursion) and 3-dimensional (in-air) collision avoidance with the same system.  The performance benefits alone are substantial but that plan would also overcome a fundamental limitation for each –
* Ground: ASDE won’t be available at smaller airports
* In-air: TCAS doesn’t provide adequate bearing information; conflict resolution is performed with climb/dive.
The latter item doesn’t make passengers happy, especially since that absence of timely and accurate azimuth information prompts some unnecessary “just-in-case” maneuvers.


No criticism is aimed here toward the designers of TCAS; they made use of what was available to them, pre-GPS.  Today we have not just GPS but differential GPS.  Double differencing, which revolutionized surveying two decades ago, could do the same for this 2-D and 3-D tracking.  The only difference would be absence of any requirement for a stationary reference.  All positions and velocities are relative – exactly what the doctor ordered for this application.
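
For readers who haven’t worked through it, the standard double-difference construction (notation mine, not quoted from the articles above) shows why: with pseudorange or carrier measurements rho from users A and B to satellites i and j,

    \nabla\Delta\rho_{AB}^{ij} = (\rho_B^i - \rho_A^i) - (\rho_B^j - \rho_A^j)

Satellite clock error cancels within each single difference; receiver clock error cancels between the two differences; what remains is relative geometry plus noise.  Nothing in the construction requires either receiver to be stationary.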


OK, I promised – not just more but MUCH more.  Now consider what happens when there aren’t enough satellites instantaneously available to provide a full position fix meeting all demands (geometry, integrity validation): partial data that cannot produce an instantaneous position is wasted (no place to go).  But ancient mariners used partial information centuries ago.  If we’re willing to do that ourselves, I’ve shown a rigorously derived but easily used means to validate each separate measurement according to individual circumstances.  A specific satellite might give an acceptable measurement to one user but a multipath-degraded measurement to another.  At each instant of time, any user could choose to reject some data without being forced to reject it all.  My methods are applicable for any frequency from any constellation (GPS, GLONASS, GALILEO, COMPASS, QZSS, … ).
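
The flavor of per-measurement screening being described can be sketched as follows (my illustrative form only, not the rigorous derivation being cited): each scalar residual is gated against its own predicted uncertainty, so the same satellite can pass for one user while failing for another.

    import numpy as np

    def accept_measurement(z, z_pred, h, P, r, k=3.0):
        """Gate ONE scalar measurement against its predicted variance.
        z, z_pred: measured and predicted values; h: sensitivity (1-D row of H);
        P: state covariance; r: measurement variance; k: gate width in sigmas."""
        nu = z - z_pred              # innovation (residual)
        s = h @ P @ h + r            # innovation variance
        return nu * nu <= (k * k) * s

    # Each user evaluates this per satellite, per epoch, with its OWN P and r -
    # which is why one satellite can be accepted by one user and rejected
    # (e.g., for multipath) by another.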


While we’re at it, once we open our minds to sharing and comparing scalar observations, we can go beyond satellite data and include whatever our sensors provide.  Since for a half-century we’ve known how to account for all the nonuniform sensitivities, unequal accuracies, and cross-axis correlations previously mentioned, all incoming data common to multiple participants (TOA, DME, etc.) would be welcome.


So we can derive accurate cross-range as well as along-range relative dynamics, together with position, and with altitude significantly improved to boot.  Many scenarios (those with appreciable crossing geometry) will allow conflict resolution in a horizontal plane via deceleration – well ahead of time rather than requiring a sudden maneuver.  GPS and Mode-S require no breakthrough inventions, and track algorithms already in the public domain carry no proprietary claims.  Obviously, all this aircraft-to-aircraft tracking (with participants in air or on the ground) can be accomplished without data transmitted from any ground station.  All these benefits can be had just by using Mode-S squitter messages with the right content.


There’s still more.  Suppose one participant uses a different datum than the others.  Admittedly that’s unlikely but, for prevention of a calamity, we need to err on the side of caution; “unlikely” isn’t good enough.  With each participant operating in his own world-view, comparing scalar measurements would be safe in any coordinate reference.  Comparing vectors with an unknown mismatch in the reference frame, though, would be a prescription for disaster.  Finally, in Chapter 9 of GNSS Aided Navigation & Tracking I extend the approach to enable sharing observations of nonparticipants.


In the About panel of this site I pledged to substantiate a claim of dramatic improvements afforded by methods to be presented.  This operation is submitted as one example satisfying that claim.  Many would agree (and many have agreed) that the combined reasons given for the above plan are compelling.  Despite that, there is no commitment by the industry to pursue it.  ADSB is moving inexorably in a direction that was set years ago.  That’s a reality – but it isn’t the only reality.  The world has its own model; it doesn’t depend on how we characterize it.  It’s up to us to pattern our plans in conformance to the real world, not the other way around.  Given the stakes I feel compelled to advocate moving forward with a pilot program of modest size – call it “Post-Nextgen” – having the robustness to recover from severe adversity.  Let’s get prepared.

In 2013 a phone presentation was arranged for me to talk for an hour with a couple dozen engineers at Raytheon. The original plan was to scrutinize the many facets and ramifications of timing in avionics. The scope expanded about halfway through, to include topics of interest to any participant. I was gratified when others raised issues that have been of major concern to me for years (in some cases, even decades).  Receiving a reminder from another professional that I’m not alone in these concerns prompts me to reiterate at least some aspects of the ongoing struggle — but this time citing a recent report of flight test verification.

The breadth of the struggle is breathtaking. The About panel of this site offers short summaries, all confirmed by authoritative sources cited therein, describing the impact on each of four areas (satnav + air safety + DoD + workforce preparation). Shortcomings in all four areas are made more severe by continuation of outdated methods, as unnecessary as they are fundamental. Not everyone wants to hear this but it’s self-evident: conformance to custom — using decades-old design concepts (e.g., TCAS) plus procedures (e.g., position reports) and conventions (e.g., interface standards) — guarantees outmoded legacy systems. Again, while my writings on this site and elsewhere — advocating a different direction — go back decades, I’m clearly not alone (e.g., recall those authoritative sources just noted). Changing more minds, a few at a time, can eventually lead to correction of shortcomings in operation.

We’re not pondering minor improvements, but dramatic ones. To realize them, don’t communicate with massaged data; put raw data on the interface. Communicate in terms of measurements, not coordinates — that’s how DGPS became stunningly successful. Even while using all the best available protection against interference (including anti-spoof capability), follow through and maximize your design for robustness; expect occurrences of poor GDOP &/or less than a full set of SVs instantaneously visible. Often that occurrence doesn’t really constitute loss of satnav; when it’s accompanied by a history of 1-sec changes in carrier phase, those high-accuracy measurements prevent buildup of position error. With 1-sec carrier phase changes coming in, the dynamics don’t veer toward any one consistent direction; only location drifts during position data deficiencies (poor GDOP &/or incomplete fixes) and, even then, only within limits allowed by that continued accurate dynamic updating. Integrity checks also continue throughout.
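
A minimal sketch of the idea (simplified to a single line of sight; the wavelength constant is GPS L1): sequential 1-second carrier-phase differences act as precise delta-range measurements, pinning down velocity even while a full position fix is unavailable.

    # Simplified scalar illustration: 1-sec carrier-phase changes as
    # precise delta-range (hence velocity) measurements.
    WAVELENGTH_L1 = 0.1903          # meters per carrier cycle (GPS L1)

    def delta_range(phase_now, phase_prev):
        """Change in line-of-sight range over one second, from carrier
        phase in cycles.  Sub-centimeter noise on this difference is what
        supports ~1 cm/sec RMS velocity accuracy."""
        return (phase_now - phase_prev) * WAVELENGTH_L1

    # Line-of-sight velocity ~ delta_range(...) / 1.0 sec; the filter
    # combines these across however many satellites are visible - no full
    # position fix required.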

So then, take into account the crucial importance of precise dynamic information when a full position fix isn’t instantaneously available. Take what’s there and stop discarding it. Redefine requirements to enable what ancient mariners did suboptimally for many centuries — and we’ve done optimally for over a half-century.  Covariances combined with monitored residuals can indicate quality in real time. Aircraft separation means maintaining a stipulated relative distance between them, irrespective of their absolute positions and errors in their absolute positions. None of this is either mysterious or proprietary, and none of this imposes demands for huge budgets or scientific breakthroughs — not even corrections from ground stations.

A compelling case arises from cumulative weight of all these considerations. Parts of the industry have begun to address it. Ohio University has done flight testing (mentioned in the opening paragraph here) that validates the concepts just summarized. Other investigations are likely to result from recent testing of ADSB. No claim is intended that all questions have been answered, but — clearly — enough has been raised to warrant a dialogue with those making decisions affecting the long term.

Avionics Commonality

A LinkedIn discussion centered on the Future Airborne Capability Environment (FACE) standard contained an important observation concerning certification.  Granted — requirements for validation, with acceptance by governing agencies, definitely are essential for safety. What follows here is advocacy for a proposed way to realize the common avionics benefits offered by FACE while retaining (and in fact, improving) the process of certification. Reasoning is based on three major items:

* CHANGE. In many respects this has necessitated improved standards. 

* HISTORY. Spectacular failures in what we have now are widely documented.

* COST. The status quo is (and, for a long time, has been) unaffordable.

In regard to the first item: the pace of change in so many areas (hardware, software, operating systems, data communication, etc., etc., etc.) — and the effects on procurement cycles — are well known. How can certification remain unchanged when nothing else does? That argument would be undercut if the process had a rock-solid track record — but that theme would not be supported by the second item — history:

Myriad shortcomings of existing operational systems are so pervasive that no one is considered a “loose cannon” for openly discussing them. Any of my horror stories — too strange and too numerous to be revisited here — would be trumped anyway by a document from the government itself. GAO-08-467SP, in 2008, described outlandish cost overruns, schedule delays, and deficient technical performance in the defense industry. That 3-way combination speaks for itself. Now a significant addition: the certification process has not been at all immune to serious flaws. The first-ever certified GPS receiver is now well known to have failed spectacularly in multiple facets of integrity testing by another manufacturer. It is readily acknowledged that correction of those early problems is quite credible, but one issue is inescapable: historical proof of flightworthiness improperly bestowed — with proprietary rights accepted for algorithms and tests — happened, and that was not widely known until much later.

There is still more, including integrity failure probability limits missed by orders-of-magnitude in certified GPS receivers, severe limitations of GO/NO-GO testing, and failed attempts to gain approval to set requirements for correcting those plus other deficiencies. For brevity here, those issues are covered by citing the fifth page from another related reference.

The final item, after years of fruitless talk about cost reduction, is at last being acknowledged — we can’t do what we’ve been doing any more.  With dollars being the ultimate driver of so many decisions, we might finally see the necessary break from ingrained habits. FACE already addresses the issues and the requisite justifications. To make it all happen, two essential ingredients are

* raw-data-across-the-board, and

* nonproprietary software, with standardization under government control.

Flight-validated algorithms already in existence can be converted (e.g., from proof-of-concept to in-flight real-time form) according to government specification, by small groups more interested in engineering than in dollars (yes, that does exist). The payoff in cost savings can be huge.

Significant momentum is evolving toward a role for Open System Architecture (OSA) applied to radar. My observations in connection with that, voiced in a short LinkedIn discussion, seem worth repeating here.

One step could add major impact to this development: Rather than position (or relative position) outputs, provide measured range, azimuth, elevation (doppler could optionally be added if applicable) on the output interface. That simple step would vastly improve effectiveness of track file maintenance. Before attempting to describe all reasons for improved performance, two obvious benefits can be identified first:
* ability to use partial information (e.g., range-only or, for passive operation, angle-only)
* proper weighting of data for updating track state estimates.
The first item is self-evident. The second arises from common-sense attachment of greater value to the most accurate information. An explanation:

One-sigma error ellipsoids for individual radar fixes are not spherical (not a beachball shape but more like a flattened beachball), even at close range. At longer distances the shape progresses from a frisbee to a pancake to a DVD. Kalman filtering has enabled us to capitalize on that feature for over a half-century. Without exploiting it, we effectively treat separate radar-derived “coordinates” by intersecting volumes in space that are common to overlapping spheres. Resulting uncertainty volume is enormously larger than it should be.
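
A small numerical illustration of that flattening (assumed sensor accuracies, not from any particular radar): along-range uncertainty stays at the range-measurement level while cross-range uncertainty grows linearly with distance.

    # Along-range vs. cross-range one-sigma errors for an assumed radar.
    SIGMA_RANGE_M = 5.0         # range accuracy, meters (assumed)
    SIGMA_ANGLE_RAD = 2e-3      # azimuth/elevation accuracy, radians (assumed)

    for range_m in (2_000, 20_000, 200_000):
        cross = range_m * SIGMA_ANGLE_RAD      # grows with distance
        print(f"{range_m:>7} m: along-range {SIGMA_RANGE_M:.0f} m, "
              f"cross-range {cross:.0f} m  (ratio {cross / SIGMA_RANGE_M:.0f}:1)")

    # ~1:1 at 2 km (flattened beachball), 8:1 at 20 km, 80:1 at 200 km (DVD).
    # Treating each fix as a sphere throws the along-range accuracy away.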

The feature just noted shows up dramatically when mixing data among multiple platforms. Consider cooperative engagement whereby participants, all tracking each other via network-transmitted GPS observations, share radar observations from an unknown non-participant. Share measurements or coordinates? No contest; multiple lines crossing from different directions can offer best (i.e., along-range) accuracies applicable in 3-D.

That fact (i.e., combining data from different sensors and different platforms further dramatizes available improvements) doesn’t diminish the basic issue; even with a time history of data from only one radar, a designer with direct measurements available — instead of, not in addition to, coordinates — can provide incomparably superior performance.

“Send Measurements not Coordinates” (1999; #66 from the “Published Articles” panel, opening with eight rock-solid reasons) was aimed at GPS rather than radar. Many of the principles are the same when mixing radar data with information from other platforms — and from other sensors such as GPS. There is no reason, in fact, why data from satellite navigation and radar can’t be combined in the same estimation algorithm. That practice hasn’t evolved, but the historical separation of operations (e.g., navigation and surveillance), arising from old equipment limitations, should no longer be a constraint. Moreover, with focus shifted from tracking to navigation, integration with additional (e.g., inertial) data offers still more reasons for using direct measurements. Rather than loose integration, superior benefits are widely known to result as the sophistication progresses forward (tight, ultratight, and deep integration).

Further elaborations on “casting off our old habits” appear from different perspectives in various blogs, one-pagers, and a few manuscripts available at this site. If your library has a copy of GNSS Aided Navigation & Tracking, pages 203-4 show a way to implement the cooperative sharing of radar data obtained from a non-participant among participants tracking each other via the mutual surveillance and tracking approach defined earlier in that same chapter.

Because so many operational systems (in fact, a vast majority) use reports in the form of coordinates, reiteration is warranted. The central issue is the content, not the amount, of data. Rather than coordinates, provide accurately time-stamped direct measurements with links connecting each observation to whichever platform observed it (e.g., for satnav — pseudoranges; for radar — range, azimuth, elevation). Those links are automatically attached when Mode-S extended squitter (e.g., chosen for ADSB) is the means for conveying the data.  For message content, strictly disallow “massaging the data beyond the light of day” (e.g., by unknown processes, with uncertain timing, … ), which invariably results in enormous loss of performance — a common occurrence today.

Life before GPS

Before GPS took so many operations by storm (e.g., navigation, tracking, timing, surveying, etc.), designers had access to other — far less capable — provisions.  That condition forced our hands; to derive maximum benefit from what was available, we had to extract full information content from those provisions.  Now that GPS is subjected to challenges (aging, jamming, spoofing, etc.), some of those older methods are receiving increased scrutiny.  Recently I’ve received renewed inquiries about areas I analyzed decades ago.  Old publications from two of those areas are discussed here: 1) attitude determination and 2) nav integration.

“Attitude Determination by Kalman Filtering” is the title of three documents I had published.  In reverse sequence they are:
1) Automatica (IFAC Journal), v6 1970, pp. 419-430,
2) my Ph.D. dissertation (Univ. of Maryland, 1967),
3) NASA CR-598, Sept., 1966.
As indicated by the last reference, the work was the result of a contractual study sponsored by NASA (specifically Goddard Space Flight Center – GSFC – in Greenbelt, Maryland).  I was working for Westinghouse Defense and Space Center at the time.  The proposal I had written to win this contract cited my work prior to then, in both modern estimation (“Simulation of a Minimum Variance Orbital Navigation System,” AIAA JSR v 3 Jan 1966 pp. 91-98) and attitude computation (“Performance of Strapdown Inertial Attitude Reference Systems,” AIAA JSR v 3 Sept 1966, pp. 1340-1347).  Let me hasten to explain the dates of those Journal publications: each followed its inclusion at an AIAA-sponsored conference about a year earlier.

By the mid-1960s there was an appreciable amount of validation for Kalman filtering applied to determination of orbits (that track record was convincing) but not yet for attitude.  A GSFC-sponsored investigation was then planned — the very first one for attitude using modern estimation methods.  GSFC management understandably wanted that contractual investigation to be performed by someone with demonstrable experience in both Kalman filtering and rotational dynamics.  In those days that combination was rare; the Westinghouse proposal was chosen as the winner.  At the time of that study, provisions realistically available for attitude updating consisted of mediocre-accuracy items such as magnetometers and horizon scanners – not bad but not spectacular either.
All that was of course before GPS weighed in, with its opportunity to reveal attitude from phase differences between antennas spaced at known distances apart.  That vastly superior capability effectively reduced earlier crude measurements to relative obscurity.  A directly parallel situation occurred in connection with navigation; the book that first tied together several facets of advancement in that field (integration, strapdown inertial, modern estimation with acceptance of all data sources, multimode operation, extension to tracking, clear exposition of all commonly used representations of attitude, etc.) was “pre-GPS” (1976), and consequently regarded as less relevant. Timing can be decisive — that’s no one’s fault.

The item just noted — attitude representation — is worth further discussion here.  Unlike many other sources, the 1976 book offered an opportunity to use quaternion properties without any need to learn a specialized quaternion algebra.  A literature search, however, will point primarily to various sources (of necessity, later than 1976) that benefit from the superior performance offered through GPS usage. Again, in view of GPS as a game-changer, that is not necessarily improper.  Most publications on attitude determination don’t cite the first-ever investigation, sponsored by GSFC, for that innocent reason.

The word beginning that last sentence (“Most”) has an exception.  One author, widely quoted as an authority (especially on quaternions), did cite the original work — dismissing it as “ad-hoc” — while using an exact copy of the sensitivity matrix elements published in my original investigation (the three references cited at the start of this blog).
While I obviously didn’t invent either quaternions or the Kalman filter, there was another thing I didn’t do: fail to credit, in my publications, pre-existing sources that contributed to my findings. Publication of the material cited here, I’ve been told, paved the way for understanding and insight to many who followed. No one owes me anything for that; an analyst’s work, truthfully and realistically presented, is what the analyst has to offer.

It is worth pointing out that both the attitude determination study and the 1976 book cover another facet of rotational analysis absent from many other related publications: dynamics — in the sense of physics.  Whereas modern estimation lumps time-variations of the state together into one all-encompassing “dynamic” model, classical physics makes a separation: Kinematics defines the relation between position, rates, and accelerations.  Dynamics determines translational accelerations resulting from forces or rotational accelerations resulting from torques.
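
As a concrete instance of that separation (standard rigid-body relations, stated here for illustration rather than quoted from either reference): kinematics propagates attitude from angular rate, while dynamics produces the rate from applied torque.

    \dot{\mathbf{q}} = \tfrac{1}{2}\,\mathbf{q} \otimes \boldsymbol{\omega}
        \quad \text{(kinematics: quaternion rate from body rates)}

    \mathbf{I}\,\dot{\boldsymbol{\omega}} + \boldsymbol{\omega} \times (\mathbf{I}\,\boldsymbol{\omega}) = \mathbf{T}
        \quad \text{(dynamics: Euler's equation, rates from torques)}

Here q is the attitude quaternion, ω the body rate vector, I the inertia tensor, and T the applied torque.  Spinning and gravity-gradient satellites differ precisely in that T term, which is why the torque characterization discussed next still matters.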

Despite absence of GPS from my early (1960s/70s) investigations, one feature that can still make them useful for today’s analysts is the detailed characterization of torques acting — in very different ways — on spinning and gravity-gradient satellites, plus their effects on rotational motion. Many of the later studies focused on the rotational kinematics, irrespective of those torques and their consequences. Similarly, the “minimal-math” approach to explaining integrated navigation has enabled many to grasp the concepts.  Printed testimony to that effect, from courses I taught decades ago, is augmented by a more recent source noted near the end of another page shown on this site.

An early comment sent to this site raised a question as to how long I’ve been doing this kind of work.  Yes I’m an old-timer.  Some of my earlier Kalman filter studies are cited in books dating back to the 1970s — e.g., Jazwinski, Stochastic Processes and Filtering Theory, 1970 (page 267); Bryson & Ho, Applied Optimal Control, 1975 (page 374); Spilker, Digital Communication by Satellite, 1977 (page 636).  My first book, published by Academic Press, initially appeared in 1976.

In the early 1960s, not long after Kalman’s ASME breakthrough paper on optimal filtering, I was at work simulating its effectiveness for orbit determination (publication #4).  No formal recognition of EKF existed at that time, but nonlinearities in both dynamics and observables made that course of action an obvious choice.  In 1967 I applied it to attitude determination for my Ph.D. dissertation (publication #9). Shortly thereafter I wrote a program (publication #16) for application to deformations of a satellite so large (end-to-end length taller than the Empire State Building) that its flexural oscillations were too slow to allow decoupling from its rotational motion (publications #10, 11, 12, 14, 15, 27).  Within that same time period I analyzed and simulated strapdown inertial navigation (publications #6, 7, 8).

Early familiarization with Kalman filtering and inertial navigation paid huge dividends during subsequent efforts in other areas.  Those included, at first, doppler nav with a time-shared radar beam (publication #20), synthetic aperture radar (publications #21, 22, 38, 41), synchronization (publication #19), tracking (publications #23, 24, 28, 30, 32, 36, 39, 40, 48, 52, 54, 60, 61, 66, 67, 69), transfer alignment (publications #29, 41, 44), software validation (publications #34, 42), image fusion (publications #43, 49), optimal control (publication #33), plus a few others.  All these efforts made it quite clear to me — there’s much more to all this than sets of equations.

Involvement in all those fields had a side effect of delaying my entry into GPS work; I was a latecomer when the GPS pioneers were already established.  GPS/GNSS is heavily involved, however, in much of my later work (the latter half of my publications) — and my work in other areas produced a major benefit: the experience provided insights which, in the words of one reviewer quoted in the book description (click here), are either hard to find or unavailable anywhere else.  Recognizing opportunities for synergism — many still absent from today’s operational systems — enabled me to cross the line into advocacy (publications #26, 47, 55, 63, 66, 68, 73, 74, 77, 83, 84, 85, 86).  Innovations present in GNSS Aided Navigation and Tracking were either traceable to or enhanced by my earlier familiarization with techniques used in other areas.

For steady state, a suboptimal estimator can be designed with near-optimal performance.  A Kalman filter, though, optimizes accuracy during transients too – provided that the model is known and linear.  Immediately we’ll invoke the “almost/most/if” qualification: an extended Kalman filter (EKF) is almost optimum, throughout most of its operation, if the model is almost linear and modeling errors are held in check via process noise.  Rather than presenting justification here I’ll cite a set of “do’s-and-don’ts” – validated by long experience – from Section 2.9 of GNSS Aided Navigation and Tracking, with GPS/INS flight test data included.  Eqs. (9.9)-(9.19) of that same reference provide simple design equations for alpha-beta and alpha-beta-gamma trackers that have consistently produced success in operation.

First we’ll note that suboptimal is not equated to “constant gain” – if for no other reason, the time between measurements will vary in many systems.  That’s quite easily accommodated by the alpha-beta[-gamma] designs just mentioned.  There are additional reasons, though, that can be illustrated by addressing a taxing situation: initiating a radar track file in close-range air-to-air encounters between two fighter jets.  The target’s (i.e., tracked object’s) cross-range velocity at lock-on time is unknown.  It could be 800 ft/sec, for example, in which case the tracker’s initial velocity error has at least that 800-ft/sec component.  With any additional unknown component of along-range velocity the target may have at that instant (doppler, if observed, might not yet be trusted to represent range rate dynamics), the tracker’s initial velocity error will then exceed 800 ft/sec.  The transient at acquisition could easily be further complicated by acceleration.  Anyone familiar with servo pull-in dynamics will immediately see how the transient can reach significantly beyond the initial error – very fast – in multiple directions (e.g., East/West and North/South).  Since we’re not at all comfortable with velocity errors on the order of 1000 ft/sec, the task is to wash that out ASAP.

A Kalman filter having accurate knowledge of the initial [P] matrix would breeze through this challenge.  An excellent example of its role is provided by this transient behavior.  Knowledge of that matrix is tantamount to knowing whether – at initiation time – the tracker’s North velocity error is positively or negatively correlated (i.e., likely to have the same or opposite sign) as its North position error, likewise for East velocity error with same-vs-opposite sign of North acceleration error, and likewise … all combinations – not only the signs but also the RMS amounts.  Of course that’s completely unrealistic.  So now what?

Suboptimal gain sets phased in at the right times can handle this.  For a simple illustration, let a 3-dimensional tracker, divided into three separable 3-state (position/speed/acceleration) single-direction channels, have a 20-Hz update rate.  If the first few updates have gains of 0.5 or more for position only, even a huge position error can be quickly brought down near sensor-error levels before accompanying errors in dynamics have much time to propagate. Then after that many (“K1”) position corrections, a position-&-speed update phase can be initiated, using the alpha-beta tracker gains related as shown in Eq. (9.12) of the reference cited above.  Duration of that phase is devised to last only as long as necessary to reduce speed error to design levels (which will be proportional to measurement error divided by that duration).  After the total number of corrections has reached that intended design value (“K2”), the alpha-beta-gamma phase can start with gains related according to Eqs. (9.18-19) of that same reference.  That phase continues until the total corrections count reaches “K3” at which time acceleration error is reduced to an amount inversely proportional to the square of (K3-K2).  Gains thereafter may conform to Kalman filter weighting.
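
A hedged sketch of that phased schedule for one single-direction channel (the phase boundaries K1, K2, K3 and the 20-Hz rate are design values chosen here for illustration; the alpha-beta[-gamma] relations below are the standard expanding-memory forms, used as stand-ins for Eqs. (9.12) and (9.18-19) of the cited reference, which I’m not reproducing):

    # One 3-state channel: state = (position, speed, acceleration).
    DT = 1.0 / 20.0            # 20-Hz update interval
    K1, K2, K3 = 5, 25, 65     # phase boundaries in update counts (assumed)

    def gains(k):
        """(alpha, beta, gamma) for update number k (1-based)."""
        if k <= K1:                       # position-only corrections
            return 0.5, 0.0, 0.0
        if k <= K2:                       # alpha-beta phase
            n = k - K1
            return 2.0*(2*n - 1)/(n*(n + 1)), 6.0/(n*(n + 1)), 0.0
        n = k - K2                        # alpha-beta-gamma phase
        d = n*(n + 1)*(n + 2)
        return 3.0*(3*n*n - 3*n + 2)/d, 18.0*(2*n - 1)/d, 60.0/d

    def update(x, z, k):
        """One correction: kinematic prediction, then weighted residual."""
        p, v, a = x
        p_pred = p + v*DT + 0.5*a*DT*DT
        v_pred = v + a*DT
        r = z - p_pred                    # position residual
        al, be, ga = gains(k)
        return (p_pred + al*r,
                v_pred + (be/DT)*r,
                a + (2.0*ga/(DT*DT))*r)

After the count reaches K3 (or whenever the design dictates), the weighting can transition to Kalman gains; by then the wild initial errors have been washed out.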

This example is not intended to advocate substituting suboptimal for optimal designs just anywhere.  Separation of 3-dimensional trackers into 3-state single-direction channels is often permissible (and sometimes even highly advisable), but – as shown in the cited reference – sometimes inappropriate.  Where it is permitted, use it; solving the unknown-P-zero problem is especially important in applications of this type.  A word to the wise: do not (repeat: do not) make the update counts K1, K2, etc. programmable.  If you do, someone unfamiliar with the reasoning above will experiment, allowing resets to values producing very prolonged back-and-forth transfer of errors among position and dynamics (one gets worse as another improves; then vice-versa).  When that spectacle is seen by nontechnical administrators, your image in their minds will forever be indelibly painted with that long drawn-out transient veering back and forth between plus and minus extreme levels.

Another slice-of-advice: Even if inputs are extremely erratic, your tracker must maintain high responsiveness (for sensor sightline stabilization at short range and for range[/doppler] gate placement at any range) – but – the outside world doesn’t have to witness the results of that “hitchy-hatchy” from wildly erratic inputs.  So: don’t change the tracker but do low-pass filter what goes outside.  If “hiding the system’s warts” thereby produces criticism, the justification is: the filtered output, even with the resulting delay (and possibly an accompanying distortion), is easier to interpret.
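
A minimal sketch of that separation (TAU is an illustrative display-smoothing time constant, not a recommended value): the tracker stays fully responsive internally, and only what leaves the system is smoothed.

    import math

    TAU = 0.5   # display-smoothing time constant, seconds (assumed)

    def smooth_output(prev_out, tracker_value, dt):
        """One-pole low-pass applied ONLY to the externally visible output;
        the internal tracker state is never touched."""
        a = math.exp(-dt / TAU)
        return a*prev_out + (1.0 - a)*tracker_value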

http://www.JamesLFarrell.com

As an alternative to TCAS in air and ASDE on ground, both can be supplanted – with vast improvement – by one system covering all facets of collision avoidance (see 9-minute video):

  • INTEGRATION – one system for both 2-D (runway incursions) and 3-D (in-air)
  • AUTONOMY – no ground station corrections required
  • COMMUNICATION – interrogation/response replaced by Mode-S squitter operation
  • COORDINATION – coordinated squitter scheduling eliminates garble
  • TRACKING – all tracks maintained with GPS pseudoranges in data packets
  • DYNAMICS – tracks provide optimally estimated velocity as well as position
  • TIMELINESS – history of dynamics with position counteracts latency
  • MULTITARGET HANDLING – every participant can track every other participant
  • CONTROL – collisions avoided by deceleration rather than climb/dive

My previous investigations (publications #61 and #66, combined with publication #85 as well as Chapter 9 of GNSS Aided Navigation and Tracking) provided in-depth analyses for all but the last of these items.  The control aspect of the problem is addressed here.  This introductory discussion involves only two participants, initially on a coaltitude collision course.  One (the “intruder”) continues with his path unchanged (so that the method could remain applicable for encounters between a participant and a non-participant tracked by radar or optical sensors).  The other (the “evader”) decelerates to change projected miss distance to a chosen design value.  This simplest-of-all scenarios can readily be extended to encounters at different altitudes and, by reapplying the method to all users wherever projected miss distance falls below a designated threshold, to multiple-participant cases.

Considered here are simple scenarios with aircraft initially on a collision course at angles from 30 to 130 degrees between their velocity vectors.  Those limits can of course be changed but, the closer the paths are to collinear, the more deceleration is required to prevent a collision (in the limit – direct head-on – no amount of deceleration can suffice; turns are required instead).  Turns can be addressed in the future; here we briefly discuss the 30-to-130 degree span.

In Coordinates Magazine and again as applied to UAVs it was shown that, over a wide combination of intruder speed, evader speed, and angles (within the 30-to-130 degree span just noted), the required amount of evader speed reduction is modest.  A linearized approximation can be derived intuitively from scenario parameter values.  The speeds and the angle determine a closing range rate, while closest approach time is near the initial time-to-go (ratio of initial distance to closing rate) though deceleration produces a difference.  The projection of evader speed reduction along the relative velocity vector direction has approximately that much time to build up 500 to 1000 meters of accumulated horizontal separation.  Initiation of the speed change that far in advance allows the dynamics to be gradual, in marked contrast to the sudden TCAS maneuver.  To avoid a wake problem, the evader’s aim point can be directed to a few hundred feet above the original coaltitude.  Continuous tracking of the intruder allows the evader to perform repetitive trim adjustments.
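
A back-of-envelope version of that linearized approximation (all numbers hypothetical; the speed cut is treated as immediate, and the projection factor along the relative-velocity direction is ignored, which understates the required reduction somewhat):

    import math

    v_intruder = 220.0               # m/s
    v_evader   = 200.0               # m/s
    crossing   = math.radians(90.0)  # angle between velocity vectors
    range_0    = 30_000.0            # initial separation, meters
    miss_goal  = 800.0               # desired horizontal separation, meters

    # Closing rate from the geometry (law of cosines on the two velocities).
    v_close = math.sqrt(v_intruder**2 + v_evader**2
                        - 2.0*v_intruder*v_evader*math.cos(crossing))
    t_go = range_0 / v_close         # approximate time to closest approach

    # A speed reduction dv held for t_go displaces the evader ~ dv * t_go,
    # which must accumulate to the desired miss distance:
    dv_required = miss_goal / t_go
    print(f"time-to-go ~{t_go:.0f} s; speed reduction needed ~{dv_required:.1f} m/s")

With these numbers the answer comes out near 8 m/s – a modest cut, consistent with the gradual, early maneuver described above.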

A program with results illustrating this scheme will not fit on a one-page summary, but it comes as no surprise that, with accurate tracks established well in advance (a minute or two prior to closest approach time), a modest deceleration can successfully avert collisions.