A recent post expressed concern (serious concern, in fact, to anyone tracing through the references cited therein) regarding future air traffic. Those concerns, serious as they are, all stemmed from events that preceded developments adding further disquiet. Consider this “triple-whammy”:

* prospect of tripled air traffic by 2025
* addition of UAVs into the NAS
* ADSB methodology plans
Now add all implications of this headline: “GAO Faults DHS, DoT for GPS Interference/Backup Effort,” in the 2013 year-end issue of InsideGNSS. In that year-end article a friend of mine (Terry McGurn) is quoted as saying “I don’t know who’s in charge.” That thoroughly credible diagnosis, combined with subsequent observations by father-of-GPS Brad Parkinson, crystallizes in many minds an all-too-familiar scenario with a familiar prospective outcome: the steps they’ve correctly prescribed “can’t” be implemented now. It’s “too late” to change the plan — until “too late” acquires a new meaning (i.e., too late to undo major damage).

Historically this writer has exerted considerable effort to avoid dramatizing implications of the status quo. While straining to continue that effort, I feel compelled to highlight those implications related to air safety.

OK there will be no naming names here, but many with influence and authority do “not believe that the minimum performance required by the ADS-B rule presents a significant risk to the operation of the National Airspace.”  Here’s my worry:
* That minimum required performance allows ten meters/second velocity error
* Existing efforts to provide precise position therefore become ineffective within an extremely short time; what translates into collision avoidance isn’t accuracy of present position but of future relative position
* Consequently the accuracy that matters isn’t in position but in velocity relative to every potentially conflicting object

* Each of those relative velocities can be in error by 10 m/sec or more

* Guidance decisions made for collision avoidance must be prepared in advance
* Even TCAS, with its sudden climb/dive maneuvers (which recently produced a news story about screaming passengers), plans a minute and a half ahead
* With 10 m/sec velocity error that gives 900 meters (90 sec × 10 m/sec) of uncertainty in projected miss distance — no higher math, just simple arithmetic
* That simple calculation is very far from being a containment limit
* Only containment limits based on conservative or realistic statistics can provide confidence for safety

* A containment limit can’t just be 3 sigma; for a low enough collision probability with non-Gaussian distributions, even ten sigma could be insufficient.

The combined weight of these factors calls for an assessment of prospects with future (tripled) air traffic plus UAVs. It has been asserted that, rather than ten m/sec, one m/sec is much more likely. As noted earlier, “much more likely” is far from sufficient reassurance. Furthermore, multiplying time-to-closest-approach by 1 m/sec, and amplifying that result by enough to produce a credible containment limit, still produces unacceptably large uncertainty in projected miss distance (the short calculation below illustrates both cases). So expect either (1) frequent TCAS climb/dives or (2) guidance commands, generated for safe separation, resulting in enormous deviations from what’s really needed. Forget closer spacing.
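To make the arithmetic explicit, here is a minimal sketch in Python. The 90-second look-ahead and the 10 m/sec and 1 m/sec error levels come from the discussion above; the containment multipliers are illustrative only.

```python
# Illustrative arithmetic only: projected miss-distance uncertainty grows as
# (look-ahead time) x (velocity error), then again when scaled to a containment limit.
look_ahead_s = 90.0                    # TCAS-style planning horizon (about 1.5 minutes)

for vel_err_mps in (10.0, 1.0):        # ADS-B minimum allowance vs. the asserted 1 m/sec
    for k in (1, 3, 10):               # 1-sigma, 3-sigma, and a heavier-tailed 10-sigma case
        uncertainty_m = k * vel_err_mps * look_ahead_s
        print(f"velocity error {vel_err_mps:4.1f} m/s, {k:2d}-sigma containment: "
              f"{uncertainty_m:6.0f} m projected miss-distance uncertainty")
```

Even the optimistic 1 m/sec figure, once scaled toward a credible containment limit, still leaves hundreds of meters of uncertainty at closest approach.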

Add to that runway incursions, well over a thousand per year and increasing. Everything said here about the need to avoid other objects applies to maritime conflicts (shoals) as well as airborne ones; Amoco-Cadiz, Exxon-Valdez, … .

These points are covered by presentations and documentation from decades back. The best possible sense-&-avoid strategy is incomplete without precise relative (not absolute) future (i.e., at closest approach, not current) position — which translates into velocity accuracies expressed in cm/sec (not meters/sec). The industry has much to do before that becomes familiar — let alone the norm that containment requires. A step in this direction is offered by flight results (pages 221-234, Fall 2013 issue of the Institute-of-Navigation Journal).

How reliable is reliable enough?

 

GPS is by far the best-ever system for both navigation and timing. Recognition of that is essentially universal. Less widely recognized are the ramifications of growing dependence on GPS, in both communications and navigation. This discussion will concentrate on the latter, highlighting attendant risk in flight. Although extensive deliberation already exists, I’ll presume to offer my experience. Immediately it is acknowledged that the risk is low; but now let’s ask what is low enough. While every effort is made here to avoid an alarmist tone, answering that question calls for an unflinching look at potential consequences if the gamble ever failed.

The above paragraph opens a broader (two-page) discussion on this same site.  That expanded discussion addresses several familiar topics; GPS, for example, lives up to expectations, brilliantly performing as advertised. Since no system can be perfect, the industry uses firmly established methods, supported by widely documented successful results.

Backup for GPS, strongly urged in the widely acclaimed 2001 Volpe report, remains incomplete. Continuation of this condition calls to mind the Titanic or the 2008 financial fiasco. Meanwhile, shortcomings of GPS integrity tests are described and inescapably demonstrated by citing a document from the spring of 2000, plus a history of flightworthiness improperly bestowed, with proprietary rights accepted for algorithms and tests rather than the rigor advocated two decades ago (#57 of the publication list). A subsequently documented panel discussion shows the extent of unpreparedness in the legal realm.

While bending over backwards to acknowledge that disaster is unlikely, an unflinching look must still review the potential, though improbable, consequences. The meaning of the G in GPS (Global) suggests that “unlikely” isn’t enough.

Although this is old information, realization has not been widespread. Also, present plans for upgrading to Automatic Dependent Surveillance Broadcast (ADSB) raise new concerns (#83 of the publication list, plus various blogs on this site related to collision avoidance, runway incursions, … ). This dialogue is prompted by considerations of safety. Again, “low” likelihood combined with absence of a calamity thus far offers no guarantee. I advocate a revisit of this issue with all its ramifications.

An earlier post cited a detailed analysis of deformations of a 3-D structure composed from GPS monitoring stations surrounding the area affected by the 2011 Tohoku quake. That full development is now featured on the cover of, and within, the August 2013 issue of Coordinates magazine. A summary is available here for those desiring only a capsule review.

An intense discussion among LinkedIn UAV group members involves several important topics, including the source of a one-out-of-a-billion requirement for probability of a mishap.  The source is thus far unidentified.

I can shed some light on the genesis of another related requirement. From within the RTCA SC-195 (GPS Integrity) Working Group for FDI/FDE (Fault Detection and Isolation / Fault Detection and Exclusion) in the 1990s, parameters were used to establish the Missed Detection requirement as follows (a brief numerical check appears after this list):
* From records obtained as far back as possible (1959) there were over 333 million flight-hours nationwide between 1959 and 1990.
* In the (inevitably imperfect) real world, the maximum allowable number of hull-loss accidents in 30 years cannot be specified at zero; so that maximum allowable number was set to one, producing roughly 1 / (333 million) ≈ 3 billionths per flight-hour
* The number used for mean time between loss of GPS integrity was 18 years.
* Probability of an unannounced SV (satellite) malfunction is then 1 − exp{ −1 / (18 × 365 × 24) } ≈ 6 millionths per hour per SV.
* Since 6 SVs are needed for FDE, that probability is multiplied by 6, producing 36 millionths per hour as the probability of an unannounced SV malfunction in any SV among those chosen for FDE.
* Probability of an undetected unannounced SV malfunction is then  36 millionths per hour multiplied by Missed Detection probability.
* With an incident/accident ratio slightly below 1/10, a value of 0.001 for Missed Detection probability satisfies the 3 billionths per flight-hour requirement.
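For anyone wanting to verify those steps, here is a minimal Python check. The 1/12 incident-to-accident ratio is my own illustrative stand-in for “slightly below 1/10”; every other number comes from the list above.

```python
# Numerical check of the Missed Detection derivation sketched above.
import math

flight_hours = 333e6                     # U.S. flight-hours, 1959-1990
target = 1 / flight_hours                # one allowable hull loss over that span
print(f"allowable accident rate: {target:.1e} per flight-hour")      # ~3e-9

mtbf_hours = 18 * 365 * 24               # 18-year mean time between integrity losses
p_sv = 1 - math.exp(-1 / mtbf_hours)     # unannounced malfunction, per hour per SV (~6e-6)
p_any_sv = 6 * p_sv                      # any of the 6 SVs needed for FDE (~36e-6)
p_missed_detection = 0.001
incident_to_accident = 1 / 12            # assumed value for "slightly below 1/10"

rate = p_any_sv * p_missed_detection * incident_to_accident
print(f"resulting accident rate: {rate:.1e} per flight-hour")        # ~3e-9
```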

None of this is intended to signify thoroughness in the genesis of decisions affecting flight safety requirements. Neither the demands for rigorous validation in early 1994, nor an attempt to facilitate meeting those demands — via replacement of GO/No-GO testing by quantitative assessment — met the “collective-will” acceptance criteria. The history related to that, not reassuring, is recounted on page 5 of a synopsis offered here and page 127 of GNSS Aided Navigation & Tracking. The references just cited, combined with additional references within them, address a challenging topic: how to substantiate, with high confidence, satisfaction of very low probabilities. There are methods, using probability scaling, that are not yet accepted.

Returning to the original question that prompted this blog: It might be uncovered — possibly from some remote source — that the number with a mysterious origin was supported, at one time, by some comparable logic.

Elmen C. Quesinberry

Earlier this year I wrote a belated tribute to a well-known pioneer in strapdown. Now I must write another tribute, even more belated, to a pioneer who was less well-known — but with a legacy equal to any other whose work helped mine to come alive.

For over three decades (Feb 1961 to Nov 1993) I was a full-time employee of Westinghouse (division names varied from AirArm to DESC to …) — but what I want to express here is first of all a salute to many of the people whose paths crossed with mine. That word “many” is no exaggeration; one recollection that stands out occurred during a chance conversation with Tim Gunn, at about lunch hour, near the cafeteria. Over and over again, seemingly everyone-&-their-brother walking by was saying “Hi” to me. Tim was flabbergasted at how many people I knew, whether they were from the shop (in some of those cases, from barbershop quartet or other music activities) or — in many other cases — from one department or another of engineering.

In recollection it is crystal clear that, during that time period, I was privileged to work with many of the best. That includes names like Joe Dorman, Jim Mims, … And when position/velocity/acceleration gains had to be set for track at lock-on over 7 octaves of range with 16-bit words, George Axelby and John Stuelpnagel helped show the way — and who could forget the Schafer/Leedom/Weigle triumvirate or, from TIR, Bill Hopwood or, from software, names like Heasley + Landry + Kahn + Clark (who as a techie was among the best, as was another from his group, working with me on A12 when John crossed into management) — plus others too numerous to mention. Many of the latter names are more obscure, it is realized; in some ways that’s the most important part of this effort, to give credit where credit is overdue. Of all the best-and-brightest named &/or unnamed here, no one stands higher in my memory than Elmen C. Quesinberry. His contribution to Westinghouse’s collection of achievements over time is recognized by only a very few. I guess what’s important is that he realized it himself; he earned every bit of his salary, and much more.

This revisit-of-history isn’t intended to imply that all was sunshine + roses; in fact, we encountered some major opposition. No need to go into detail now, but many had peripheral (or less) understanding. I made my peace with those long ago and have no desire to retract it. For doubters, flight-validated results appear elsewhere on this site. Enough said; of central importance here is the lasting legacy of a truly great engineer. Elmen C. Quesinberry, a true Christian gentleman, was an outstanding engineer whose collaboration gave me benefits unsurpassed by any other over three decades at Westinghouse.

Nav Course in India

Prof. H.B. Hablani from the Dept. of Aerospace Engineering, Indian Institute of Technology (Bombay) recently taught a 5-day course on integrated navigation, based primarily on Integrated Aircraft Navigation and augmented with other texts (including my 2007 book). He has approved my quoting him as saying “both books are of very high quality, unique and original.”

In 2013 a phone presentation was arranged for me to talk for an hour with a couple dozen engineers at Raytheon. The original plan was to scrutinize the many facets and ramifications of timing in avionics. The scope expanded about halfway through, to include topics of interest to any participant. I was gratified when others raised issues that have been of major concern to me for years (in some cases, even decades). Receiving a reminder from another professional that I’m not alone in these concerns prompts me to reiterate at least some aspects of the ongoing struggle — but this time citing a recent report of flight test verification.

The breadth of the struggle is breathtaking. The About panel of this site offers short summaries, all confirmed by authoritative sources cited therein, describing the impact on each of four areas (satnav + air safety + DoD + workforce preparation). Shortcomings in all four areas are made more severe by continuation of outdated methods, as unnecessary as they are fundamental. Not everyone wants to hear this but it’s self-evident: conformance to custom — using decades-old design concepts (e.g., TCAS) plus procedures (e.g., position reports) and conventions (e.g., interface standards) — guarantees outmoded legacy systems. Again, while my writings on this site and elsewhere — advocating a different direction — go back decades, I’m clearly not alone (e.g., recall those authoritative sources just noted). Changing more minds, a few at a time, can eventually lead to correction of shortcomings in operation.

We’re not pondering minor improvements, but dramatic ones. To realize them, don’t communicate with massaged data; put raw data on the interface. Communicate in terms of measurements, not coordinates — that’s how DGPS became stunningly successful. Even while using all the best available protection against interference (including anti-spoof capability), follow through and maximize your design for robustness; expect occurrences of poor GDOP &/or less than a full set of SVs instantaneously visible. Often that occurrence doesn’t really constitute loss of satnav; when it’s accompanied by a history of 1-sec changes in carrier phase, those high-accuracy measurements prevent buildup of position error. With 1-sec carrier phase changes coming in, the dynamics don’t veer toward any one consistent direction; only location drifts during position data deficiencies (poor GDOP &/or incomplete fixes) and, even then, only within limits allowed by that continued accurate dynamic updating. Integrity checks also continue throughout.
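A minimal sketch (Python, illustrative noise levels: 7 mm phase noise, giving roughly 1 cm/sec velocity accuracy, consistent with figures discussed later on this page) of why accurate 1-sec phase changes keep position error bounded during a gap in usable position fixes:

```python
# Minimal sketch, not a navigation filter: dead-reckon position through a 90-second
# gap in usable position fixes, using velocity derived from 1-sec carrier-phase changes.
import random
random.seed(0)

true_vel_mps = 60.0            # assumed constant along-track velocity, for illustration
vel_sigma_mps = 0.01           # ~1 cm/sec from 1-sec delta-phase
pos_true = pos_est = 0.0

for _ in range(90):            # 90 one-second dead-reckoning steps
    pos_true += true_vel_mps
    pos_est += true_vel_mps + random.gauss(0.0, vel_sigma_mps)

print(f"position error after 90 s: {abs(pos_est - pos_true):.2f} m")
# with cm/sec velocity accuracy the drift stays well under a meter;
# a 1 m/sec velocity error would instead allow drift on the order of tens of meters
```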

So then, take into account the crucial importance of precise dynamic information when a full position fix isn’t instantaneously available. Take what’s there and stop discarding it. Redefine requirements to enable what ancient mariners did suboptimally for many centuries — and what we’ve done optimally for over a half-century. Covariances combined with monitored residuals can indicate quality in real time. Aircraft separation means maintaining a stipulated relative distance between aircraft, irrespective of their absolute positions and the errors in their absolute positions. None of this is either mysterious or proprietary, and none of this imposes demands for huge budgets or scientific breakthroughs — not even corrections from ground stations.

A compelling case arises from the cumulative weight of all these considerations. Parts of the industry have begun to address it. Ohio University has done flight testing (mentioned in the opening paragraph here) that validates the concepts just summarized. Other investigations are likely to result from recent testing of ADSB. No claim is intended that all questions have been answered, but — clearly — enough has been raised to warrant a dialogue with those making decisions affecting the long term.

INFRASTRUCTURE

The sky isn’t falling but some of our bridges are. About seventy thousand of them are structurally deficient, and about a quarter of those could go at any time. Calamities are sure to mount if nothing is done. The number that will occur (even after belated prevention efforts) is unknown.

That number could be reduced by a method not currently in use. In combination with other steps (placement of sensors at strategic locations on a structure) a historical pattern of deformations can be generated automatically. The means of analyzing the deformations has already been shown to provide early warning capability, via application to data recorded before the 2011 Tohoku earthquake. There is no need to repeat the description here; it’s already documented.

Sooner or later another subject comes up: The “C” word (cost). Aside from severity of the problem, about the only other item prompting agreement is the notion that a solution is unaffordable. Let me change that notion this way: if a complete solution is deemed unaffordable, a partial solution doesn’t have to be. Prioritize. Bridges exhibiting the most urgent warning signs are highest priority for remedial action. At 1/70,000th of the total cost, no one could reasonably refuse to fix one that must be fixed.

An acknowledgment: the argument just cited is overstated. Initial investment is always a greater fraction of the long-term total, and applying a method for the first time to any operation requires ironing out some wrinkles. Still, admitting that a fraction exceeds 1/70,000 doesn’t constitute a shocking confession. The sensors don’t have to be top-of-the-line. If the “bottom line” is all that matters, then here’s the real bottom line: Their cost, plus the cost of a government-sponsored project, pales in comparison with losses resulting from a bridge collapse — let alone the losses incurred from seventy thousand collapsed bridges.

John Bortz

In February of this year the navigation community lost a major contributor — John Bortz. To many his name is best known in connection with “the Bortz equation,” which easily deserves a note here to highlight its significance in the development of strapdown inertial nav. Before his work in the early 1970s, strapdown was widely considered as something with possible promise “maybe, if only it could ever come out of the lab-&-theory realm” and into operation. Technological capabilities we take for granted today were far less advanced then; among the many state-of-the-art limitations of that time, processing speed is a glaringly obvious example. To make a long story short, John Bortz made it all happen anyway. Applying the previously mentioned equation (an outgrowth of an early investigation by Draper Lab’s Dr. J.H. Laning) was only part of his achievement. Working with 1960s hardware and those old computers, he made a historic mark in the annals of strapdown. Still, the importance of that accomplishment should not obscure his other credentials. For example, he also made significant contributions to radio navigation — and he spent the last two decades of his life as a deacon.

A comment challenged my video. I’m glad it included an acknowledgment that some points might have been missed. To be frank that happened a bunch; bear with me while I explain. First, there’s the accuracy issue: doppler &/or deltarange info provided by many receivers is far less accurate than carrier phase (sometimes due to cutting corners in implementation — recall that carrier phase, as the integral of doppler, will be smoother if processing is done carefully). Next, a preference for 20-msec intervals will backfire badly. If phase noise at L-band gives a respectable 7 mm = 0.7 cm, the doppler velocity error [(current phase) – (previous phase)] / 1 sec is (1.414)(0.7) = 1 cm/sec RMS for a 1-sec sequential differencing interval. Now use 20 msec: FIFTY times as much doppler error! Alternatively, if the division is implicit instead of overt, the degradation is more complicated: sequential phase differences are highly correlated (with a correlation coefficient of −1/2, to be precise). That’s because the difference (current phase) – (previous phase) and the difference (next phase) – (current phase) both contain the common value of current phase. In a modern estimation algorithm, observations with sequentially correlated errors are far more difficult to process optimally. That topic is a very deep one; Section 5.6 and Addendum 5.B of my 2007 book address it thoroughly. I’m not expecting everyone to go through all that but, to offer fortification for its credibility, let me cite a few items (a short numerical check follows them):

* agreement from other designers who abandoned efforts to use short intervals
* table near the bottom of a page on this site.

* phase residual plots from Chapter 8 of my 2007 book.

The latter two, it is recalled, came from a flight test of extended duration (until the flight recorder was full), under severe test-aircraft (DC-3) vibration.
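Since the numbers above are simple to reproduce, here is a minimal Python check. The 7 mm (0.7 cm) phase noise comes from the paragraph above; the sample count is arbitrary. It confirms the 1 cm/sec figure, the factor-of-fifty penalty for 20-msec differencing, and the −1/2 correlation of successive differences.

```python
# Velocity error from sequential phase differencing scales inversely with the interval,
# and successive differences of white phase noise are correlated with coefficient -1/2.
import math, random
random.seed(1)

phase_sigma_cm = 0.7                                   # ~7 mm L-band carrier-phase noise
for dt_s in (1.0, 0.02):                               # 1-sec vs. 20-msec differencing
    vel_sigma = math.sqrt(2) * phase_sigma_cm / dt_s
    print(f"dt = {dt_s:5.2f} s -> velocity error ~ {vel_sigma:5.1f} cm/s RMS")

# empirical correlation between successive differences of white phase noise
phases = [random.gauss(0.0, phase_sigma_cm) for _ in range(200_000)]
diffs = [b - a for a, b in zip(phases, phases[1:])]
mean = sum(diffs) / len(diffs)
var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
cov = sum((diffs[i] - mean) * (diffs[i + 1] - mean) for i in range(len(diffs) - 1)) / (len(diffs) - 1)
print(f"correlation of successive differences: {cov / var:+.2f}")   # ~ -0.50
```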

For doppler updating from sources other than satnav, my point is stronger still. Doppler from radar (which lacks the advantage of passive operation) won’t get velocity error much below a meter/sec — and even that is an improvement over unaided inertial nav (we won’t see INS velocity specs expressed in cm/sec within our lifetime).

Additional advantages of what the video offers include (1) no requirement for a mask angle, (2) GNSS interoperability, and (3) robustness. A brief explanation:

(1) Virtually the whole world discards all measurements from low-elevation satellites because of propagation errors. But ionospheric and tropospheric effects change very little over a second; 1-sec phase differences are great for velocity information. Furthermore they offer a major geometry advantage, while any occurrence of multipath would stick out like a sore thumb and be easily edited out.
(2) 1-sec differences from various constellations are much easier to mix than the phases themselves. 
(3) For receivers exploiting FFT capability, even short fragments of data, not sufficiently continuous for conventional mechanizations (track loops), are made available for discrete updates.
The whole “big picture” is a major improvement in robustness of operation.

The challenger isn’t the only one who missed these points; much of our industry, in fact, is missing the boat in crucial areas. Again I understand skepticism, but consider the “conventional wisdom” regarding ADSB: Velocity errors expressed in meters per second — you can hear speculative values as high as ten. GRADE SCHOOL ARITHMETIC shows how scary that is; collision avoidance extrapolates ahead. Consider the vast error volume resulting from doing that 90 seconds ahead of closest approach time with several meters per second of velocity error. So — rely on see-and-avoid? There are beaucoup videos that show how futile that is (and many more videos that show how often near misses occur — in addition there are about a thousand runway incursions each year). That justifies the effort for dramatic reduction of errors in tracking dynamics — to cm/sec relative velocity accuracy.

It’s perfectly logical for people to question my claims if they seem too good to be true. All I ask is follow-through, with visits to the URLs cited here.