A review described my 2007 book as “teeming with insights that are hard to find or unavailable elsewhere” — I hasten to explain that the purpose wasn’t to be different for the sake of being different.  With today’s large and growing obstacles placed in the way of satellite navigation, unusual features of my approach were motivated primarily by one paramount objective: robustness.  Topics now to be addressed are prompted largely by a number of LinkedIn discussions.  In one of them I pledged that my unusual-&-unfamiliar methods, adding up to a list of appreciable length, would soon be made available to all. This blog satisfies that promise, in a way that is more thorough than listings offered previously. I’ll begin with innovations made in my earlier (pre-GPS) book Integrated Aircraft Navigation. That book’s purpose was primarily educational; learning either inertial navigation or Kalman filtering from any/all literature existing in the mid-1970s was quite challenging (try it if you’re skeptical). Still it offered some features originating with me. Chief among those were

* extension of previously known precession analysis, following through to provide a full closed form solution for the attitude matrix vs time (Appendix 2.A.2)
* extension of the previously known Schuler phenomenon, following through to provide a full closed form solution for tilt and horizontal velocity errors throughout a Schuler period (Section 3.4.2), and reduction to intuitive results for durations substantially shorter
* an exact difference in radii, facilitating wander azimuth development that offers immunity to numerical degradation even as the polar singularity is reached and crossed (Section 3.6)
* analytical characterization for average rate of drift from pseudoconing (Section 4.3.4), plus connection between that and the gyrodynamics analysis preceding it with the classical (Goodman/Robinson) coning explanation
* expansion of the item just listed to an extensive array of motion-sensitive errors for gyros and accelerometers, including rectification effects (some previously unrecognized) in Chapter 4
* Eq. (5-57) with powerful ramifications for the level of process noise spectral density (which, without a guide, can otherwise be the hardest part of Kalman filter design).

The list now continues, with innovations appearing in the 2007 book —
* Eq. (2.65), in correspondence to the last item just identified — with follow-through in Section 4.5 (and also with history of successful usage in tracking operations)
* Section 2.6, laying a foundation for much material following it
* Eqs. (3.10-3.12), again showing wander azimuth immune to numerical degradation
* Section 3.4.1 for easier-than-usual yet highly accurate position (cm per km) incrementing in wander azimuth systems
* first bulleted item on the lower half of p.46, which foreshadows major simplifications in Kalman filter models that follow it
* Table 4.2, which the industry continues to ignore — at its peril when trying to enable free-inertial coast over extended durations
* sequential changes in carrier phase (Section 5.6, validated in Table 5.3) — and how they relieve otherwise serious interoperability problems (Section 7.2.3), especially if used with FFT-based processing (Section 7.3)
* single-measurement RAIM, Section 6.3
* computational sync, Section 7.1.2
* tracking applications (Chapter 9, also validated in operation) with emphasis on identifying what’s common — and what isn’t — among different operations
* realistic free-inertial coast characterization and capabilities, Appendix II
* practical realities, Appendix III
* my separation of position from dynamics + MANY ramifications
* commonality of track with short-term INS error propagation (Section 5.6.1)

There are more items (e.g., various blogs on the website JamesLFarrell.com). It can also be helpful to point out other descriptions, e.g., 1-sec carrier phase usage.

On 11/23/14 60 Minutes drew wide attention to neglect of U.S. infrastructure, correctly attributing this impending crisis to inexcusable dereliction of duty.  I won’t claim authority to straighten out our politicians, but I can offer a way to light a fire under them: suppose they were given evidence, known also to the public, that collapse of a particular bridge was imminent. Those responsible, to escape subsequent condemnation, would promptly find funds for the essential repairs.  The 60 Minutes program showed an instance of exactly that, arising from evidence discovered purely by chance.
 
 
OK, maybe that’s obvious, but how could evidence come by design?  A year and a half ago I offered a way to approach that, supported by a successful experience analyzing precise daily recordings from monitoring stations surrounding Tohoku (March 2011); both a Detailed Video Presentation and a short summary are available.  For application to infrastructure the details would differ, but certain key features would remain pertinent.  Collapse of a steel bridge would be preceded by change in shape of one or more structural members.
 
The same is true for one made of concrete mixed with polyvinyl alcohol fibers (e.g., used in Japan and New Zealand). Permanent deformation occurs when the elastic limit is exceeded. Gradual accumulation of deformed members would provide early warning. Just as 3-dimensional shape state analysis identified the station nearest a quake epicenter, location of critical structural members would be revealed by a program using sequences of infrastructure measurements.

 

To access the free offer below, become a subscriber by clicking here now. The free video viewing will be available to all subscribers beginning on the afternoon of November 15th.

 
 
I’m giving free access to a 20-minute video, available for three days to all subscribers. It is the first of three segments I’ve put together for two reasons:
 
 
1) Dearth of time during a short course doesn’t allow adequate coverage of applicable matrix theory in class.
2) The material was organized to drive home a fundamental point: Math without insight is grossly inadequate. That lesson will be useful to professors, instructors, and mentors as well as designers.
I start with a no-math real-world example dramatically illustrating preference for insight over blind acceptance of computer outputs.
 
FREE PREVIEWS
Each of the three parts has a preview that can be watched free at any time. Click Here
If you click on any section, options include a preview for no cost (white circle on the right). Attendees of any course I teach will receive free 3-day access to all three parts, and others (e.g., students or trainees) would also benefit from this information. An admission: The recording was done during hay-fever season; I sound (and look) like it. That’s unimportant in comparison to the message.
 
 
In addition you will receive a 100-page excerpt from my latest book GNSS Aided Navigation and Tracking.
 
 
MORE TO COME
My website now has excerpts from a wide variety of topics, all of which can be expanded further into helpful learning aids — backed by a long history of real-world experience and insights very much in need today. Occasionally I release incremental additions. As always, subscribers have the option to opt out later, with no danger of further contact, spam, etc.

 

Changes in coordinates at stations affected by earthquakes have been monitored successfully, for years, with precision using satellite navigation.  Results of interest have been produced by further processing those coordinate histories, e.g., investigating the history of triangles formed by the stations.  The first application to earthquakes of entirely different criteria (affine deformation states) has produced results with encouraging prospects for prediction, both in time (more than two weeks prior to the 2011 Tohoku quake) and spatially (departures from the affine model at the station nearest the epicenter).

The fifteen independent states of a standard 3-D 4×4 affine transformation can be categorized in five sets of three, each set having x-, y-, and z-components for translation, rotation, perspective, scaling, and shear.  Immediately the three degrees-of-freedom associated with perspective are irrelevant for purposes here.  In addition both translation and rotation, clearly having no effect on shape, can be analyzed separately — and the same is likewise true of uniform scaling.  It is thus widely known that there are five “shape states” involved in 3-D affine deformation, three for shear and two for nonuniform scaling.  One way to describe shape states is to note their effects in 2-D, where there is only one for nonuniform scaling (which deforms a square into a rectangle) and one for shear (which deforms a rectangle into a parallelogram).  Therefore it is noted here that added insight into earthquake investigation can be obtained by analyzing affine features – with specific attention given to their individual traits (degrees-of-freedom).
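
To make that bookkeeping concrete, here is a minimal numerical sketch (illustrative matrices and coordinates, not taken from the referenced work): nonuniform scaling turns a unit square into a rectangle, and shear turns that rectangle into a parallelogram, the two 2-D shape states noted above.

```python
import numpy as np

# Corners of a unit square (columns are the x-y coordinates of each corner).
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])

# Nonuniform scaling: one 2-D shape state (square -> rectangle).
nonuniform_scale = np.array([[1.2, 0.0],
                             [0.0, 0.8]])

# Shear: the other 2-D shape state (rectangle -> parallelogram).
shear = np.array([[1.0, 0.3],
                  [0.0, 1.0]])

print("rectangle corners:\n", nonuniform_scale @ square)
print("parallelogram corners:\n", shear @ (nonuniform_scale @ square))

# 3-D bookkeeping from the paragraph above: 15 affine degrees of freedom,
# minus 3 (perspective), 3 (translation), 3 (rotation), 1 (uniform scale)
# leaves the 5 shape states (3 shear + 2 nonuniform scaling).
print("3-D shape states:", 15 - 3 - 3 - 3 - 1)
```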

For investigating earthquakes from affine degrees-of-freedom, methodology of another very different field — anatomy — is highly relevant but ironically lacking a crucial feature.  As currently practiced, physiological studies of affine deformations concentrate heavily on two-dimensional representations.  While full affine representation is very old, its inversion — i.e., optimal estimation of shape states from a given overdetermined coordinate set — has previously been limited to 2-D.

Immediately then, extension was required for adaptation.  The fundamentals, however, still remain applicable. Instead of designated landmark sets coming from a group of patients, here they are associated with a series of days (e.g., from several days before to several days after a quake).  Each landmark set is then subjected to a series of procedures (centroiding, normalization, rotation) for fitting landmark sets from one day to another.
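
Below is a minimal sketch of the generic Procrustes steps just named (centroiding, scale normalization, rotation via singular value decomposition), applied to made-up landmark sets; it illustrates the standard procedure, not the specific implementation used for the Tohoku data.

```python
import numpy as np

def procrustes_fit(ref, day):
    """Fit one day's landmark set to a reference set.
    Both arrays are N x 3 (N landmarks, x-y-z coordinates).
    Returns centroid shift, uniform scale, rotation, and residual deformation."""
    shift = day.mean(axis=0) - ref.mean(axis=0)      # centroid shift
    a = ref - ref.mean(axis=0)                       # centered reference set
    b = day - day.mean(axis=0)                       # centered day set
    scale = np.linalg.norm(a) / np.linalg.norm(b)    # uniform scale factor
    u, _, vt = np.linalg.svd(b.T @ a)                # orthogonal Procrustes
    if np.linalg.det(u @ vt) < 0:                    # guard against a reflection
        u[:, -1] *= -1
    rot = u @ vt                                     # best-fit rotation (b -> a)
    residual = scale * b @ rot - a                   # what's left is deformation
    return shift, scale, rot, residual

# Made-up landmark sets: 6 stations, with the "day" set slightly deformed.
rng = np.random.default_rng(0)
ref = rng.normal(size=(6, 3))
day = 1.01 * ref @ np.array([[1.0, 0.02, 0.0],
                             [0.0, 1.0,  0.0],
                             [0.0, 0.0,  1.0]]) + 0.5   # shear + scale + shift
shift, scale, rot, residual = procrustes_fit(ref, day)
print("centroid shift:", shift, "\nuniform scale:", scale)
```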

The Procrustes representation from procedural steps just described provides sequences of centroid shifts in each direction, rotations about each axis, and amounts of uniform scaling needed for each separate day.  In addition to those seven time histories, there are five more that offer potential for greater insight (again, shape states — three shear and two nonuniform scaling).  All were obtained for landmark coordinate sets reported before and after the 2011 Tohoku quake.  From sample recorded coordinates provided along with shape state values, readers of the manuscript are enabled to verify results.

A video completed recently provides just enough matrix theory needed for Kalman filtering. It’s available for (1) purchase or 72-hour rent at low cost or (2) free to those attending courses I teach in 2014 or after (because the short durations don’t allow time to cover it). The one-hour presentation is divided into three sections. Each section has a preview, freely viewable.

 

 

The first section, with almost NO math, begins by explaining why matrices are needed — and then immediately emphasizes that MATH ALONE IS NOT ENOUGH. To drive home that point, a dramatic illustration was chosen. Complex motions of a satellite, though represented in a MATHEMATICALLY correct way, were not fully understood by its designers nor by the first team of analysts contracted to characterize it. From those motions, shown with amplitudes enlarged (e.g., doubled or possibly more) for easy visualization, it becomes clear why insight is every bit as important as the math.

 

For some viewers the importance of insight alone will be of sufficient interest, with no need for the latter two sections. Others, particularly novices aspiring to be designers, will find the math presentation extremely helpful. Straight to the point for each step where matrices are applied, it is just the type of information I was earnestly seeking years ago, while “pulling teeth” to extract clarification of ONLY NECESSARY theory without OVER simplification.

 

The presentation supplies matrix theory prerequisites that will assist aspiring designers in formulating linear(ized) estimation algorithms in block (weighted least squares) or sequential (recursive Kalman/EKF) form. Familiar matrix types (e.g., orthogonal, symmetric), their properties, how they are used — and why they are useful — with interpretation of physical examples, enable operations that are both powerful and versatile. An enormous variety of applications involving systems of any order can be solved in terms of familiar expressions we saw as teenagers in college.
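
As a small illustration of the block (weighted least squares) form mentioned above, the sketch below solves the familiar normal equations with made-up numbers; the sequential Kalman/EKF form processes the same information recursively.

```python
import numpy as np

# Minimal block weighted-least-squares sketch (illustrative numbers):
# estimate a 2-state vector x from 4 scalar measurements z = H x + noise.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])                      # measurement sensitivities
x_true = np.array([2.0, -1.0])
rng = np.random.default_rng(3)
sigmas = np.array([0.5, 0.5, 2.0, 2.0])          # unequal accuracies
z = H @ x_true + rng.normal(0.0, sigmas)
W = np.diag(1.0 / sigmas**2)                     # weight = inverse variance

# Familiar normal-equation form: x_hat = (H' W H)^-1 H' W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimate:", x_hat, " truth:", x_true)
```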

 

Useful for either introduction or review, there is no better way to summarize this material than to repeat one word that matters beyond all else — INSIGHT.

Surveillance with GPS/GNSS

 

 

The age-old Interrogation/Response method for air surveillance was aptly summarized in an important 1996 GPSWorld article by Garth van Sickle: Response from an unidentified IFF transponder is useful only to the interrogator that triggered it.  That author, who served as Arabian Gulf Battle Force operations officer during Desert Storm, described transponders flooding the air with signals.  Hundreds of interrogations per minute in that crowded environment produced a glut of r-f energy – but still no adequate friendly air assessment.

 

The first step toward solving that problem is a no-brainer: Allocate a brief transmit duration to every participant, each separate from all others.  Replace the Interrogation/Response approach with spontaneous transmissions.  Immediately, then, one user’s information is no longer everyone else’s interference; quite the opposite: each participant can receive every other participant’s transmissions.  In the limit (with no interrogations at all), literally hundreds of participants could be accommodated.  Garble nonexistent.   Bingo.

 

Sometimes there’s a catch to an improvement that dramatic.  Fortunately that isn’t true of this one.  A successful demo was performed at Logan Airport – using existing transponders with accepted data formatting (extended squitter) – in the early 1990s, by Lincoln Labs.  I then (first in January 1998) made two presentations, one for military operation (publication #60 – click here) and another for commercial aviation (publication #61 – click here), advocating adoption of that method with one important change.  Transmitting GPS pseudoranges rather than coordinates would enable an enormous increase in performance.  Reasons include cancellation of major errors – which happens when two users subtract scalar measurements from the same satellite, but not coordinates formed from different sets of satellites.  That, however, only begins to describe the benefit of using measurements (publication #66); continue below:
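
To illustrate the cancellation argument, here is a toy calculation (made-up error magnitudes, not from the cited publications): errors common to a satellite drop out of the between-user difference of scalar pseudoranges, while independent receiver noise remains.

```python
import numpy as np

rng = np.random.default_rng(1)

# True ranges from one satellite to two nearby participants (meters).
true_range_a, true_range_b = 20_000_000.0, 20_000_300.0

# Errors common to that satellite (clock, ephemeris, most iono/tropo for
# nearby users) versus errors local to each receiver.
common_error = 15.0                      # meters, same for both users
local_a, local_b = rng.normal(0, 1, 2)   # meters, independent receiver noise

pseudorange_a = true_range_a + common_error + local_a
pseudorange_b = true_range_b + common_error + local_b

single_difference = pseudorange_a - pseudorange_b
print("error left in the between-user difference:",
      single_difference - (true_range_a - true_range_b))  # common part is gone
```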

 

With each participant receiving every other participant’s transmissions, each has the ability to track all others.  That is easily done because
(1) every extended squitter message includes unique source identification, and (2) multiple trackers maintained in tandem have been feasible for years; hundreds of tracks would not tax today’s computing capability at all. Tracks can be formed ad hoc by stitching together coordinate differences, but accuracy will not be impressive.  A Kalman tracker fed by those coordinate differences would not only contain the uncancelled errors just noted, but nonuniform sensitivities, unequal accuracies, and cross-axis correlations among the coordinate pseudomeasurement errors would not be taken into account.  Furthermore, the dynamics (velocity and acceleration) – as derivatives – would degrade even more – and dynamic accuracy is absolutely crucial for ability to anticipate near-future position (e.g., for collision avoidance).
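
A small sketch of the “derivatives degrade even more” point, with illustrative numbers only: differencing positions that carry independent errors of a few meters at one-second intervals produces velocity errors of several meters per second (the familiar sqrt(2)*sigma/dt amplification), and it is precisely that dynamic accuracy which governs projection of near-future position.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_p = 5.0          # meters of independent error in each reported position
dt = 1.0               # seconds between reports

# Simulate a stationary target: any "velocity" seen is pure error.
positions = rng.normal(0.0, sigma_p, 10_000)
velocity_estimates = np.diff(positions) / dt

print("velocity error, simulated std: %.2f m/s" % velocity_estimates.std())
print("velocity error, predicted sqrt(2)*sigma_p/dt: %.2f m/s"
      % (np.sqrt(2) * sigma_p / dt))
```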

 

The sheer weight of all the considerations just noted should be more than enough to motivate the industry towards preparing to exploit this capability.  But, wait – there’s more.  Much more, in fact.  For how many years have we been talking about consolidating various systems, so that we wouldn’t need so many different ones?  Well, here’s a chance to provide both 2-dimensional (runway incursion) and 3-dimensional (in-air) collision avoidance with the same system.  The performance benefits alone are substantial but that plan would also overcome a fundamental limitation for each –
* Ground: ASDE won’t be available at smaller airports
* In-air: TCAS doesn’t provide adequate bearing information; conflict resolution is performed with climb/dive.
The latter item doesn’t make passengers happy, especially since that absence of timely and accurate azimuth information prompts some unnecessary “just-in-case” maneuvers.

 

No criticism is aimed here toward the designers of TCAS; they made use of what was available to them, pre-GPS.  Today we have not just GPS but differential GPS.  Double differencing, which revolutionized surveying two decades ago, could do the same for this 2-D and 3-D tracking.  The only difference would be absence of any requirement for a stationary reference.  All positions and velocities are relative – exactly what the doctor ordered for this application.
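
A minimal sketch of the double-difference idea, again with made-up numbers: differencing between users removes satellite-common errors, and differencing that result between two satellites removes each receiver’s clock bias as well, with no stationary reference required.

```python
# Double differencing with illustrative (made-up) error terms, in meters.
true_ranges = {("A", 1): 2.0e7, ("A", 2): 2.1e7,           # user A to SVs 1, 2
               ("B", 1): 2.0e7 + 250.0, ("B", 2): 2.1e7 - 400.0}
sat_error = {1: 12.0, 2: -7.0}          # per-satellite common errors
clock_bias = {"A": 300.0, "B": -150.0}  # per-receiver clock biases

def pseudorange(user, sat):
    return true_ranges[(user, sat)] + sat_error[sat] + clock_bias[user]

# Between-user single differences (satellite-common errors cancel).
sd_1 = pseudorange("A", 1) - pseudorange("B", 1)
sd_2 = pseudorange("A", 2) - pseudorange("B", 2)

# Between-satellite double difference (receiver clock biases cancel too).
dd = sd_1 - sd_2
dd_truth = (true_ranges[("A", 1)] - true_ranges[("B", 1)]) \
         - (true_ranges[("A", 2)] - true_ranges[("B", 2)])
print("double-difference error:", dd - dd_truth)   # prints 0.0
```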

 

OK, I promised – not just more but MUCH more.  Now consider what happens when there aren’t enough satellites instantaneously available to provide a full position fix meeting all demands (geometry, integrity validation): partial data that cannot provide an instantaneous position for transmission is wasted (no place to go).  But ancient mariners used partial information centuries ago.  If we’re willing to do that ourselves, I’ve shown a rigorously derived but easily used means to validate each separate measurement according to individual circumstances.  A specific satellite might give an acceptable measurement to one user but a multipath-degraded measurement to another.  At each instant of time, any user could choose to reject some data without being forced to reject it all.  My methods are applicable for any frequency from any constellation (GPS, GLONASS, GALILEO, COMPASS, QZSS, … ).
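
What follows is only a generic illustration of screening one measurement at a time (a normalized-residual test against a chosen threshold); it is not the specific single-measurement validation derived in the publications cited, but it shows how acceptance can be decided per measurement rather than all-or-nothing.

```python
def accept_measurement(residual, sigma, threshold=3.0):
    """Accept or reject one scalar measurement from its predicted residual.
    residual:  measured minus predicted value (e.g., pseudorange), meters
    sigma:     one-sigma uncertainty of that residual for THIS user, now
    threshold: number of sigmas allowed (choice depends on required integrity)"""
    return abs(residual) <= threshold * sigma

# Same satellite, two users: clean for one, multipath-degraded for the other.
print(accept_measurement(residual=2.1, sigma=1.5))    # True  -> use it
print(accept_measurement(residual=14.8, sigma=1.5))   # False -> reject just this one
```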

 

While we’re at it, once we open our minds to sharing and comparing scalar observations, we can go beyond satellite data and include whatever our sensors provide.  Since for a half-century we’ve known how to account for all the nonuniform sensitivities, unequal accuracies, and cross-axis correlations previously mentioned, all incoming data common to multiple participants (TOA, DME, etc.) would be welcome.

 

So we can derive accurate cross-range as well as along-range relative dynamics, in addition to position, with altitude significantly improved to boot.  Many scenarios (those with appreciable crossing geometry) will allow conflict resolution in a horizontal plane via deceleration – well ahead of time rather than requiring a sudden maneuver.  GPS and Mode-S require no breakthrough inventions, and track algorithms already in the public domain carry no proprietary claims.  Obviously, all this aircraft-to-aircraft tracking (with participants in air or on the ground) can be accomplished without data transmitted from any ground station.  All these benefits can be had just by using Mode-S squitter messages with the right content.

 

There’s still more.  Suppose one participant uses a different datum than the others.  Admittedly that’s unlikely but, for prevention of a calamity, we need to err on the side of caution; “unlikely” isn’t good enough.  With each participant operating in his own world-view, comparing scalar measurements would be safe in any coordinate reference.  Comparing vectors with an unknown mismatch in the reference frame, though, would be a prescription for disaster.  Finally, in Chapter 9 of GNSS Aided Navigation & Tracking I extend the approach to enable sharing observations of nonparticipants.

 

In the About panel of this site I pledged to substantiate a claim of dramatic improvements afforded by methods to be presented.  This operation is submitted as one example satisfying that claim.  Many would agree (and many have agreed) that the combined reasons given for the above plan are compelling.  Despite that, there is no commitment by the industry to pursue it.  ADSB is moving inexorably in a direction that was set years ago.  That’s a reality – but it isn’t the only reality.  The world has its own model; it doesn’t depend on how we characterize it.  It’s up to us to pattern our plans in conformance to the real world, not the other way around.  Given the stakes I feel compelled to advocate moving forward with a pilot program of modest size – call it “Post-Nextgen” – having the robustness to recover from severe adversity.  Let’s get prepared.

A recent post expressed concern (serious concern, in fact, to anyone tracing through the references cited therein) regarding future air traffic. Those concerns, serious as they are, all stemmed from events that preceded developments adding further disquiet. Consider this “triple-whammy” —

* prospect for tripling of air traffic by 2025
* addition of UAVs into NAS
* ADSB methodology plans
Now add all implications of this headline:  GAO Faults DHS, DoT for GPS Interference/Backup Effort  in the 2013 year-end issue of InsideGNSS.  In that year-end description a friend of mine (Terry McGurn) is quoted, saying “I don’t know who’s in charge.” That thoroughly credible diagnosis, combined with subsequent observations by father-of-GPS Brad Parkinson, crystallizes in many minds an all-too-familiar scenario with a familiar prospective outcome: The steps they’ve correctly prescribed “can’t” be implemented now. It’s “too late” to change the plan — until “too late” acquires a new meaning (i.e., too-late-to-undo-major-damage).

Historically this writer has exerted considerable effort to avoid dramatizing implications of the status quo. While straining to continue that effort, I feel compelled to highlight those implications related to air safety.

OK there will be no naming names here, but many with influence and authority do “not believe that the minimum performance required by the ADS-B rule presents a significant risk to the operation of the National Airspace.”  Here’s my worry:
* That minimum required performance allows ten meters/second velocity error
* Existing efforts to provide precise position therefore become ineffective within an extremely short time; what translates into collision avoidance isn’t accuracy in present position but in future relative position
* Consequently the accuracy that matters isn’t in position but in velocity relative to every potentially conflicting object

* Each of those relative velocities can be in error by 10 m/sec or more

* Guidance decisions made for collision avoidance must be prepared in advance
* Even TCAS, with its sudden climb/dive maneuvers (which recently produced a news story about screaming passengers), plans a minute and a half ahead
* With 10 m/sec velocity error that gives 900 meters uncertainty in projected miss distance — no higher math, just simple arithmetic (see the sketch after this list)
* That simple calculation is very far from being a containment limit
* Only containment limits based on conservative or realistic statistics can provide confidence for safety

* A containment limit can’t just be 3 sigma; for low enough collision probability with non-Gaussian distributions, ten sigma could still be insufficient.
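
The arithmetic behind those bullets, sketched below; the 90-second look-ahead and the sigma multipliers are the illustrative values from the list, not a containment analysis.

```python
look_ahead = 90.0      # seconds: TCAS plans roughly a minute and a half ahead

for vel_error in (10.0, 1.0):               # m/s: rule minimum vs. "more likely"
    projected = look_ahead * vel_error      # meters of projected-miss uncertainty
    print("velocity error %4.1f m/s -> %5.0f m projected uncertainty"
          % (vel_error, projected))
    for k in (3, 10):                       # sigma multipliers toward a containment limit
        print("   x %2d sigma -> %6.0f m containment-style bound" % (k, k * projected))
```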

Combined weight of these factors calls for assessment of prospects with future (tripled) air traffic plus UAVs. It has been asserted that, rather than ten m/sec, one m/sec is much more likely. As noted earlier, “much more likely” is far from sufficient reassurance. Furthermore, multiplying time-to-closest-approach by 1 m/sec, and amplifying that result by enough to produce a credible containment limit, still produces unacceptably large uncertainty in projected miss distance. So expect either (1) frequent TCAS climb/dives or (2) guidance commands generated for safe separation that produce enormous deviations from what’s really needed. Forget closer spacing.

Add to that runway incursions, well over a thousand per year and increasing — and everything said about the need to avoid other objects applies to maritime (shoals) as well as airborne conflicts: Amoco-Cadiz, Exxon-Valdez, … .

These points are covered by presentations and documentation from decades back.  The best possible sense-&-avoid strategy is incomplete without precise relative (not absolute) future (i.e., at closest approach, not current) position — which translates into velocity accuracies expressed in cm/sec (not meters/sec).  The industry has much to do before that becomes familiar — let alone the norm that containment requires.  A step in this direction is offered by flight results (pages 221-234, Fall 2013 issue of Institute-of-Navigation Journal).

How reliable is reliable enough?

 

GPS is by far the best-ever system for both navigation and timing. Recognition of that is essentially universal. Less widely recognized are the ramifications of growing dependence on GPS, in both communications and navigation. This discussion will concentrate on the latter, highlighting attendant risk in flight. Although extensive deliberation already exists, I’ll presume to offer my experience. Immediately it is acknowledged that the risk is low; but now let’s ask what is low enough. While every effort is made here to avoid an alarmist tone, answering that question calls for an unflinching look at potential consequences if the gamble ever failed.

The above paragraph opens a broader (two-page) discussion on this same site.  That expanded discussion addresses several familiar topics; GPS, for example, lives up to expectations, brilliantly performing as advertised. Since no system can be perfect, the industry uses firmly established methods, supported by widely documented successful results.

Backup for GPS, strongly urged in the widely acclaimed 2001 Volpe report, remains incomplete. Continuation of this condition calls to mind the Titanic or the 2008 financial fiasco. Meanwhile, shortcomings of GPS integrity tests are described and inescapably demonstrated by citing a document from the spring of 2000, plus a history of flightworthiness improperly bestowed, with proprietary rights accepted for algorithms and tests (rather than the rigor advocated two decades ago; #57 of the publication list). A subsequently documented panel discussion shows the extent of unpreparedness in the legal realm.

While bending over backwards to acknowledge that disaster is unlikely, an unflinching look reviews potential though improbable consequences. The meaning of G in GPS suggests that “unlikely” isn’t enough.

Although this is old information, realization has not been widespread. Also, present plans for upgrading to Automatic Dependent Surveillance Broadcast (ADSB) raise new concerns (#83 of the publication list, plus various blogs on this site related to collision avoidance, runway incursions, … ). This dialogue is prompted by considerations of safety. Again, “low” likelihood combined with absence of a calamity thus far offers no guarantee. I advocate a revisit of this issue with all its ramifications.

An earlier post cited a detailed analysis for deformations of a 3-D structure composed from GPS monitoring stations surrounding the area affected by the 2011 Tohoku quake.  That full development is now featured on the cover of, and also within, the August 2013 issue of Coordinates magazine.  A summary is available here for those desiring only a capsule review.

An intense discussion among LinkedIn UAV group members involves several important topics, including the source of a one-out-of-a-billion requirement for probability of a mishap.  The source is thus far unidentified.

I can shed some light on the genesis of another related requirement.  From within the RTCA SC-195 (GPS Integrity) Working Group for FDI/FDE (Fault Detection and Isolation / Fault Detection and Exclusion) in the 1990s, parameters were used to establish the Missed Detection requirement as follows (a short arithmetic check appears after the list):
* From records obtained as far back as possible (1959) there were over 333 million flight-hours nationwide between 1959 and 1990.
* In the (inevitably imperfect) real world, the maximum allowable number of hull-loss accidents in 30 years cannot be specified as zero; so that maximum allowable number was set to one, producing 3 billionths per flight-hour
* The number used for mean time between loss of GPS integrity was 18 years.
* Probability of an unannounced SV (satellite) malfunction is then 1 − exp{ −1/(18×365×24) } ≈ 6 millionths per hour per SV.
* Since 6 SVs are needed for FDE, that probability is multiplied by 6, producing 36 millionths per hour as the probability of an unannounced SV malfunction in any SV among those chosen for FDE.
* Probability of an undetected unannounced SV malfunction is then  36 millionths per hour multiplied by Missed Detection probability.
* With an incident/accident ratio slightly below 1/10, a value of 0.001 for Missed Detection probability satisfies the 3 billionths per flight-hour requirement.
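
A short arithmetic check of the numbers in that list; the final step uses an assumed accident-per-incident factor of 0.075 as a stand-in for “slightly below 1/10.”

```python
import math

flight_hours = 333e6                  # 1959-1990 nationwide flight-hours
hull_loss_target = 1 / flight_hours   # one allowed loss -> ~3 billionths per hour

mtbf_hours = 18 * 365 * 24            # 18-year mean time between integrity loss
p_sv = 1 - math.exp(-1 / mtbf_hours)  # ~6 millionths per hour per SV
p_any = 6 * p_sv                      # any of the 6 SVs used for FDE
                                      # (the list rounds this to 36 millionths/hr)
p_missed = 0.001                      # Missed Detection probability
accidents_per_incident = 0.075        # assumed stand-in for "slightly below 1/10"

p_accident = p_any * p_missed * accidents_per_incident
print("target: %.1e /hr   achieved: %.1e /hr" % (hull_loss_target, p_accident))
```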

None of this is intended to signify thoroughness in the genesis of decisions affecting flight safety requirements. Neither demands for rigorous validation in early 1994, nor an attempt to facilitate meeting those demands — via replacement of GO/No-GO testing by quantitative assessment — met the “collective-will” acceptance criteria. History related to that, not reassuring, is recounted on page 5 of a synopsis offered here and page 127 of GNSS Aided Navigation & Tracking. The references just cited, combined with additional references within them, address a challenging topic: how to substantiate, with high confidence, satisfaction of very low probabilities.  There are methods, using probability scaling, not yet accepted.

Returning to the original question that prompted this blog: It might be uncovered — possibly from some remote source — that the number with a mysterious origin was supported, at one time, by some comparable logic.