Net-Centric Air Traffic Management System Explained
Net-centric, in its most common definition, refers to “participation as a part of a continuously evolving, complex community of people, devices, information and services interconnected by a communications network to optimise resource management and provide superior information on events and conditions needed to empower decision makers.” It will be clear from the definition that “net-centric” does not refer to a network as such. It is a term that covers all elements constituting the environment referred to as “net-centric”.
Exchanges between members of the community are based not on cumbersome individual interfaces and point-to-point connections but on a flexible network paradigm that never hinders the evolution of the net-centric community.
Net-centricity promotes a “many-to-many” exchange of data, enabling a multiplicity of users and applications to make use of the same data which in itself extends way beyond the traditional, predefined and package oriented data set while still being standardised sufficiently to ensure global interoperability. The aim of a net-centric system is to make all data visible, available and usable, when needed and where needed, to accelerate and improve the decision making process.
In a net-centric environment, unanticipated but authorised users can find and use data more quickly. The net-centric environment is populated with all data on a “post and share before processing” basis enabling authorised users and applications to access data without wait time for processing, exploitation and dissemination. This approach also enables vendors to develop value added services, tailored to specific needs but still based on the shared data.
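The "post and share before processing" idea described above can be sketched as a simple publish/subscribe pattern, in which producers post raw data once and any number of authorised consumers receive it without point-to-point interfaces. This is a minimal illustration, not an actual SWIM design; the class and topic names are invented for the example.

```python
from collections import defaultdict

class SharedDataPool:
    """Minimal sketch of a 'post and share before processing' pool:
    producers post raw data once; any number of authorised consumers
    receive it, with no point-to-point interfaces between them."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks
        self.authorised = set()                # consumer ids allowed access

    def authorise(self, consumer_id):
        self.authorised.add(consumer_id)

    def subscribe(self, consumer_id, topic, callback):
        # Only authorised (but possibly unanticipated) users may subscribe.
        if consumer_id not in self.authorised:
            raise PermissionError(f"{consumer_id} is not an authorised user")
        self.subscribers[topic].append(callback)

    def post(self, topic, data):
        # Data is shared as-is; each consumer processes it for its own needs.
        for callback in self.subscribers[topic]:
            callback(data)
```

Note that the pool itself does no processing: exploitation of the shared data is left to each consumer, which is what allows vendors to build value-added services on top of the same data.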
In the context of Air Traffic Management (ATM), the data to be provided is that concerning the state (past, present and future) of the ATM Network. Participants in this complex community created by the net-centric concept can make use of a vastly enlarged scope of acceptable data sources and data types (aircraft platforms, airspace user systems, etc.) while their own data also reaches the community on a level never previously achieved.
How are decisions different in a net-centric environment?
Information sharing, and the end-user applications it enables, is the most important enabler of collaborative decision making. The more complete the information being shared and the more thoroughly it is accessible to the community involved, the higher the benefit potential. In a traditional environment, decisions are often arbitrary and their effects are not fully transparent to the partners involved.
Information sharing on a limited scale (as is the case in the mainly local information sharing hitherto implemented) results in a substantial improvement in the quality of decisions but this is mainly local and improvements in the overall ATM Network are consequential rather than direct.
If the ATM Network is built using the net-centric approach, decisions are empowered on the basis of information available in the totality of the net-centric environment and interaction among members of the community, irrespective of their role or location, can be based on need rather than feasibility.
Since awareness of the state (past, present or future) of the ATM Network is not limited by lack of involvement of any part as such, finding out the likely or actual consequences of decisions is facilitated, providing an important feed-back loop that further improves the quality of decisions on all levels.
Looking at things from the collaborative decision making (CDM) perspective, it is important to realise that net-centricity is not something created for the sole purpose of making CDM better. Net-centricity is a feature of the complete ATM system design, providing different benefits to different aspects of air traffic management operations. It is when collaboration in decision making exploits also the facilities made possible by the overall net-centric ATM Network, that the superior quality of decisions becomes truly visible.
The concept of services
In traditional system design, information technology (IT) was often driving developments and the functionality being provided in some cases became a limitation on the business it was meant to support. Service orientation is the necessary step to separate the business processes from the IT processes and to enable business considerations to drive the underlying IT requirements. Aligning IT to the business rather than the other way round improves business agility and efficiency.
“Service” in this context is defined as “the delivery of a capability in line with published characteristics, including policies.” This refers to the ATM services required and not the underlying (technical) supporting services and physical assets that need to be deployed. In other words, service refers to the business services and not the information technology services.
Well designed business services must exhibit a number of characteristics that describe the service being offered sufficiently well for the service consumer(s) to clearly understand the service and hence want to make use of it.
On the business level, contracts and service level agreements that put the service in the proper context are very important as they cover not only the function(s) that will be performed but also the non-functional terms and conditions to which the consumer and provider have agreed.
There are several business processes that can be identified in the context of air traffic management. Some are related to the aircraft themselves (e.g. turn-round), others concern the passengers and their baggage. These and all other business processes require specific services to progress and complete in accordance with the business objectives of the process owner.
Cleaning and refuelling of the aircraft, passenger check-in, security checking, etc. are just a few examples of the business services that need to be provided in order to achieve the objective, in this case a timely and safe departure.
When viewed on an enterprise level, a given service once defined is often reusable across the enterprise where identical or similar processes are found, resulting in a major potential for cost saving.
The services so defined will then set the requirements for the underlying IT support.
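The notion of a service defined by published characteristics, including its non-functional terms, can be sketched as a small data structure. The field names and the refuelling example below are hypothetical, chosen to mirror the turn-round example in the text; a real service description would of course be far richer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceDescription:
    """Hypothetical sketch of a published business-service description:
    the capability delivered plus the non-functional terms (service
    levels, policies) a consumer needs in order to decide to use it."""
    name: str
    capability: str          # the function the service performs
    availability_pct: float  # agreed service level
    max_response_s: float    # agreed service level
    policies: tuple = ()     # e.g. access conditions, priority rules

# Example: one business service supporting the turn-round process.
refuelling = ServiceDescription(
    name="AircraftRefuelling",
    capability="Deliver fuel to the stand within the turn-round window",
    availability_pct=99.5,
    max_response_s=900.0,
    policies=("safety-certified supplier", "priority for schedule recovery"),
)
```

The point of the sketch is the separation of concerns: nothing in the description says how the underlying IT delivers the capability, which is exactly what lets business considerations drive the IT requirements rather than the other way round.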
The effects of net-centric integration
The term “integration” is often associated with “centralisation” and the elimination/rationalisation of facilities. While from an economic perspective integration may indeed mean all of the above, net-centric integration is about empowering better decision making through the creation of the complex, networked community of people, devices, information and services that generate benefits to all members of the community without necessarily changing the mapping (nature, number and location) of the community members.
At the same time, net-centric integration enables superior business agility and flexibility so that community members may evolve and change (drop out or new ones come in) in response to the changing needs of the users of the system concerned.
In the net-centric context it is not integration as such that changes the enterprise landscape. Such changes, if any, are the result of the economic imperatives that need to be met and which can now be met based on the improved business agility.
The end-user aspects of net-centric operations
One of the less understood aspects of traditional decision making is that it is not really possible to tell when decisions are based on less than full and/or correct information. The garbage in/garbage out principle applies also to the decision making process.
At the same time, the effects of less than good decisions may not be immediately visible. In many cases, poor decisions will affect the efficiency of the overall operation without the negative effects even being traceable to individual decisions. So, while everyone may be doing their very best, the result may still be far short of the quality that would be otherwise achievable.
When the scope and quality of data upon which decisions are based is expanded and improved, the quality of decisions improves almost automatically. The decision makers will notice the expanded possibilities and ultimately the success of the enterprise will also improve in a visible way.
When net-centric operations are introduced, the potential for improvement and the options for achieving the improvement multiply considerably. In the more restricted environment, end-users will have been asking for more information and tools to make using data easier. More often than not, their wish went unfulfilled due to lack of data and/or poor quality and the consequent poor performance of the tools that may have been created.
The shared environment under net-centric operations brings all the data anyone may ever wish to have. The services are defined on the basis of the business needs and will also support the tools end-users need to interact properly with the net-centric environment, integrating their individual decision making processes into a coherent whole.
In a way a well implemented net-centric system is transparent to the end-users. In particular, they do not need to concern themselves with the location of data they require or the quality thereof. Information management, that is part of the net-centric environment, takes care of finding the information needed and also its quality assurance.
End-user applications are the most visible part of net-centric operations and they can be built to satisfy end-user needs in respect of any process that needs to be handled.
In the ATM context, vastly improved controller decision making tools, safety nets and trajectory calculation are only a few examples of the possible benefits.
The institutional implications of net-centric operations
International air navigation is by definition a highly regulated environment and regulations provide some of the most important pillars of both safety and interoperability. The net-centric and service oriented future ATM environment possesses a number of aspects which by themselves provide powerful drivers for proper regulation.
It is important to note that the institutional issues associated with net-centric operations are wider than just CDM and hence early efforts to address the CDM related aspects will benefit the whole of the ATM enterprise. The items of particular relevance are summarised below:
#. Wide scope of information contributors – The information needs of the future ATM Network, including the scope of that information, will result in a multitude of new information sources/contributors and/or new types of information being obtained from various information sources.
#. Air and ground integration – In the traditional ATM set-up, the coupling between ground and airborne systems is normally very loose or non-existent. Once the net-centric ATM Network is realised and aircraft become nodes on the network, a completely new regulatory-target regime is created in the form of the integrated air/ground ATM elements.
#. Information sharing – The value of using shared information is one of the main reasons why System Wide Information Management (SWIM) for the future net-centric ATM environment is being defined. There are however legitimate requirements for protecting some information in one or more of several ways, including de-identification of the source, limiting access, etc.
#. Integration of diverse airspace use activities – Airspace is used for various purposes and civil aviation is just one of those. Specific military usage (not all of which involves aircraft operations) as well as various civilian projects and missions employ information that is even more sensitive than the normal business or security sensitive information categories.
Their proper protection is essential if the military and other operators generating such sensitive information are to be integrated into the overall ATM process. This aspect poses a specific challenge since not only is the information possibly in a military/State security domain but the regulatory domains may also be nested in different organisations that need to be brought together for and under the SWIM umbrella.
#. Disappearance of the difference between voice and data – In the mid- to longer time frames, the expected traffic levels will make the move to almost exclusive use of digital link communications inevitable. This does not mean the disappearance of voice communications on the end-user level.
However, a reliable communications system that can serve the voice and data needs of the future ATM environment is by definition digital and hence even voice messages will be transferred via digital means. Hence a convergence of the regulatory regimes for voice and data communications will be inevitable.
#. Global interoperability – Aeronautical information has always been global in nature but the strongly limited access and product oriented philosophy has contained the issues of global interoperability. The net-centric approach of the new ATM environment will create large islands of shared information which must nevertheless be able to interoperate with each other as well as with legacy environments, constituting a new, global need for proper regulatory regimes.
#. Common information pipes for passenger and operational communications – In the traditional analogue environment, aviation has enjoyed dedicated communications means and this tradition was carried over to a certain extent also into the new digital communications technologies.
The dedicated “pipe” in air/ground communications is certainly a reality today but the same cannot be said of the ground-ground communications links. The early point to point connections have been replaced in most applications by leased lines which, for substantial segments, are in fact shared with other, often not aviation, users.
The drivers behind this change are obviously cost effectiveness considerations. Although early attempts to provide in-flight passenger connectivity have not proved the commercial success many had forecast, it is already visible that in the not too distant future, personal communications needs will evolve to the point where people will demand uninterrupted connectivity even on relatively short flights.
Since such demands will always fetch a premium price, it stands to reason that combining the operational and passenger connectivity needs onto a single air/ground pipe could be commercially attractive. While the technology to do this safely will certainly be available, the regulatory aspects will have to be explored in time to ensure that the actual solutions used meet all the safety and other requirements.
#. The value of information – Information is a valuable commodity and in the competitive environment of aviation this commodity is of course sought after by many partners, not only aircraft operators or airports.
The essential safety contribution of information in air traffic management creates an especially complicated web of relationships – some commercial, some not; some State obligations, some voluntary – that needs to be properly regulated with a view to ensuring cost recovery while not discouraging information use.
#. Cost effectiveness – Although not always thought of as a driver for regulation, a proper regulatory environment will favour cost-effective, user oriented solutions.
#. Training and personnel licensing – The information sharing environment of SWIM will require experts who are conversant not only with the requirements of air traffic management and aircraft operations but also the information technology aspects of the new approach to managing information.
This has implications in the construction and approval of training syllabuses, examination fulfilment criteria as well as the qualification requirements. The need for refresher/recurrent training also grows and needs to be part of the overall regulatory regime.
#. Standardisation – System wide sharing of information in a net-centric environment requires that the data be the subject of proper standardisation on all levels. This is the key to achieving global interoperability in the technical as well as the service/operational sense. The development and use of the necessary standards can only be realised under a proper regulatory regime.
All the above aspects imply the creation of a regulatory regime that is aligned with the specific needs of a net-centric operation and which is able to regulate for safety and proper performance, including economic performance, appropriate for the new digital environment.
Trying to apply traditional methods of regulation without taking the new realities into account is counterproductive and must be avoided. This is an important message for both the regulators and the regulated.
The aspects of regulation to be considered include:
- Safety
- Security
- Information interoperability
- Service level interoperability
- Physical interoperability
- Economics
In terms of who should be regulated, thought should be given to at least:
- The State as data provider
- Licensed providers of services, including network services
- Licensed data sources
- Licensed providers of end-user applications
- User credentials and trusted users
It is important to answer also the question: who should be the regulator? This must be agreed in terms of:
- International rules and global oversight
- Licensing rules and global oversight
The types of regulatory activities that need to be put in place concern mainly compliance verification and certification; quality maintenance; and enforcement and penalties.
As mentioned already, the above institutional aspects concern more than just CDM. However, for CDM, and in particular information sharing, to work in the net-centric environment, they need to be addressed as a prerequisite of implementation.
The technical implications of net-centric operations
On the conceptual level, net-centric operations mean the sharing of superior quality information as part of a community and acting on that information to improve decisions for the benefit of the individual as well as for the network (the networked community). Obviously, this type of operation must be enabled by a proper technical infrastructure.
This technical infrastructure is often thought of as a network with the required bandwidth and reliability. It is true that replacing the one-to-one connections of legacy systems with the many-to-many relationships of the net-centric environment does require a powerful network that fully meets all the quality requirements, but there is much more to net-centricity than this.
The management of the shared data pool, including currency, access rights, quality control, etc. brings in a layer of technical requirements that sit higher than the network as such.
If we then define ‘information’ as ‘data put in context’, it is easy to see that creating information from the shared data constitutes yet another layer of required technical solutions. These are often referred to as intelligent end-user applications: tools which end-users can call upon to perform the tasks needed to complete their missions successfully. End-users may be pilots, air traffic controllers, flight dispatchers, handling agents or any other person or system with a need for the shared information.
In all cases, the end-user applications collect and collate the data needed to create the information required. This then may be a synthetic display of the airport on an EFB, a trajectory on a what-if tool display or a list of arrivals for the taxi company and so on.
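The step from shared data to information can be illustrated with a toy end-user application. The taxi-company arrivals list mentioned above might, in a very reduced form, look like the function below; the field names are invented for the example and the "shared flights" input stands in for data drawn from the net-centric pool.

```python
def arrivals_board(shared_flights, airport, within_minutes, now_minute):
    """Hypothetical end-user application sketch: turn shared raw flight
    data into 'information' (data put in context) - here, a list of
    imminent arrivals such as a taxi company might use."""
    arrivals = [
        f for f in shared_flights
        if f["destination"] == airport
        and 0 <= f["eta_minute"] - now_minute <= within_minutes
    ]
    # Context = one airport, one time window, sorted by arrival time.
    return sorted(arrivals, key=lambda f: f["eta_minute"])
```

The same shared data, collated differently, would yield a controller's sequencing list or a handling agent's stand plan; only the context applied by the application changes.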
End-user applications are scalable to fit, both in functionality and cost, the specific needs of the end-user for whom they are created. This scalability enables the end-user applications to run on different networked devices from simple PDAs through airlines systems to on-board equipment.
It should be noted that one of the most important characteristics of a net-centric environment that technical solutions must support is that the requirements on equipment are driven by the services/functionality the equipment must provide and NOT by its actual location in the network.
As an example, the integrity of the data used to build a trajectory and the quality of the application used to manipulate/interact with the trajectory will depend on the use that will be made of the trajectory and not per se on whether the application is running on the ground or in an aircraft.
This adaptability of the technical solutions to the actual needs (rather than location in the network) leads to important cost saving opportunities.
Net-centricity – the essence of the future
The net-centric approach to system design is not a silver bullet. It is just the environment that enables properly managed information to be exploited to the full and provide the enterprise with the agility it needs to constantly adapt to the changing world for the benefit of the customers and the enterprise itself.
It is the end-user applications built to work in the net-centric environment that come closest to being the silver bullets…
Furniture Design and Ergonomics for Air Traffic Control Rooms
Air traffic control (ATC) and airport security are two of the most critical operations at airport facilities. The nerve center of these activities is the air traffic control room. The air traffic controllers guide aircraft on the ground and in the air, and perform important security and flight operations.
The air traffic controller has great responsibility and for this reason, it’s critical that the consoles be as well designed and equipped as possible.
Factors Affecting Design of Air Traffic Control Room Consoles
To start, there are no standard designs for airport control rooms; therefore the following factors should be considered.
- Overall shape and size of the tower control and command room
- Work and walkway space
- Lighting and placement of communications equipment
- Space requirements for staff and equipment
All the aforementioned factors influence the design of modern ATC control room consoles.
Ergonomic Solutions for Modern Control Room Furniture Design
The primary goal for control and command center consoles and surrounding furniture is to provide comfort and support to air traffic controllers working extended periods of time under continuous stress. Operator console furniture should be designed such that all equipment is within easy reach and with optimal viewing angles and sight lines toward coworkers and visual aids located throughout the command center.
These ergonomic concepts are critical to successful furniture design. Although people vary greatly in shape and size, console furniture is designed to the “human norm” with consideration to adjustability and ADA requirements for wheelchair access.
Sound ergonomics dictates that operator consoles should be designed for ease of use with the least impact on how the operator works. The operator console should be an instrument with adjustable features to meet the needs of the individual. These features can include adjustable monitor arms, task lights and phones. Adjustable equipment should be within easy reach and able to lift above the desktop.
Lifted equipment maximizes desktop space and helps keep the desktop clean and uncluttered. An important ergonomic feature often overlooked in ATC is the adjustable desktop lift for operators who need to stand but can’t leave the space during normal activities. Unfortunately, the operational concept and stationary equipment found in most tower control rooms don’t allow for adjustable work surfaces.
With sound application of modern ergonomics and an understanding of the work being performed, control room furniture can be designed to enhance operational performance and safety as well as influence operator job satisfaction and longevity.
Professional space planners and furniture designer-manufacturers should be engaged to provide the solutions necessary to best meet operational needs within budget.
Is an Aircraft an Aeroplane or the Other Way Round? The Importance of Proper Terminology
The word game
A lot of air traffic management related material passes through our hands, usually to be checked with a view to ensuring quality of content and consistency of the terminology. There is a disturbing trend that is becoming more and more evident with the passage of time. The documents show a deteriorating level of quality in respect of terminology use.
Why is this a problem? Unless they have been sensitized to the issue, the authors of those documents may not feel particularly disturbed by the fact that they use the terms aircraft, aeroplane or airplane interchangeably in their text; they may even feel that the varied use of words reflects better writing style. But in technical documents, the terms used must all have their precise definition and it is not enough to find a given word in a Webster’s Dictionary.
Let’s have a look at these three words, aircraft, aeroplane, and airplane. They are all English words and they all mean something that flies. Very true. But there are many things that “fly”, from hot air balloons to helicopters and, depending on how you define “fly”, even hovercraft. So how do we know which exactly a given text refers to if it is not clear from the context?
If you see a piece of text that says “a flashing white light shall be displayed on all aircraft” and then another one that says “a flashing white light shall be displayed on all aeroplanes” and you own a helicopter, a glider and a hot air balloon, which one would you need to equip based on the first requirement? And the second?
Although I assume you know the answer without the explanation that follows, it is still interesting to look at these terms in more detail.
First and foremost, we have to say good-bye to the term “airplane”, at least in the international context. Only aircraft and aeroplane have been defined by the International Civil Aviation Organization (ICAO).
An aircraft is any machine that can derive support in the atmosphere from the reaction of the air other than the reactions of the air against the earth’s surface.
An aeroplane is a power-driven heavier-than-air aircraft, deriving its lift in flight chiefly from aerodynamic reactions on surfaces which remain fixed under given conditions of flight.
So what do these definitions tell us? A hovercraft is not an aircraft (reactions of the air against the earth’s surface) and a glider is not an aeroplane (power driven) but it is an aircraft. A balloon is an aircraft but it is not an aeroplane… and so on.
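The two ICAO definitions quoted above can be expressed as a small classification check. The attribute names in the dictionary are illustrative simplifications of the definitions, not ICAO terminology.

```python
def icao_category(machine):
    """Sketch classifying a flying machine against the ICAO definitions
    of 'aircraft' and 'aeroplane'; dict keys are illustrative only."""
    # An aircraft derives support from the reaction of the air, but NOT
    # from reactions of the air against the earth's surface
    # (which is what excludes the hovercraft).
    is_aircraft = (machine["derives_support_from_air"]
                   and not machine["surface_effect_only"])
    # An aeroplane is additionally power-driven, heavier than air, and
    # derives its lift chiefly from surfaces that remain fixed.
    is_aeroplane = (is_aircraft
                    and machine["power_driven"]
                    and machine["heavier_than_air"]
                    and machine["fixed_wing"])
    return is_aircraft, is_aeroplane
```

Running a glider through this check gives (aircraft: yes, aeroplane: no), a hovercraft gives (no, no), exactly as the text argues.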
As you can see, expressing requirements, infrastructure suitability and services desired does need proper terminology use, otherwise things quickly become ambiguous, leading to misunderstanding and endless discussions.
We used the terms aircraft and aeroplane (the subject of the most common errors) as examples but there are scores of other terms which, if used improperly or inconsistently, can lead to serious problems of understanding.
A few simple rules can help
Proper terminology use is not rocket science. It needs good knowledge of the subject and a bit of discipline. Here are a few simple rules that can help.
• If there is an ICAO defined term for something, use it. ICAO has developed definitions for the terms it uses in the provisions that aviation the world over follows. Using terms as defined by ICAO provides immediate benefits in terms of consistency with ICAO documents and documents derived from them. Those definitions are also consistent among themselves.
• If there is no ICAO definition but a definition from another big organization, use it. In some cases ICAO may be lagging behind developments and they may not have a definition (yet) for a term or the term is not used in the ICAO provisions.
Some other organization may however have developed a definition that is widely accepted or even standardized. In such cases, this recognized definition should be used and the source clearly identified. There may be several definitions from different sources… use the one that appears to be the most appropriate but use it everywhere consistently.
• Create your own definition. In some cases you may find that a term nobody has yet defined needs to be understood in a particular way and only that way. Create your own definition and use it consistently across your documents. It is also a good idea to try and promote your new definition. If you had a need for it, so might others. The more widely it is used, the better for overall consistency.
• When a term has multiple meanings. A great example of this is air-side and land-side, two terms that divide an airport in two, one you might call the public area and one restricted to passengers and employees only. The trouble is, there are at least two schools of thought on where the dividing line is between the air-side and the land-side.
Although the dividing line is always artificial and arbitrary, its actual position does make a difference to the processes that extend across the division. In such cases feel free to adopt whichever dividing line position is best for you, however, always state clearly where the boundary between air-side and land-side is (or any other aspect the given term requires). A clear indication mitigates the negative effects of this kind of multiple usage.
• Be consistent. Perhaps the most important rule is to be consistent. There is only one thing worse than using undefined terms or terms with the wrong definition and that is using terms inconsistently across a document. Inconsistent use of technical terms is the surest way of confusing the reader.
What about abbreviations?
Few disciplines in the world are as prolific at creating abbreviations as aviation. When we speak, the uninitiated may think we are using some kind of secret code language… Worse, we tend to assume that each of us knows all the abbreviations from every part of the business while in fact CUTE (Common User Terminal Equipment) may mean nothing to an air traffic controller while ATIS (Automatic Terminal Information Service) may sound like a four letter word to a check-in agent.
To managers higher up, who may have come from the financial world, neither CUTE nor ATIS may say much except if there is a price put against them… So what to do with abbreviations?
Here again the main rules are: use accepted abbreviations whenever possible and be consistent at all times. Include a list of abbreviations in all technical documents and consider writing the words out in full (followed by the abbreviation) when first used in the text.
Avoid creating new abbreviations. Of course this is not always possible, if nothing else, there are new working groups, new processes, new equipment and they all crave their own, easy to remember names. So, go ahead and come up with new abbreviations but do try to avoid re-using abbreviations that already have a well established meaning. You may feel that your field is stronger and you will eventually squeeze out the other guy but believe me, not paying attention to this will only confuse everybody.
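The "write it out in full on first use, then list it" rule is mechanical enough that it can even be checked automatically. The sketch below is a deliberately simple illustration of such a check; it only recognises the common "Full Words (ABBR)" style and would need refinement for real documents.

```python
import re

def check_abbreviations(text, known):
    """Sketch of a terminology consistency check: flag all-caps
    abbreviations that appear without a full write-out at first use.
    'known' maps abbreviation -> expansion, as in a document's
    abbreviation list."""
    problems = []
    for abbr in sorted(set(re.findall(r"\b[A-Z]{2,}\b", text))):
        expansion = known.get(abbr)
        first = text.find(abbr)
        # Accept the 'Full Words (ABBR)' style at the first occurrence.
        ok = (expansion is not None
              and f"{expansion} ({abbr})" in text[: first + len(abbr) + 1])
        if not ok:
            problems.append(abbr)
    return problems
```

Such a check will not catch a wrong definition, of course, but it does catch the most common lapse: an abbreviation dropped into the text with no expansion anywhere.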
What if you are writing in your national language?
Whether you are writing in English or your national language, the guidelines are the same. But, they may not be so easily implemented if the terminology has not yet been introduced into your language to the same level of detail as it is in English.
There may be opportunities to be a pioneer in enriching the local language with the required new terms… In some cases trying to force consistency and new terms onto the professional writing scene may not be easy or appreciated by your peers. Use good arguments and examples similar to those above to convince them of the importance of proper terminology use.
The responsibility of SESAR, NextGen and SWIM
Experts in Europe and the United States are busy writing the blueprints for the next generation air traffic management systems, SESAR and NextGen respectively. Those systems will introduce new concepts, new technologies and new processes, each bringing with them their specific terms and abbreviations.
System Wide Information Management (SWIM) draws heavily on ideas first put forward in the general information technology field, applying them in an aviation context.
All the above activities will be generating tons of new documents which must be consistent across the board, both in terms of the old definitions and abbreviations and the new ones they will be introducing. Their responsibility is huge if we consider that the SESAR and NextGen documents will determine for decades to come what is called what and what we mean by what.
Get it wrong or inconsistent and future generations will struggle with the inconsistent, diverging terminology for a long time to come.
The new documents we see today are cause for concern and show signs of people ignoring the simplest rules of terminology use. They must remember that at the end of the day, we will all need to know beyond a shadow of a doubt whether we need to bolt that flashing white light onto the particular flying machine we own. Only consistent, proper terminology can help in deciding…
Same Time, Same Place, Same Level – Chapter 12
Enter the doctors. Inept aptitude tests…
Air traffic controllers, just like flight crew, have to meet rigorous health standards to be allowed to practice their trade. At most places, even before entering the training course, prospective controllers are sent to specialized medical institutions where a careful evaluation is made, not only to check that the candidate has the required number of ears and eyes but also to make sure, via various aptitude tests, that his personality is the kind that can, in theory, be “corrupted” to become that of an air traffic controller.
When these aptitude tests were still fairly new, a lot of people, not only controllers, believed that they were a waste of time. This negative opinion was partly due to the shrinks’ limited experience in matters of air traffic control, which tended to produce rather poor results at the end of the selection process. Though there were places where things had turned out better, the first encounter with scientific selection had been a definite disaster in many places.
The psychologists assigned to the job had about as much awareness of flying as a cabbage growing under the final track… The poor dears wanted to set up a grading, a yardstick against which new applicants could eventually be measured, and to this end a bunch of experienced controllers were selected on whom they would run their tests, with the results to be considered as falling into the acceptable level of performance.
On arrival at the institute, first they had to answer a series of questions about the job itself. Now imagine a neurosurgeon explaining a complex brain operation to a gardener and you can picture the situation. They were simply not on the same frequency. I am sure their first impression had been that of dealing with very, very strange individuals…
Next came the gadgets. These mechanical and electrical contraptions, they were told, were in use to test truck drivers, railway engine drivers and the like, with excellent results. All of them nice, aviation types, just like ourselves, they thought.
The final test scores probably had something to do with the fact that they just could not take the whole thing seriously. Anyway, the shrinks made up their reports on each of them and these were compared with the empirical assessment produced earlier by management.
Thank heaven, the whole exercise had been run anonymously, or else some of the guys would have lost their jobs on the spot. Controllers known to be capable of vectoring heaps of airplanes with ease were shown as being on the dumb side, and even the highest scoring guy was below your average truck driver… It was quite clear the shrinks were looking for all the wrong qualities.
Following this first disaster, there was silence for several years, at the end of which a new set of shrinks entered the ball game, claiming to have new methods for foolproof selection. Without wanting to appear unduly skeptical of their claims I still feel that, no matter how a controller was selected for the job, his or her professional aptitude can only be usefully judged after he or she has issued the first few clearances and radar vectors…
Good ears, good listeners
Controllers need good ears. Communication with pilots and other controllers is mainly done using the spoken word, and any problem in hearing can quickly lead to disaster. The situation is complicated by the fact that airports are noisy places and the crackle coming through the controller’s headset is not exactly HI-FI, either.
Consequently, controllers’ ears tend to suffer a lot, and next to the eyes, they give rise to most of the concerns. It is only natural that the yearly medicals concentrate a lot on making sure that their hearing is as good as it had been when they started working the airwaves.
The most basic check required that five of them stand in line about 5 meters from the doctor, sideways, so that only one ear was pointed toward him, with the other blocked out by their hand. The doctor would then whisper random numbers which they had to repeat correctly.
While standing in line, awaiting their turn, they often wondered whether the old doctor’s hearing measured up to what was required of them. One day a young chap decided to satisfy his curiosity. When his number was whispered by the doctor, I think it had been “fifty-six”, he replied correctly, but in the same low whisper. At first there was no response from the doctor, but then he reacted in the most basic, though very human way. “Eh….??” – well, there was one question answered.
Another test involved a tuning fork. The vibrating instrument was placed on top of the head and the doctor would look at the controller with a questioning eye. Before the very first occasion, experienced colleagues had told the newcomers that the standard reply expected was “I hear the fork on top, in the middle.” They never bothered to check where else the damned thing could possibly be heard, though old timers made sure all new candidates were informed of what they should say.
No wonder each and every one of them invariably passed this particular test, even after the good doctor’s suspicions had been mildly aroused when a guy recited the standard reply even before the fork was placed anywhere near his head….
8.33 kHz Channel Spacing – What is This?
The radio spectrum, a scarce resource
One of the most basic activities in a cockpit is tuning the radio to the assigned frequency of whoever we want to talk to. Contacting ground control, the tower or one’s own company is done by turning a few knobs until the right numbers show in the radio control panel display and we can talk.
Air traffic controllers see the same thing slightly differently. They do not normally have to tune their radios. The proper frequencies for their sector or other working position are pre-set and need no further attention.
With the matter being so pedestrian and the actions so routine, few of us realize that the ability of pilots and controllers to talk to each other is in fact dependent on one of the scarcest resources in aviation, namely the radio spectrum allocated to aviation use.
Many other disciplines have their own radio spectrum and we all jealously guard what we have been given, and for good reason. With so many users wanting to use the radio waves, the incumbents had better watch out or the “use it or lose it” principle kicks in. Luckily, the frequencies most widely used by aviation (118–137 MHz) are not coveted so strongly by others. Our problem is different but no less serious.
VHF fundamentals
VHF is a line-of-sight system. This means that two stations can talk to each other provided that they are tuned to the same frequency and can “see” each other (from a radio point of view). If one of the stations is below the horizon of the other station, communication becomes impossible.
Being tuned to the same frequency means that both stations are tuned to the same pre-defined frequency which is within the aviation band. These pre-defined frequencies are separated by agreed “spaces”, expressed in kHz. The spaces ensure that communications taking place on adjacent pre-defined frequencies do not interfere with each other. And herein lies the problem!
You can only pre-define a limited number of frequencies with the required spacing between them if you are to stay within the aviation band. There are many more sectors, towers and other aeronautical stations that need their own, discrete frequencies than there are frequencies available. So what do we do?
The line-of-sight character of VHF radio waves offers a solution of a kind. You can re-use the frequencies if you ensure that the usage areas of each are separated sufficiently so that no interference occurs. Frequencies only used close to the ground can be re-used much more readily than can those used at higher levels.
The horizon of these latter is much wider and hence aircraft hundreds of miles away might be heard by a center that has nothing to do with it if the frequency assignment is not done properly.
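To put rough numbers on the line-of-sight limit described above, here is a minimal sketch using the standard 4/3-earth radio-horizon approximation (d ≈ 4.12 × √h, with d in kilometres and h in metres); the station and aircraft heights chosen are illustrative assumptions, not figures from the text.

```python
import math

def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate radio horizon using the common 4/3-earth-radius
    model: d ~= 4.12 * sqrt(h), d in km, h in metres."""
    return 4.12 * math.sqrt(antenna_height_m)

def max_vhf_range_km(station_height_m: float, aircraft_height_m: float) -> float:
    """Two stations can 'see' each other (radio-wise) as long as the
    distance between them is less than the sum of their horizons."""
    return radio_horizon_km(station_height_m) + radio_horizon_km(aircraft_height_m)

# Hypothetical example: a 30 m ground antenna and an aircraft at
# roughly FL350 (~10,700 m) -- a range of several hundred kilometres,
# which is why high-level frequencies are so hard to re-use.
print(round(max_vhf_range_km(30, 10_700)), "km")
```

This is also why, as noted above, a frequency used only near the ground can be re-assigned to another airport a comparatively short distance away without interference.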
Reuse not enough? Cut the spacing!
I am not sure who was the first one to bolt a radio on an aircraft, but the idea caught on quickly and soon enough the problem of frequency shortages was born.
Originally the spacing between the frequencies was 200 kHz, providing just 70 channels in the 118–132 MHz band as it was back then (1947). In 1958, the spacing was reduced to 100 kHz, doubling the number of channels to 140.
In 1959 the upper limit of the aviation band was expanded to 136 MHz, giving us another 40 channels, bringing the total to 180.
In 1964, the channel spacing was halved again to 50 kHz, resulting in 360 channels being available.
These dates show not only aviation’s ever increasing hunger for frequencies, but also the evolution of aviation radios. In the 1950s no radio set would have been suitable for work with 50 kHz spacing. By 1964, 50 kHz was the standard with more to come…
The channel spacing was further cut to 25 kHz in 1972, doubling the available channels to 720. Seven years later, in 1979, the upper limit of the aviation band was once again expanded, this time to 137 MHz and this delivered another 40 channels, bringing the total to 760.
In 1995, the proposal was made to reduce the channel spacing to 8.33 kHz. Theoretical number of channels: 2280!
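The channel totals quoted above follow directly from dividing the width of the band by the spacing; a quick sketch to verify the arithmetic, era by era:

```python
def channels(band_start_mhz: float, band_end_mhz: float, spacing_khz: float) -> int:
    """Number of channels that fit in the band at the given spacing."""
    return round((band_end_mhz - band_start_mhz) * 1000 / spacing_khz)

# Band limits and spacings as given in the text.
history = {
    1947: channels(118, 132, 200),      # 70
    1958: channels(118, 132, 100),      # 140
    1959: channels(118, 136, 100),      # 180
    1964: channels(118, 136, 50),       # 360
    1972: channels(118, 136, 25),       # 720
    1979: channels(118, 137, 25),       # 760
    # 8.33 kHz fits three channels into each 25 kHz slot:
    1995: channels(118, 137, 25) * 3,   # 2280
}
for year, total in history.items():
    print(year, total)
```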
This may sound like radio channel nirvana but in real life things are never that simple.
The underlying reasons for the channel hunger
The need for ever more frequencies was driven mainly by the dramatic increase in the number of control sectors in the en-route ATC environment. As traffic grew, air traffic service providers had to split sectors into ever smaller chunks to enable controllers to cope. Each new sector needed its own frequency and most of the sectors were in the upper airspace, hence the re-use distance between identical frequencies was very big. This translated into a seemingly insatiable hunger for ever more discrete frequencies.
By the mid-1990s it became clear that the existing VHF system would not be able to make available the required number of frequencies. This would put an end to the creation of new sectors, severely limiting the ATC system’s ability to handle the increasing air traffic demand.
Curiously, there seemed to be a mismatch in the magnitude of the problem as seen in the US and in Europe.
While traffic density on the Eastern Seaboard of the US was in fact higher than the busiest areas in Europe, the US frequency managers had no problem satisfying the FAA’s demand for new frequencies. At the same time, in Europe, with its lower traffic density, the alarm bells were being sounded that frequency doomsday was nigh.
So what was happening?
To understand this, it is important to remember that frequency managers in European States were part of the communications side of things, often coupled with the old postal monopolies, and they were not really given to international cooperation or worries about aviation’s problems outside their own land. That aviation was no longer a purely domestic affair had apparently not really touched them.
Although the States never formally admitted this, most of the frequency shortage was due to poor management of the available frequencies. Valuable frequencies were dormant, never used or simply left there in the dust after the organizations originally using them had long disappeared.
The airspace users did raise the issue and brought several examples, but to no avail. The local czars of frequency management did not relent and hence there was no other choice but to look at technology based solutions.
The choice between 8.33 kHz channel spacing and VDL Mode 3
While the immediate driver behind the effort to find a solution to the frequency shortage was the fear of skyrocketing delays, experts had been saying since the late 1980s that the complete aviation communications system needed overhaul. The VHF AM voice system and the freshly identified future need for air/ground digital link communications all argued for a common solution that would address the frequency shortage as well as the future communications needs.
Keep in mind that in other areas of communications huge advances were taking place at around the same time while aviation was still trying to make up its mind whether or not to replace a voice communications system that had changed little since the 1940s and which was clearly struggling to keep up with demand.
In the United States a system called VDL Mode 3 was being proposed. This system would have enabled four digital channels to be used on every existing 25 kHz channel and would have provided non-voice data link capability as well. Outside the US, though, there were few believers in the feasibility of this technology, and it has still not been implemented anywhere.
In Europe, the splitting of the channel spacing to 8.33 kHz was being put forward as the best solution. Missing a once in a lifetime opportunity, the industry did not examine any long-term alternatives…
The 8.33 decision and what followed
As mentioned earlier, the airspace users were not at all convinced about the need to spend money on aircraft modifications when in their view the frequency shortage was mainly due to poor management of the aviation spectrum.
It was in this ambivalent mood that the industry gathered to attend the ICAO European Regional Air Navigation Meeting (EUR RAN) in 1994 where proposals to address the frequency shortage were also to be discussed and decisions made.
For the current generation of ATM decision makers it may be of interest to mention how most decisions were made back then. Seeking a solution to the frequency shortage, 8.33 kHz was picked up without ever considering possible alternatives and without looking at cost-benefit aspects, user impact or the longer term communications requirements. Clearly not something to bring back… ever.
The airspace users, with the specter of even more serious delays hanging over their heads and with their protests brushed aside, had no choice but to note the mandate: 8.33 kHz in European upper airspace as of 1st January 1998.
The ICAO European Air Navigation Planning Group (EANPG) was charged with organizing the introduction of the new channel spacing. The EANPG in turn requested EUROCONTROL to develop a transition plan and manage its implementation.
This is a very important detail that needs to be remembered. To this day, airspace users tend to blame EUROCONTROL for the whole 8.33 issue when in fact EUROCONTROL was only the agent appointed by ICAO (the States you may say) to carry out the implementation.
They did an excellent job and it is not EUROCONTROL’s fault that they had to orchestrate the realization of a less than optimal solution. If we consider that EUROCONTROL had to deal with all the ICAO member states in Europe (49) and had to manage the creation of a mixed 25 kHz/8.33 kHz environment, the eventual achievement of the goals is even more laudable.
Mr. Murphy and the 8.33 implementation plan
“If it can go wrong, it will” – states Murphy’s first law and this was certainly true of this implementation.
EUROCONTROL, quite correctly, had decided early on to establish a project oriented organization to handle the matter and they also had the good sense of requesting the participation of outside experts from organizations like IFATCA and IATA to ensure direct links to the end-users of the new system.
Right from the start the project was up against a time problem. With the first project steps being taken only in early 1996, the 1 January 1998 deadline was clearly a big question mark. So, the first delay kicked the deadline back to 1 January 1999 and the second delay to 7 October 1999.
Why the delays? The rate of equipage of course was the primary and decisive factor.
In many mandated aircraft equipage scenarios you see the equipage curve rising slowly in the beginning, as only a few aircraft are fitted, then as the deadline approaches, the curve becomes very steep but usually does not reach 100 % before the mandate date. What does this mean?
Obviously, airspace users do not want to spend money too early and fly around with the new equipment without it bringing any benefits. When the time comes and fitting becomes inevitable, there is a mad rush to equip, which in turn can result in a shortage of equipment and an overloading of the shops performing retrofits. In the end, inevitably, there are aircraft left out in the cold, not being able to meet the mandate!
All of this had happened in the case of 8.33 and then more.
When the project started, there were no 8.33 kHz capable radios on the market. A few pre-production samples had been produced, but nothing anyone could buy. In spite of the clear mandate, the presence of the competing VDL Mode 3 system and the fact that 8.33 would only be required in Europe somehow led the manufacturers to slow product development and not produce anything until their customers came with definite orders.
The customers on the other hand were reluctant to place orders until closer to the mandate deadline which had to be put off as a result of low equipage rates because of a scarcity of radios! A vicious circle if ever there was one… At times meetings of the 8.33 project team had an air of most participants wishing the whole thing would just go away…
Then there were the aircraft themselves. No matter how advanced the new radios were, 8.33 kHz is a very small separation between channels, and trials on various aircraft revealed surprising behaviors. Radios on the Boeing 767 for instance worked well while the doors were open but started to produce interference the moment they were closed…
Controllers were fretting about what would happen if pilots regularly mistuned their radios. True, for the first time ever, the numbers seen on the radio control panel do not show the real frequency of an 8.33 spaced channel and this can be confusing.
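The mismatch between the dialled numbers and the real frequency comes from the channel-naming convention used for 8.33 kHz: each 25 kHz slot carries three 8.33 kHz channels labelled base+0.005, base+0.010 and base+0.015 MHz, sitting at actual offsets of 0, 8.33 and 16.67 kHz. The converter below is an illustrative sketch of that mapping as I understand the ICAO scheme, not code taken from any avionics standard.

```python
def channel_to_frequency_mhz(channel_name: str) -> float:
    """Convert an 8.33 kHz 'channel name', as dialled on the radio
    control panel, to the actual frequency in MHz (sketch only)."""
    khz = round(float(channel_name) * 1000)   # work in whole kHz
    base = (khz // 25) * 25                   # enclosing 25 kHz slot
    offset = khz - base                       # 0, 5, 10, 15 or 20
    slot = {5: 0, 10: 1, 15: 2}.get(offset)
    if slot is None:
        # Names ending .000/.025/.050/.075 are plain 25 kHz channels.
        return khz / 1000
    # The three 8.33 channels sit 25/3 kHz apart within the slot.
    return round((base + slot * 25 / 3) / 1000, 5)

# Dialling "118.010" actually transmits on 118.00833 MHz --
# exactly the kind of discrepancy controllers worried about.
print(channel_to_frequency_mhz("118.010"))
```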
Issues with the new radiotelephony expressions were also on the agenda for a while.
In the end however, the final deadline came and went and the new system worked pretty well. Apart from a few isolated incidents no problems were reported and 8.33 kHz, like any other part of the ATM system, became part of the European scene.
Next steps?
In the meantime, EUROCONTROL has continued to manage the implementation of 8.33 kHz, extending its use also into the lower airspace. They have fulfilled and continue to fulfill the role assigned to them by the EANPG and the benefits specific to 8.33 kHz will no doubt continue to accrue. It is even rumored that the FAA also wants to look into 8.33 kHz channel spacing for introduction in the US.
Did the benefits materialize?
It all depends on how you want to measure the benefits. If the measure is the number of requests for new frequencies that could be accommodated, then the outcome of the exercise is definitely positive. At the very first Frequency Block Planning Meeting held after the introduction of 8.33 kHz channel spacing, 57 of the 59 requests were accommodated, an absolute first. The level of subsequent request satisfactions shows a similar pattern.
It is very likely that a comparison with a “do nothing” scenario would show that investing in 8.33 kHz was not a bad idea.
On the other hand, 8.33 kHz did create the impression that the problem was solved and the motivation to really address the shortcomings of this obsolete communications system has all but disappeared. Back around the time the 8.33 kHz decision was made, it might have been easier to also initiate the development of a new system that would by now provide services to the pilots on a par with what passengers will soon be getting.
As it is, we are left with a legacy system which will be much more difficult to replace on an industry level now, not least because of the sad shape airlines are in these days.
It is a pity that the EUR RAN meeting in 1994 did not have the vision to look beyond the immediate solution to the problem of frequency shortages.