AVIONICS SYSTEMS

The space shuttle avionics system controls, or assists in controlling, most of the shuttle systems. Its functions include automatic determination of the vehicle's status and operational readiness; implementation sequencing and control for the solid rocket boosters and external tank during launch and ascent; performance monitoring; digital data processing; communications and tracking; payload and system management; guidance, navigation and control; and electrical power distribution for the orbiter, external tank and solid rocket boosters.

Automatic vehicle flight control can be used for every phase of the mission except docking, which is a manual operation performed by the flight crew. Manual control, referred to as the control stick steering mode, also is available at all times as a flight crew option.

The avionics equipment is arranged to facilitate checkout, access and replacement with minimal disturbance to other systems. Almost all electrical and electronic equipment is installed in three areas of the orbiter: the flight deck, the three avionics equipment bays in the middeck of the orbiter crew compartment and the three avionics equipment bays in the orbiter aft fuselage. The flight deck of the orbiter crew compartment is the center of avionics activity, both in flight and on the ground. Before launch, the orbiter avionics system is linked to ground support equipment through umbilical connections.

The space shuttle avionics system consists of more than 300 major electronic black boxes located throughout the vehicle, connected by more than 300 miles of electrical wiring. There are approximately 120,400 wire segments and 6,491 connectors in the vehicle. The wiring and connectors weigh approximately 7,000 pounds, wiring alone weighing approximately 4,600 pounds. Total weight of the black boxes, wiring and connectors is approximately 17,116 pounds.
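The weight figures above can be cross-checked with simple arithmetic; the snippet below is only a consistency check of the numbers quoted in the text.

```python
# Consistency check of the quoted avionics weight figures (pounds).
wiring = 4_600                  # wiring alone
wiring_and_connectors = 7_000   # wiring plus connectors
total = 17_116                  # black boxes, wiring and connectors

connectors = wiring_and_connectors - wiring      # implied connector weight
black_boxes = total - wiring_and_connectors      # implied black-box weight
print(connectors, black_boxes)  # 2400 10116
```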

The black boxes are connected to a set of five general-purpose computers through common party lines called data buses. The black boxes offer dual or triple redundancy for every function.

The avionics are designed to withstand multiple failures through redundant hardware and software (computer programs) managed by the complex of five computers; this arrangement is called a fail-operational/fail-safe capability. Fail-operational performance means that, after one failure in a system, redundancy management allows the vehicle to continue on its mission. Fail-safe means that after a second failure, the vehicle still is capable of returning to a landing site safely.
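The fail-operational/fail-safe rule above amounts to a simple decision on the number of accumulated failures. The sketch below is purely illustrative; the function name and return strings are not from flight software.

```python
def vehicle_capability(failures: int) -> str:
    """Fail-operational/fail-safe rule described above: the first failure
    leaves the mission intact; after a second, the vehicle can still return
    to a landing site safely. Illustrative only."""
    if failures == 0:
        return "nominal"
    if failures == 1:
        return "fail-operational: mission continues"
    if failures == 2:
        return "fail-safe: safe return to a landing site"
    return "not guaranteed"

print(vehicle_capability(2))   # fail-safe: safe return to a landing site
```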

DATA PROCESSING SYSTEM

    The space shuttle vehicle relies on computerized control and monitoring for successful performance. The data processing system, through the use of various hardware components and its self-contained computer programming (software), provides the vehicle with this monitoring and control.

    The DPS hardware consists of five general-purpose computers for computation and control, two magnetic tape mass memory units for large-volume bulk storage, a time-shared computer data bus network consisting of serial digital data buses (essentially party lines) to accommodate the data traffic between the GPCs and space shuttle vehicle systems, 19 orbiter and four solid rocket booster multiplexers/demultiplexers to convert and format data from the various vehicle systems, three space shuttle main engine interface units to command the SSMEs, four multifunction CRT display systems used by the flight crew to monitor and control the vehicle and payload systems, two data bus isolation amplifiers to interface with the ground support equipment/launch processing system and the solid rocket boosters, two master events controllers, and a master timing unit.

    The software stored in and executed by the GPCs is the most sophisticated and complex set of programs ever developed for aerospace use. The programs are written to accommodate almost every aspect of space shuttle operations, including orbiter checkout at Rockwell's Palmdale, Calif., assembly facility; space shuttle vehicle prelaunch and final countdown for launch; turnaround activities at the Kennedy Space Center and eventually Vandenberg Air Force Base; control and monitoring during launch, ascent, on-orbit activities, entry and landing; and aborts or other contingency mission phases. A multicomputer mode is used for the critical phases of the mission, such as launch, ascent, entry, landing and aborts.

    Some of the DPS functions are as follows: support the guidance, navigation and control of the vehicle, including calculations of trajectories, SSME thrusting data and vehicle attitude control data; process vehicle data for the flight crew and for transmission to the ground and allow ground control of some vehicle systems via transmitted commands; check data transmission errors and crew control input errors; support annunciation of vehicle system failures and out-of-tolerance system conditions; support payloads with flight crew/software interface for activation, deployment, deactivation and retrieval; process rendezvous, tracking and data transmissions between payloads and the ground; and monitor and control vehicle subsystems.

SOFTWARE

    DPS software is divided into two major groups, system software and applications software. The two software program groups are combined to form a memory configuration for a specific mission phase. The software programs are written in HAL/S (high-order assembly language/shuttle) especially developed for real-time space flight applications.

    The system software is the GPC operating software that controls the interfaces among the computers and the rest of the DPS. It is loaded into the computer when it is first initialized. It always resides in the GPC main memory and is common to all memory configurations. The system software controls the GPC input and output, loads new memory configurations, keeps time, monitors discretes into the GPCs and performs many other functions required for the DPS to operate. The system software has nothing to do with orbiter systems or systems management software.

    The system software consists of three sets of programs: the flight computer operating program (the executive) that controls the processors, monitors key system parameters, allocates computer resources, provides for orderly program interrupts for higher priority activities and updates computer memory; the user interface programs that provide instructions for processing flight crew commands or requests; and the system control program that initializes each GPC and arranges for multi-GPC operation during flight-critical phases. The system software program tells the general-purpose computers how to perform and how to communicate with other equipment.

    One of the system software responsibilities is to manage the GPC input and output operations, which includes assigning computers as commanders and listeners on the data buses and exercising the logic involved in sending commands to these data buses at specified rates and upon request from the applications software.

    The applications software contains (1) specific software programs for vehicle guidance, navigation and control required for launch, ascent to orbit, maneuvering in orbit, entry and landing on a runway; (2) systems management programs with instructions for loading memories in the space shuttle main engine computers and for checking the vehicle instrumentation system, aiding in vehicle subsystem checkout, ascertaining that flight crew displays and controls perform properly and updating inertial measurement unit state vectors; (3) payload processing programs with instructions for controlling and monitoring orbiter payload systems that can be revised depending on the nature of the payload; and (4) vehicle checkout programs needed to handle data management, performance monitoring, special processing, and display and control processing.

    The applications software performs the actual duties required to fly and operate the vehicle. To conserve main memory, the applications software is divided into three major functions: guidance, navigation and control; systems management; and payload. Each GPC operates in one major function at a time, and usually more than one computer is in the GN&C major function simultaneously for redundancy.

    The highest level of the applications software is the operational sequence required to perform part of a mission phase. Each OPS is a set of unique software that must be loaded separately into a GPC from the mass memory units. Therefore, all the software residing in a GPC at any time consists of system software and an OPS. An OPS can be further subdivided into groups called major modes, each representing a portion of the OPS mission phase.

    During the transition from one OPS to another, the flight crew requests a new set of applications software to be loaded in from the MMU. Every OPS transition is initiated by the flight crew. An exception is GN&C OPS 1, which is divided into six major modes and contains the OPS 6 return-to-launch-site abort, since there would not be time to load in new software for an RTLS. When an OPS transition is requested, the redundant OPS overlay contains all major modes of that sequence.

    Each major mode has an associated CRT display, called an OPS display, that provides the flight crew with information concerning the current portion of the mission phase and allows flight crew interaction. There are three levels of CRT displays. Certain portions of each OPS display can be manipulated by flight crew keyboard input (or ground link) to view and modify system parameters and enter data. The specialist function of the OPS software is a block of displays associated with one or more operational sequences and enabled by the flight crew to monitor and modify system parameters through keyboard entries. The display function of the OPS software is a block of displays associated with one or more OPS; these displays are for parameter monitoring only (no modification capability) and are called from the keyboard.

    The principal software used to operate the vehicle during a mission is the primary avionics software system. It contains all the programming needed to fly the vehicle through all phases of the mission and manage all vehicle and payload systems.

    Since the ascent and entry phases of flight are so critical, four of the five GPCs are loaded with the same PASS software and perform all GN&C functions simultaneously and redundantly. As a safety measure, the fifth GPC contains a different set of software, programmed by a company different from the PASS developer, designed to take control of the vehicle if a generic error in the PASS software or other multiple errors should cause a loss of vehicle control. This software is called the backup flight system. In the less dynamic phases of on-orbit operations, the BFS is not required.

    GPCs running together in the same GN&C OPS are part of a redundant set performing identical tasks from the same inputs and producing identical outputs. Therefore, any data bus assigned to a commanding GN&C GPC is heard by all members of the redundant set (except the instrumentation buses because each GPC has only one dedicated bus connected to it). These transmissions include all CRT inputs and mass memory transactions, as well as flight-critical data. Thus, if one or more GPCs in the redundant set fail, the remaining computers can continue operating in GN&C. Each GPC performs about 325,000 operations per second during critical phases.

    Each computer in a redundant set operates in synchronized steps and cross-checks results of processing about 440 times per second. Synchronization refers to the software scheme used to ensure simultaneous intercomputer communications of necessary GPC status information among the primary avionics computers. If a GPC operating in a redundant set fails to meet two redundant synchronization codes in a row, the remaining computers will vote it out of the redundant set. Similarly, if a GPC has a problem with its multiplexer interface adapter receiver during two successive reads of response data and does not receive any data while the other members of the redundant set do receive the data, they in turn will vote the GPC out of the set. A failed GPC is halted as soon as possible.
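The vote-out rule above can be sketched as a small bookkeeping routine: a GPC that misses two consecutive synchronization checks is removed from the redundant set by the remaining members. Names and structure below are illustrative, not flight code.

```python
def update_redundant_set(redundant_set, sync_misses, gpc, missed_sync):
    """Track consecutive sync misses for one GPC; vote it out on the second.
    Hypothetical sketch of the rule described in the text."""
    if missed_sync:
        sync_misses[gpc] = sync_misses.get(gpc, 0) + 1
        if sync_misses[gpc] >= 2:        # two misses in a row: voted out
            redundant_set.discard(gpc)
    else:
        sync_misses[gpc] = 0             # a successful sync resets the count
    return redundant_set

# Example: GPC 3 misses two consecutive checks and leaves the set {1, 2, 3, 4}.
rset = {1, 2, 3, 4}
misses = {}
update_redundant_set(rset, misses, 3, True)
update_redundant_set(rset, misses, 3, True)
print(sorted(rset))   # [1, 2, 4]
```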

    GPC failure votes are annunciated in a number of ways. The GPC status matrix on panel O1 is a 5-by-5 matrix of lights. For example, if GPC 2 sends out a failure vote against GPC 3, the second white light in the third column is illuminated. The yellow diagonal lights from upper left to lower right are self-failure votes. Whenever a GPC receives two or more failure votes from other GPCs, it illuminates its own yellow light and resets any failure votes that it made against other GPCs (any white lights in its row are extinguished). Any time a yellow matrix light is illuminated, the GPC red caution and warning light on panel F7 is illuminated, in addition to master alarm illumination, and a GPC fault message is displayed on the CRT.
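The status-matrix behavior above (white lights for votes against other GPCs, yellow diagonal lights for self-failure, row reset on two or more received votes) can be modeled as follows. This is an illustrative sketch, not the actual annunciation logic.

```python
def apply_self_fail_rule(votes):
    """votes[i][j] True means GPC i+1 lights a white light against GPC j+1 in
    the 5-by-5 status matrix described above; diagonal entries stand for the
    yellow self-fail lights. A GPC receiving two or more votes from the other
    GPCs lights its own diagonal and extinguishes its row. Illustrative only."""
    n = len(votes)
    # Count votes received by each GPC before modifying anything.
    received = [sum(1 for v in range(n) if v != g and votes[v][g]) for g in range(n)]
    for g in range(n):
        if received[g] >= 2:
            votes[g][g] = True               # yellow diagonal light
            for j in range(n):
                if j != g:
                    votes[g][j] = False      # white lights in its row go out
    return votes

# Example from the text: GPC 2's vote against GPC 3 lights row 2, column 3;
# here GPCs 1 and 2 both vote against GPC 3.
matrix = [[False] * 5 for _ in range(5)]
matrix[0][2] = True
matrix[1][2] = True
apply_self_fail_rule(matrix)
print(matrix[2][2])   # True: GPC 3 lights its own yellow diagonal
```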

GPCs

    Five identical general-purpose computers aboard the orbiter control space shuttle vehicle systems. Each GPC is composed of two separate units, a central processor unit and an input/output processor. All five GPCs are IBM AP-101 computers. Each CPU and IOP contains a memory area for storing software and data. These memory areas are collectively referred to as the GPC's main memory.

    The central processor controls access to GPC main memory for data storage and software execution and executes instructions to control vehicle systems and manipulate data. In other words, the CPU is the ''number cruncher'' that computes and controls computer functions.

    The IOP formats and transmits commands to the vehicle systems, receives and validates response data from the vehicle systems and maintains the status of interfaces with the CPU and the other GPCs.

    The IOP of each computer has 24 independent processors, each of which controls one of the 24 data buses used to transmit serial digital data between the GPCs and vehicle systems, and secondary channels between the telemetry system and units that collect instrumentation data. The 24 data buses are connected to each IOP by multiplexer interface adapters that receive, convert and validate the serial data in response to discrete signals calling for available data to be transmitted or received from vehicle hardware.

    During the receive mode, the multiplexer interface adapter validates the received data (notifying the IOP control logic when an error is detected) and reformats it. The MIA's transmitter is inhibited in this mode unless that particular GPC is in command of that data bus.

    During the transmit mode, a multiplexer interface adapter transmits and receives 28-bit command/data words over the computer data buses. When transmitting, the MIA adds the appropriate parity and synchronization code bits to the data, reformats the data, and sends the information out over the data bus. In this mode, the MIA's receiver and transmitters are enabled.

    The first three bits of the 28-bit word provide synchronization and indicate whether the information is a command or data. The next five bits identify the destination or source of the information. For command words, 19 bits identify the data transfer or operations to be performed; for data words, 16 of the 19 bits contain the data and three bits define the word validity. The last bit of each word is for an odd parity error test.
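The 28-bit word layout above (3 sync bits, 5 address bits, 19 information bits, 1 odd-parity bit) can be sketched with simple bit packing. The exact bit ordering chosen below is an assumption for illustration; the text specifies only the field widths and the odd-parity rule.

```python
def make_word(sync3, addr5, info19):
    """Pack a 28-bit command/data word per the layout above: 3 sync bits,
    5 address bits, 19 information bits and a final odd-parity bit.
    Bit ordering is an illustrative assumption."""
    word27 = ((sync3 & 0b111) << 24) | ((addr5 & 0b11111) << 19) | (info19 & 0x7FFFF)
    parity = 0 if bin(word27).count("1") % 2 == 1 else 1   # force an odd total
    return (word27 << 1) | parity

def parity_ok(word28):
    """Odd-parity check: a valid word has an odd number of 1 bits."""
    return bin(word28).count("1") % 2 == 1

w = make_word(0b101, 0b00011, 0x1234)
print(parity_ok(w))          # True
print(parity_ok(w ^ 0b10))   # False: a single flipped bit breaks parity
```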

    The main memory of each GPC is non-volatile (the software is retained when power is interrupted). The memory capacity of each CPU is 81,920 words, and the memory capacity of each IOP is 24,576 words; thus, the CPU and IOP constitute a total of 106,496 words.

    The hardware controls for the GPCs are located on panel O6. Each computer reads the position of its corresponding output, initial program load and mode switches from discrete input lines that go directly to the GPC. Each GPC also has output and mode talkback indicators on panel O6 that are driven from GPC output discretes.

    Each GPC power on/off switch is a guarded switch. Positioning a switch to on provides the computer with triply redundant power (not through a discrete) from three essential buses (ESS1BC, 2CA and 3AB) that run through the GPC power switch. The essential bus power is transferred to remote power controllers, which permit main bus power from the three main buses (MNA, MNB and MNC) to power the GPC. There are three RPCs for the IOP and three for the CPU; thus, any GPC will function normally even if two main or essential buses are lost.

    Each computer uses over 600 watts of power. GPCs 1 and 4 are located in forward middeck avionics bay 1, GPCs 2 and 5 in forward middeck avionics bay 2, and GPC 3 in aft middeck avionics bay 3. The GPCs receive forced-air cooling from an avionics bay fan. There are two fans in each avionics bay, but only one is powered at a time. If both fans in an avionics bay fail, the computers will overheat and cannot be relied on to operate properly for more than 20 minutes if the initial condition is warm.

    Each GPC output switch is a guarded switch with backup, normal and terminate positions. The output switch provides a hardware override that precludes the GPC from outputting (transmitting) on the flight-critical buses. The switches for the primary avionics GN&C GPCs are positioned to normal, which permits them to output (transmit). The backup flight system GPC switch is positioned to backup, which precludes it from outputting until it is engaged. The switch for a GPC designated on orbit as a systems management computer is positioned to terminate, since that GPC is not to command anything on the flight-critical buses.

    The output talkback indicator above each output switch on panel O6 indicates gray if that GPC output is enabled and barberpole if it is not.

    Each GPC receives run, stby or halt discrete inputs from its mode switch on panel O6, which determine whether that GPC can process software. The mode switch is lever-locked in the run position. The halt position initiates a hardware-controlled state in which no software can be executed. A GPC that fails to synchronize with the others is moded to halt as soon as possible to prevent the failed computer from outputting erroneous commands. The mode talkback indicator above the mode switch indicates barberpole when that computer is in halt.

    In standby, a GPC is likewise in a state in which no software can be executed, but it is a software-controlled state. The stby discrete allows an orderly startup or shutdown of processing. As a matter of procedure, a GPC that is shifting from run to halt should pause temporarily (more than one second) in the standby mode, since standby allows an orderly software cleanup and lets the GPC be correctly initialized without an initial program load. If a GPC is moded from run to halt without pausing in standby, it may not perform its functions correctly upon being remoded to run. There is no stby indication on the mode talkback indicator; however, it indicates barberpole in the transitions from run to standby and from standby to halt.

    The run position permits a GPC to support its normal processing of all active software and assigned vehicle operations. Whenever a computer is moded from standby or halt to run, it initializes itself to a state in which only system software is processed (called OPS 0). If a GPC is in another OPS before being moded out of run and the initial program has not been loaded since, that software still resides in main memory; but it will not begin processing until that OPS is recalled by flight crew keyboard entry. The mode talkback indicator always reads run when that GPC switch is in run and the computer has not failed.
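The moding rules above (moding into run initializes to OPS 0; an orderly shutdown pauses in standby before halt) can be sketched as a small state model. This is a toy illustration, not flight logic, and the class and attribute names are hypothetical.

```python
class GPCModeModel:
    """Toy model of the run/standby/halt moding rules described above:
    moding into run initializes processing to OPS 0, and an orderly
    shutdown pauses in standby before halt. Illustrative only."""
    def __init__(self):
        self.mode = "halt"
        self.ops = None
    def set_mode(self, new_mode):
        if self.mode == "run" and new_mode == "halt":
            raise ValueError("pause in standby before halt for an orderly cleanup")
        if new_mode == "run":
            self.ops = 0          # only system software (OPS 0) runs at first
        self.mode = new_mode

gpc = GPCModeModel()
gpc.set_mode("standby")
gpc.set_mode("run")
print(gpc.ops)   # 0
```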

    Placing the backup flight system GPC in standby does not stop BFS software processing or preclude BFS engagement; it only prevents the BFS from commanding.

    The IPL push button indicator for a GPC on panel O6 activates the initial program load command discrete input when depressed. When the input is received, that GPC initiates an IPL from whichever mass memory unit is specified by the IPL source, MMU 1, MMU 2, off switch on panel O6. The talkback indicator above the mode switch for that GPC indicates IPL.

    During non-critical flight periods in orbit, only one or two GPCs are used for GN&C tasks and another for systems management and payload operations.

    A GPC on orbit can also be "freeze-dried"; that is, it can be loaded with the software for a particular memory configuration and then moded to standby. It can then be moded to halt and powered off. Since the GPCs have non-volatile memory, the software is retained. Before an OPS transition to the loaded memory configuration, the freeze-dried GPC can be moded back to run and the appropriate OPS requested.

    A simplex GPC is one in run and not a member of the redundant set, such as the BFS GPC. Systems management and payload major functions are always in a simplex GPC.

    A failed GPC can be hardware-initiated, stand-alone-memory dumped by positioning the powered computer's output switch to terminate and its mode switch to halt and then selecting the number of the failed GPC on the GPC memory dump rotary switch on panel M042F in the crew compartment middeck. The GPC is then moded to standby to start the dump, which takes three minutes.

    Each CPU is 7.62 inches high, 10.2 inches wide and 19.55 inches long; it weighs 57 pounds. The IOPs are the same size and weight as the CPUs.

    The new upgraded general-purpose computers, AP-101S from IBM, will replace the existing GPCs, AP-101B, aboard the space shuttle orbiters in mid-1990.

    The upgraded GPCs allow NASA to incorporate more capabilities into the space shuttle orbiters and apply more advanced computer technologies than were available when the orbiter was first designed. The new design began in January 1984, whereas the older GPC design began in January 1972.

    The upgraded computers provide 2.5 times the existing memory capacity and up to three times the existing processor speed with minimum impact on flight software. The upgraded GPCs are half the size and approximately half the weight of the old GPCs, and they require less power to operate.

    The upgraded GPCs consist of a central processor unit and an input/output processor in one avionics box instead of the two separate CPU and IOP avionics boxes of the old GPCs. The upgraded GPC can perform more than 1 million benchmark tests per second in comparison to the older GPC's 400,000 operations per second. The upgraded GPCs have a semiconductor memory of 256,000 32-bit words; the older GPCs have a core memory of up to 104,000 32-bit words.

    The upgraded GPCs have volatile memory, but each GPC contains a battery pack to preserve the software when the GPC is powered off.

    The initial predicted reliability of the upgraded GPCs is 6,000 hours mean time between failures, with a projected growth to 10,000 hours mean time between failures. The mean time between failures for the older GPCs is 5,200 hours, more than five times better than the original reliability estimate of 1,000 hours.

    The AP-101S avionics box is 19.55 inches long, 7.62 inches high and 10.2 inches wide, the same as one of the two previous GPC avionics boxes. Each of the five upgraded GPCs aboard the orbiter weighs 64 pounds, in comparison to 114 pounds for the two units of the older GPCs. This change reduces the weight of the orbiter's avionics by approximately 300 pounds and frees a volume of approximately 4.35 cubic feet in the orbiter avionics bays. The older GPCs require 650 watts of electrical power versus 550 watts for the upgraded units.

    Thorough testing, documentation and integration, including minor modifications to flight software, were performed by IBM and NASA's Shuttle Avionics Integration Laboratory in NASA's Avionics Engineering Laboratory at the Johnson Space Center.

MASS MEMORY UNITS

    There are two mass memory units aboard the orbiter. Each is a coaxially mounted, reel-to-reel digital magnetic tape storage device for GPC software and orbiter systems data that can be written to or read from. The MMU tape is 602 feet long and 0.5 inch wide and has nine tracks (eight data tracks and one control track). These tracks are divided into files and subfiles for finding particular locations.

    Computing functions for all mission phases require approximately 400,000 words of computer memory. The orbiter GPCs are loaded with different memory groups from the MMUs containing the desired program. In this way, software can be stored in the MMUs and loaded into the GPCs only when actually needed.

    To fit the required software into the available GPC memory space, programs are subdivided into nine memory groups corresponding to functions executed during specific flight and checkout phases. Thus, in addition to the central memory in the GPCs themselves, 34 million bytes of information can be stored in each of the two mass memory units. Critical programs and data are loaded in both MMUs and protected from erasure.

    The principal function of the MMU, besides storing the basic flight software, is to store background formats for certain CRT displays and the checkpoints that are written periodically to save system data in case the systems management GPC fails.

    MMU operations are controlled by logic and the read and write electronics that activate the proper tape heads (read or write/erase) and validate the data.

    Each MMU interfaces with its mass memory data bus through multiplexer interface adapters, which function like those of the GPCs. Each mass memory data bus is connected to all five computers; however, each MMU is connected to only one mass memory data bus. All MMU operations are on an on-demand basis only.

    The mass memory units are an advanced form of data storage that fills the gap between slow-access drives of high storage capacity and discs or drums with fast access but relatively low storage capacity.

    The power switches are located on panel O14 for MMU 1 and panel O15 for MMU 2. The MMU 1 switch positioned to on allows control bus power to activate an RPC, which allows MNA power to MMU 1. The MMU 2 switch positioned to on operates in a similar manner with MNB power. A mass memory unit uses 20 watts of power in standby and 50 watts when the tape is moving.

    MMU 1 is located in crew compartment middeck avionics bay 1, and MMU 2 is in avionics bay 2. Each unit is cooled by water coolant loop cold plates. Each MMU is 7.6 inches high, 11.6 inches wide and 15 inches long and weighs 22 pounds.

MULTIFUNCTION CRT DISPLAY SYSTEM

    The MCDS on the orbiter crew compartment flight deck allows onboard monitoring of orbiter systems, computer software processing and manual control for flight crew data and software manipulation.

    The system is composed of three types of hardware: display electronics units; display units that include the CRTs; and keyboard units, which together communicate with the GPCs over the display/keyboard data bus network.

    The MCDS provides almost immediate response to flight crew inquiries through displays, graphs, trajectory plots and predictions about flight progress. The crew controls the vehicle system operation through the use of keyboards in conjunction with the display units. The flight crew can alter the system configuration, change data or instructions in GPC main memory, change memory configurations corresponding to different mission phases, respond to error messages and alarms, request special programs to perform specific tasks, run through operational sequences for each mission phase and request specific displays.

    Three keyboards are located on the flight deck: two on the left and right sides of the flight deck center console (panel C2) and one on the flight deck at the side aft flight station (panel R12). Each consists of 32 momentary double-contact push button keys. Each key uses its double contacts to communicate on separate signal paths to two DEUs. Only one set of contacts on the aft station keys is actually wired because this keyboard can communicate with only the aft display electronics unit.

    There are 10 number keys, six letter keys (used for hexadecimal inputs), two algebraic keys, a decimal key, and 13 special key functions. Using these keys, the flight crew can ask the GPC more than 1,000 questions about the mission and condition of the vehicle.

    Each of the four DEUs responds to computer commands, transmits data, executes its own software to process keyboard inputs and sends signals to drive displays on the CRTs (or display units). The four DEUs store display data, generate the GPC/keyboard unit and GPC/display unit interface displays, update and refresh on-screen data, check keyboard entry errors and echo entries to the CRT (or DU).

    There are three CRTs (or display units) on flight deck forward display and control panel F7 and one at the side aft flight deck station on panel R12. Each CRT is 5 by 7 inches.

    The display unit uses a magnetic-deflected, electrostatic-focused CRT. When supplied with deflection signals and video input, the CRT displays alphanumeric characters, graphic symbols and vectors on a green-on-green phosphor screen activated by a magnetically controlled beam. Each CRT has a brightness control for ambient light and flight crew adjustment.

    The DEUs are connected to the display/keyboard data buses by multiplexer interface adapters that function like those of the GPCs. Inputs to the DEU are from a keyboard or a GPC. The CRT switches on panel C2 designate which keyboard controls the forward DEUs and CRTs (or DUs). When the left CRT sel switch is positioned to 1, the left keyboard controls the left CRT 1; if positioned to 3, it controls the center CRT 3. When the right CRT sel switch is positioned to 2, the right keyboard controls the right CRT 2; if positioned to 3, it controls the center CRT 3. If the left CRT sel and right CRT sel switches are both positioned to 3, keystrokes from both keyboards are interleaved. Thus, flight crew inputs are made on the keyboards, and data is output from the GPCs on the CRT displays.
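The CRT sel switch logic above reduces to a small routing table; the sketch below is illustrative (the function and key names are hypothetical, not from any shuttle software).

```python
def forward_crt_routing(left_sel, right_sel):
    """Model of the panel C2 CRT sel switch logic described above. The left
    switch selects CRT 1 or 3, the right switch CRT 2 or 3; when both select
    3, keystrokes from the two keyboards are interleaved on the center CRT."""
    assert left_sel in (1, 3) and right_sel in (2, 3)
    return {
        "left keyboard": left_sel,
        "right keyboard": right_sel,
        "interleaved": left_sel == 3 and right_sel == 3,
    }

print(forward_crt_routing(3, 3)["interleaved"])   # True
```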

    The aft station panel R12 keyboard is connected directly to the aft panel R12 DEU and CRT (or DU); there is no select switch.

    Each DEU/DU pair, usually referred to as a CRT, has an associated power switch. The CRT 1 power on, stby, off switch on panel C2 positioned to stby or on allows control bus power to activate RPCs and sends MNA power to DEU/DU 1. The stby position warms up the CRT filament. The on position provides high voltage to the CRT. The CRT 2 switch on panel C2 functions the same as the CRT 1 switch, except that CRT 2 is powered from MNB. The CRT 3 switch on panel C2 functions the same, except that CRT 3 is powered from MNC. The CRT 4 switch on panel R12 functions the same, except that CRT 4 is powered from MNC. The respective keyboards receive 5 volts of ac power to illuminate the keys. Each DEU/DU pair uses about 300 watts of power when on and about 230 watts in standby.

    The CRT 1, 3, 2 major func, GNC, SM and PL switches on panel C2 tell the GPCs which of the different functional software groups is being processed by the keyboard units and what information is presented on the CRT. The CRT 4 major func, GNC, SM and PL switches on panel R12 function in the same manner.

    Positioning the display electronics unit 1, 2, 3, 4 switches on panel O6 to load initiates a GPC request for data stored in mass memory through a GPC before operations begin. The information is sent from the mass memory to the GPC and then loaded from the GPC into the DEU memory.

    It is possible to do in-flight maintenance and exchange DU 4 with DU 1 or 2. DU 3 cannot be changed out because of the control and display panel configuration. Also, either forward keyboard can be replaced by the aft keyboard. The DEUs are located behind panels in the middeck. DEUs 1 and 3 are on the left, and DEUs 2 and 4 are on the right. DEU 4 can replace any of the others; however, if DEU 2 is to be replaced, only the cables are changed because 2 and 4 are next to each other.

    The DEUs and DUs are cooled by the cabin fan system. The keyboard units are cooled by heat dissipation.

MASTER TIMING UNIT

    The GPC complex requires a stable, accurate time source because its software uses Greenwich Mean Time to schedule processing. Each GPC uses the master timing unit to update its internal clock. The MTU provides precise frequency outputs for various timing and synchronization purposes to the GPC complex and many other orbiter subsystems. Its three time accumulators provide GMT and mission elapsed time, which can be updated by external control. The accumulator's timing is in days, hours, minutes, seconds, and milliseconds up to one year.

    The master timing unit is a stable, crystal-controlled frequency source that uses two oscillators for redundancy. The signals from one of the two oscillators are passed through signal shapers and frequency drivers to the three GMT/MET accumulators.

    The MTU outputs serial digital time data (GMT/MET) on demand to the GPCs through the accumulators. The GPCs use this information for their reference time and indirectly for time-tagging GN&C and systems management processing. The MTU also provides continuous digital timing outputs to drive the four digital timers in the crew compartment: two mission timers and two event timers. In addition, the MTU provides signals to the pulse code modulation master units, payload signal processor and FM signal processor, as well as various payloads.

    The GPCs start by using MTU accumulator 1 as their time source. Every second, each GPC checks the accumulator time against its own internal time. If the time is within tolerance (less than one millisecond), the GPC updates its internal clock to the time of the accumulator, which is more accurate, and continues to use that accumulator. However, if the time is out of tolerance, the GPC will try the other MTU accumulators and then the lowest numbered GPC until it finds a successful comparison.
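    The once-per-second check described above can be sketched as follows. The one-millisecond tolerance and the fallback order come from the text; the function shape and data structures are assumptions for illustration:

```python
TOLERANCE_MS = 1.0  # times differing by less than 1 ms are in tolerance

def select_time_source(gpc_time_ms, sources):
    """Return (name, time) of the first time source within tolerance.

    `sources` is an ordered list of (name, time_ms) pairs: MTU
    accumulator 1 first, then the other accumulators, then the
    lowest-numbered GPC's internal time.
    """
    for name, time_ms in sources:
        if abs(time_ms - gpc_time_ms) < TOLERANCE_MS:
            return name, time_ms  # in tolerance: adopt this source
    return None, gpc_time_ms      # no comparison succeeded; keep internal time
```

    In normal operation the very first comparison (accumulator 1) succeeds and the GPC simply updates its clock to that accumulator.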

    The GPCs do not use the mission elapsed time that they receive from the master timing unit because they compute MET on the basis of current GMT and lift-off time.
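    The MET bookkeeping amounts to a subtraction; a trivial illustration (the function name is an assumption):

```python
def met_seconds(current_gmt_s: float, liftoff_gmt_s: float) -> float:
    """Mission elapsed time is current GMT minus GMT at lift-off."""
    return current_gmt_s - liftoff_gmt_s
```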

    The master timing unit is redundantly powered by the MTU A and MTU B circuit breakers on panel O13. The master timing unit OSC 1, auto, OSC 2 switch on panel O6 controls the MTU. When the switch is in auto and a time signal from the MTU is out of tolerance, the MTU automatically switches to the other oscillator. Normally, the MTU is driven by oscillator 1 with the switch in auto. The OSC 1 or OSC 2 position manually selects the corresponding oscillator.

    The MTU is located in crew compartment middeck avionics bay 3B and is cooled by a water coolant loop cold plate. The only hardware displays associated with the MTU are the mission and event timers. The mission timers are located on panels O3 and A4. They can display either GMT or MET in response to the GMT or MET switch positions. The forward event timer is on panel F7 and its control switches are on panel C2. The aft event timer is on panel A4 and its control switches are on panel A6.

    The master timing unit contractor is Westinghouse Electric Corp., Systems Development Division, Baltimore, Md.

COMPUTER DATA BUS NETWORK

    The orbiter computer data bus network consists of a group of twisted, shielded wire pairs (data buses) that support the transfer of serial digital commands from the GPCs to vehicle hardware and vehicle systems data to the GPCs. The computer data bus network is divided into specific groups that perform specific functions.

    Flight-critical data buses tie the GPCs to flight-critical MDMs, display driver units, head-up displays, main engine interface units and master events controllers. Intercomputer communication data buses are for GPC-to-GPC transactions. Mass memory data buses conduct GPC/mass memory unit transactions. Display keyboard data buses are for GPC/display electronic unit transactions. Instrumentation/pulse code modulation master unit data buses are for GPC/PCMMU transactions. Launch/boost data buses tie the GPCs to ground support equipment, launch forward and launch aft MDMs, solid rocket booster MDMs and the remote manipulator system manipulator control interface unit. Payload data buses tie the GPCs to payload MDMs and the payload data interleaver.

    Although all data buses except the instrumentation/PCMMU buses are connected to all five GPCs, only one GPC at a time controls (transmits commands over) each bus. However, several GPCs may listen (receive data) from the same bus simultaneously. The flight crew can select the GPC that controls a given bus.
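    The one-commander, many-listeners rule above can be modeled minimally as follows; the class and names are illustrative assumptions, not the actual bus protocol:

```python
class DataBus:
    """A bus with at most one commanding GPC and any number of listeners."""

    def __init__(self, name: str):
        self.name = name
        self.commander = None   # only one GPC transmits commands at a time
        self.listeners = set()  # several GPCs may receive simultaneously

    def assign_commander(self, gpc: str):
        """Give one GPC command of the bus (a crew-selectable assignment)."""
        self.commander = gpc
        self.listeners.discard(gpc)  # a commander is not also a listener

    def add_listener(self, gpc: str):
        """Enable a GPC's receiver on the bus."""
        if gpc != self.commander:
            self.listeners.add(gpc)
```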

    Each data bus, with the exception of the intercomputer communication data buses, is bidirectional; that is, traffic can flow in either direction. The intercomputer communication data bus traffic flows in only one direction.

    There are five intercomputer communication data buses. The following information is exchanged over the IC buses for proper data processing system operation: input/output errors, fault messages, GPC status matrix data, display electronics unit major function switch settings, GPC/CRT keyboard entries, resident GPC memory configuration, memory configuration table, operational sequences, master timing unit, internal GPC time, system-level display information, uplink data and state vector.

    All GPCs processing primary avionics software exchange status information over the IC data buses. During critical mission phases (launch, ascent and entry), usually GPCs 1, 2, 3 and 4 are assigned to perform GN&C tasks, operating as a cooperative redundant set, with GPC 5 as the backup flight system. One of the PASS GPCs acts as a commander of a given data bus in the flight control scheme and initiates all data bus transactions.

    Cross-strapping the four intercomputer communication buses to the four PASS GPCs allows each GPC access to the status of data received or transmitted by the other GPCs so that identical results among the four PASS GPCs can be verified. The four PASS GPCs are loaded with the same software programs. Each IC bus is assigned to one of the four PASS GPCs in the command mode, and the remaining GPCs operate in the listening mode for the bus. Each GPC can receive data from the other three GPCs, pass data to the others, request data from the others and perform any other tasks required to operate the redundant set. In addition, GPC 5 requires certain information to perform its function as the backup flight system, so it listens to the transactions on the IC data buses.

    Flight-critical buses tie the GPCs to flight-critical MDMs, display driver units, head-up displays, main engine interface units and master events controllers. These buses are divided into two groups of four, compatible with the grouping of the four PASS GPCs. Four of these buses-FC1, 2, 3 and 4-connect the GPCs with the four flight-critical forward MDMs, the four aft flight-critical MDMs, the three DDUs and the two HUDs. The other four flight-critical buses-FC5, 6, 7 and 8-connect the GPCs to the four forward MDMs, the four aft MDMs, the two master events controllers and the three main engine interface units. The specific manner in which these units interface is referred to as a string. A string is composed of two flight-critical data buses-one from the first group (FC1, 2, 3 or 4) and one from the second group (FC5, 6, 7 or 8).
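    The bus-pairing convention for strings can be captured in a few lines; this sketch encodes only the numbering rule, and the helper name is an assumption:

```python
def string_buses(n: int) -> tuple:
    """Return the flight-critical bus pair that makes up string n (1-4).

    Each string pairs one bus from the first group (FC1-4) with the
    corresponding bus from the second group (FC5-8), so string 1 is
    FC1 + FC5, string 2 is FC2 + FC6, and so on.
    """
    if not 1 <= n <= 4:
        raise ValueError("strings are numbered 1 through 4")
    return (f"FC{n}", f"FC{n + 4}")
```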

    The GPC in the command mode issues data requests and commands to the applicable vehicle systems over its assigned flight-critical (dedicated) bus. The remaining three buses in each group are assigned to the remaining GPCs in the listening mode. A GPC operating in the listening mode can only receive data. Thus, if GPC 1 operates in the command mode on FC1 and FC5, it listens on the three remaining buses of each group. For example, GPC 1 is assigned as the commander of string 1, which includes flight-critical data bus 1 and flight-critical forward MDM 1. GPC 1's transmitter is enabled on FC1. The three remaining non-commander PASS GPCs must receive the same information at the same time so that it can be verified as identical; thus, their receivers are also enabled on FC1 to listen in on the data bus.

    In this example, when all GPCs require a time update from the master timing unit, GPC 1 is the only GPC that actually issues the command to the MTU because it is in command of the flight-critical data bus connected to the MDM that interfaces with accumulator 1 of the MTU. All five GPCs receive this time update because they are all listening to the response data transmitted over that flight-critical bus.

    Each flight-critical bus in a group of four is commanded by a different GPC. Multiple units of each GN&C hardware item are wired to a different MDM and flight-critical bus.

    In this example, string 1 consists of FC data buses 1 and 5; MDMs flight forward 1 and flight aft 1 and their hard-wired hardware, controls and displays; the three engine interface units (EIUs); the two master events controllers (MECs); the three DDUs; HUD 1; and their associated displays. Thus, four strings are defined in this manner.

    During launch, ascent and entry, when there are four PASS GN&C GPCs, each of the four strings is assigned to a different GPC to maximize redundancy. All flight-critical units are redundant, and the redundant units are on different strings. The string concept provides failure protection during dynamic phases by allowing exclusive command of a specific group of vehicle hardware by one GPC, which can be transferred to another GPC in case of failure. Additional redundancy is provided because each FF and FA MDM is connected to the GPCs by two flight-critical data buses; thus, all or part of one string can be lost and all functions will still be retained through the other string.

    The four display electronics unit keyboard data buses, one for each DEU, are connected to each of the five GPCs. The computer in command of a particular keyboard data bus is a function of the current major func switch setting of the associated CRT, current memory configuration, GPC/CRT keyboard entries and the position of the backup flight control CRT switches.

    Two payload data buses interface the five GPCs with the two payload MDMs (also called payload forward MDMs), which interface with orbiter systems and payloads. A payload data interleaver is connected to payload data bus 1. Each payload MDM is connected to two payload data buses. Up to five safety-critical payload status parameters may be hard-wired; then these parameters and others can be recorded as part of the vehicle's systems management, which is transmitted and received over two payload buses. To accommodate the various forms of payload data, the payload data interleaver integrates payload data for transmission to ground telemetry.

    The five instrumentation/pulse code modulation master unit data buses are unique in that each GPC commands its own individual data bus to two PCMMUs. All the other data buses go to every GPC.

    Flight controllers in the Mission Control Center monitor the status of the vehicle's onboard systems through data transmissions from the vehicle to the ground. These transmissions, called downlink, include GPC-collected data, payload data, instrumentation data and onboard voice. The GPC-collected data, called downlist, includes a set of parameters chosen before flight for each mission phase.

    The system software in each GPC assimilates the specified GN&C, systems management, payload or DPS data according to the premission-defined format for inclusion in the downlist. Each GPC is physically capable of transmitting its downlist to the current active PCMMU over its dedicated instrumentation/PCMMU data bus. Only one PCMMU is powered at a time. It interleaves the downlist data from the different GPCs with the instrumentation and payload data according to the telemetry format load programmed in the PCMMU. The resulting composite data set, called the operational downlink, is transmitted to the network signal processor. Only one NSP is powered at a time. In the NSP, the operational downlink is combined with onboard recorded voice for transmission to the ground. The S-band system transmits the data to the space flight tracking and data network remote site ground stations, which send it to the MCC. Or the downlink is routed through the orbiter's Ku-band system to the Tracking and Data Relay Satellite system.
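    The PCMMU's interleaving step might be pictured with a toy frame builder; the format table and source names here are invented for illustration and do not reflect the actual telemetry format load:

```python
def build_downlink(format_table, sources):
    """Assemble one operational downlink frame.

    `format_table` lists, in slot order, which source feeds each slot
    (mimicking the telemetry format programmed in the PCMMU);
    `sources` maps source names to queues of pending data words.
    """
    return [sources[slot].pop(0) for slot in format_table]
```

    A format that alternates GPC downlist words with instrumentation and payload words produces the interleaved composite described above.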

    Uplink is the method by which ground commands originating in the MCC are formatted, generated and transmitted to the orbiter for validation, processing and eventual execution by onboard software. This capability allows the ground to control software processing, change modes in orbiter hardware and store or change software in GPC memory and mass memory.

    From MCC consoles, operators issue commands and request uplink. The command requests are formatted into a command load for transmission to the orbiter either by the STDN sites and S-band or by the Ku-band system. The S-band or Ku-band transponder receivers aboard the orbiter send the commands to the active network signal processor. The NSP validates the commands and holds them until they are requested by the GPCs through an FF MDM. The GPCs also validate the commands before executing them. Those GPCs not listening directly to the flight-critical data buses receive uplink commands over the intercomputer communication data buses.

    The PCMMU also contains a programmable read-only memory for accessing subsystem data, a random-access memory in which to store data and a memory in which GPC data is stored for incorporation into the downlink.

    To prevent the uplink of spurious commands from somewhere other than the MCC, the flight crew can control when the GPCs accept uplink commands and when uplink is blocked. The GPC block position of the uplink NSP switch on panel C3 inhibits uplink commands during ascent and entry when the orbiter is not over a ground station or in TDRS coverage. The flight crew selects this switch position when the capsule communicator at the MCC requests loss-of-signal configuration. The flight crew selects the enable position of the switch during ascent or entry when the capsule communicator requests acquisition-of-signal configuration.

    Two launch data buses, also referred to as launch/boost data buses, are used primarily for ground checkout and launch phase activities. They connect the five GPCs with ground support equipment/launch processing system, the launch forward (LF1) and launch aft (LA1) MDMs aboard the orbiter, and the two left and right SRB MDMs (LL1, LL2, LR1 and LR2). The GSE/LPS interface is disconnected at lift-off by the T-0 umbilical. The solid rocket booster interfaces are disconnected at SRB separation. Launch data bus 1 is used on orbit for interface with the remote manipulator controller by the systems management GPC.

MULTIPLEXERS/DEMULTIPLEXERS

    There are 23 multiplexers/demultiplexers aboard the orbiter; 16 are part of the DPS, connected directly to the GPCs and named according to their location in the vehicle and hardware interface. The remaining seven MDMs are part of the vehicle instrumentation system and send vehicle instrumentation data to the pulse code modulation master unit.

    The data processing system MDMs consist of flight-critical forward MDMs 1 through 4, flight-critical aft MDMs 1 through 4, payload MDMs 1 and 2, SRB launch left MDMs 1 and 2 and launch right MDMs 1 and 2, and GSE/LPS launch forward 1 and launch aft 1.

    Of the seven operational instrumentation MDMs, four are located forward (OF1, OF2, OF3 and OF4) and three are located aft (OA1, OA2 and OA3).

    The system software in each redundant set of GPCs activates a GN&C executive program and issues commands to the bus and MDM to provide a set of input data. Each MDM receives the command from the GPC assigned to it, acquires the requested data from the GN&C hardware wired to it and sends the data to the GPCs.

    The DPS MDMs convert and format serial digital GPC commands into separate parallel discrete, digital and analog commands for various vehicle system hardware. This operation is called demultiplexing. The MDMs also multiplex, or convert, and format the discrete, digital and analog data from vehicle systems into serial digital data for transmission to the GPCs. Each MDM has two redundant multiplexer interface adapters that function the same as the GPC MIAs and are connected to two data buses. The MDM's other functional interface is its connection to the appropriate vehicle system hardware by hard-wired lines.
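    The multiplex/demultiplex operation can be illustrated with a toy serial frame. The frame layout here (one 16-bit discrete word followed by 32-bit floats) is invented for this sketch, not the actual MDM format:

```python
import struct

def multiplex(discretes: int, analogs: list) -> bytes:
    """Pack parallel discrete and analog data into one serial frame."""
    return struct.pack(f">H{len(analogs)}f", discretes, *analogs)

def demultiplex(frame: bytes):
    """Recover the discrete word and analog values from a frame."""
    n_analogs = (len(frame) - 2) // 4
    values = struct.unpack(f">H{n_analogs}f", frame)
    return values[0], list(values[1:])
```

    A round trip through both functions returns the original data, which is the essential property of the conversion.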

    When the sets of GN&C hardware data arrive at the GPCs through the MDMs and data buses, the information is generally not in the proper format, units or form for use by flight control, guidance or navigation. A subsystem operating program for each type of hardware processes the data to make it usable by GN&C software. These programs contain the software necessary for hardware operation, activation, self-testing and moding. The level of redundancy varies from two to four, depending on the particular unit. The software that processes data from redundant GN&C hardware is called redundancy management. It performs two functions: (1) selecting, from redundant sets of hardware data, one set of data for use by flight control, guidance and navigation and (2) detecting out-of-tolerance data, identifying the faulty unit and announcing the failure to the flight crew and to the data collection software.

    In the case of four redundant hardware units, the redundancy management software uses three and holds the fourth in reserve. It utilizes a middle value select until one of the three is bad and then uses the fourth. If one of the remaining three is lost, the software downmodes to two and uses the average of two. If one of the remaining two is lost, the software downmodes to one and passes only the data it receives.
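    The downmoding scheme just described can be sketched directly; the function name and the healthy-values list interface are assumptions:

```python
def rm_select(healthy_values: list) -> float:
    """Select one value from the remaining healthy redundant units.

    With three or more units available, use the middle value of the
    first three (a fourth is held in reserve); with two, use their
    average; with one, pass its data through unchanged.
    """
    if len(healthy_values) >= 3:
        return sorted(healthy_values[:3])[1]  # middle value select
    if len(healthy_values) == 2:
        return (healthy_values[0] + healthy_values[1]) / 2.0
    return healthy_values[0]
```

    Middle value select rejects a single wild reading automatically, which is why it is preferred while three units remain.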

    The three main engine interface units between the GPCs and the three main engine controllers accept GPC main engine commands, reformat them and transfer them to each main engine controller. In return, the EIUs accept data from the main engine controller, reformat it and transfer it to GPCs and operational instrumentation. Main engine functions, such as ignition, gimbaling, throttling and shutdown, are controlled by each main engine controller internally through inputs from the guidance equations that are computed in the orbiter GPCs.

    Each flight-critical data bus is connected to a flight forward and flight aft MDM. Each MDM has two MIAs, or ports, and each port has a channel through which the GPCs can communicate with an MDM; however, the FC data buses can interface with only one MIA port at a time. Port moding is the software method used to control the MIA port that is used in an MDM. Initially, these MDMs operate with MIA port 1; if a failure occurs in MIA port 1, the flight crew can select MIA port 2. Since port moding involves a pair of buses, both MDMs must be ported at the same time. The control of all other units connected to the affected data buses is unaffected by port moding.
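    Port moding as described above can be modeled minimally; the class is an illustrative assumption, not the actual moding software:

```python
class StringPorts:
    """Track which MIA port a string's FF and FA MDMs are using."""

    def __init__(self):
        self.port = 1  # MDMs initially operate on MIA port 1

    def mode_to_port(self, port: int) -> dict:
        """Switch both MDMs on the string to the given MIA port.

        Port moding involves a pair of buses, so the forward and aft
        MDMs are always ported together.
        """
        if port not in (1, 2):
            raise ValueError("each MDM has only MIA ports 1 and 2")
        self.port = port
        return {"FF": port, "FA": port}
```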

    Payload data bus 1 is normally connected to the primary MIA port of payload MDM 1 and payload data bus 2 is connected to the primary MIA port of payload MDM 2. Payload data bus 1 can be connected to the secondary MIA port of payload MDM 2 and payload data bus 2 can be connected to the secondary MIA port of payload MDM 1 by flight crew selection.

    The two launch data buses are also connected to dual MDM MIA ports. The flight crew cannot switch these ports; however, if an input/output error is detected on LF1 or LA1 during ascent, an automatic switchover occurs.

    The only hardware controls for the MDMs are the MDM FC and MDM PL power switches on panel O6. These are on/off switches that provide or remove power for the four aft and four forward flight-critical MDMs and PL1, PL2 and PL3 MDMs. The PL3 MDM switch is unwired and is not used. There are no flight crew controls for the SRB MDMs.

    Each MDM is redundantly powered by two main buses. The power switches control bus power for activation of a remote power controller for main power bus to an MDM. The main buses power separate power supplies in the MDM. Loss of either the main bus or MDM power supply does not cause a loss of function because each power supply powers both channels in the MDM. Turning power off to an MDM resets all the commands to subsystems.

    The SRB MDMs receive power through SRB buses A and B; they are tied to the orbiter main buses and controlled by the master events controller circuitry. The launch forward and aft MDMs receive their power through the preflight test buses.

    The FF1, PL1 and LF1 MDMs are located in the forward avionics bays and are cooled by water coolant loop cold plates. LA1 and the FA MDMs are in the aft avionics bays and are cooled by Freon coolant loop cold plates. MDMs LL1, LL2, LR1 and LR2 located in the SRBs are cooled by passive cold plates.

    Modules and cards in an MDM depend on the hardware components accessed by that type of MDM. An FF MDM and an FA MDM are not interchangeable. However, one FF MDM may be interchanged with another or one payload MDM with another.

    Each MDM is 13 by 10 by 7 inches and weighs 36.7 pounds. MDMs use less than 80 watts of power.

    The MDM contractor is Honeywell Inc., Sperry Space Systems Division, Phoenix, Ariz.

MASTER EVENTS CONTROLLERS

    The two master events controllers under GPC control send signals to arm and safe pyrotechnics and command and fire pyrotechnics during the solid rocket booster/external tank separation process and the orbiter/external tank separation process. The MEC contractor is Rockwell International, Autonetics Group, Anaheim, Calif.

DATA BUS ISOLATION AMPLIFIERS

    Data bus isolation amplifiers are the interfacing devices for the GSE/LPS and the SRB MDMs. They transmit or receive multiplexed data in either direction. The amplifiers enable multiplexed communications over the longer data bus cables that connect the orbiter and GSE/LPS. The receiving section of the amplifiers detects low-level coded signals, discriminates against noise and decodes the signal to standard digital data at a very low bit error rate; the transmit section of the amplifiers then re-encodes the data and retransmits it at full amplitude and low noise.

    Data bus couplers couple the vehicle multiplexed data and control signals from the data bus and cable studs connected to the various electronic units. The couplers also perform impedance matching on the data bus, line termination, dc isolation and noise rejection.

    Each data bus isolation amplifier is 7 by 6 by 5 inches and weighs 7.5 pounds. Each data bus coupler is 1 cubic inch in size and weighs less than 1 ounce. The contractor for the data bus isolation amplifiers and data bus couplers is Singer Electronics Systems Division, Little Falls, N.J.

BACKUP FLIGHT CONTROL

    Even though the four primary avionics software system GPCs control all GN&C functions during the critical phases of the mission, there is always a possibility that a generic failure could cause loss of vehicle control. Thus, the fifth GPC is loaded with different software, created by a different company than the PASS developer. This different software is the backup flight system. To be ready to take over control of the vehicle, the BFS monitors the PASS GPCs to keep track of the current state of the vehicle. If required, the BFS can take over control of the vehicle at the press of a button. The BFS also performs the systems management functions during ascent and entry because the PASS GPCs are operating in GN&C. BFS software is always loaded into GPC 5 before flight, but any of the five GPCs could be made the BFS GPC if necessary.

    The BFS interface programs, events and applications controllers, and GN&C are provided by the Charles Stark Draper Laboratory Inc., Cambridge, Mass. The remainder of the software, as well as the integration of the total backup flight control system, is provided by Intermetrics and Rockwell International. The GN&C software is written in HAL/S by Intermetrics of Boston, Mass.

    Since the BFS is intended to be used only in a contingency, its programming is much simpler than that of the PASS. Only the software necessary to complete ascent or entry safely, maintain vehicle control in orbit and perform systems management functions during ascent and entry is included. Thus, all the software used by the BFS can fit into one GPC and never needs to access mass memory. For added protection, the BFS software is loaded into the MMUs in case of a BFS GPC failure.

    The BFS, like PASS, consists of system software and applications software. System software in the BFS performs basically the same functions as it does in PASS. These functions include time management, PASS/BFS interface, multifunction CRT display system, input/output, uplink/downlink and engage/disengage control. The system software is always operating when the BFS GPC is not in halt.

    Applications software in the BFS has different major functions, GN&C and systems management; but all of its applications software resides in main memory at one time, and the BFS can process software in both major functions simultaneously. The GN&C functions of the BFS, designed as a backup capability, support the ascent phase beginning at major mode 102 and the deorbit/entry phase beginning at major mode 301. In addition, the various ascent abort modes are supported by the BFS. The BFS provides only limited support for on-orbit operations through major modes 106 or 301. Because the BFS is designed to monitor everything the PASS does during ascent and entry, it has the same major modes as the PASS in OPS 1, 3 and 6.

    The BFS systems management contains software to support the ascent and entry phases of the mission. Whenever the BFS GPC is in the run or standby mode, it runs continuously; however, the BFS does not control the payload buses in standby. The systems management major function in the BFS is not associated with any operational sequence.

    Even though the five general-purpose computers and their switches are identical, the GPC mode switch on panel O6 works differently for a GPC loaded with BFS. Since halt is a hardware-controlled state, no software is executed. The standby mode in the BFS GPC is totally different from its counterpart in the PASS GPCs. When the BFS GPC is in standby, all normal software is executed as if the BFS were in run, the only difference being that BFS command of the payload data buses is inhibited in standby. The BFS is normally put in run for ascent and entry and in standby whenever a PASS systems management GPC is operating. If the BFS is engaged while in standby or run, it takes control of the flight-critical and payload data buses. The mode talkback indicator on panel O6 indicates run if the BFS GPC is in run or standby and displays a barberpole if the BFS is in halt or has failed.

    The BFS is synchronized with PASS so that it can track the PASS and keep up with its flow of commands and data. Synchronization and tracking take place during OPS 1, 3 and 6. During this time, the BFS listens over the flight-critical data buses to the requests for data by PASS and to the data coming back. The BFS depends on the PASS GPCs for all of its GN&C data and must be synchronized with the PASS GPCs so that it will know when to receive GN&C data over the FC buses. When the BFS is in sync and listening to at least two strings, it is said to be tracking PASS. As long as the BFS is in this mode, it maintains the current state vector and all other information necessary to fly the vehicle in case the flight crew needs to engage it. The BFS uses the same master timing unit source as PASS and keeps track of Greenwich Mean Time over the flight-critical buses for synchronization.
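    The tracking condition above reduces to a simple predicate; this restates the text, with invented parameter names:

```python
def bfs_is_tracking(in_sync: bool, strings_listened_to: int) -> bool:
    """BFS tracks PASS when synchronized and listening to >= 2 strings."""
    return in_sync and strings_listened_to >= 2
```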

    The BFS also monitors some inputs to PASS CRTs and updates its own GN&C parameters accordingly. When the BFS GPC is tracking the PASS GPCs, it cannot command over the FC buses but may listen to FC inputs through the listen mode.

    The BFS GPC controls its own instrumentation/PCMMU data bus. The BFS GPC intercomputer communication data bus is not used to transmit status or data to the other GPCs; and the MMU data buses are not used except during initial program load and MMU assignment, which use the same IPL source switch used for PASS IPL.

    A major difference between the PASS and BFS is that the BFS can be shifted into OPS 1 or 3 at any time, even in the middle of ascent or entry.

    The BFC lights on panels F2 and F4 remain unlighted as long as PASS is in control and the BFS is tracking. The lights flash if the BFS loses track of the PASS and stands alone. The flight crew must then decide whether to engage the BFS or try to initiate BFS tracking again by a reset. When BFS is engaged and in control of the flight-critical buses, the BFC lights are illuminated and stay on until the BFC is disengaged.

    Since the BFS does not operate in a redundant set, its discrete inputs and outputs, which are fail votes from and against other GPCs, are not enabled; thus, the GPC matrix status light on panel O1 for the BFS GPC does not function as it does in PASS. The BFS can illuminate its own light on the GPC matrix status panel if the watchdog timer in the BFS GPC times out or if the BFS GPC does not complete its cyclic processing.

    To engage the BFS, which is considered a last resort to save the vehicle, the crew presses a BFS engage momentary push button located on the commander's or pilot's rotational hand controller. As long as the RHC is powered and the BFS GPC output switch is in backup on panel O6, depressing the engage push button on the RHC engages the BFS and causes PASS to relinquish control during ascent or entry. There are three contacts in each engage push button, and all three contacts must be made to engage the BFS. The signals from the RHC are sent to the backup flight controller, which handles the engagement logic.

    When the BFS is engaged, the BFC lights on panels F2 and F4 are illuminated; the BFS output talkback indicator on panel O6 turns gray; all PASS GPC output and mode talkback indicators on panel O6 display a barberpole; the BFS controls the CRTs selected by the BFS CRT select switch on panel C3; big X and poll fail appear on the remaining CRTs; and all four GPC status matrix indicators for PASS GPCs are illuminated on panel O1.

    When the BFS is disengaged and the BFC CRT switch on panel O3 is positioned to on, the BFS commands the first CRT indicated by the BFC CRT select switch. The BFC CRT select switch positions on panel C3 are 1+2, 2+3 and 3+1. When the BFS is engaged, it assumes control of the second CRT as well.

    If the BFS is engaged during ascent, the PASS GPCs can be recovered on orbit to continue a normal mission. This procedure takes about two hours, since the PASS inertial measurement unit reference must be re-established. To disengage the BFS after all PASS GPCs have been hardware-dumped and software-loaded, the PASS GPCs must be taken to GN&C OPS 3. Positioning the BFC disengage momentary switch on panel F6 to the up position disengages the BFS. The switch sends a signal to the BFC that resets the engage discretes to the GPCs. The BFS then releases control of the flight-critical buses as well as the payload buses if it is in standby, and the PASS GPCs assume command.

    Indications of the PASS engagement and BFS disengagement are as follows: BFC lights on panels F2 and F4 are out, BFS output talkback indicator on panel O6 displays a barberpole, PASS output talkback indicators on panel O6 are gray and BFS release/PASS control appears on the CRT. After disengagement, the PASS and BFS GPCs return to their normal pre-engaged state.

    If the BFS is engaged, there is no manual thrust vector control or manual throttling capability during first- and second-stage ascent. If the BFS is engaged during entry, the speed brake is positioned using the speed brake/thrust controller and the body flap is positioned manually. The BFC system also augments the control stick steering mode of maneuvering the vehicle with the commander's rotational hand controller.

    The software of the BFC system is processed only for the commander's attitude director indicator, horizontal situation indicator and RHC. The BFC system supplies attitude errors on the CRT trajectory display, whereas PASS supplies attitude errors to the ADIs; however, when the BFC system is engaged, the errors on the CRT are blanked.



Information content from the NSTS Shuttle Reference Manual (1988)
Last Hypertexed Wednesday October 11 17:46:50 EDT 1995
Jim Dumoulin (dumoulin@titan.ksc.nasa.gov)


