NASA CONTRACTOR REPORT
NASA CR-193862
RESEARCH REPORTS - 1993 NASA/ASEE SUMMER FACULTY
FELLOWSHIP PROGRAM
The University of Alabama in Huntsville
Huntsville, Alabama
and
The University of Alabama
Tuscaloosa, Alabama
November 1993
Final Report
Prepared for NASA, George C. Marshall Space Flight Center
Marshall Space Flight Center, Alabama 35812
RESEARCH REPORTS
1993 NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
George C. Marshall Space Flight Center
The University of Alabama in Huntsville
and
The University of Alabama
EDITORS:
Dr. Gerald R. Karr
Chairman of Mechanical & Aerospace Engineering
The University of Alabama in Huntsville
Dr. Charles R. Chappell
Associate Director for Science
Marshall Space Flight Center
Dr. Frank Six
University Affairs Officer
Marshall Space Flight Center
Dr. L. Michael Freeman
Associate Professor of Aerospace Engineering
The University of Alabama
NASA CR-193862
REPORT DOCUMENTATION PAGE
Form Approved
OMB No. 0704-0188
Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources,
gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this
collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson
Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC 20503.
1. AGENCY USE ONLY (Leave blank)
2. REPORT DATE
November 1993
3. REPORT TYPE AND DATES COVERED
Contractor Report
4. TITLE AND SUBTITLE
Research Reports - 1993 NASA/ASEE Summer
Faculty Fellowship Program
5. FUNDING NUMBERS
NGT-01-008-021
6. AUTHOR(S)
G. Karr, R. Chappell, F. Six, M. Freeman, Editors
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)
The University of Alabama in Huntsville and
The University of Alabama, Tuscaloosa, Alabama
8. PERFORMING ORGANIZATION REPORT NUMBER
9. SPONSORING /MONITORING AGENCY NAME(S) AND ADDRESS(ES)
National Aeronautics and Space Administration
Washington, DC 20546
10. SPONSORING /MONITORING
AGENCY REPORT NUMBER
NASA CR- 193862
11. SUPPLEMENTARY NOTES
12a. DISTRIBUTION /AVAILABILITY STATEMENT
Unclassified/Unlimited
Dr. Frank Six, University Affairs Officer
12b. DISTRIBUTION CODE
13. ABSTRACT (Maximum 200 words)
For the 29th consecutive year, a NASA/ASEE Summer Faculty Fellowship Program was
conducted at the Marshall Space Flight Center (MSFC). The program was conducted by the University
of Alabama in Huntsville and MSFC during the period June 1, 1993 through August 6, 1993.
Operated under the auspices of the American Society for Engineering Education, the MSFC program,
as well as those at other NASA centers, was sponsored by the Office of Educational Affairs, NASA
Headquarters, Washington, DC. The basic objectives of the programs, which are in the 30th year of
operation nationally, are (1) to further the professional knowledge of qualified engineering and
science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to
enrich and refresh the research and teaching activities of the participants' institutions; and (4) to
contribute to the research objectives of the NASA centers.
The Faculty Fellows spent 10 weeks at MSFC engaged in a research project compatible with
their interests and background and worked in collaboration with a NASA/MSFC colleague. This
document is a compilation of Fellows' reports on their research during the summer of 1993. The
University of Alabama in Huntsville presents the Co-Directors' report on the administrative
operations of the program. Further information can be obtained by contacting any of the editors.
14. SUBJECT TERMS
Advanced projects; astrionics; payload and orbital systems; preliminary design; materials and processes; propulsion; space science; structures and dynamics; mission operations; systems analysis and integration; information systems; space transportation and exploration.
15. NUMBER OF PAGES
16. PRICE CODE
NTIS
17. SECURITY CLASSIFICATION
OF REPORT
Unclassified
18. SECURITY CLASSIFICATION OF THIS PAGE
Unclassified
19. SECURITY CLASSIFICATION
OF ABSTRACT
Unclassified
20. LIMITATION OF ABSTRACT
Unlimited
NSN 7540-01-280-5500
Standard Form 298 (Rev. 2-89)
Prescribed by ANSI Std. Z39-18
298-102
TABLE OF CONTENTS
I. Amin, Ashok T.
University of Alabama in Huntsville
Interoperability Through Standardization:
Electronic Mail, and X Window Systems
II. Batson, Robert G.
The University of Alabama
Risk Identification and Reduction in Integrated Product Teams
III. Bower, Mark V.
The University of Alabama in Huntsville
Viscoelastic Analysis of Seals for Extended Service Life
IV. Brooks, Joni
Columbia State Community College
CAPE for CaPE
V. Bykat, Alex
Armstrong State College
A Review of ISEAS Design
VI. Campbell, Warren
University of Alabama in Huntsville
Finite Element Based Electric Motor Design Optimization
VII. Cariapa, Vikram
Marquette University
Characteristics of Products Generated by Selective Sintering and
Stereolithography Rapid Prototyping Processes
VIII. DeBrunner, Linda
University of Oklahoma
Performance of the Engineering Analysis and Data System II
Common File System
IX. Duchon, Claude E.
University of Oklahoma
Water Cycle Research Associated With The CaPE Hydrometeorology
Project (CHymP)
X. Elrod, David
The University of Alabama in Huntsville
Foil Bearings
XI. Farrington, Phillip A.
The University of Alabama in Huntsville
Design and Specification of a Centralized Manufacturing Data
Management and Scheduling System
XII. Floyd, Stephen A.
University of Alabama in Huntsville
Technology Utilization Office Data Base Analysis and Design
XIII. Foreman, James W.
Alabama A & M University
A Study of the Core Module Simulator Floor Capability
XIV. Gerth, Richard J.
The Ohio University
A Minimum Cost Tolerance Allocation Method for Rocket Engines
XV. Hartfield, Jr., Roy J.
Auburn University
Validation of a Nonintrusive Optical Technique for the
Measurement of Liquid Mass Distribution in a Two-Phase Spray
XVI. Highsmith, Alton L.
The University of Alabama
Impact Damage in Filament Wound Composite Bottles
XVII. Hodel, A. Scottedward
Auburn University
Octave: A Marsyas Post-Processor for Computer-Aided Control
System Design
XVIII. Ierkic, Henrick M.
University of Puerto Rico-Mayagüez
On the Analysis of Clear Air Radar Echoes Severely Contaminated
by Clutter
XIX. Jackson, D. Jeff
The University of Alabama
A Compilation of Technology Spinoffs From the U.S. Space Shuttle
Program
XX. Jemian, Wartan A.
Auburn University
Weld Fracture Criteria for Computer Simulation
XXI. Johnson, Adriel D.
The University of Alabama in Huntsville
Measuring the Dynamics of Structural Changes in Biological
Macromolecules from Light Scattering Data
XXII. Jolly, Steven D.
University of Colorado at Boulder
Weld Joint Concepts for On-Orbit Repair of Space Station Freedom
Fluid System Tube Assemblies
XXIII. Karimi, Majid
Indiana University of Pennsylvania
Diffusion on Cu Surfaces
XXIV. Kunin, Boris I.
The University of Alabama in Huntsville
J-Integral Patch for Finite Element Analysis of Dynamic Fracture
Due to Impact of Pressure Vessels
XXV. Landrum, David B.
The University of Alabama in Huntsville
CFD Simulation of Coaxial Injectors
XXVI. Lestrade, John Patrick
Mississippi State University
Structure in Gamma-Ray Burst Time Profiles:
Correlations with Other Observables
XXVII. Lindsey, Patricia F.
East Carolina University
Spatial Interpretation of NASA's Marshall Space Flight Center
Payload Operations Control Center Using Virtual Reality
Technology
XXVIII. Luxemburg, Leon A.
Texas A&M University
Neural Network-Based Control Using Lyapunov Functions
XXIX. Martin, James A.
The University of Alabama
Access to Space Studies
XXX. McNamara, Bernard
New Mexico State University
Flux Measurements Using the BATSE Spectroscopic Detectors
XXXI. Moore, Loretta A.
Auburn University
Integration and Evaluation of a Simulator Designed to be Used
Within a Dynamic Prototyping Environment
XXXII. Moriarity, Debra M.
University of Alabama in Huntsville
Evaluation of Ovostatin and Ovostatin Assay
XXXIII. Moynihan, Gary P.
The University of Alabama
Evaluation of Computer-Aided Instruction Techniques for the Crew
Interface Coordinator Position
XXXIV. Noble, Viveca K.
Tuskegee University
Error Coding Simulations
XXXV. Palazzolo, Alan B.
Texas A & M University
Simulation of Cryogenic Turbopump Annular Seals
XXXVI. Parker, Joey K.
The University of Alabama
Controller Modeling and Evaluation for PCV Electro-Mechanical
Actuator
XXXVII. Paul, Anthony D.
Oakwood College
The Measurement and Analysis of Leaf Spectral Reflectance of Two
Stands of Loblolly Pine Populations
XXXVIII. Phanord, Dieudonne D.
University of Alabama in Huntsville
LRAT: Lightning Radiative Transfer
XXXIX. Santi, L. Michael
Christian Brothers University
Space Shuttle Main Engine Performance Analysis
XL. Schreur, Barbara
Texas A & I University
Evaluation of the Efficiency and Fault Density of Software
Generated by Code Generators
XLI. Slattery, Kerry T.
Washington University in St. Louis
Micromechanical Simulation of Damage Progression in Carbon
Phenolic Composites
XLII. Smith, Robert
St. John Fisher College
A Chemical Sensor and Biosensor Based Totally Automated Water
Quality Monitor for Extended Space Flight: Step One
XLIII. Talia, George E.
The Wichita State University
Microstructural Analysis of the 2195 Aluminum-Lithium Alloy
Welds
XLIV. Thompson, Roger C.
The Pennsylvania State University
Torque Equilibrium Attitudes for the Space Station
XLV. Wang, C. Jeff
Tuskegee University
Properties and Processing Characteristics of Low Density Carbon
Cloth Phenolic Composites
XLVI. Wang, Jai-Ching
Alabama A & M University
Effects of Thermal-Solutal Convection on Temperature and Solutal
Fields Under Various Gravitational Orientations
XLVII. Whitaker, Kevin W.
The University of Alabama
Using Neural Networks to Assist in OPAD Data Analysis
XLVIII. Wilson, Gordon R.
The University of Alabama in Huntsville
The Far Ultraviolet (FUV) Auroral Imager for the Inner
Magnetospheric Imager (IMI) Mission: Options
XLIX. Yang, Yii-Ching
Tuskegee University
Evaluation of Advanced Materials Through Experimental Mechanics
and Modelling
L. Varmette, P.G., and Lestrade, J.P.
Mississippi State University
Using Contour Maps to Search for Red-Shifted 511 keV Features in
BATSE GRB Spectra
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
INTEROPERABILITY THROUGH STANDARDIZATION:
ELECTRONIC MAIL, AND X WINDOW SYSTEMS
Prepared By: Ashok T. Amin
Academic Rank: Associate Professor
Institution and Department: University of Alabama in Huntsville, Computer Science Department
MSFC Colleague: Alan Forney
NASA/MSFC:
Office: Information Systems Office
Division: Systems Engineering and Integration Division
1.0 Introduction
Since the introduction of computing machines, there have been continual advances in computer and communication technologies, though these technologies are now approaching limits. The user interface has evolved from a row of switches, through character-based interfaces on teletype and then video terminals, to the present-day graphical user interface. It is expected that the next significant advances will come in the availability of services, such as electronic mail and directory services, as standards for applications are developed, and in 'easy to use' interfaces, such as graphical user interfaces (for example, Windows and the X Window System), which are being standardized.
Various proprietary electronic mail (email) systems are in use within organizations at each NASA center. Each system provides email services to users within an organization; however, support for email services across organizations and across centers varies from center to center and is often not easy to use. A recent NASA email initiative is intended "to provide a simple way to send email across organizational boundaries without disruption of the installed base" [4]. The initiative calls for integration of existing organizational email systems through gateways connected by a message switch, supporting X.400 and SMTP protocols, to create a NASA-wide email system, and for implementation of NASA-wide email directory services based on the OSI standard X.500. A brief overview of MSFC efforts as a part of this initiative is given below.
Window-based graphical user interfaces make computers easy to use. The X window protocol was developed at the Massachusetts Institute of Technology in 1984-85 to provide a uniform window-based interface in a distributed computing environment with heterogeneous computers. It has since become a standard supported by a number of major manufacturers, and X Window systems, terminals and workstations, and X Window applications are becoming available. However, the impact of its use on network traffic in a Local Area Network environment is not well understood. It is expected that the use of X Window systems will increase at MSFC, especially for Unix-based systems. An overview of the X window protocol is presented and its impact on network traffic is examined. It is proposed that an analytical model of X window systems in the network environment be developed and validated through the use of measurements to generate application and user profiles.
2.0 NASA Email Initiative
NASA centers typically have one or more types of proprietary email systems, such as ccMail, QuickMail, and All-in-One. Providing email service to users on different email systems within and across centers can be problematic. The NASA email initiative is intended to provide easy-to-use email services for the exchange of messages between users within and across centers, and to facilitate the use of email services by providing directory services for email addresses. The implementation of the initiative is based on the use of standards: X.400 for Message Handling and X.500 for Directory Services [5].
Standards for Message Handling and Directory Services
The model of the Message Handling System (MHS), shown in Figure 1, is based on the familiar postal mail system. An MHS consists of User Agents (UA), which interface with Message Transfer Agents (MTA) of the Message Transfer System (MTS), and a Message Store (MS) for storage of messages in transit. The X.400 standard defines protocols for communication between MTAs, for access to an MTA by an MS or UA, and for access to an MS by a UA. It supports text, voice, facsimile, teletex, videotex, etc., and provides for non-repudiation of submission and delivery. A justifiable criticism of X.400 is the lack of a standard for the user interface to the UA: email is envisioned to be a universal service in the sense that telephone service is universal, and the utility of an email system depends mainly on the functionality its UA provides to the user.
Figure 1. Message Handling System and Directory System.
The model of the directory service, shown in Figure 1, is based on the familiar telephone directory service. The directory system consists of Directory System Agents (DSA) and Directory User Agents (DUA). The directory is distributed, and each part of it is expected to be assigned to a DSA; a DSA may, however, be assigned more than one part. The X.500 standard defines protocols for DSA access by a DUA and for communication between DSAs. It supports authentication of the user and of the information. Here again, the user interface to the DUA has not been defined. Though the directory is intended to contain information about objects in the communication system, such as persons, organizations, and processes, it is expected that the MHS will be a major user of the directory services for interpersonal messaging. An integrated view of the two systems is depicted in Figure 1, where a DUA may be integrated with MHS components.
MSFC Implementation of the Email Initiative
Email systems at MSFC may be classified based on whether or not they are managed by the Information Systems Office (ISO). The ISO-managed email systems are interconnected through a hierarchy of gateways leading to a central switch (which also serves as a DEC X.400 gateway) that routes email to the destination email system gateway within MSFC or outside, typically at another center. The user agents of these systems provide a highly functional user interface; however, the addressing schemes used by these systems differ. Of the email systems not managed by ISO, Unix-based email systems using the Simple Mail Transfer Protocol (SMTP) have universal connectivity to other SMTP email systems over the Internet.
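As an illustration of this universal connectivity, the short sketch below submits a message over SMTP. The host names and addresses are hypothetical placeholders, and the sketch uses a modern Python library rather than anything fielded at MSFC.

    import smtplib
    from email.message import EmailMessage

    # Hedged sketch: hosts and addresses below are invented placeholders.
    msg = EmailMessage()
    msg["From"] = "user@org1.msfc.example.gov"
    msg["To"] = "colleague@other-center.example.gov"
    msg["Subject"] = "Cross-center test message"
    msg.set_content("Any SMTP host on the Internet can route this.")

    # Hand the message to the local SMTP relay (hypothetical host name).
    with smtplib.SMTP("smtp-relay.example.gov") as relay:
        relay.send_message(msg)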
A message switch is central to the implementation at MSFC. The switch, a CDC EP/IX Mail*Hub, supports X.400, SMTP, and fax gateways; has an integrated X.500 directory; and provides address translation between X.400 and SMTP. It will provide interoperability across all email systems at MSFC and facilitate simple addressing based on first name and last name through the X.500 directory services. The Electronic Mail Implementation Group has defined requirements on the content of directory entries and on directory access servers. However, except for query-by-mail, no requirements for a DUA for on-line directory access by users have been specified.
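The sketch below illustrates the two services just described: a first-name/last-name directory query and translation between SMTP and X.400 address forms. The toy directory, names, and address strings are invented examples, not the Mail*Hub implementation or the actual MSFC directory schema.

    # Toy sketch of first-name/last-name addressing and SMTP <-> X.400
    # address translation. Entries and address forms are invented; in
    # the real system they would live in the X.500 directory.
    DIRECTORY = {
        ("jane", "doe"): {
            "smtp": "jane.doe@msfc.example.gov",
            "x400": "/G=Jane/S=Doe/O=MSFC/PRMD=NASA/ADMD= /C=US/",
        },
    }

    def lookup(first, last):
        """Resolve a first-name/last-name query to the user's addresses."""
        return DIRECTORY.get((first.lower(), last.lower()))

    def translate(address_type, first, last):
        """Return the requested form ('smtp' or 'x400') of the address."""
        entry = lookup(first, last)
        return entry[address_type] if entry else None

    print(translate("x400", "Jane", "Doe"))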
3.0 X Window Systems in Local Area Network Environment
Graphical user interfaces (GUI) have revolutionized user interaction with computers. In comparison with a character-based interface, a GUI is easy to use, and learning to use a new application is even easier. The X window system, which implements the X window protocol, provides device-independent, pixel-based graphics for the management of hierarchical, resizable windows. The protocol can be used over any reliable byte stream. The X window system permits multiple applications, running simultaneously on local and remote hosts, to manipulate their windows on the display. It was originally developed for use with distributed applications.
Client/Server Computing
Information systems are moving from centralized mainframe computing to file-server-based computing, in which specialized processors manage a file store and provide file services to PCs and workstations interconnected over a Local Area Network (LAN). X window systems are available in the form of X terminals, X workstations, and PCs. X terminals are employed in a client/server architecture for the Army's RCAS, in which X terminals, file servers, and application servers are interconnected over a LAN, and the various sites are interconnected over dedicated lines. Little is known about the traffic implications of X window systems in the network environment.
X Window Protocol and Networking
The X window protocol is used for communication between a client application, running on the local host or a remote host, and the X server of the X window system. Because it was intended to support distributed applications, it was designed to be efficient in the network environment. Figure 2 shows a view of X window system operation from a network traffic perspective.
A client sends draw requests and information requests to the server, and the X server sends user inputs (events), replies, and error reports to the appropriate client. Events and error reports are 32 bytes in size, while requests and replies are multiples of 4 bytes, with a reply being at least 32 bytes. The server manages windows, does all drawing, and interfaces with the device drivers to get keyboard and mouse inputs. It also manages off-screen memory, windows, fonts, cursors, and colormaps. The graphic context, the information about how graphic requests are to be interpreted, is cached by the server so that this information need not be sent over the network with each graphic request. Other similar abstractions stored in the server include the window, which allows the server to manage which parts of the screen are displaying which parts of which window; the pixmap, an off-screen virtual drawing surface that must be copied into a window to become visible; and the colormap, which allows the user to easily specify colors for graphic requests.

Figure 2. X Window System in a Network.
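The message sizes above support a simple back-of-the-envelope traffic model. The sketch below totals the bytes a session puts on the wire; the message counts and sizes fed to it are invented inputs for illustration, not measurements.

    # Rough traffic estimate from the message sizes quoted above: events
    # and errors are fixed 32-byte messages; requests and replies are
    # multiples of 4 bytes, a reply being at least 32 bytes.

    def pad4(nbytes):
        """Round a request or reply size up to a multiple of 4 bytes."""
        return (nbytes + 3) // 4 * 4

    def session_bytes(n_events, n_errors, request_sizes, reply_sizes):
        total = 32 * (n_events + n_errors)
        total += sum(pad4(r) for r in request_sizes)
        total += sum(max(32, pad4(r)) for r in reply_sizes)
        return total

    # e.g., 5000 small draw requests, 1200 input events, 40 replies
    print(session_bytes(1200, 0, [12] * 5000, [32] * 40))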
Previous studies of the traffic impact of the X window protocol in an academic environment showed that the protocol is very efficient and that its impact on network traffic is not significant. However, measurements are needed in non-academic environments to better understand the traffic impact. In particular, little is known about the traffic impact in an environment where X window systems coexist with PCs in a file server environment. Development of analytical models, and measurements to validate those models, are suggested for further work in this area.
References
[1] Standard Object Attribute Formats for NASA X.500 Directory Implementations, Version 1, Electronic Messaging Group, June 25, 1993.
[2] Dunwoody, J. C. and Linton, M. A., "A Dynamic Profile of Window System Usage", IEEE Symposium on Local Area Networks, pp. 90-99, 1988.
[3] Nye, A., "Networking and the X Window System", in Unix Networking (Eds. S. G. Kochan and P. H. Wood), Hayden Books, 1989.
[4] Lynn, J. C., "NASA-Wide Electronic Mail (E-Mail) Initiative", Memorandum to Information Resource Oversight Council (IORC) Members, June 11, 1993.
[5] Plattner, B., Lanz, C., Muller, M., and Walter, T., X.400 Message Handling, Addison-Wesley, 1991.
[6] Scheifler, R. W. and Gettys, J., "The X Window System", ACM Transactions on Graphics, Vol. 5, No. 2, pp. 79-109, 1986.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
RISK IDENTIFICATION AND REDUCTION IN INTEGRATED PRODUCT TEAMS
Prepared by: Robert G. Batson, Ph.D.
Academic Rank: Professor
Institution and Department: The University of Alabama, Department of Industrial Engineering
MSFC Colleague(s): Glen D. Ritter, L. Don Woodruff
NASA/MSFC:
Office: Systems Analysis and Integration
Division: Systems Definition
Branch: Aerospace Systems Branch
II
Introduction
This brief report summarizes research and planning conducted during Summer 1993 for MSFC on the subjects of risk identification, assessment, and management. Research findings are presented, citing useful references. The major output of this work, the AXAF-S Project Risk Management Plan, is outlined.
Body
Risk identification, the first step in the three-step risk analysis process (1), consists of definition and characterization of all potential problems, including analysis of cause and effect, primary and secondary impacts on the project, and a qualitative assessment of whether each potential problem is high, medium, or low risk. Risk identification is best done via team meetings, individual interviews, or questionnaires, using the experience and technical details available in the project. There are other sources of risk identification information (Garland Bauch, NASA/JSC GM3/SSP Configuration Management, identified over fifty possibilities in collaboration with the author during July 1993), which may fit neatly into the following six categories: 1) checklists, lessons learned, and so-called risk "templates"; 2) one-on-one interviews and questionnaires; 3) formal project or engineering reviews; 4) cause-and-effect diagrams and brainstorming; 5) Tiger Teams and external reviews; 6) extracts from project documents, such as planning documents for the "ilities" and requirements documents.
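As a toy illustration of the qualitative high/medium/low screening mentioned above, the sketch below bins each potential problem by the product of 1-3 likelihood and consequence scores. The scales, cutoffs, and example risks are invented, not taken from the AXAF-S plan.

    # Invented 1-3 scales and cutoffs; not the AXAF-S plan's procedure.
    def screen(likelihood, consequence):
        """Bin a potential problem into low/medium/high risk."""
        score = likelihood * consequence
        if score >= 6:
            return "high"
        return "medium" if score >= 3 else "low"

    potential_problems = {
        "late vendor delivery": (3, 2),
        "weld process defect":  (2, 3),
        "documentation error":  (1, 1),
    }
    for name, (p, c) in potential_problems.items():
        print(f"{name}: {screen(p, c)} risk")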
Risk assessment, the second step in risk analysis, uses information from risk identification, probability encoding techniques, and various quantitative methods to synthesize the input uncertainties into an overall assessment of program risk. Risk assessment techniques, and the math models they require, are fully detailed in (1, 2).
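One common quantitative method of this kind, used in the simulation-based techniques of Section 5.6 of the plan outlined in Table 1 below, is Monte Carlo sampling of encoded distributions. A minimal sketch follows; the three serial tasks and their (low, likely, high) week estimates are invented examples.

    import random

    # Minimal Monte Carlo sketch of synthesizing input uncertainties
    # into an overall risk figure. Task durations are invented.
    tasks = {
        "design":    (10, 12, 18),
        "fabricate": (20, 24, 36),
        "test":      (8, 10, 16),
    }

    def one_schedule():
        """Sample each task from a triangular distribution, sum the path."""
        return sum(random.triangular(lo, hi, mode)
                   for lo, mode, hi in tasks.values())

    trials, target = 100_000, 52   # number of samples; target weeks
    overruns = sum(one_schedule() > target for _ in range(trials))
    print(f"P(schedule > {target} weeks) ~ {overruns / trials:.2f}")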
Risk management (4) uses information from risk identification and risk assessment in decision-making in order to reduce risk. Risk management occurs when the appropriate manager or team takes action to avoid a risk or to handle it in some way. Risk management strategies are numerous and must fit the given project or situation. General categories of risk management strategies are: 1) risk avoidance: select a lower-risk alternative, or eliminate a requirement or system element; 2) risk control: take action to either reduce the probability of a problem occurring or mitigate the consequences should it occur; 3) risk transfer: either transfer or share risk through mechanisms such as contract types and warranties, or change the risk from one form (e.g., schedule) to another (cost); 4) risk assumption: based on an informed understanding of the potential problem (i.e., its probability and consequences), agree to do nothing and accept the consequences should the problem occur; 5) knowledge and research: when a team cannot select among strategies 1-4 because of inadequate information, it may appoint a Tiger Team or even set up a small R&D project to increase its knowledge of the risk.
Finally, a sixty-page "AXAF-S Project Risk Management Plan" was written. This comprehensive plan for a project risk analysis activity, focused on the AXAF-S top-level team (the Core Product Development Team) as the decision authority for risk management and tracking, includes the results of the preliminary AXAF-S risk area identification activities as a series of tables in Section 4.0. An outline of this plan is provided in Table 1 below. The Risk Reduction Plans and the concept for the Risk Tracking System are based on ideas in (4, Chapters 12 and 13).
1.0 INTRODUCTION 1
1.1 Purpose 1
1.2 Scope 1
1.3 Key Project Guidelines 2
1.4 AXAF-S Master Schedule 3
1.5 AXAF-S Mission Funding 4
2.0 RISK MANAGEMENT TERMINOLOGY 4
2.1 Risk Analysis Process 4
2.2 Risk Analysis Techniques 5
2.3 Project Risk Glossary 6
3.0 AXAF-S RISK MANAGEMENT APPROACH 8
3.1 Risk Management Philosophy 8
3.2 Risk Assessment Models Required 8
3.3 Use of "Lessons Learned" Documents 9
4.0 AXAF-S RISK AREA IDENTIFICATION 9
4.1 Purpose 9
4.2 Scope 9
4.3 AXAF-S Risk Area Information Sources 12
4.4 AXAF-S Risk Areas (Preliminary) 12
4.5 Proposed Format to Complete AXAF-S Risk Identification 30
5.0 RISK ASSESSMENT 31
5.1 Introduction 31
5.2 Scope and Rationale 31
5.3 AXAF-S Project Specific Math Models 31
5.3.1 AXAF-S Project Network Model 32
5.3.2 AXAF-S Cost Risk Model 33
5.3.3 AXAF-S Performance Estimating Models 33
5.3.4 AXAF-S Weight Risk Model 33
5.3.5 AXAF-S Power Risk Model 34
5.4 AXAF-S Probability Encoding Techniques 34
5.5 AXAF-S Algorithm-Based Risk Assessment Techniques 36
5.5.1 Critical Path Method (CPM) 36
5.5.2 Project Evaluation and Review Technique (PERT) 36
5.5.3 Additive Technique for Total Weight, Power, & Cost 37
5.6 AXAF-S Simulation-Based Risk Assessment Techniques 38
5.6.1 Schedule Risk via Network Simulation 38
5.6.2 Cost Risk via Parametric Cost Model Simulation 40
5.6.3 Performance Risk via Monte Carlo Simulation 41
6.0 RISK MANAGEMENT 43
6.1 Introduction 43
6.1.1 Risk Management Implementation 43
6.1.2 AXAF-S Risk Analysis and Tracking Process 44
6.1.3 AXAF-S Risk Analysis and Tracking Responsibilities 44
6.2 Risk Management Strategies 44
6.3 Risk Reduction Plans and Reports 47
6.4 AXAF-S Risk Tracking System (RTS) 48
6.4.1 Introduction 48
6.4.2 RTS Concepts 49
6.4.3 Value of an RTS 50
6.4.4 Selection of RTS Parameters 51
6.4.5 Linkage to the Risk Assessment Models 52
6.4.6 Process to Create and Maintain the RTS 53
7.0 SUMMARY STATEMENT AND IMPLEMENTATION SCHEDULE ... 54
Table 1. AXAF-S Project Risk Management Plan Table of Contents
References
1. Batson, R.G., Program Risk Analysis Handbook, NASA Technical Memorandum TM-100311, NASA George C. Marshall Space Flight Center, August 1987.
2. Information Spectrum, Inc., Risk Assessment Techniques: A Handbook for Program Management Personnel, Defense Systems Management College Textbook, July 1993.
3. Lockheed Missiles & Space Company, Systems Engineering Management Guide, Defense Systems Management College Textbook, 1983.
4. The Analytic Sciences Corporation, Risk Management: Concepts and Guidance, Defense Systems Management College Textbook, March 1989.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
VISCOELASTIC ANALYSIS OF SEALS FOR EXTENDED SERVICE LIFE
Prepared By: Mark V. Bower, Ph.D., P.E.
Academic Rank: Assistant Professor
Institution and Department: The University of Alabama in Huntsville, Department of Mechanical and Aerospace Engineering
MSFC Colleague(s): Thomas D. Bechtel, Brian K. Mitchell
NASA/MSFC:
Laboratory: Propulsion Laboratory
Division: Mechanical Systems Division
Branch: Fluid Systems Design Branch
III
Introduction
The space station is being developed for a service life of up to thirty years. As a consequence, the design requirements for the seals to be used are unprecedented. Full-scale testing to assure that the selected seals can satisfy the design requirements is not feasible. As an alternative, a sub-scale test program (2) has been developed by MSFC to calibrate the analysis tools to be used to certify the proposed design. This research has been conducted in support of the MSFC Integrated Seal Test Program. The ultimate objective of this research is to correlate analysis and test results to qualify the analytical tools, which in turn are to be used to qualify the flight hardware.
Seals are simple devices in widespread use. The most common type of seal is the O-ring. O-ring seals are typically rings of rubber with a circular cross section. The rings are placed between the surfaces to be sealed, usually in a groove of some design. The particular design may differ based on a number of factors. This research is focused on O-rings that are statically compressed by perpendicular clamping forces, commonly referred to as face seals. In this type of seal the O-ring is clamped between the sealing surfaces by loads perpendicular to the circular cross section.
Specific Problem Addressed
The Integrated Seal Test Program is currently performing load decay tests to be used in the qualification of the analysis tools. For these tests to provide an accurate benchmark for analyses, the tests must produce accurate, repeatable results. This study was undertaken to assure the quality of the test results produced. To that end, test results from three different tests are evaluated for repeatability, in both load magnitude and time-dependent behavior. Further, in an initial attempt to qualify the analysis tool, the results are compared with finite element analysis results.
Method of Approach
The load decay tests being conducted under the Integrated Seal Test Program use a sub-scale test article to load an O-ring to a specified level of squeeze. The test article is closed with a single bolt at the center of the fixture. A load cell is attached to the bolt to measure the clamping force on the O-ring. The load cell output is converted to digital form by an analog-to-digital converter and stored with the time of measurement in data files on a dedicated 286 computer. Data files generated by the load test are transferred to other computers by floppy disc. After initial testing, the computer was set up to automatically resume load measurements in the event of a power loss. The test article is sub-scale in major diameter only; the cross-section diameter of the O-ring (6.86 mm, 0.270 inches) and the squeezes (15%, 25%, and 40%) are of the same order as the full-scale design. The desired level of squeeze is obtained by clamping the test fixture down to a fixed shim height, after which the shims are removed.
Due to the nature of the load decay tests, a single test will generate multiple data files with a very large number of data records. These files are combined into a single file and reduced in size using Microsoft Excel (version 4.0) command and function macros developed for this research. These programs are documented in an associated report (1).
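An equivalent combine-and-reduce step is sketched below in Python; the actual work used the Excel macros documented in (1), and the file names and two-column (time, load) format here are assumptions for illustration only.

    import csv
    import glob

    # Sketch of the combine-and-reduce step; file names and the
    # two-column (time in seconds, load) format are assumed.
    records = []
    for path in sorted(glob.glob("loadtest_*.csv")):
        with open(path, newline="") as f:
            records += [(float(t), float(load)) for t, load in csv.reader(f)]

    records.sort()            # merge the files into one time series
    reduced = records[::60]   # keep every 60th record to shrink the file

    with open("loadtest_combined.csv", "w", newline="") as f:
        csv.writer(f).writerows(reduced)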
Two issues associated with the repeatability of the load decay tests must be addressed to ensure test quality: load magnitudes and time-dependent behavior. Load magnitudes are compared by plotting the loads from different tests on a single graph. The time-dependent behaviors are compared by plotting the normalized loads from different tests on a single graph.
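This normalization is simply division by the maximum (initial) load, as in the short sketch below.

    def normalize(records):
        """Divide each load by the maximum (initial) load so that tests
        with different initial squeeze plot on a common scale."""
        peak = max(load for _, load in records)
        return [(t, load / peak) for t, load in records]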
The effects of aging are studied in the same manner as the repeatability issues. In addition, results from testing of an aged specimen are compared with those from a virgin specimen by plotting the two sets of data on the same graph with the time axis for the aged specimen shifted horizontally. Time shifting of relaxation curves is a commonly accepted procedure in the analysis of viscoelastic materials. These results are not shown here due to space limitations.
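On a logarithmic time axis, this horizontal shift is equivalent to scaling time by a constant shift factor, as in the sketch below; the factor shown is an arbitrary trial value, not a measured one.

    def time_shift(records, factor):
        """Scale the time axis by a shift factor; on a log-time plot
        this moves the whole relaxation curve horizontally."""
        return [(t * factor, load) for t, load in records]

    # e.g., slide the aged-specimen curve with an arbitrary trial factor
    aged_shifted = time_shift([(10.0, 3.1), (100.0, 2.8)], factor=8.0)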
Results
Results from three preliminary load decay tests performed on O-rings with no side wall contact are shown in Figure 1. The results are plotted on linear scales. Preliminary tests 1 and 2 were performed on virgin O-rings. Results from preliminary test 1 are indicated with filled squares, results from test 2 with filled circles, and results from tests on an aged O-ring with unfilled triangles. Note from the figure that the initial, i.e., maximum, load for test 1 is 3.76 kN (846 lbs.), while the initial load for test 2 is 4.92 kN (1105 lbs.). This is a difference of 23.4% relative to the test 2 initial load. The difference may be explained by several factors: variation in O-ring cross-section diameter from one specimen to another; lack of a reference point on the test article, resulting in angular displacement of the top relative to the bottom; and different shimming procedures. Each of these causes could result in a different squeeze level between tests, and hence different load magnitudes. However, shim height is the most significant factor: review indicated that a shim height of 1.78 mm (0.070 inches) was used for test 1, versus 1.75 mm (0.069 inches) for test 2 and the aged O-ring test. The gaps in the plotted data are due to suspension of data acquisition during power losses.
Note in Figure 1, for test 1, the minor fluctuations in the load value, approximately ±44 N (±10 lbs.), for times between approximately 1.5 million and 2 million seconds. Preliminary analysis indicates that these fluctuations are due to thermal cycling; they have a basic period of one day, with a secondary period of seven days. The test article is located in a temperature-controlled space; however, due to a number of factors, the temperature control system cannot maintain close control of the temperature.
The aged O-ring was thermally aged to accelerate the aging process. The load decay curve shown in Figure 1 was obtained for a specimen that was not loaded during the aging process. Note from the plot that the load values are significantly below those observed in tests of either virgin O-ring. A theoretical explanation for this result is not available at this time.
Figure 1. A Plot of Load Versus Time for Three Preliminary Load Decay Tests on Linear Scales.
Figure 2 shows a plot of the normalized load versus time for the preliminary tests performed on virgin O-rings shown in Figure 1, normalized stress relaxation data (2), and finite element analysis using the stress relaxation data (5). For these plots, the loads measured at each time are normalized with respect to the maximum load. Both the ordinate and abscissa of the plot are logarithmic scales. The figure shows that the results from the two preliminary tests are virtually indistinguishable from one another; review of the numerical values shows less than one percent difference in the normalized values. These results show that the two load decay tests display the same time-dependent behavior in spite of the 23.4% difference in initial load values. Further, one can conclude that whatever the cause of the differences in initial load, it does not affect the time-dependent behavior of the seal in this load decay test (at least for the time observed by the test).

Note in Figure 2 that the normalized stress relaxation data curve is consistently below the load decay curves. The stress relaxation data were obtained from uniaxial testing of O-ring material (V747) at a strain level roughly comparable to that used in the load decay tests. From other testing of O-ring material it is known qualitatively that the stress relaxation behavior changes with strain level: the rate of decay is faster at lower strain levels and slower at higher strain levels. On the basis of this and the load decay test results shown, the operative strain level in the O-ring tested is expected to be above that used to obtain the stress relaxation data. Further, observe in the figure that the curve for the finite element analysis passes through the stress relaxation data. This is as expected from theory as implemented by the ABAQUS finite element code for a constant load analysis (3). On the basis of this observation and the foregoing discussion, the finite element analysis does not accurately describe the seal behavior because a proper stress relaxation curve was not available.
Figure 2. A Plot of Normalized Load Versus Time for Two Preliminary Load Decay Tests of Virgin O-rings, Normalized Stress Relaxation Data (2), and Finite Element Analysis Results (5) on Logarithmic Scales.
Conclusions
The conclusions from this review of the load decay tests and comparison of
experimental results with finite element analysis results are:
1. The load decay tests are repeatable.
2. Minor changes in the test procedure are recommended, i.e., create a reference datum
on the test article to ensure alignment is the same from test to test; use a consistent
shimming method; and re-evaluate time intervals used between measurements to
reduce data file size.
3. Temperature fluctuations should be controlled as much as possible to minimize impact
on load decay testing.
4. A mechanism other than simple stress relaxation is present, causing the load decay response to deviate from the results predicted by finite element analysis.
5. Additional data processing capability is needed within EP43 to analyze the test results.
References
1. Bower, M. V., Seal Life Testing, NASA/MSFC, 1993.
2. Mitchell, B. K. and Flatt, L. W., Design Parameter Test Plan for MSFC Integrated Seal Test Program, NASA/MSFC, 1992.
3. Hibbitt, Karlsson & Sorensen, Inc., ABAQUS Theory Manual, Version 5.2, Pawtucket, RI, 1992.
4. Rogers, P., Internal communication, NASA/MSFC, ED24, 1993.
5. Bowman, D., Internal communication, Parker Seals and NASA/MSFC, 1993.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
CAPE for CaPE
Prepared by: Joni Brooks
Academic Rank: Assistant Professor
Institution: Columbia State Community College
Department: Computer Information Systems
MSFC Colleague(s): Steve Goodman, Ph.D. - NASA; Bill Crosson, Ph.D. - USRA
NASA/MSFC:
Laboratory: Space Science
Division: Earth Science & Applications
Branch: Earth System Processes and Modeling
IV
In an effort to improve short-term forecasting for the Kennedy Space Center region, Holle et al. (1992) investigated the effects of low-level wind regimes on the distribution of cloud-to-ground lightning in central Florida. With a study period of 455 days, Holle et al. (1992) found that "southwest flow contributed 66% of the total network flashes while also occurring on the most days (142)." Relationships among mesoscale thermodynamic variables and precipitation and/or lightning have been addressed in previous studies in Canada (Zawadzki et al. 1981) and the Tennessee valley (Buechler et al. 1990). In Zawadzki et al. (1981), "soundings, surface pressure, temperature and humidity obtained from a standard observation network were correlated with rain rates given by raingages and radar." Buechler et al. (1990) found "a fair relationship between CAPE (convective available potential energy) and daily cloud-to-ground activity," with a correlation coefficient of r = 0.68. The present research investigates the relationships among rainfall, cloud-to-ground (CG) lightning, CAPE, and low-level wind flow using data collected during the CaPE (Convection and Precipitation/Electrification Experiment) field program. The CaPE field program was conducted in east central Florida from July 8, 1991 to August 18, 1991.
To investigate low-level wind flow, the present research uses the same wind regime classifications defined by Holle et al. (1992). For each day of the study period, the mean wind vector was calculated, as described by Watson et al. (1987), from rawinsonde measurements from 0.3 km to 3 km (1,000 ft. to 10,000 ft.). These data were obtained from the Cape Canaveral sounding nearest to 1000 GMT; when Cape Canaveral soundings were unavailable, CLASS soundings from Ti-Co Airport were used. Seven classes were defined as follows: Calm (wind speed <= 2.0 m/s); NE (023°-113°); SE (113°-158°); SO (158°-203°); SW (203°-293°); NW (293°-338°); NO (338°-023°). The phrase 'disturbed sea breeze' will be used to refer to days classified as SW, and 'undisturbed' to days classified in the remaining six categories.
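A direct encoding of this classification is sketched below; the mean wind is assumed to be given as a speed in m/s and a meteorological direction in degrees.

    def wind_regime(speed_ms, direction_deg):
        """Classify the 0.3-3 km mean wind into the seven regimes above."""
        if speed_ms <= 2.0:
            return "Calm"
        d = direction_deg % 360
        if 23 <= d < 113:
            return "NE"
        if 113 <= d < 158:
            return "SE"
        if 158 <= d < 203:
            return "SO"
        if 203 <= d < 293:
            return "SW"
        if 293 <= d < 338:
            return "NW"
        return "NO"   # 338 deg through north to 023 deg

    print(wind_regime(4.5, 240))   # -> SW, a 'disturbed sea breeze' day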
Daily area mean rainfall and rain-rate maxima over one-hour intervals were obtained from 83 raingages operated during the CaPE field program. The locations of the raingage sites are shown in Fig. 1. In an attempt to assess whether large-scale or local forcing dominates in determining the distribution and amount of precipitation, three subdivisions of the CaPE domain were defined; the number of raingages in each cluster was as follows: Merritt Island cluster, 20 gages; Coastal cluster, 25 gages; and Inland cluster, 38 gages.

Figure 1. Locations of the raingage sites.
Daily lightning frequency was obtained from archived data from the National Lightning Detection Network. Daily lightning frequency was calculated for the entire domain and for each of the three clusters described above. The interval 12Z-12Z was used to define a day for both daily lightning frequency and daily area mean rainfall.
The sounding data used to classify each day according to wind regime were also used to calculate CAPE and the bulk Richardson number (Rib). CAPE is a measure of instability and is also referred to as available buoyant energy. The Richardson number represents the ratio of buoyant energy input into turbulence to the energy input from the shear of the mean wind flow (Fleagle and Businger, 1980). Calculations of CAPE and Rib were made using SUDS (System for User-editing and Display of Soundings) software from the Atmospheric Technology Division of the National Center for Atmospheric Research.
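CAPE itself is the vertically integrated parcel buoyancy, CAPE = g * integral of (Tv,parcel - Tv,env)/Tv,env dz over the positively buoyant layer. The crude discrete sketch below illustrates the quantity only; it is not the SUDS algorithm, and the sounding levels are invented.

    G = 9.81  # gravitational acceleration, m/s^2

    # Levels are (height in m, parcel virtual T, environment virtual T,
    # both in K); the values are invented for illustration.
    levels = [
        (1000, 302.0, 300.0),
        (2000, 301.0, 298.0),
        (3000, 299.0, 296.0),
        (4000, 295.5, 294.0),
    ]

    def cape(levels):
        total = 0.0
        for (z0, tp0, te0), (z1, tp1, te1) in zip(levels, levels[1:]):
            # trapezoidal average of the fractional buoyancy in the layer
            buoyancy = 0.5 * ((tp0 - te0) / te0 + (tp1 - te1) / te1)
            if buoyancy > 0.0:
                total += G * buoyancy * (z1 - z0)
        return total   # J/kg

    print(f"CAPE ~ {cape(levels):.0f} J/kg")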
In an attempt to determine whether CAPE would be a better nowcasting tool than low-level wind flow, this study examined the dependence of CAPE on wind direction in the lower troposphere. Fig. 2 is a plot showing this relationship for each Cape Canaveral sounding. A similar plot was created for each sounding at five locations from the CaPE data sets. For each location, soundings were plotted according to time-of-day intervals defined as follows: Morning [0400-1300) GMT; Midday [1300-2100) GMT; and Evening [2100-0400) GMT. In all cases, there does not appear to be a correlation between CAPE and low-level wind flow.
Figure 2. CAPE versus mean wind flow (0.3-3 km) for all CCAFS soundings; the wind-regime bands (Calm; NE 023-113; SO/SE 113-203; SW 203-293; NO/NW 293-023) are marked along the direction axis.
The next analysis attempts to answer the question, "What is the correlation among rainfall, lightning, CAPE, and Rib for this study period?" CAPE and Rib were calculated for each day based on the sounding nearest to 1000 GMT from Cape Canaveral or Ti-Co. As shown in Table 1, poor correlations were found between CAPE and both rainfall and lightning. Similarly poor correlations were found when comparing Rib to both rainfall and lightning.
                   CAPE vs.   CAPE vs.   CAPE vs.    Mean RF vs.
                   Max. RF    Mean RF    Lightning   Lightning
  Entire Area       -0.22      -0.39       0.05         0.44
  Merritt Island    -0.33      -0.34      -0.01         0.44
  Coastal Cluster   -0.27      -0.31       0.03         0.62
  Inland Cluster    -0.15      -0.36       0.05         0.50

Table 1
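Each entry in Table 1 is a linear (Pearson) correlation coefficient between two daily series. The sketch below shows how one such entry is computed; the two arrays are placeholders, not the CaPE data.

    import numpy as np

    # Placeholder daily series, not the CaPE data.
    cape_daily    = np.array([1200.0, 2400.0, 3100.0, 900.0, 2800.0])  # J/kg
    mean_rf_daily = np.array([3.2, 8.1, 1.0, 0.4, 6.6])                # mm

    r = np.corrcoef(cape_daily, mean_rf_daily)[0, 1]
    print(f"CAPE vs. mean RF: r = {r:+.2f}")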
The final analysis investigates the relationship among rainfall, lightning and low-level
wind flow. Table 2 shows the distribution of CG lightning and rainfall based on low-level
wind flow for the entire study area and each of the three cluster areas.
  Area     Wind Flow     # of   % of    Tot. Lgt.   % of Tot.   Tot. Mean   % of Tot.
                         Days   Days    Flashes     Flashes     RF (mm)     RF
  Entire   Disturbed      18    43.90     46132       61.67       124.87      55.52
           Undisturbed    23    56.10     28677       38.33       100.03      44.48
  Merritt  Disturbed      18    43.90       679       87.39       107.80      53.33
           Undisturbed    23    56.10        98       12.61        94.34      46.67
  Coastal  Disturbed      18    43.90      3833       69.20       141.54      64.86
           Undisturbed    23    56.10      1706       30.80        76.68      35.14
  Inland   Disturbed      18    43.90     41620       60.77       122.83      50.38
           Undisturbed    23    56.10     26873       39.23       120.97      49.62

Table 2
For the entire study area, 62% of the lightning and 55% of the rainfall occurred on SW-flow days, which made up 43.9% of the study period. For the Merritt Island cluster, 87% of the total lightning frequency occurred on SW-flow days. These results support the earlier findings of Holle et al. (1992).

In conclusion, for this study area it appears that instability is common in the sea breeze environment, so large values of CAPE occur routinely; the low-level wind flow therefore seems to be the better tool for nowcasting. Further study of daily rainfall and daily convection zones may increase understanding of the role of the sea breeze in this study area.
REFERENCES
1. Buechler, D.E., Wright, P.D., and Goodman, S.J., 1990: Lightning/Rainfall Relationships During COHMEX. Preprints, Conf. on Atmos. Electricity, Kananaskis Provincial Park, Alta., Canada.
2. Fleagle, Robert G. and Businger, Joost A., An Introduction to Atmospheric Physics, Academic Press, New York, 1980.
3. Holle, R.L., Watson, A.I., Lopez, R.E., Howard, K.W., Ortiz, R., and Li, L., 1992: Meteorological Studies to Improve Short-range Forecasting of Lightning/Thunderstorms within the Kennedy Space Area; Final Report for Memorandum of Agreement between the Office of Space Flight, NASA, and The National Severe Storms Laboratory, NOAA, Boulder, Colorado, 4-5.
4. Watson, A.I., Lopez, R.E., Ortiz, R., and Holle, R.L., 1987: Short-term forecasting of lightning at Kennedy Space Center based on the surface wind field. Proceedings, Symposium on Mesoscale Analysis and Forecasting Incorporating "Nowcasting," Vancouver, British Columbia, Canada, European Space Agency, Paris, France, 401-406.
5. Zawadzki, I., Torlaschi, E., and Sauvageau, R., 1981: The relationship between mesoscale thermodynamic variables and convective precipitation, J. Atmos. Sci., Vol. 38, 1535-1540.
6. Scientific Overview and Operations Plan for the Convection and Precipitation/Electrification Experiment, National Center for Atmospheric Research, June 1991.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
A REVIEW OF ISEAS DESIGN
Prepared by: Alex Bykat, Ph.D.
Academic Rank: Professor of Computer Science
Institution: Department of Mathematics and Computer Science, Armstrong State College, Savannah, GA 31419
MSFC Colleague: Dawn Trout
NASA/MSFC:
Office: Systems Analysis & Integration Lab
Division: Systems Definition Division
Branch: Electromagnetics and Environments Branch
V
1. Introduction.
The Space Station Freedom will offer facilities for experimentation and testing that are not available, and not feasible or possible, on Earth. Due to restricted space availability on board, the experimentation equipment and its organization will change frequently. This requires careful attention to electromagnetic compatibility (EMC) between experimentation and other SSF equipment. To analyze the interactions between different equipment modules, a software system, ISEAS [6], is under development.

Development of ISEAS was approached in two phases. In the first phase, a PC prototype of ISEAS was developed. In the second phase, the PC prototype will be adapted to the VAX range of computers. The purpose of this paper is to review the design of the VAX version of ISEAS and to recommend suitable changes.
2. Architecture of ISEAS.
ISEAS consists of the following components: an interactive interface, an analysis module, an output module, and a database containing the data used by the analysis routines. Through the interface, the ISEAS user submits requests for analysis instances and for types of analysis result displays, and supplies the appropriate data. The interface will be implemented using the ORACLE/SQL relational database environment running on a VAX platform. User requests are passed to the control module, which performs the analysis; the analysis routines will be implemented in C. The output module, offering different ways of presenting results, will be implemented in Fortran. The database will be created using the services of ORACLE/SQL running on a VAX platform.
3. Design methodology
ISEAS is to be developed using a structured software approach [3, p. 4]. Structured methodology offers a methodical approach to development, yielding a good system design, a correct and efficient data model, a smooth implementation, and a basis for ease of maintenance. The methodology spans the whole software life cycle, which consists of essentially sequential phases: Project Initiation, Requirements Elucidation, Feasibility Study, System Analysis, System Design, Implementation, Testing, Installation, and finally System Review. Once the system is fielded, System Maintenance follows.

This review is limited to the Analysis and Design stages. The purpose of the Analysis stage is to consider what is to be done and what the system's data requirements are, while the purpose of the Design stage is to consider how it is to be done.
3.1 Deliverables
At each of the Software Life Cycle stages, structured software methodology
predicates a number of deliverables. The System Analysis stage requires the following deliverables: 1) a Context Diagram, which presents a top-level design of the system addressing its purpose and main functions; 2) Data Flow Diagrams, which present the processes within the system, the functions the system will perform, and the flow of data as these processes and functions are invoked; 3) data models expressed via Entity Relationship Diagrams (ERD), which present the various data entities and relationships recognized by the system; and 4) Decomposition Diagrams, which present the logic of the system through the hierarchical structure of the system's modules. The System Design stage requires: 1) Transition Diagrams, which present the Decomposition Diagrams reorganized to take into account the module types in addition to their functionality; 2) Structure Charts, which present the structure of the system modules and their data interfaces; and 3) pseudocode or Action Diagrams, which present the actions defining the modules.
3.2 Data Normalization
A relational database consists of data items, relations among them, and operations that can be applied. Whereas the "primitive" operations are defined by the chosen relational database (in the ISEAS case, ORACLE), definitions of relations are left to the system designers. To assure efficiency, to avoid data redundancy and inaccessibility, to protect against data loss, etc., the data must be normalized. It is common practice to expect the relation tables to satisfy at least the first three (Codd) normal forms, which represent, essentially, steps of relationship transformation aimed at eliminating various anomalies (in particular, data modification anomalies).
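The modification anomaly that normalization prevents can be illustrated with a toy example, sketched here in Python rather than SQL; the equipment and analysis rows are invented, not ISEAS tables.

    # Unnormalized: the emitter's power level is repeated in every
    # analysis row, so changing it means updating many rows and
    # risking a missed one.
    flat_rows = [
        ("analysis-1", "emitter-A", 10.0),
        ("analysis-2", "emitter-A", 10.0),
        ("analysis-3", "emitter-B", 2.5),
    ]

    # Normalized (in the spirit of third normal form): each fact is
    # stored exactly once, keyed by the emitter name.
    emitter_power = {"emitter-A": 10.0, "emitter-B": 2.5}   # watts
    analyses = [
        ("analysis-1", "emitter-A"),
        ("analysis-2", "emitter-A"),
        ("analysis-3", "emitter-B"),
    ]

    emitter_power["emitter-A"] = 12.0   # one update, consistent everywhere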
4. Review findings
The following summarizes the review findings. A more detailed exposition can be found in [1].

[3] presents the ERD, Structure Charts, and pseudocode. Omission of the other four items results in missing documentation of modules, inconsistencies, and missed possibilities for module design improvement. There are violations of the Structure Chart presentation semantics, and pseudocode for a number of modules is missing. Database tables are presented but show violations of the first three normal forms. Backup requirements, as well as requirements concerning support of local and remote users [2, p. 53], have not been addressed.
5. Conclusions.

ISEAS is a needed and timely project. The proposed version offers fundamental capabilities but requires further technical (EMC) development and evaluation of the offered capabilities and their functionality. In particular, the current calculations view the equipment and its components as "point sources"; the size of the equipment and the location of a component within the equipment are not taken into consideration. EMC analysis recommendations are essentially dyadic: pass or fail. This could be addressed by calculations for repositioning the equipment to find a location in which initially failing equipment would pass the EMC criteria. A further extension would be finding the optimal configuration of a given set of equipment for EMC purposes.
6. Recommendations.
The recommendations fall into three categories. The first category relates to the
strategy of ISEAS development, the second category relates to technical issues, while
the third category presents a path for further development of ISEAS.
6.1 Strategy.
The main goal of the ISEAS project is to provide a tool for evaluation and
analysis of EMC for the Space Station Freedom. This goal should be enlarged to
"provide a tool for evaluation and analysis of EMC for the EMC community at large
and for the Space Station Freedom in particular".
There are a variety of applications in which ISEAS (or a descendant) delivered on a workstation platform would be of considerable benefit. Many of these applications are in commercial areas (aircraft manufacturers, land- and water-based vehicle manufacturers, etc.), while others are in government agencies (Navy, Air Force, etc.). In such applications EMC considerations are important, if not critical (e.g., interference with navigational equipment).
NASA is now at a crossroads, searching for ways to "serve broader national needs". This will lead the agency toward much greater involvement with the private sector through attempts to "push technology through the federal door and into the commercial marketplace"¹. Such involvement has to be contemplated and planned a priori rather than as an afterthought, and ISEAS offers an opportunity for such involvement.

I recommend, therefore, development of the VAX version in parallel with development of a workstation version of ISEAS operating in a multitasking Unix environment, supporting the X11 windowing environment, with a suitable relational database.

¹ Speech by Rep. Alan Mollohan (D-W.V.) delivered at the 31st Goddard Memorial Symposium, 3/9/93 (Space News, 6/14/1993, p. 19).
6.2 Technical.
R1. Develop Data Flow Diagrams and Decomposition Diagrams, and revise and complete the Design Phase documentation. Gain: leads to a correct structure for the system.
R2. Complete the normalization of the data. Gain: avoids data modification anomalies.
R3. A user interacts with ISEAS in two distinct modes: define and select. The ISEAS code should adopt the same philosophy in its presentation of forms and screens for data/request entry. Gain: the ISEAS code will be much shorter and more efficient.
R4. Partial descriptions of entities during data input should not be accepted by ISEAS. Gain: the ISEAS code will be much shorter and more efficient.
R5. Before the input of new data, affected files should be preserved as prior versions. Gain: efficient restoration of a prior version.
R6. Extend the analysis selection capability to allow any combination of analyses to be performed. Gain: batch-mode execution of analyses.
6.3 Future development.
An intelligent object oriented interface for ISEAS should be developed to offer
ease of use and functionalities which current version lacks. It should offer a graphical
mouse-relocatable component and connectivity icons to aid graphical environment
data input, visual validation, and reconfiguration of analyzed environments. It would
allow improved presentation of results through a 2-dimensional "interference regions",
easing subsequent graphical modification of equipment configurations.
Electro-magnetic compatibility analysis is a ripe candidate for further automation
through knowledge based methodology [4, 5]. Development of EASE-MagIC, an
Expert Analysis System of Electro-Magnetic Interference and Compatibility, would
serve this purpose. Such a knowledge based system can offer evaluations controlled
through heuristic rules, on demand instructive explanations of the analysis and its
conclusions, and through such explanations, coupled with the proposed graphical
interface, it would offer a sophisticated tool for EMC training.
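A minimal sketch of the rule-plus-explanation mechanism follows. It is not the EASE-MagIC design; the case fields, rules, and thresholds are illustrative assumptions. Each heuristic rule couples a test on analysis results with an instructive explanation that is emitted when the rule fires:

    #include <stdio.h>

    struct EmcCase { double margin_db; double rx_sensitivity_dbm; };

    struct Rule {
        int (*fires)(const struct EmcCase *c);
        const char *explanation;       /* shown when the rule fires */
    };

    static int low_margin(const struct EmcCase *c)
    { return c->margin_db < 6.0; }

    static int sensitive_rx(const struct EmcCase *c)
    { return c->rx_sensitivity_dbm < -100.0; }

    int main(void)
    {
        struct Rule rules[] = {
            { low_margin,   "Interference margin below 6 dB: consider"
                            " relocation or filtering." },
            { sensitive_rx, "Very sensitive receiver: verify shielding"
                            " assumptions." },
        };
        struct EmcCase c = { 4.2, -110.0 };   /* example inputs */

        for (int i = 0; i < 2; i++)
            if (rules[i].fires(&c))
                printf("RULE %d: %s\n", i + 1, rules[i].explanation);
        return 0;
    }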
7. References.
1. Bykat A., "A detailed look at ISEAS design", NASA TR, 1993
2. BCSS, "Integrated Space Station Freedom Electromagnetic Compatibility Analysis System (ISEAS)
VAX Requirements Specification Document (DS04)", NASA ISEASVAX-DS-04-1.0, April 1993
3. BCSS, "Integrated Space Station Freedom Electromagnetic Compatibility Analysis System (ISEAS)
VAX Version Design Specification (DS08)", NASA ISEASVAX-DS-08-1.0, June 1993
4. Drozd A.L., "Overview of Present EMC Analysis/Prediction Tools and Future Thrusts Directed at
Developing AI/Expert Systems", in IEEE EMC Symposium, Anaheim 1992, pp. 528-529
5. LoVetri J., Henneker W.H., "Fuzzy Logic Implementation of Electromagnetic Interactions
Modelling Tool", in IEEE EMC Symposium, Anaheim 1992, pp. 127-130
6. Pearson S.D., Smith D.H., "A System Engineering Approach to Electromagnetic Compatibility
Analysis for the Space Station Freedom Program", in EMC Symposium 1991, pp.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
FINITE ELEMENT BASED ELECTRIC MOTOR DESIGN OPTIMIZATION
Prepared by:
Academic Rank:
Institution and
Department:
MSFC Colleague(s):
C. Warren Campbell, Ph.D., P. E.
Associate Professor
The University of Alabama in Huntsville
Department of Civil and Environmental Engineering
Charles S. Cornelius
Rae Ann Weir
NASA/MSFC:
Laboratory:
Division:
Branch:
Propulsion Lab
Component Development Division
Control Mechanisms and Propellant
Delivery Branch
I. INTRODUCTION
The purpose of this effort was to develop a finite element
code for the analysis and design of permanent magnet electric
motors. These motors would drive electromechanical actuators in
advanced rocket engines. The actuators would control fuel
valves and thrust vector control systems. Refurbishing the
hydraulic systems of the Space Shuttle after each flight is
costly and time consuming. Electromechanical actuators could
replace hydraulics, improve system reliability, and reduce down
time.
The organization of the code is shown in Figure 1. The
motor preprocessor is a routine that does the following:
1) Receives data on the motor geometry, materials,
windings, and currents
2) Generates the meshes and elements for the motor for
different rotor positions
3) Renumbers the nodes for minimal storage using the
minimum degree ordering algorithm
4) Dynamically allocates storage for coefficient arrays
for the finite element analysis
The finite element model calculates the magnetic vector
potential and stores the results in a file that can be accessed
by the postprocessor.
The postprocessor will do the following:
1) Calculate flux densities and field intensities
2) Calculate torques and back emfs for the motor
3) Plot the results
The optimizer will take torques and information from the
postprocessor and calculate a general objective function with
internal penalty function constraints. Constraints could
include magnitude of current densities, motor weight and
volume, and cogging torque. Based on previous values of the
objective function, the optimizer will select motor geometry for
the next iteration; optimization will continue until the motor
design converges.
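The general shape of such a penalized objective can be sketched as follows; the sample objective, constraint, and coefficient are illustrative assumptions, not the FEMOPT formulation. The interior penalty term grows without bound as a constraint boundary is approached from the feasible side:

    #include <stdio.h>
    #include <math.h>

    static double objective(const double x[2])
    {
        return x[0] * x[0] + 2.0 * x[1] * x[1];   /* e.g., a weight-like cost */
    }

    static double constraint(const double x[2])
    {
        return 1.0 - (x[0] + x[1]);               /* feasible when g(x) < 0 */
    }

    /* Interior penalty: f(x) - r/g(x), valid only inside the feasible region. */
    static double penalized(const double x[2], double r)
    {
        double g = constraint(x);
        if (g >= 0.0) return HUGE_VAL;            /* infeasible design point */
        return objective(x) - r / g;
    }

    int main(void)
    {
        double x[2] = { 0.8, 0.6 };
        printf("penalized objective = %f\n", penalized(x, 0.1));
        return 0;
    }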
The optimization will begin with an initial motor design
and will proceed toward an improved design. Care must be taken
in the design of the mesh. Sometimes in finite element
structural optimization, a mesh is generated which gives an
accurate solution to the initial design, but as optimization
proceeds, the mesh becomes too coarse for an accurate solution.
Then the "optimized" design is invalid.
Clearly, the code will be very long running. Consider using
cogging as a constraint. For each value of the objective
function the finite element code must find several solutions for
different positions of the rotor.

[Figure 1. FEMOPT Code Structure: motor geometry feeds the motor
preprocessor, which generates the mesh and reordered nodes for the
FEM solution; the resulting vector potential passes to the
postprocessor, whose torque and EMF outputs drive the optimizer.]
The finite element code developed in this effort was based
on the models in Silvester and Ferrari (3). The sparse matrix
algorithms were taken from George and Liu (1). The optimizer
will be an adaptation of code available from Numerical Recipes
in C by Press, et al. (2).
II. APPROACH
The objective of this effort was to develop a finite
element code with optimization that could run on a 386- or 486-
class machine with up to 15,000 nodes in a two-dimensional
problem. Since motors are very long compared to airgap widths
and since we will not use rotor or stator skewing of magnets or
teeth, the problem can be assumed to be two-dimensional.
Also, these goals should be achievable without making users buy
thousands of dollars of software.
Because of the ambitious goals for this project, as many of
the routines as possible were based on existing code. At the
beginning, I did not realize that the code in Silvester and
Ferrari (3) was learning code in which coefficient arrays were
dimensioned to the maximum number of nodes, that is A(maxnod,
maxnod). For a 15,000 node problem (the goal for this effort),
the coefficient array alone would require 15,000 by 15,000 = 225
megawords of storage! For 4-byte words, this is 900 megabytes,
nearly a gigabyte. Clearly, sparse matrix methods are required.
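A back-of-the-envelope comparison makes the point. The sketch below assumes, for illustration, an average of about 9 nonzero coefficients per matrix row (typical of a first-order 2-D mesh) and compressed sparse row (CSR) storage; neither figure comes from this report:

    #include <stdio.h>

    int main(void)
    {
        long n   = 15000;              /* nodes (unknowns)            */
        long nnz = 9 * n;              /* assumed nonzero entries     */

        long dense_words  = n * n;               /* A(maxnod, maxnod) */
        long sparse_words = 2 * nnz + (n + 1);   /* CSR: values,
                                                    column indices,
                                                    row pointers      */
        printf("dense : %ld words = %ld MB at 4 bytes/word\n",
               dense_words, dense_words * 4 / (1024L * 1024L));
        printf("sparse: %ld words = %ld KB at 4 bytes/word\n",
               sparse_words, sparse_words * 4 / 1024L);
        return 0;
    }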
The need for sparse matrix methods significantly slowed the
progress of the effort. Even though George and Liu is an
excellent reference for solutions of finite element problems and
though it has Fortran subroutines in the text, progress was
extremely slow. This is because the routines in the text are
spaghetti code that are extremely hard to debug and understand.
The code uses variables that perform several functions and have
values that change in mysterious ways at different places in the
program. For these reasons, direct application of the routines
would make the code difficult to understand, debug, and
maintain. Instead, algorithms presented in George and
Liu were used to write new code that was understandable,
structured, and maintainable.
Borland C/C++ was chosen as the development language
for several reasons. The Borland package is
inexpensive (~$300), well documented, and well written. It
permits tracing line by line through the code viewing values of
any variable at any point. It also allows the setting of
breakpoints. The code can be executed to the breakpoints where
each variable of interest can be examined. This capability
minimizes debugging effort. C was chosen because of its power.
Desirable features include dynamic memory allocation, ability to
implement data structures easily while writing readable code,
and accessibility of computer graphics capabilities. Dynamic
memory allocation means that large arrays can be created as
needed, used, and then the memory deallocated for other uses.
In C this is done cleanly without impact to any of the desirable
features of the code. The same thing can be done in Fortran
using equivalence statements, but the process can cause
unexpected and untraceable errors in the code.
A strategy was found to be very useful for code
development. The first step was to take simple test problems
and use Mathcad (a mathematical spreadsheet that is easy to use and
understand) to calculate values of the variables at every point
in the execution of a program. With the line-by-line tracing
ability of Borland C, values of the variables in the code and
those calculated with Mathcad could be compared.
I also adapted a dynamic array allocation strategy from
Press, et al. (2). C normally indexes arrays from 0 to n -
1, where n is the array dimension. By the Numerical Recipes
approach, arrays can be allocated from nlow to nhigh where nlow
and nhigh are any values with nhigh > nlow. This is very useful
in translating Fortran code with arrays dimensioned from 1 to n.
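A minimal rendering of that idiom is shown below. The offset-pointer trick is the one described in Numerical Recipes; the function names here follow the book's style, and strictly speaking the pointer arithmetic steps outside the letter of the C standard:

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate a float array indexed from nl to nh inclusive;
       the returned pointer is offset so that v[nl] is element 0. */
    float *fvector(long nl, long nh)
    {
        float *v = malloc((size_t)(nh - nl + 1) * sizeof(float));
        if (v == NULL) { fprintf(stderr, "allocation failure\n"); exit(1); }
        return v - nl;
    }

    void free_fvector(float *v, long nl)
    {
        free(v + nl);          /* undo the offset before freeing */
    }

    int main(void)
    {
        float *a = fvector(1, 5);          /* Fortran-style A(1..5) */
        for (long i = 1; i <= 5; i++) a[i] = (float)i;
        printf("a[1] = %g, a[5] = %g\n", a[1], a[5]);
        free_fvector(a, 1);
        return 0;
    }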
III. SUMMARY
In the first year of this task, work was done on the
preprocessor and on the finite element solver. Next year the
goal will be to add a nonlinear equation solver, a motor
preprocessor, post processor, and optimizer.
IV. ACKNOWLEDGEMENT
Thanks are due to Charlie Cornelius and Rae Ann Weir whose
support and encouragement were invaluable.
V. REFERENCES
1. George, Alan, and Liu, Joseph W., Computer Solution of
Large Sparse Positive Definite Systems, Prentice-Hall, Englewood
Cliffs, NJ, 1981.
2. Press, William H., et al., Numerical Recipes in C,
Cambridge University Press, New York, 1990.
3. Silvester, P. P., and Ferrari, R. L., Finite Elements for
Electrical Engineers, 2nd Edition, Cambridge University Press,
New York, 1990.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
CHARACTERISTICS OF PRODUCTS GENERATED BY SELECTIVE SINTERING
AND STEREOLITHOGRAPHY RAPID PROTOTYPING PROCESSES
Prepared By:
Academic Rank:
Institution and
Department:
MSFC Colleague:
NASA/MSFC:
Office:
Division:
Branch:
Vikram Cariapa, Ph.D., P.E.
Associate Professor
Marquette University, Department of
Mechanical and Industrial Engineering.
Floyd E. Roberts III.
Materials and Processes Laboratory
Non-Metallic Materials and
Processes (EH31)
Ceramics and Coatings (EH34)
I. INTRODUCTION
The trend in the modern global economy towards free
market policies has motivated companies to use rapid
prototyping technologies to not only reduce product
development cycle time but also to maintain their competitive
edge (1). A rapid prototyping technology is one which combines
computer aided design with computer controlled tracking of a
focussed high energy source (e.g., lasers, heat) on modern ceramic
powders, metallic powders, plastics or photosensitive liquid
resins in order to produce prototypes or models. At present,
except for the process of shape melting (2), most rapid
prototyping processes generate products that are only
dimensionally similar to those of the desired end product.
There is an urgent need, therefore, to enhance the
understanding of the characteristics of these processes in
order to realize their potential for production. Currently,
the commercial market is dominated by four rapid prototyping
processes, namely selective laser sintering, stereolithography,
fused deposition modelling and laminated object manufacturing.
This phase of the research has focussed on the selective laser
sintering and stereolithography rapid prototyping processes. A
theoretical model for these processes is under development.
Different rapid prototyping sites supplied test specimens
(based on ASTM 638-84, Type I) that have been measured and
tested to provide a data base on surface finish, dimensional
variation and ultimate tensile strength.
Further plans call for developing and verifying the
theoretical models by carefully designed experiments. This
will be a joint effort between NASA and other prototyping
centers to generate a larger database, thus encouraging more
widespread usage by product designers.
II. PROCESS CHARACTERISTICS
All rapid prototyping processes start with the
development of a CAD model (usually a three dimensional solid
model) of the finished part. This model is then "sliced" into
different layers starting from the bottom of the part upwards.
Each slice is then downloaded to the control computer for the
actual creation of the part in the selected rapid prototyping
machine.
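The slicing step is simple in principle. The sketch below, with an assumed part height and layer thickness, generates the z heights at which a part would be sectioned; a real slicer would intersect each plane with the solid model to obtain the layer contours.

    #include <stdio.h>

    int main(void)
    {
        double part_height = 1.0;      /* inches (assumed)            */
        double layer       = 0.005;    /* inches, a typical thickness */
        int n = (int)(part_height / layer + 0.5);

        for (int i = 1; i <= n; i++)
            printf("slice %3d at z = %.3f in\n", i, i * layer);
        return 0;
    }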
A schematic view of a selective laser sintering machine
is shown in Fig 1. The process is initiated by depositing a
thin uniform layer of powder under carefully controlled
temperature and atmosphere conditions (3). The levelling drum
maintains the thickness of the layer between 0.003" and 0.010".
The computer controlled laser beam rasters the top surface of
[Figure 1. Schematic of the Selective Laser Sintering Process.]
[Figure 2. Setup of a Stereolithography Machine: laser source, control
mirrors, optical window, resin vat, elevator with removable platform,
and part supports.]
the powder bed according to the geometry of the slice that is
being processed. A typical diameter for the laser beam is
0.02" and its working output power ranges from 5 W to 50 W.
This fuses the powder together at the interface of the beam.
The scanning velocity of the laser beam on the surface of the
powder ranges from 0.8-2.4 inches/s for metal and ceramic
powders to 40 inches/s for polymers and waxes. At the end of
the first layer, a second layer of loose powder is deposited
and the process continues with the sintering material in the
second layer binding to the previous layer. The process
continues until the part is completed. Since the laser fuses
only powder that it contacts, the finished part may be removed
quite easily from the chamber.
A cross section of the stereolithography machine is shown
in Fig 2. The process is initiated by raising the platform
above the level of the resin by a predetermined amount. After
a suitable waiting period, the laser traverses across this
thin film to create what is known as a "supports" structure.
This structure is created between the desired part and the
platform to facilitate part removal without damaging it. The
laser contacts the resin and polymerizes it thus creating a
semi-rigid form of the desired geometry. The platform lowers
below the resin layer for a recoating process and then raises
again to a level that is one layer thickness below that of the
previous layer, and the laser is activated again to scan the
new layer. This process continues until the support is
completed. The product is created on this support structure in
a similar fashion with the additional step in the process
sequence of moving the wiper across the surface of the resin
to maintain a uniform layer thickness of 0.005" to 0.010"
after each recoating step. In addition, the scanning pattern
may be changed to suit product geometry. After the product is
completed, it is then gently removed from the platform, the
supports are carefully scraped away and the product is placed
in a post cure chamber for the final curing stage where it
attains its final properties.
III. THEORETICAL BACKGROUND
The principles behind the SLS process (3) indicate that
the lasing action melts the powder and the resulting binding
mechanism is a combination of melting of the powder and
viscous flow of the molten phase. Other contributing factors
include powder particle size and shape, powder properties at
different temperatures, laser power density, and chamber
atmosphere control.
The SLA process is based on the principle that laser
scanning initiates the release of free radicals in the
photopolymeric resin. A chain reaction that results causes
polymerization of the resin (4,5). Important parameters that
also contribute to this process include hatch spacing, cure
depths, wait time and post cure strategies.
IV. EXPERIMENTAL SETUPS
Tensile test specimens (ASTM D638-84, Type I) for the SLS
machine were created by Rocketdyne Inc. (CA), using
polycarbonate powder as the raw material. Parameters that were
varied were laser power (low and high), build direction (face
and edge), and use of sealant (unsealed and sealed). Surface
finish, gage length dimensions, and ultimate tensile strength
were then obtained for each specimen.
Similar test specimens made by the SLA process were
obtained from Pratt and Whitney (FL) and DEI (VA). Parameters
that were varied were the build direction (edge, face, and
vertical) and layer thickness (0.005" and 0.010"). Other
parameters were maintained at their default values.
V. SUMMARY OF THE RESEARCH.
Since critical information on these two processes is
proprietary, the theoretical models require further
development. Testing of the samples has allowed certain
deductions to be made. For example, surfaces of SLS
parts parallel to the powder bed surface had a better
surface finish (65-520 microinches) than those produced
perpendicular to the powder bed surface (144 - 840
microinches). In addition, sealed products had better finishes
than unsealed products. Dimensional deviations were in the
range of 0.003" to 0.007". Ultimate tensile strength ranged
from 1904 to 5616 psi. A statistical model predicted that the
product with the highest strength (5378 psi) could be built
with low laser power, flat orientation and be sealed with an
epoxy. This was comparable to ASTM D3935-87 for polycarbonate
material (5800 psi).
Only the Pratt and Whitney stereolithography samples were
statistically satisfactory and generated products with a
surface finish range of 42 - 240 microinches. The ultimate
tensile strength values ranged from 2263 to 3162 psi (std.
dev. range was 94 to 330 psi). Since the standard deviations
of tensile strength were large, no deductions can be made about
the contribution of the individual process parameters. Also,
since the post processing involved clamping of the parts,
surface finish measurements must be treated with caution.
VI. CONCLUSIONS.
Some quantitative measures have been established about
the SLS and SLA rapid prototyping processes. Further
development on the theoretical models is required in order to
enhance the quality of predictions about these processes. The
range of parameters in rapid prototyping processes and
corresponding variety in materials add complexity to this
endeavor. Despite these issues, rapid prototyping offers a
tangible means of reducing product development times.
VII. ACKNOWLEDGEMENTS
The author and NASA colleague wish to gratefully
acknowledge the contribution made by Rocketdyne Division,
Pratt and Whitney Ltd, and DEI towards this research.
VIII. REFERENCES
1. Kutay, A., "Strategic Benefits of Rapid Prototyping
Technology", Proceedings of the National Conference on Rapid
Prototyping, Dayton, OH, June 4-5, 1990, pp. 101-110.
2. Proceedings of the National Conference on Rapid
Prototyping, Dayton, OH, June 4-5, 1990.
3. Bourell, D.L., Marcus, H.L., Barlow, J.W., Beaman, J.J.,
"Selective Sintering of Metals and Ceramics", International
Journal of Powder Metallurgy, v 28, n 4, 1992, pp. 369-381.
4. Jacobs, P.F., "Rapid Prototyping and Manufacturing", SME
Press, Dearborn, MI, 1993.
5. Gatechair, L.R., Tiefenthaler, A.M., "Depth of Cure Profiling
of UV Cured Coatings", Radiation Curing of Polymeric
Materials, C.E. Hoyle and J.F. Kinstle, Eds., American Chemical
Society, Washington, D.C., 1990.
1993 NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
PERFORMANCE OF THE ENGINEERING ANALYSIS AND DATA
SYSTEM II COMMON FILE SYSTEM
Prepared By:
Linda S. DeBrunner, Ph.D.
Academic Rank:
Assistant Professor
Institution and Department: University of Oklahoma
School of Electrical Engineering
MSFC Colleagues:
NASA/MSFC:
Marcellus Graham
Sheila Fogle
Office:
Division:
Branch:
Information Systems Office
Systems Development and Implementation
Data Systems Branch
Introduction
The Engineering Analysis and Data System (EADS) was used from April, 1986 to July,
1993 to support large scale scientific and engineering computation (e.g. computational fluid
dynamics) at Marshall Space Flight Center. The need for an updated system resulted in an RFP
(2) in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed
in February 1993, and by July 1993 most users had been migrated.
EADS II (3) is a network of heterogeneous computer systems supporting scientific and
engineering applications. The Common File System (CFS) is a key component of this system.
The CFS provides a seamless, integrated environment to the users of EADS II, including both
disk and tape storage. UniTree software is used to implement this hierarchical storage
management system. The performance of the CFS suffered during the early months of the
production system. Several of the performance problems were traced to software bugs which
have been corrected. Other problems were associated with hardware. However, the use of NFS
in UniTree UCFM software limits the performance of the system.
The performance issues related to the CFS have led to a need to develop a greater
understanding of the CFS organization. This paper will first describe EADS II with emphasis
on the CFS. Then, a discussion of mass storage systems will be presented, and methods of
measuring the performance of the Common File System will be outlined. Finally, areas for
further study will be identified and conclusions will be drawn.
EADS II
EADS II is a high performance computing network supporting scientific and engineering
computing. The functions and implementation of EADS II are described in (2) and (3). The two
key computing components of EADS II are the Vector Processor Compute System (VPCS) and
the Virtual Memory Compute System (VMCS). The VPCS, a Cray Y-MP 81/6128, is used for
applications suitable for vector processing, while the VMCS, an SGI 4D/480, is used for
applications with large memory requirements. In EADS I, the predecessor to EADS II, the
VPCS needs were met by a Cray X-MP and the VMCS needs were met by an IBM 3084. Image
processing applications are supported by the Image Processing System (IPS). The IPS consists of
an SGI 4D/480 RE hub with 3 attached workstations. Mini-Supercomputers (MSCs) may be
included at a future time to reduce the loading of the VPCS. Although there are no MSCs
installed at this time, long term plans include the possibility of including small Cray Y-MP
machines (Cray Y-MP 2E) to meet specific laboratory needs. These MSCs would be used for
VPCS program development and for smaller applications.
A unique feature of EADS II is the integration of shared resources through the Common
Output System (COS) and the Common File System (CFS). The COS provides printing
capabilities for the users. Most printing facilities are located in the laboratories, while print
queues are maintained on the VMCS. The Common File System (CFS), which provides
hierarchical storage to all the EADS II machines, is the most interesting aspect of the EADS II
architecture. Restoration of files to disk from tape is automatic. The CFS hardware consists of 2
IBM RS/6000-970 servers, 4 Maximum Strategy Disk Arrays (172 GB total), and 2 STK 4400
automatic cartridge libraries or silos (2.4 TB total). NSL UniTree software is used.
The CFS has 4 principal functions: Private Processor Storage (PPS), User File Storage
(UFS), backup storage, and Archival Information Storage (AIS). The PPS consists of rotating
magnetic disk storage (RMDS) and is used to store active user programs, operating system
software, command procedures, and data. The UFS is RMDS which is allocated to users. The
backup storage is used for routine backup of the PPS and UFS to tape. The AIS is used for long-
term storage of information. Backup and archive management tools are also provided.
The EADS II computing components and shared resources are connected by a 3-level
network. At the lowest level, Ethernet LANs connect systems within a building. Better
performance is provided by the High Speed Network Backbone (HSNB), which uses the Fiber
Distributed Data Interface (FDDI) technology. The HSNB provides access between central site
and remote facilities. There are 2 FDDI rings which are interconnected by routers to each other
and the building LANs. The highest level of performance is provided by the Back End High
Performance Interconnect (BEHPI) network which is based on UltraNet. The BEHPI is used
almost exclusively for moving data between central site computers and the CFS.
Mass Storage Systems
The IEEE-CS Technical Committee on Mass Storage Systems and Technology developed a
"reference model" in the eighties which is used by manufacturers of mass storage products to
describe the functions of their systems (1,6). Although the reference model is not an IEEE
standard, it is an important consideration in the development of mass storage systems.
The UniTree software is sold to companies by Open Vision (previously by DISCOS). The
companies then port the code to their chosen platform. The product chosen to implement the
EADS II CFS is NSL UniTree supplied by IBM. Most companies marketing UniTree products
make modifications to improve performance or to add features. For example, Control Data
Systems focuses on supporting a wide range of peripherals and has tuned their system to
improve performance for various peripherals. On the other hand, Convex rewrote portions of the
code that control the way the processes communicate. IBM has implemented Multiple Dynamic
Hierarchies which allow multiple hierarchies on a single machine. They also have implemented
a 3rd party transfer capability, called Network Attached Storage, which allows hosts to send
data directly to the disk array without going through UniTree. Several other companies have
developed mass storage systems including Epoch storage management tools, NetArchive, and
Cray's Data Migration Facility.
Research facilities and universities have pioneered much of the work in the mass storage
arena. For example, UniTree was developed at Lawrence Livermore National Laboratory (5).
There are currently two mass storage systems developed by research facilities that are of
particular interest: NAStore and AFS. NAStore, developed by NASA Ames Research Center,
only blocks read operations until the first part of the data is available. So, for large files, access
to the first byte of data is significantly faster. The Andrew File System (AFS) was developed by
Carnegie-Mellon University to support distributed file access. It has been adapted by the
Pittsburgh Supercomputing Center to include mass storage capabilities (4). AFS was chosen
since it is more scalable than NFS. NFS requires clients to communicate with the server to
complete each transaction, but AFS maintains state information. Clients assume that they are
using the most current version of a file's data until they are notified by a server. However, AFS
was developed without consideration for the mass storage reference model.
Measuring the Performance of the Common File System
Three measurements of the CFS performance are currently being collected. All of these
measurements are similar. Each measures the time required to perform several operations. None
of these metrics generates statistics which can be readily compared to the expected performance
or the performance of other systems. The principal function of these measurements is to identify
degraded system performance relative to past system performance.
Every 10 minutes, Boeing Computer Support Services (BCSS) runs a script which checks for
degraded system performance. This "10-Minute Metric" script measures the time required to
change to a UniTree subdirectory ("cd") and list the directory contents ("ls"). In addition, Cray
Grumman runs the "UNITREE Metric" hourly. Like the 10-Minute Metric, this metric measures
the time required to perform simple file manipulations. It measures the time required to perform
"ls", "ls -l", and to "tail" a file. Cray Grumman also runs a program every 3 minutes to check for
degraded performance of the CFS. At this time, different programs are run on the VMCS and the
VPCS. On the VPCS, the "3-Minute Metric" program measures the time required to open a file
in a UniTree subdirectory and write a line to it. The corresponding program on the VMCS
provides more complete information. It measures the number of NFS users, performs simple
operations using NFS, and performs simple operations using FTP. Using NFS, the program
performs a directory listing and copies a small file to a UniTree subdirectory. Using ftp, it
"puts" a file in a UniTree subdirectory, performs a directory listing, and deletes the file. These
measurements are inadequate for evaluating the overall performance of the CFS. A performance
measurement tool is needed to allow EADS II to be compared to other systems.
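A probe of this kind is straightforward to write. The fragment below is a hypothetical illustration in the spirit of the 3-Minute Metric; the path is an assumed CFS mount point, not the actual EADS II configuration.

    #include <stdio.h>
    #include <sys/time.h>

    static double now_seconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        const char *path = "/unitree/metrics/probe.txt";  /* assumed path */
        double t0 = now_seconds();

        FILE *fp = fopen(path, "w");
        if (fp == NULL) { perror("fopen"); return 1; }
        fprintf(fp, "probe line\n");
        fclose(fp);

        printf("open+write+close: %.3f s\n", now_seconds() - t0);
        return 0;
    }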
Areas for Further Study
Several areas have been identified for future work. The most important is the development of
a performance measurement tool. After measurement capabilities are developed, UniTree can be
tuned to improve its performance in the EADS II environment.
The lack of knowledge about parallel processing on the SGI should also be remedied. By
understanding the differences between parallel processing on the Cray and the SGI, users could
be advised about the execution of their applications which are suited to parallel implementation.
This should allow more users to use the SGI effectively.
Finally, a method of modeling networked computer systems should be investigated. This
modeling would allow performance to be predicted before changes are made. Consequently, the
effects of hardware changes and software load changes could be evaluated in a "what if" format.
Conclusions
The EADS II mass storage requirements are aggressive. Existing products have
shortcomings with respect to these requirements. Since the EADS II CFS requires the most
current technology, the efforts of the Storage System Standards Working Group will affect the
future of mass storage technology. Awareness of standards will give system architects a better
understanding of mass storage systems.
Current methods of measuring the performance of EADS II are inadequate. In the future,
more meaningful measurements will be needed. As a beginning, EADS II should be evaluated
using the tests run at Ames Research Center. In addition, a performance measurement tool
tailored to the needs of EADS II should be developed. This tool will allow system administrators
to evaluate the effects of hardware and software modifications, as well as changes in loading. It
will also support comparisons with other mass storage systems.
Methods for modeling the system are needed to predict the effects of system modifications
before implementation. Such a model will also support the analysis of predicted changes in
loading. The model would allow various scenarios to be considered to choose the best solution.
Acknowledgment
I would like to thank Sheila Fogle and Marcellus Graham for providing feedback throughout
this work. I would also like to thank Amy Epps for being an unending source of useful information.
Finally, I would like to thank Paul Allison for his support.
References
(1) Coyne, R. A., "An Introduction to the Mass Storage System Reference Model, Version 5,"
Proceedings of the Twelfth IEEE Symposium on Mass Storage Systems, Monterey,
California, April 26-29, 1993, pp. 47-53.
(2) Engineering Analysis and Data System II (Class VII Computer System), Request for
Proposal, MSFC, NASA, RFP #8-l-9-AI-00120.
(3) Engineering Analysis and Data System II Users Guide, MSFC, NASA.
(4) Goldick, J. S., Benninger, K., Brown, W., Kirby, C., Maher, C., Nydick, D. S., Zumach, B.,
"An AFS-Based Supercomputing Environment," Proceedings of the Twelfth IEEE
Symposium on Mass Storage Systems, Monterey, California, April 26-29, 1993, pp. 127-132.
(5) McClain, F., "DataTree and UniTree: Software for File and Storage Management,"
Proceedings of the Tenth IEEE Symposium on Mass Storage Systems, Monterey, California,
May 7-10, 1990, pp. 126-128.
(6) Miller, S. W., "A Reference Model for Mass Storage Systems," Advances in Computers, Vol.
27, Yovits, M. C., editor, pp. 157-210.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
WATER CYCLE RESEARCH ASSOCIATED
WITH THE
CaPE HYDROMETEOROLOGY PROJECT (CHymP)
Prepared by:
Academic Rank:
Claude E. Duchon, Ph.D.
Professor
Institution:
Department:
University of Oklahoma
School of Meteorology
MSFC Colleague:
Steven J. Goodman, Ph.D. - NASA
NASA/MSFC:
Laboratory:
Division:
Branch:
Space Science
Earth Science and Applications
Earth System Processes and Modeling
I. Introduction
One outgrowth of the Convection and Precipitation/Electrification (CaPE) experiment that
took place in central Florida during July and August 1991 was the creation of the CaPE
Hydrometeorology Project (CHymP). The principal goal of this project is to investigate the
daily water cycle of the CaPE experimental area by analyzing the numerous land and
atmosphere in situ and remotely sensed data sets that were generated during the 40 days of
observations.
The water cycle comprises the atmospheric branch and the land branch. In turn, the
atmospheric branch comprises precipitation leaving the base of the atmospheric volume under
study, evaporation and transpiration entering the base, the net horizontal fluxes of water vapor
and cloud water through the volume and the conversion of water vapor to cloud water and
vice versa. The sum of these components results in a time rate of change in the water vapor or
liquid water (or ice) content of the atmospheric volume. The components of the land branch
are precipitation input to and evaporation and transpiration output from the surface, net
horizontal fluxes of surface and subsurface water, the sum of which results in a time rate of
change in surface and subsurface water mass. The objective of CHymP is to estimate these
components in order to determine the daily water budget for a selected area within the CaPE
domain.
This work began in earnest in the summer of 1992 and continues. Even estimating all the
budget components for one day is a complex and time consuming task. The discussion below
provides a short summary of the rainfall quality assessment procedures followed by a plan for
estimating the horizontal moisture flux.
II. Daily Rainfall
The first step in any data analysis is to assess the quality of the data. With respect to the
precipitation data, a quality assessment program began in June, 1992 and has taken one year to
complete. Through this program reliable measurements of daily rainfall are now available for
212 raingages, most of which are in the area bounded by 27° and 29°N and 80° and 82°W.
Fig. 1 shows the gage locations that resulted and the associated sponsors. Some of the
raingages were operated specifically for the duration of the CaPE experiment and others were
(and still are) continuously maintained by federal and state agencies and individual cooperators.
III. Water Vapor Flux
The estimation of atmospheric horizontal water vapor flux requires analyzing both
rawinsonde and satellite data. The sounding sites, identified by hexagons in Fig. 1, are located
within and around the water budget area (outlined by heavy line). The satellite data come from
two sources, AVHRR on the NOAA polar orbiting satellites and VAS (VISSR Atmospheric
Sounder) on GOES-7. The objective is to produce estimates of the divergence of water vapor
flux every three hours for selected sequences of days. The plan for estimating the water vapor
flux is outlined below.
In the early part of the CaPE experiment numerous problems arose with CLASS (Cross-
Loran Atmospheric Sounding System) soundings so that only from 20 July to 12 August are
[Figure 1. Locations of the 212 raingages in the area bounded by 27°N-29°N
and 80°W-82°W, with sponsor counts as given in the figure: KSC/TRMM 20,
SWFWMD 50, USGS 14, USDA 1, USJRB 19, MSFC 2, NWS/USAF 20, U GA 2,
SFWMD 3, FSU 2, PAM II 40, U Fla 5.]
there an adequate number of high quality soundings available for analysis. The time of
soundings is linked to studies of large scale and small scale weather systems. The four outer
CLASS sites (Dunnellon, Ruskin, Fellsmere and Daytona Beach) are connected to the large
scale with 5 daily soundings taken every 3 hours beginning at 1100 UTC and ending at 2300
UTC. Soundings at Fellsmere and Dunnellon were taken also at 0800 UTC. The Deer Park
and Tico Airport locations as well as the mobile CLASS unit were part of the small scale
weather system study so that soundings were taken at variable times related to the current
daytime storm activity. Cape Canaveral Air Force Station, Orlando and Tampa provided
numerous additional soundings, mainly during daytime. During the 24 day period the number
of soundings per day ranged from 28 to 48 with the vast majority of the soundings between
1000 and 2400 UTC. The maximum number of soundings between 0000 and 1200 UTC was
5 on one day; typically there were 2. Accordingly, there is a large gap in radiosonde coverage
for this 12 hour period.
For many reasons, sondes are not always released at the scheduled time. Also, as noted,
some stations have no set schedule. Thus, in order to develop 3-hourly moisture and wind
fields, a scheme to incorporate data from surrounding times has to be developed.
Within the 24 day period noted above there are comparatively few days in which data are
more or less continuously available from all observational systems, the optimal situation for
calculating the daily water budget. Based on the following criteria each day was rated on a
scale of 1 (poor) to 5 (good):
a. number of hours of WSI radar coverage given that it is raining (based on gages).
b. percent cloud cover around 1200 UTC derived from visual inspection of GOES visible
imagery.
c. total number of atmospheric soundings and the number between 0000 and 1000 UTC.
d. number of times data from the 11 μm and 12 μm split-window channels on GOES-7
VAS (VISSR Atmospheric Sounder) are available.
e. number of hours of profiler winds.
The larger the value for each criterion, the higher the rating for that day. At this writing the
split-window criterion has not been invoked because the selection of data to be ordered is in
progress. Based on the remaining criteria the best periods are 26-30 July and 7-9 August.
A rawinsonde provides vertical profiles of wind and water vapor content which begin at a
specific time and location at the surface. As the balloon rises its horizontal position changes in
response to the wind field. If we consider 400 mb (about 7.5 km) to be the upper level of
moisture calculation, which corresponds to about 98% of the integrated water content (IWC),
and a balloon rise rate of 5 m s^-1, it will take 1400 seconds (23 minutes) to reach that altitude.
With an average wind speed of 10 m s^-1, the drift will be 14 km. This is a significant fraction
of the water budget analysis area so that, in general, balloon position must be taken into
account. In addition, an accounting of time differences between soundings must be made.
The first step in rawinsonde data reduction is a vertical interpolation of each sounding to
evenly spaced σ levels (σ = P/P_sfc). A resolution of Δσ = 0.01 (≈10 mb) will provide 40
levels of wind and water vapor content. Next, the data at each level are linearly interpolated in
time and horizontal distance with data from the previous or following ascent to a common time.
The result of the interpolations in space and time should be that all data for each level are valid
at a single time.
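At a given level, the time interpolation reduces to a weighted average of successive ascents, as in the minimal sketch below (variable names are assumptions, not the project's code):

    #include <stdio.h>

    /* Linearly interpolate a quantity q observed at times t0 and t1
       (successive ascents) to a common analysis time t. */
    static double interp_time(double t, double t0, double q0,
                              double t1, double q1)
    {
        double w = (t - t0) / (t1 - t0);
        return (1.0 - w) * q0 + w * q1;
    }

    int main(void)
    {
        /* e.g., mixing ratio (g/kg) at one sigma level from ascents
           at 1100 and 1400 UTC, interpolated to 1200 UTC */
        printf("q(1200 UTC) = %.2f g/kg\n",
               interp_time(12.0, 11.0, 14.2, 14.0, 13.1));
        return 0;
    }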
The next step is to perform an objective analysis of the wind and water vapor content on
each of the 40 σ-surfaces such that the gridded analysis extends beyond the water budget area.
At this point information from VAS and AVHRR will be incorporated into the analysis.
Gridded fields of IWC will be obtained using the physical split-window (PSW) technique
developed by Dr. Gary Jedlovec at MSFC. The idea is to vertically distribute the VAS- and
AVHRR- derived IWC at the same grid points as above according to the water content profile
at those grid points derived through linear interpolation from the rawinsonde locations, as
discussed above. The reason for incorporating satellite-derived IWC is to provide improved
estimates of water content between rawinsonde stations. This may be especially important if
there are significant spatial variations of IWC.
The final step is to integrate the moisture flux normal to the boundary around the exterior
of the water budget at each level. The summation over all levels is equal to atmospheric water
vapor divergence for that time. Assuming that 3-hourly estimates are available they are then
summed to obtain the divergence for that day.
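In symbols, one conventional way to write this computation (the notation here is an editorial rendering, not the report's) is

    D = \frac{1}{g} \sum_{k=1}^{40} \left( \oint_{C} q_k\, \mathbf{v}_k \cdot \hat{\mathbf{n}}\; ds \right) \Delta p_k ,

where C is the boundary of the water budget area, q_k and v_k are the specific humidity and horizontal wind on the k-th sigma surface, n is the outward unit normal, Δp_k is the pressure thickness of the layer, and g is gravity; D > 0 indicates net export of water vapor from the volume at that analysis time.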
IV. Conclusion
After one year of quality assessment, a credible 42 day set of daily rainfall data for 212
stations has been produced. Thus the daily area-average precipitation component of the
atmospheric branch has been essentially completed.
A strategy has been formulated to analyze the horizontal flux of water vapor employing
rawinsonde and satellite data. Priority time periods have been selected so that satellite data can
be now ordered. It is anticipated that the creation of a 3-dimensional grid of moisture and wind
will be developed at OU and coordinated with Dr. Bill Crosson at MSFC. IWC data files will
be produced by Drs. Jedlovec, Guillory and Crosson at MSFC.
V. Acknowledgments
Many thanks to Dr. Crosson and Joni Brooks for their major contributions to the raingage
quality assessment and stimulating discussions on the water vapor analysis.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
FOIL BEARINGS
Prepared By:
Academic Rank:
Institution and Department:
MSFC Colleague:
NASA/MSFC:
Laboratory:
Division:
Branch:
David A. Elrod, Ph.D.
Assistant Professor
The University of Alabama in Huntsville
Mechanical and Aerospace Engineering Department
Henry P. Stinson
Propulsion
Component Development
Turbomachinery
INTRODUCTION
The rolling element bearings (REB's) which support many turbomachinery rotors offer
high load capacity, low power requirements, and durability. Two disadvantages of REB's are:
• rolling or sliding contact within the bearing has life-limiting consequences; and
• REB's provide essentially no damping.
The REB's in the Space Shuttle Main Engine (SSME) turbopumps must sustain high static
and dynamic loads, at high speeds, with a cryogenic fluid as lubricant and coolant. The pump
end ball bearings limit the life of the SSME high pressure oxygen turbopump (HPOTP).
Compliant foil bearing (CFB) manufacturers have proposed replacing turbopump REB's with
CFB's. CFB's work well in aircraft air cycle machines, auxiliary power units, and refrigeration
compressors. In a CFB, the rotor only contacts the foil support structure during start up and
shut down. CFB damping is higher than REB damping. However, the load capacity of the
CFB is low, compared to a REB. Furthermore, little stiffness and damping data exist for the
CFB. A rotordynamic analysis for turbomachinery critical speeds and stability requires the
input of bearing stiffness and damping coefficients.
The two basic types of CFB are the tension-dominated bearing (Figure 1) and the
bending-dominated bearing (Figure 2). Many investigators have analyzed and measured
characteristics of tension-dominated foil bearings, which are applied principally in magnetic
tape recording. The bending-dominated CFB is used more in rotating machinery.
This report describes the first phase of a structural analysis of a bending-dominated,
multileaf CFB. A brief discussion of CFB literature is followed by a description and results of
the present analysis.
Housing
Foil Segment
Journal
Figure 1. Tension-dominated foil bearing
Figure 2. Bending-dominated foil bearing
ANALYSIS
Most of the analyses of bending-dominated CFB's have the following common
characteristics:
• fluid inertia effects are considered negligible;
• the fluid film is compressible (as in most applications); and
• the equations for the compliant walls and fluid film are coupled in an iterative solution.
In addition, some investigators declare that the foil leaves in a multileaf CFB are more
important than the fluid film in determining:
• bearing stiffness and damping;
• load capacity as a function of eccentricity;
• preload between the leaves and journal; and
• startup torque.
In a rocket engine turbopump application, the fluid film is incompressible, and inertia effects
may be appreciable. However, the present model is an analysis of the multileaf structure only.
In a manner similar to the analyses of Oh and Rohde (1) and Trippett, Oh, and Rohde
(2), the present model first solves for the assembly of overlapping leaves in the bearing
housing. The solution is iterative, and is a function of the bearing housing radius r_b, the radius
of curvature of the pre-formed leaves r_l, and the number of leaves n_l. Figure 3 shows the
result for an input of r_b = 0.8125 inch, r_l = 0.915 inch, and n_l = 8. For a valid solution, the
distance from the center of the bearing housing to the end of a leaf must equal the distance
from the center to a point on the leaf 2π/n_l radians away.
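A closure condition of this kind can be driven to zero with any one-dimensional root finder. The sketch below illustrates the bisection iteration such an assembly solution requires; the residual shown is a stand-in with a single known root, not the actual leaf geometry.

    #include <stdio.h>
    #include <math.h>

    /* Stand-in residual: in the real analysis this would be the
       difference between the center-to-leaf-end distance and the
       center-to-leaf distance 2*pi/n_l radians away. */
    static double closure_residual(double x)
    {
        return cos(x) - x;
    }

    static double bisect(double (*f)(double), double a, double b, double tol)
    {
        double fa = f(a);
        while (b - a > tol) {
            double m = 0.5 * (a + b), fm = f(m);
            if ((fa < 0.0) == (fm < 0.0)) { a = m; fa = fm; }
            else                          { b = m; }
        }
        return 0.5 * (a + b);
    }

    int main(void)
    {
        printf("root = %.6f\n", bisect(closure_residual, 0.0, 1.0, 1e-9));
        return 0;
    }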
[Figure 3. Compliant foil bearing assembly, no rotor. Input data: leaf
radius 0.9150 inch; housing radius 0.8125 inch; leaf length 1.102 inches;
number of leaves 8.]
After the foil leaves are assembled in the housing, rotor installation requires deformation
of the leaves. The forces deforming the leaves are the rotor forces and leaf reaction forces.
The constraints for rotor installation are:
• the minimum distance from the bearing housing center to the leaf is equal to the rotor
radius;
• the distance from the housing center to a point on the overlapping part of one leaf must
be less than the distance to the "overlapped" part of the next leaf; and
• the leaves can only push (not pull) on one another at contact points.
The application of Castigliano's theorem provides compliance functions which relate the
deflection of each point on a leaf to rotor forces and leaf forces. The foil leaves are curved
beams with one end fixed. The additional input data required for calculating the effect of
rotor installation are the rotor radius r_r, the second moment of the area of the leaf cross
section I, and Young's modulus for the leaf material E. The analysis calculates the rotor force
required to satisfy the above list of constraints. Figure 4 is a plot of the housing center to leaf
distance before and after installation of a 0.7885 inch rotor into the foil bearing of Figure 3.
The leaves in the analysis are one inch wide, 0.006 inch thick, with a Young's modulus of 30
Mpsi. The arrows on the "after" leaf represent the locations of the forces required to install
the leaf. Figure 5 shows the geometry of the bearing with the rotor installed.
[Figure 4. Compliant foil bearing: leaf distance from housing center versus
angular position from housing attachment, 0 to 90 degrees.]
CONCLUSIONS
This report describes an analysis of the geometry of a multileaf, compliant foil bearing.
The analysis solves for the assembly of preformed leaves in a bearing housing, and the
installation of a rotor in the assembly. The analysis will be modified to include interleaf
friction forces, leaf backup support options, and an analysis of the deflection of the rotor due
to an applied load. Predictions will be compared to MSFC test data. Future developments
will include the interaction of the bearing fluid film.
[Figure 5. Compliant foil bearing, rotor installed. Input data: leaf radius
0.9150 inch; housing radius 0.8125 inch; leaf length 1.102 inches; number of
leaves 8; rotor radius 0.7885 inch; leaf area moment I = 1.8E-8 in^4; Young's
modulus E = 30E6 psi. Computed rotor force: 0.72 at 74 degrees; leaf forces:
0.39 at 38 and 83 degrees and 0.45 at 25 and 70 degrees.]
REFERENCES
(1) Oh, K. P., and Rohde, S. M., "A Theoretical Investigation of the Multileaf Journal
Bearing," ASME Journal of Applied Mechanics, Vol. 98, No. 2, June 1976, pp. 237-242
(2) Trippett, R. J., Oh, K. P., and Rohde, S. M., "Theoretical and Experimental Load-
Deflection Studies of a Multileaf Journal Bearing," Topics in Fluid Film Bearing and
Rotor Bearing System Design and Optimization, 1978, pp. 130-156
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
DESIGN AND SPECIFICATION OF A CENTRALIZED
MANUFACTURING DATA MANAGEMENT AND SCHEDULING SYSTEM
Prepared By:
Academic Rank:
Institution and
Department:
Phillip A. Farrington
Assistant Professor
The University of Alabama in Huntsville
Department of Industrial and Systems Engineering
MSFC Colleagues: Paul Gill and Eutiquio Martinez
Laboratory:
Division:
Branch:
Materials and Processes
Fabrication Services
Process Automation & Modeling
Introduction
As was revealed in a previous study [1], the Materials and Processes
Laboratory's Productivity Enhancement Complex (PEC) has a number of
automated production areas/cells that are not effectively integrated, limiting the
ability of users to readily share data. The recent decision to utilize the PEC for
the fabrication of flight hardware [2] has focused new attention on the problem
and brought to light the need for an integrated data management and
scheduling system. This report addresses this need by developing preliminary
design specifications for a centralized manufacturing data management and
scheduling system for managing flight hardware fabrication in the PEC.
This prototype system will be developed under the auspices of the Integrated
Engineering Environment (IEE) Oversight team and the IEE Committee. At
their recommendation the system specifications were based on the fabrication
requirements of the AXAF-S Optical Bench.
AXAF-S Optical Bench - Production Requirements
AXAF-S has a number of parts and components, of which the Optical Bench
Assembly is a key structural element. As shown in Figure 1, the Optical Bench
Assembly consists of four primary components: the telescope tube, the telescope
cone, the mounting pads, and the star tracker mounts. All of these, except the
titanium mounting plates, will be fabricated from graphite cyanate composite
materials. It is anticipated that all components will be fabricated in the PEC.
[Figure 1: Bill of Material for Optical Bench Assembly. The assembly
comprises the telescope tube, the telescope cone, the mounting pads (2),
and the star tracker mounts (2); the mounting pads in turn comprise
titanium mounting plates (2) and composite mounting pads (2).]
Analysis of preliminary process plans indicates that five work areas will be
required to fabricate and assemble the optical bench. The work areas utilized in
4707, as illustrated in Figure 2, include the fiber placement machine, the hand
lay-up area, the autoclave(s), the automated ultrasonic test system and an as yet
undefined assembly area. The machine shop in 4705 will also be required;
however, it will not be directly linked to the system. Instead the scheduling
system, described in this document, should have the capability to pass data to
and receive data from the Integrated Manufacturing Planning and Control
System (IMPACS) used by NASA planning personnel (EH52).
[Figure 2: Centralized Data Management and Scheduling System for the
Productivity Enhancement Complex, linking the NDE test, final assembly,
and composite fabrication work areas.]
Note that in addition to the five fabrication cells, two additional workstations
will also be linked to the centralized server: the SLA 250 stereolithography
machine and a composite material freezer in 4707 used for storage of the
graphite cyanate material for the optical bench. The SLA 250 was included
because it may be used extensively in the early stages of design and prototype
development. The link for the freezer was included in order to implement an
inventory management system for monitoring composite material usage.
The choice of hardware and software platforms was driven primarily by the
current systems in use at MSFC and the prevailing move away from mainframe
computing systems. MS-DOS based PC's were chosen as the hardware platform
because of their capability and cost effectiveness. In order to minimize overall
system costs it is recommended that existing hardware be used where possible.
The basic configuration should be an upgradeable 386 or 486 based PC with 8
megabytes of RAM, a 100-200 MB hard disk drive, 2 floppy disk drives, MS-DOS
5.0 and Windows 3.1. Given the choice of PC's as the hardware platform it is
recommended that Novell Netware be chosen as the networking platform. Novell
was selected because of its extensive use throughout MSFC and its proven
performance capabilities.
This system will require the integration of a scheduling system and a relational
database management system. It is recommended that the scheduling system be
developed using Microsoft Project, a Microsoft Windows based scheduling
package and that Oracle be the choice for the relational database management
system. Both packages were pragmatic choices because of their widespread use
throughout MSFC. MS-Project is available on WPS and is the scheduling
package of choice for the AXAF-S program office. Likewise Oracle was chosen
because it is currently used for other applications, such as the IMPACS system
used by EH52 (Planning and Control Branch) and the 4707 Tool Crib Inventory
system, which could be integrated with the PEC system in the future. Overall,
MS-Project and Oracle satisfy the performance requirements of the PEC system
and should increase its compatibility with other systems in place at MSFC.
System Functionality
The PEC data management and scheduling system will have three functional
aspects: scheduling, file management and inventory management. This section
will review the functional and data requirements for each.
Scheduling
The PEC scheduling system will integrate MS Project and Oracle into an
application that allows NASA personnel to plan, monitor, and control the
fabrication activities taking place within the PEC. This application will have
three levels of functionality: planner level functionality, operator level
functionality, and management/engineering level functionality.
The planner level is where detailed schedules will be developed, work order
data input, and scheduling and planning reports developed. The primary task at
this level is development and maintenance of the planning database. The type of
data that will be input at this level includes: the project or work order number,
the date the work order was received, the originator, the originator's
organization, a description of the project, the desired start and completion dates,
the resources/work stations required to complete the task(s), the work
breakdown structure (WBS) code, the UPN number, and the CCBD number.
Based on this information the planner will develop a base line schedule for the
project being initiated. In order to reduce data redundancy and minimize data
re-entry the schedules and data maintained in the PEC system should be
transferable to other scheduling/planning systems currently used by NASA
and/or NASA contractors, including: Open Plan, Time Line, Artemis, Primavera,
and IMPACS. Initially, a full time planner will not be required for this system;
however, as more fabrication projects come on-line a dedicated planner will
become imperative. Given that the fabrication of flight hardware is a relatively
new activity within the PEC, the processes and procedures for the creation and
management of planning and processing data have not been completely defined.
Follow-on activities related to this project are being initiated that will address
the requirements of the planning level of the system in greater detail.
At the operator level the primary concerns are with documenting the
execution of scheduled tasks and providing the operator with the information
required to complete the task at hand. At this level, initial entry into the system
would involve presenting the operator with a prioritized list of tasks to be
worked at their respective work area. Selecting a particular item, via a menu or
mouse operation, should bring up the work order log-on window. At this point
the operator would enter their name, organization, and identification number,
with the system automatically capturing the log-on date and time from the
system clock. Logging-off would entail a similar procedure with the system
querying the operator for their name, identification number, organization, the
level of completion of the task (i.e., 25%, 50%, 100%, etc.), then automatically
recording the log-off date and time and updating the project schedule. After
logging-on a task the operator would be presented with a screen showing
processing information for the task. Information provided should include the
current drawing number and revision, processing sheets/recipes, and the listing
of NC files required for any fabrication equipment in their work area. In
addition to providing the operator with access to the basic fabrication
information the system should also provide the capability for capturing
engineering and quality sign-off on fabrication setups and inspections. At
present these are captured on paper, however, it is technologically feasible to do
this electronically and it makes sense to build the basic functionality into the
proposed PEC scheduling and data management system.
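To make the log-on/log-off transaction concrete, a minimal sketch in Python follows; all record fields and function names are illustrative assumptions, since the actual system would capture these records in Oracle and update the MS Project schedule.

    from datetime import datetime

    work_log = []   # stand-in for the Oracle work order log table

    def log_on(work_order, name, organization, operator_id):
        # The date and time come from the system clock, not operator entry.
        work_log.append({"work_order": work_order, "operator": name,
                         "organization": organization, "id": operator_id,
                         "event": "log-on", "time": datetime.now()})

    def log_off(work_order, name, organization, operator_id, pct_complete):
        # Percent complete (25%, 50%, 100%, etc.) is the only extra field.
        work_log.append({"work_order": work_order, "operator": name,
                         "organization": organization, "id": operator_id,
                         "event": "log-off", "pct_complete": pct_complete,
                         "time": datetime.now()})
        # A production system would also update the project schedule here.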
Finally, at the management/engineering level the primary concern is project
management. Users at this level are interested in the current status of
component fabrication as well as material and resource usage. They will need
access to Gantt charts and Pert networks showing the status of specific
programs and projects. Three primary reports will need to be developed: a
project status report, a resource usage report, and a material usage report. The project
status report should indicate where a particular component is in its processing
sequence, when fabrication was initiated, and the expected completion
date/time. The resource usage report should provide information on work area
usage (i.e., manpower and equipment) by project, while the material usage
report should indicate the type and quantity of material used by project. In
addition to reporting the system should also allow managers to perform what-if
analysis on schedules to assess the impact of processing delays on the schedule.
File Management
In addition to the planning and scheduling capability outlined above, the
PEC data management and scheduling system should also provide users with
the capability to quickly and easily access input and output files from each
process. Each workstation associated with an automated piece of equipment (i.e.,
the fiber placement machine, autoclave, and NDE automatic ultrasonic test
system) should have the capability to access and download control programs
(e.g., NC programs in the case of the fiber placement machine) and to upload
processing data from the controller.
Inventory Management
The inventory management aspect of the system will provide a computer
based system for more effectively monitoring and tracking data on material
information and usage history for all composite materials stored in the PEC. It
is anticipated that the freezer inventory management system will be written in
Oracle but will be accessed through the MS Project based scheduling system.
The information stored in the system should include a NASA designated
material control number, the material description, the material type, the
supplier name, the manufacture date, certification/recertification date, the lot
number, the roll or spool number, the storage location (i.e., freezer number), the
date initially stored in the freezer, current quantity in storage, cumulative time
in the freezer, cumulative time out of the freezer, maximum allowable time out
of the freezer and/or the maximum allowable age of the material, the
identification number for the person withdrawing material, program number
being charged, project/work order number being charged, the removal date and
time, the identification number of the person returning the material, the
quantity being returned, and the date and time the material was returned.
The freezer inventory management system should flag the user if the
material has exceeded its maximum allowable age and/or the maximum
cumulative time allowed outside the freezer. The system should also maintain a
usage history on the material (i.e., quantity of material used for each program by
project). Two basic reports, the material usage history report, and the freezer
inventory report, will also be required to effectively manage the materials
inventory. The material usage report should provide information on the quantity
of each material type used by program and project/work order number. The
freezer inventory report will provide information on the material currently
stored in the freezer. The primary information presented should include the
material control number, material description, material type, quantity in
storage, and the cumulative time in and out of each freezer. This report should
also flag items close to their expiration date (i.e., within two weeks, etc.).
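As a hedged sketch of the flagging rules just described (the field names and day-based units are assumptions, not part of the specified system), the checks might be expressed as:

    from datetime import date, timedelta

    def inventory_flags(record, today=None):
        # record is a dict with manufacture_date (date), max_age_days (int),
        # cumulative_time_out and max_time_out (timedelta) -- names invented.
        today = today or date.today()
        flags = []
        age = today - record["manufacture_date"]
        max_age = timedelta(days=record["max_age_days"])
        if age > max_age:
            flags.append("exceeded maximum allowable age")
        elif age > max_age - timedelta(weeks=2):
            flags.append("within two weeks of expiration")   # for the report
        if record["cumulative_time_out"] > record["max_time_out"]:
            flags.append("exceeded maximum cumulative time out of freezer")
        return flags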
Conclusion
This study is a first step in the transition of the PEC from a research and
development facility to a production facility. As with all changes, it will have its
moments of pain and confusion; however, these can be minimized through
effective planning. The centralized data management and scheduling system
described herein is the beginning of this planning process. While this study has
addressed many of the technical aspects of the system there are still several
administrative issues that must be addressed. The most prominent issues to be
addressed include the identification of the lead planning organization, and the
delineation of processes and procedures for: development and maintenance of the
planning database, the electronic capture of engineering and quality sign-off, the
transfer of scheduling data to and from the AXAF-S program office, and the
transfer of work order data to and from IMPACS. Follow-on activities are being
initiated that will address these issues in greater detail.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
TECHNOLOGY UTILIZATION OFFICE
DATA BASE ANALYSIS AND DESIGN
Prepared By:
Academic Rank:
Institution and
Department:
MSFC Colleague (s)
NASA/MSFC:
Office:
Directorate:
Stephen A. Floyd, Ph.D.
Assistant Professor
University of Alabama in
Huntsville
Department of MIS
Ken Fernandez, Ph.D.
Martha Nell Massey
Technology Utilization
Institutional and Program Support
XII
INTRODUCTION
NASA Headquarters is placing a high priority on the
transfer of NASA and NASA contractor developed technologies and
expertise to the private sector and to other federal, state and
local government organizations. The ultimate objective of these
efforts is positive economic impact, an improved quality of life
and a more competitive U.S. posture in international markets.
The Technology Utilization Office (TUO) currently serves seven
states with its technology transfer efforts. Since 1989 the TUO
has handled over one thousand formal requests for NASA-related
technological assistance. The technology transfer process
requires promoting public awareness of NASA technologies,
soliciting requests for assistance, matching technologies to
specific needs, assuring appropriate technology transfer and
monitoring and evaluating the process. Each of these activities
has one very important aspect in common: the success of each is
highly dependent on the effective and efficient access, use and
dissemination of appropriate high quality information. The
purpose of the research reported here was to establish the
requirements and develop a preliminary design for a database
system to increase the effectiveness and efficiency of the TUO's
technology transfer function. The research was conducted
following the traditional systems development life cycle
methodology and was supported through the use of modern
structured analysis techniques. The next section will describe
the research and findings as conducted under the life cycle
approach.
ANALYSIS AND DESIGN
The purpose of the detailed analysis phase was three-fold:
1. the complete and thorough understanding of the TUO's
technology transfer process, 2. the analysis of the feasibility
of computer system support for the process and 3. the
definition of scope for the system to be addressed by the
research. The necessary understanding of the technology transfer
process was gained using both traditional and structured
methodologies. Information concerning the process was compiled
from TUO documentation and report examination, personal
interviews with all TUO and relevant contractor personnel
(including Boeing Computer Support Services personnel),
attendance at meetings and presentations, observation of day to
day activities and through structured analysis modeling
techniques. The process was modeled using the process and data
modeling techniques of data flow diagramming and entity-
relationship diagramming, respectively. The key processes and
the data/information flows and data stores necessary
to support them were identified. The high level processes were
then hierarchically decomposed down to the primitive process
level. Concurrent with this effort the key business entities
were identified and the required data were mapped to them.
The results of the analysis described above defined the
business processes and entities falling within the established
project scope. The scope of the project was defined by the TUO's
Technology Assistance Board (TAB) process and more specifically
by the problem request (PR) tracking and reporting requirements.
The PR's are submitted by the client and can be likened to a
customer order in a traditional business system. Receipt of a PR
triggers the transaction process. At a high level the process
consists of the following subprocesses: log-in (assignment of
a log-in number and entry into a spreadsheet), evaluation (for scope
and completeness), TAB review (consisting of further evaluation,
assignment of a technology category, assignment of a responsible
principal engineer (PE), and identification of the appropriate MSFC
lab and personnel possessing the technology and/or expertise to be
applied), PE coordination and status reporting for active PR's,
and PR closure. Each of the processes comprising a PR
transaction was analyzed as to its input, process, output and
data storage requirements.
The data modeling aspect of the analysis served to identify
and define the key business entities and their relationships.
The primary entities are the client - the individual or
organization submitting a PR, the problem request, the
technology source - the MSFC lab or individual that will address
the problem request, and the principal engineer - the TUO
individual with assigned responsibility for a given PR. The
nature of the relationships among the entities was defined and
the entity attribute specifications were developed. The data
models were then used to develop the structure of the TUO
database.
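A minimal sketch of how the four entities and their relationships might map to relational tables follows; it uses SQLite inside Python purely for illustration (the recommended system was to be built in FoxPro for Windows), and all table and column names are assumptions.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE client(
        client_id INTEGER PRIMARY KEY, name TEXT, organization TEXT);
    CREATE TABLE principal_engineer(
        pe_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE technology_source(
        source_id INTEGER PRIMARY KEY, msfc_lab TEXT, contact TEXT);
    -- A problem request relates one client to one PE and one technology source.
    CREATE TABLE problem_request(
        pr_number INTEGER PRIMARY KEY,
        client_id INTEGER REFERENCES client(client_id),
        pe_id INTEGER REFERENCES principal_engineer(pe_id),
        source_id INTEGER REFERENCES technology_source(source_id),
        technology_category TEXT, status TEXT, date_received TEXT);
    """)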
The analysis of the current TUO system led to the
identification of several process related problems and issues.
Key among these issues were:
- inability to effectively track PR's
- incomplete PR files
- lack of "strategic marketing" information
- processes heavily dependent on human information
resources
- excessive time spent generating correspondence and
management reports
- lack of information on technology resources and
- difficulty in coordinating TUO activities.
Based on these problems and issues and other information
compiled during the analysis several opportunities for process
improvement via computer support were identified. Major among
these were the following:
- more effective PR tracking
- more precise and complete PR files
- existence of a non-volatile "corporate" database
- more comprehensive and readily available supporting
information resources
- flexible and facilitated correspondence and report
generation
- exception reporting
- more formalized procedures for transaction processing and
- facilitated information sharing.
Evaluation of currently available and "to be delivered" hardware
and software coupled with an analysis of the operational
capabilities of the TUO established the feasibility of
developing and implementing a local area network based
relational database system to address the problems and
opportunities cited above. Such a system will allow
implementation of a formal transaction processing system with
the degree of information sharing, information archiving,
application flexibility, data integrity, and ease of use defined
by the end-users during the analysis process.
The recommended system would be developed using Microsoft's
FoxPro for Windows relational database management system. This
would provide multi-platform use across the PC's and Mac's
currently used in the TUO. The Windows network environment
would be provided by the Workstation Presentation System (WPS)
currently being made available through Boeing Computer Services.
The TUO has three such stations currently in operation with
several more scheduled for the near future. This system will not
only provide information and data sharing among TUO personnel
but will serve as a window to the current and proposed E-mail
systems which will link personnel to other MSFC organizational
units, other NASA centers and to other outside government and
private sector organizations. This linkage is of paramount
importance in assuring the future effectiveness of the
technology transfer process. Additionally, the WPS environment
will provide TUO personnel with standard applications packages
such as word processing, graphics, project management,
presentation software and spreadsheets, which afford opportunities
for additional support, coordination and information sharing
with respect to aspects of the TUO function other than those
addressed by this research.
The recommended relational database environment will
provide a Windows based, menu driven user interface which should
allow easy transition for those TUO personnel currently using
the Data General environment for word processing, data table (a
limited spread-sheet type application) and e-mail applications.
The relational architecture has been designed to offer the
highest degrees of application flexibility, data integrity,
maintainability, and future expandability. The data tables are
designed to consolidate comprehensive information on an entity
basis and to provide flexibility in establishing current and
potential future relationships among entities. The designed
applications such as standard queries, correspondence
generation, report generation and status monitoring were
developed to meet the current end-user specified needs. The
FoxPro Windows environment provides an applications generator
which should allow TUO personnel to develop future applications
with only a minimal amount of training. This will allow the TUO
to more rapidly and effectively respond to the increasing demand
for the transfer of technological expertise from NASA's
laboratories.
CONCLUSIONS
This research has involved the analysis of the current
process for transferring technologies from MSFC and contractor
laboratories to the private and public sectors. The analysis has
shown that the technology transfer process is heavily dependent
on the timely and effective utilization of distributed
information and has provided models to document the process.
Most importantly it has established the feasibility and
necessity for providing process support through the
implementation of a networked database system. A recommended
relational database system design has been developed which
satisfies the defined end-user requirements and provides
capability to handle future projected needs. The eventual
implementation of such a system will hopefully serve as a model
from which a comprehensive inter-agency system can be developed.
Such a system is essential if we hope to render the technology
transfer process as effective as it needs to be to help the country
regain its preeminence in technologically driven markets.
ACKNOWLEDGMENTS
I wish to thank all the MSFC and contractor personnel
associated with the Technology Utilization Office for their
hospitality, time and honesty. Systems analysis methodologies
are highly dependent on the willingness of end-users to share
information and opinions with the analyst. The TUO personnel are
to be commended for their participation in this process. Their
hospitality allowed me to feel as "one of the family" during my
ten week project. I also wish to extend individual thanks to my
NASA colleagues Dr. Ken Fernandez and Ms. Nell Massey for
initiating this effort and serving as points-of-contact for my
information gathering efforts. Finally, to Mr. Ismail Akbay, the
Director of the Technology Utilization Office, I extend my
gratitude and appreciation, first as an educator, for providing
me a meaningful fellowship opportunity, and second, as a
citizen, for his dedication and devotion to the important
mission of transferring federally funded technologies to help
improve quality of life and provide a return on investment to
taxpaying citizens.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
A STUDY OF THE CORE MODULE SIMULATOR FLOOR
CAPABILITY
Prepared By:
Academic Rank:
James W. Foreman
Assistant Professor
Institution and
Department:
Alabama A & M University,
Department of Civil Engineering
MSFC Colleagues:
Charles R. Cooper
David Long
NASA/MSFC:
Office:
Division:
Branch:
Systems Analysis and Integration Laboratory
Systems Test Division
Development Test Branch
XIII
ABSTRACT
The floor of the Core Module Simulator (CMS) is required to
support various combinations of dead load and live load during the
testing process. Even though there is published data on the
structural capability of the grating, it is not always evident whether the
combined loadings with point loads will cause structural failure.
TECHNICAL APPROACH
A mathematical model of the 36 inch by 40 inch floor section
was developed. The analysis was performed using finite element
techniques. Unit loads were separately placed at the 15 locations
shown in Figure 1. The internal moments at all 15 locations
were determined for each load location, yielding a 15 by 15 influence
matrix. The total response at any location is determined from the
following relationship:
{M} = [m]{P}
where {M} is a 15 by 1 matrix of the resultant moments at the 15
locations shown in Figure 1, [m] is the 15 by 15 influence moment
matrix developed by placing unit loads at the 15 locations
shown in Figure 1, and {P} is a 15 by 1 matrix of the applied loads at
these locations.
Once the influence matrix for the internal moments was
determined, a BASIC computer program was developed to perform
the matrix multiplication and select the maximum internal bending
moments of the members.
The program is adaptable to IBM PC or Macintosh computers.
The required input is the magnitude and location of the loads. The
program also allows for the superposition of a uniform load over the
entire floor area. This program, written for this unique configuration,
provides a simplified method for determining the floor capability.
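The original program was written in BASIC; the following Python re-expression of the same computation is offered only as a sketch, with a placeholder influence matrix standing in for the finite element results.

    import numpy as np

    m = np.eye(15)   # placeholder: the real [m] comes from the FE unit-load runs

    def max_moment(point_loads, uniform_load=0.0, uniform_moments=None):
        # {M} = [m]{P}, with an optional superposed uniform-load contribution.
        P = np.asarray(point_loads, dtype=float)      # 15 applied point loads
        M = m @ P
        if uniform_moments is not None:               # moments per unit uniform load
            M = M + uniform_load * np.asarray(uniform_moments)
        return M, np.max(np.abs(M))                   # all moments and the governing one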
CONCLUSIONS
The solution of the CMS floor capability illustrates how the PC
may be used to simplify problem solutions which require a higher
level of expertise in a particular area such as structural analysis. This
technique can be used in other fields such as electrical or fluid
mechanics.
Figure 1: CMS Floor Layout (36 in. span; two supported edges and one free edge).
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE OHIO UNIVERSITY
A MINIMUM COST TOLERANCE ALLOCATION METHOD
FOR ROCKET ENGINES
and
Robust Rocket Engine Design
Prepared By:
Academic Rank:
Institution and Department:
MSFC Colleague:
NASA/MSFC:
Laboratory:
Division:
Branch:
Richard J. Gerth, Ph.D.
Assistant Professor
The Ohio University
Department of Industrial and Systems Engineering
David Seymour
Propulsion Laboratory
Motor System
Performance Analysis
XIV
Minimum Cost Tolerance Allocation
Rocket engine design follows three phases: systems design, parameter design, and
tolerance design. Systems design and parameter design are most effectively conducted in a
concurrent engineering (CE) environment that utilizes methods, such as Quality Function
Deployment and Taguchi methods. However, tolerance allocation remains an art driven by
experience, handbooks, and rules of thumb.
It was desirable to develop an optimization approach to tolerancing. The case study
engine was the STME gas generator cycle. The design of the major components had been
completed and the functional relationship between the component tolerances and system
performance had been computed using the Generic Power Balance model. The system
performance nominals (thrust, MR, and Isp) and tolerances were already specified, as were
an initial set of component tolerances. However, the question was whether there existed an
optimal combination of tolerances that would result in the minimum cost without any
degradation in system performance.
The optimization model seeks to minimize the total system cost as determined by
component tolerances subject to constraints on the tolerances:
MIN [total cost] = MIN Σ_{i=1..n} C(tol_i)                               [1]

subject to

T_k² ≥ Σ_{i=1..n} (G_ik · tol_i)²          (k = 1, ..., K)               [2]

tol_l,i ≤ tol_i ≤ tol_u,i                  (i = 1, ..., n)
where:
C(tol_i)            cost of producing tolerance tol_i;
tol_i               tolerance of the ith component performance variable;
tol_l,i, tol_u,i    lower and upper limits of tol_i;
T_k                 the kth system performance tolerance;
G_ik                the gain of the ith component performance variable to the kth system
                    performance variable.
Equation [2] is a statistical tolerancing equation that models non-linear systems through
a first order Taylor expansion, where the gains G_ik are the first order partial derivatives. The
linear Taylor approximation is generally valid for tolerance allocation problems since
tolerances typically vary only by a small amount. The gains matrix was obtained from the
generic power balance model mentioned above.
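A minimal sketch of the optimization in modern terms follows, using SciPy's solver with invented gains, limits, and reciprocal cost curves (the study itself was implemented in Excel 4.0 with its solver function):

    import numpy as np
    from scipy.optimize import minimize

    G = np.array([[0.8, 0.3, 0.5],       # illustrative gains: 2 system variables,
                  [0.2, 0.9, 0.4]])      # 3 component tolerances
    T = np.array([1.0, 1.2])             # system performance tolerances T_k
    lo, hi = np.full(3, 0.05), np.full(3, 2.0)
    a = np.array([1.0, 2.0, 1.5])        # coefficients of reciprocal cost curves

    cost = lambda tol: np.sum(a / tol)                 # eq. [1] objective
    stackup = lambda tol: T**2 - (G**2) @ (tol**2)     # eq. [2], must be >= 0

    res = minimize(cost, x0=np.full(3, 0.5), bounds=list(zip(lo, hi)),
                   constraints={"type": "ineq", "fun": stackup})
    print(res.x, cost(res.x))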
The greatest problem was determining the cost-tolerance relationships, C(tol_i). There are
numerous models for cost tolerance equations, the most common of which are the reciprocal
or inverse, reciprocal squared, and the negative exponential. However, these models have
always been applied to specific manufacturing processes where the cause effect relationships
between the process and tolerance were conceptually well understood. The conceptual
difficulty at the high level of design in the STME study involved imagining how to tighten or
loosen a component's performance (e.g., its efficiency) and how much such a change would cost.
It is much easier to conceptualize changing the tolerance on a specific component element,
such as the turbine blades, or the nozzlette diameter. The difficulty in part reflects the
relationship between systems designers, who think of components as inputs characterized
by component performance variables, and component designers, who think of component
performance variables as outputs.
Two approaches were taken to relating cost and tolerances, and for lack of imagination
termed the top-down and the bottom-up method. Both methods were implemented in Excel
4.0 for Windows and the optimization problem was solved using Excel's solver function.
In the top-down method, the optimization model changes the component performance
tolerances directly to minimize cost and satisfy the system constraints. The method is called
top down because the changes in the component performance tolerances represent top level
changes that are conceptually propagated down to the element level. The cost is, however,
computed at the element level and proportioned out to the performance variables through a
cost-contribution matrix.
The top-down method has several problems. First, it assumes that tightening a particular
component performance tolerance is achieved by tightening all the elements that affect it by
the same amount. This clearly leads to contradictions when the same component affects two
performance variables, one whose tolerance is being tightened and the other loosened.
Thus, the top-down method fails to model physical reality, namely that cost gains are
achieved because tolerances are loosened on component elements which result in different
component performance variations.
Second, the element-performance cost contribution matrix is likely to be difficult if not
impossible to obtain. This is in part because the method does not model reality well, and in
part because companies typically do not track costs in this manner. To rectify some of these
problems, the Bottom-Up approach was developed.
In the bottom-up approach the solver varies the low level component element
tolerances and computes their impact on system performance through a two phase statistical
stackup analysis (see eq. [2]). This requires two gains matrices: from system to component
performance, and from component performance to component element tolerance.
The cost for each tolerance is determined from a family of cost-component-element-
tolerance curves. The curves are computed for each element from a set of five standard cost-
tolerance curves that were then scaled to match the initial design conditions. The five curves
were created in conjunction with the component designers and range from a 1/4 reciprocal to
a cubed reciprocal function with differing parameters. The scaling to the initial conditions
involved knowing how much a particular element cost, how much of its total cost was due to
creating a component of that functipnality (nominal design) versus creating the same
component with tighter tolerances, and the initial design tolerance. There were instances
where going to tighter tolerances would require changing manufacturing processes with
drastically different cost-tolerance behavior. Li these cases the resulting cost tolerance curve
had both a "jump" (discontinuous) as well as a change in slope.
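As an illustration of such a curve (all parameters invented), a scaled reciprocal cost-tolerance function with a process-change jump might look like:

    def element_cost(tol, c0=100.0, k=25.0, tol_switch=0.01, jump=40.0):
        # Above tol_switch the baseline process applies; below it a different
        # manufacturing process is required, so the curve jumps and steepens.
        if tol >= tol_switch:
            return c0 + k / tol
        return c0 + jump + 3.0 * k / tol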
The bottom-up approach appears to be the preferred method because it models reality
more accurately, the data are more readily obtainable, and it is conceptually more appealing.
The major difficulties are threefold. First, one must be able to obtain good gains matrices;
second, it is imperative to have good cost estimates; and third, which is related to the second,
it is necessary to better understand and estimate the standard cost-tolerance curves for each
element.
However, it is believed that in the tolerance design phase these estimates are typically not
well known. Thus, the answer from the optimization problem will, in all likelihood, not be
the best possible answer. However, it is believed that by encouraging engineers to run the
program they will have the necessary data to make informed decisions based on cost, and
gain insight into the relationships between the variables at a systems level. Thus, the
minimum cost tolerancing algorithm, when used by a cross functional team with other
concurrent engineering tools, could have a significant impact on the cost of a design.
ROBUST ENGINE DESIGN
The purpose of the research was to develop a method for determining the set of optimal
nominal design parameters that results in a system response that is least sensitive to
variations in inlet conditions and between-component variations (manufacturing variations).
Should the method prove to be successful, it could be expanded to include different cycle
configurations, or become a means of evaluating the relative merits of different cycles.
Data were generated from a computer simulation program called the Generic Power
Balance Model, developed by Rocketdyne. The program was specifically
designed to aid rocket engine designers in determining design configurations that would
optimize system performance while ensuring conservation of mass and energy.
The particular cycle chosen for this project was a gas generator (GG) cycle to be used as
an upper stage space engine. The primary system response variables of interest were thrust,
mixture ratio (MR), and specific impulse (Isp). The various component environments were
also considered to be important to design decisions since the environments often determine
the maximum design conditions (MDCs) for the components. However, they were
considered secondary to the system performance variables.
The method involved generating a series of on-design hardware configurations by
altering control variables according to an L8 orthogonal array. The control variables used in
the study are shown in Table 1. They were selected based on engineering knowledge and do
not necessarily represent the most important design variables.
       Variable                     Level 1      Level 2
A      Chamber Pressure             800 psia     1000 psia
B      Fuel Pump Head Coef          0.55         0.60
C      LOX Pump Head Coef           0.50         0.55
D      Fuel Turbine % Admission     50%          100%
E      LOX Turbine % Admission      50%          100%
F      Fuel Turbine Blade Angle     15°          30°
G      GG Temperature               1400°R       1600°R

Table 1. Control Variables for GG Cycle Engine.
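For reference, the standard L8(2^7) orthogonal array that would assign the seven control factors is sketched below in Python; the mapping of columns to factors A-G and the simulation call are assumptions for illustration.

    import numpy as np

    L8 = np.array([[1,1,1,1,1,1,1],
                   [1,1,1,2,2,2,2],
                   [1,2,2,1,1,2,2],
                   [1,2,2,2,2,1,1],
                   [2,1,2,1,2,1,2],
                   [2,1,2,2,1,2,1],
                   [2,2,1,1,2,2,1],
                   [2,2,1,2,1,1,2]])

    levels = {"A": (800, 1000), "B": (0.55, 0.60), "C": (0.50, 0.55),
              "D": (0.50, 1.00), "E": (0.50, 1.00), "F": (15, 30),
              "G": (1400, 1600)}

    for row in L8:                       # eight on-design configurations
        config = {f: levels[f][lvl - 1] for f, lvl in zip("ABCDEFG", row)}
        # run_power_balance(config, noise)   # hypothetical simulation call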
A total of 14 noise factors representing the inlet conditions and random fluctuations in
component efficiencies and resistances were considered. Creating an L16 noise array,
however, would require an excessive number of simulation runs (8x16=128). Since an
analysis of noise effects is meaningless, they were combined in a "worst case" fashion to
ensure that the expected variability in system response is captured, thereby reducing the
number of required simulation runs. However, some factors affected the response variables
in a different manner. For example, a decrease in the LOX inlet pressure would result in a
decrease in thrust and MR and an increase in Isp. A decrease in the fuel inlet pressure would
also decrease thrust, but increase MR and decrease Isp. The following method was devised
to determine which factors could be combined to ensure that the system would be exposed to
the full range of potential noise conditions.
A gains matrix obtained from the STME study (a GG cycle low cost engine) indicated the
direction of system response change with an increase in each of the noise factors. The signs
of the gain factors were tabulated and all noise factors which induced a similar system
response were grouped into the same class. This resulted in four classes, of which one was
omitted because it contained only a single variable whose gain value was very small. Thus,
the outer array (noise array) was an L4 matrix with 3 noise variables.
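The grouping step can be sketched as follows; the gain values are invented, and only their signs matter. Noise factors sharing an identical sign pattern across (thrust, MR, Isp) fall into the same class and are varied together as one compound noise variable.

    import numpy as np

    gains = np.array([[-1.2, -0.4, +0.1],    # one row per noise factor
                      [-0.8, +0.3, -0.2],
                      [+0.5, +0.2, +0.3]])

    classes = {}
    for i, g in enumerate(gains):
        key = tuple(np.sign(g).astype(int))  # direction of each system response
        classes.setdefault(key, []).append(i)
    print(classes)                           # factors grouped by sign pattern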
The eight on-design configurations were run under each of the noise conditions as an
open-loop off-design condition, resulting in 8 x 4 = 32 off-design simulation runs. For each of
the dependent variables the following statistics were computed and analyzed: average,
variance, and signal to noise ratios. The ANOVAs showed that none of the control factors
were significant (F = 0) and the error term contributed over 90% of the variation in the data.
This means that the noise factors had a greater effect on system performance than any of the
control factors. This was true for all of the system performance variables as well as the
component environment variables: GG temperature, the fuel pump discharge pressure, LOX
pump discharge pressure, and MCC pressure. The analysis of the variation also showed that
it could not be substantially reduced by any of the control factors.
The conclusion drawn from the results is that calibration of the engines is necessary to
reduce the impact of component variations. The impact due to inlet conditions, however, will
remain. Calibration of the engine is performed by running the off-design simulation under
closed loop control by specifying two control parameters, typically the GG LOX injector
resistance and the LOX turbine bypass orifice resistance. The control authority for each of
these two resistances is defined here to be the full range of resistance required to balance the
engine at nominal thrust and MR under worst and best case conditions.
There has been some difficulty in developing a calibration method, however, because
under some on-design conditions there is insufficient flow to accommodate the necessary
control authority, i.e., where the resistances are already so low under the on-design case that
opening of the valves completely is not sufficient to balance the engine. Since the original
on-design cases did not have a pressure drop across the control points, it may be necessary to
compute a nominal pressure drop and include it in the on-design runs. This could possibly be
done from the off-design data and knowing the thrust and MR gain as a function of the
resistances. Since the system response ranges are known from the open-loop off-design runs,
it would be straightforward to compute the required control authority and nominal resistance
assuming a linear relationship between resistance and system response.
In summary, it appears that it is possible to use the generic power balance model to
generate a robust design. It also appears that a certain amount of iteration may be necessary
to simulate engine calibration. It is believed that it may be possible to predict the required
control authority from the open-loop off-design runs alone, without further iteration. If this
is true, then the optimal design can be determined and the calibration simulations need only
be performed on that single design, thus eliminating the need for repeated iterations.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
VALIDATION OF A NONINTRUSIVE OPTICAL TECHNIQUE
FOR THE MEASUREMENT OF LIQUID MASS DISTRIBUTION
IN A TWO-PHASE SPRAY
Prepared By:
Roy Hartfield, Jr.
Academic Rank:
Assistant Professor
Institution and
Department:
Auburn University
Aerospace Engineering
MSFC Colleagues:
Charles Schafer, Ph.D.
Richard Eskridge
NASA/MSFC:
Office:
Division:
Branch:
Propulsion Laboratory
Motor Systems Division
Combustion Physics Branch
XV
VALIDATION OF A NONINTRUSIVE OPTICAL TECHNIQUE FOR THE
MEASUREMENT OF LIQUID MASS DISTRIBUTION IN A TWO-PHASE SPRAY
Roy J. Hartfield, Jr.
Aerospace Engineering, Auburn University
Introduction
The work presented herein is the continuation of an optical technique
development program initiated as part of the 1992 Summer Faculty Fellowship Program.
The 1992 work consisted of the formulation and implementation of a technique involving
the spatial deconvolution of fluorescence data from a uniformly illuminated, seeded
dense spray to obtain quantitative measurements of the liquid density profiles. This
measurement approach largely overcomes substantial scattering problems associated with
other optical approaches for two-phase flows [1]. However, to apply this measurement
approach with confidence to unknown flows, the technique must be validated.
Consequently, technique validation using classical grid patternator techniques has been
the focus of the current work. This work has included the design and construction of a
patternator rig and the implementation of a test program designed for the comparison
of patternator data with the deconvolved optical data. The flow field used for the
validation is the plume of an axisymmetric swirl coaxial LOX injector being considered for
use in the Space Transportation System Main Engine. The flow facility is an improved
version of the test rig which was constructed in 1992 for the initial technique
development. This report includes a brief description of the optical measurement
technique and the patternator rig and a presentation of the data comparisons.
Optical Technique and Patternator Rig
Several optical techniques for quantitatively investigating specific liquid spray
plumes have been developed [2,3,4]. A phase/Doppler interferometer has been used to
determine drop size and velocity components in a plume similar to the plume
investigated herein [5]. However, these previously-developed techniques are primarily
applicable to spray plumes in which the droplet distribution is sparse and the signal from
one drop is not substantially interfered with by the presence of the remainder of the
spray. The optical measurement approach employed herein involves the uniform
illumination of the axisymmetric plume and a subsequent inversion of the measured
fluorescence from R6-G dye seeded into the water used for the LOX simulant. By
illuminating the plume uniformly, scattering, which inherently limits the quantitative
applicability of planar imaging and interferometric schemes, is made more uniform and
nonuniform contributions associated with scattering are minimized. Uniform
illumination, however, does not provide a direct measure of the mass distribution in a
particular plane. The radial distribution of the signal collected using uniform
illumination may be determined using any of a variety of deconvolution techniques
provided the distribution is known to be axisymmetric. For this work, the Abel inversion
procedure was chosen. For the problem at hand, it may be shown that the Abel integral
equation to be solved can be reduced to
e(r) = -(1/π) ∫_r^R [ I_s′(y) / √(y² - r²) ] dy        (1)
where e(r) is the radial signal distribution, R is the maximum plume radius at the
deconvolution height, y is the distance from the center of the plume measured on the
raw data, and I_s′(y) is the derivative of the measured signal at location y [6]. Deconvolution
techniques such as this are inherently dependent on the derivative of the measured
distribution. This makes the determination of the distribution sensitive to noise in the
data. To minimize this effect, an even-ordered polynomial curve fit is applied to the
data. Equation 1 is then applied numerically to the curve fit using FORTRAN. The
data for the deconvolutions are collected using an RCA video camera and an EPIX frame
grabber card installed in an IBM compatible 386 personal computer.
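A numerical sketch of equation (1) in Python follows; the polynomial degree, grid, and offset used to sidestep the integrable singularity at y = r are illustrative choices, not those of the FORTRAN implementation.

    import numpy as np

    def abel_invert(y, signal, r, R):
        # Fit an even-ordered polynomial to the measured line-of-sight signal
        # to suppress noise before differentiating.
        fit = np.poly1d(np.polyfit(y, signal, deg=6))
        dIdy = fit.deriv()
        yy = np.linspace(r + 1e-6, R, 2000)       # avoid the singularity at y = r
        integrand = dIdy(yy) / np.sqrt(yy**2 - r**2)
        return -np.trapz(integrand, yy) / np.pi   # e(r) per equation (1)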
The mechanical patternator is composed of the head, which is a linear array of
twenty-three square 1/8 in. brass tubes, and the collector, which is a bank of 1/2 in. glass
tubes connected by a low pressure manifold. The head, shown in the photograph in
Fig. 1, is fitted with a hinged flap which can cover or uncover all of the tubes in the
array simultaneously. The collector, shown in Fig. 2, is fitted with individual scales for
each tube, and flapper doors on the bottom of each tube allow the patternator to be
quickly reset after each run. The patternator is operated by establishing the flow to be
probed with the head covered, lowering the pressure in the collector manifold (to
insure that droplets falling on the patternator head are captured), uncovering the head
until at least one collector tube is nearly full, re-covering the head and then stopping
the flow.

Figure 1: Patternator head.
Figure 2: Patternator collector.
Measurements
An image of the fluorescence signal in the
swirl spray with a drive pressure of 50 psi resulting
from uniform illumination is presented in Fig. 3.
Note that, although the mass density is known to be
nearly zero at the plume center, a substantial signal
is present near the center of the image. This signal
comes from the near and far edges of the plume.
Figure 3: Fluorescence Signal.

Several radial sections of this fluorescence data were inverted and a representative
inversion compared with patternator data is shown in Fig. 4. The peaks of the data
have been artificially forced to match and
the agreement in profiles is reasonably
good; however, it was believed that a
lack of atomization in the plume and
problems with low signal and
background correction were degrading
the quality of the data. To address this
issue, 1 μs shadowgraphs were taken at
the 50 psi drive pressure and at 300 psi
drive pressure (which is closer to
projected operating conditions). These
shadowgraphs are shown in Figs. 5 and
6 respectively. Clearly, at 50 psi, the
injectant plume has atomized very little
in the near field of the injector;
however at 300 psi, atomization has
progressed much closer to the injector
exit.

Figure 4: Comparison of Data at Z/D = 20 (abscissa: radial position, R/D).
Figure 5: 1 μs shadowgraph at 50 psi.
Figure 6: 1 μs shadowgraph at 300 psi.

For this reason, additional fluorescence data and patternator data
were obtained at the higher operating
pressure. In addition to increasing the
atomization, some adjustments were
made in the optical arrangement. The
laser power was increased to obtain
better signal to noise ratios and the
background was substantially reduced.
The comparison between the data at
300 psi is shown in Fig. 7. With the
improved signal levels, no need for
background correction, and the
improved atomization, the deconvolved
signal, which is a measure of the mass
density profile, agrees functionally quite
well with the mass flux distribution
measured using the patternator.
Figure 7: Comparison of data at Z/D = 20 for 300 psi drive pressure (abscissa: radial position, R/D).
Summary and Future Work
Developmental work for a nonintrusive LIF measurement technique for mass
distribution in dense sprays has been conducted. A grid patternator has been designed,
constructed and operated as part of an effort to validate the optical measurement
approach. Good agreement between the profiles of mass flux obtained using the
patternator and the mass density distribution obtained using the optical measurements
was obtained in a high pressure spray.
Planned future work includes additional optical technique development including
the extension of the technique to multiangular imaging for use with non-symmetric flows.
Additional improvements in the technique may include the use of a higher quality
detector and improvements in the deconvolution algorithm. The investigation and
potential development of additional nonintrusive techniques, including x-ray
absorption, nuclear magnetic resonance and neutron beam absorption, is also planned.
Acknowledgements
The substantial contributions to this work by Mr. Richard Eskridge and the
guidance provided by Dr. Charles Schafer are noted and appreciated.
References
1. Hartfield, R. and Eskridge, R., "Experimental Investigation of a Simulated LOX
Injector Flow Field," AIAA Paper 93-2372, AIAA/SAE/ASME/ASEE Twenty-
Ninth Joint Propulsion Conference and Exhibit, June 28-30, 1993, Monterey, CA.
2. Melton, L. A., and Verdieck, J. F., "Vapor/Liquid Visualization in Fuel Sprays,"
Combustion Science and Technology, Vol. 42, 1985, pp. 217-222.
3. Chraplyvy, A. R., "Nonintrusive Measurements of Vapor Concentrations Inside
Sprays," Applied Optics, Vol. 20, No. 15, August 1, 1981.
4. Ingebo, R. D. and Buchele, D. R., "Small-Droplet Spray Measurements With a
Scattered-light Scanner," NASA Technical Memorandum 100973, prepared for
ASTM Second Symposium on Liquid Particle Size Measurement Techniques,
Atlanta, GA, November 1988.
5. Zaller, M. and Klem, M. D., "Coaxial Injector Spray Characterization Using
Water/Air as Simulants," The 28th JANNAF Combustion Subcommittee Meeting,
Vol. 2, pp. 151-160.
6. Shelby, R. T., "Abel Inversion Error Propagation Analysis," Master of Science
Thesis, The University of Tennessee, June 1976.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA
IMPACT DAMAGE IN FILAMENT
WOUND COMPOSITE BOTTLES
Prepared By:
Academic Rank:
Institution and
Department:
NASA/MSFC:
Office:
Division:
Branch:
Alton L. Highsmith
Assistant Professor
University of Alabama
Department of Aerospace Engineering
Materials and Processes Laboratory
Non-Metallic Materials Division
Polymeric Materials Branch
MSFC Colleague(s): Frank Ledbetter
XVI
Increasingly, composite materials are being used in advanced structural applications
because of the significant weight savings they offer when compared to more traditional
engineering materials. The higher cost of composites must be offset by the increased
performance that results from reduced structural weight if these new materials are to be used
effectively. At present, there is considerable interest in fabricating solid rocket motor cases
out of composite materials, and capitalizing on the reduced structural weight to increase
rocket performance. However, one of the difficulties that arises when composite materials are
used is that composites can develop significant amounts of internal damage during low
velocity impacts. Such low velocity impacts may be encountered in routine handling of a
structural component like a rocket motor case. The ability to assess the reduction in structural
integrity of composite motor cases that experience accidental impacts is essential if composite
rocket motor cases are to be certified for manned flight. While experimental studies of the
post-impact performance of filament wound composite motor cases have been performed
(2,3), scaling impact data from small specimens to full scale structures has proven difficult. If
such a scaling methodology is to be achieved, an increased understanding of the damage
processes which influence residual strength is required.
The study described herein was part of an ongoing investigation of damage development
and reduction of tensile strength in filament wound composites subjected to low velocity
impacts. The present study, which focused on documenting the damage that develops in
filament wound composites as a result of such impacts, included two distinct tasks. The first
task was to experimentally assess impact damage in small, filament wound pressure bottles
using x-ray radiography. The second task was to study the feasibility of using digital image
processing techniques to assist in determining the 3-D distribution of damage from stereo
x-ray pairs.
For the first task, the experimental determination of impact damage in filament wound
bottles, 5.75 in. diameter bottles were used. The bottles were wound with a pattern
XOOXOO, where X represents a layer of helical windings (in this case, a layer with strands
oriented at ±11.5° to the cylinder axis) and O represents a single layer with strands oriented in
the hoop direction. Note that a helical layer has twice the thickness of a hoop layer, since a
helical layer represents strands oriented in two directions. Three different material systems
were studied, all of which were reinforced with IM7 carbon fibers. The three different matrix
systems were a standard epoxy (3501-6ATL) and two toughened epoxies (X8553-45, 977-2).
A drop tower-type impact testing machine was used to impact the specimens, which were
placed in a removable cradle which was attached to the bottom of the test frame for impact
testing. Impact energy was controlled by adjusting the height from which the crosshead
assembly was dropped. Based on some preliminary impact tests, three impact energies were
used: low (3.0 in.-lb.), intermediate (5.0 in.-lb.), and high (7.0 in.-lb.). Each bottle used in the
damage documentation study was subjected to three impacts (one at each of the three levels)
at locations evenly spaced around the circumference of the bottle. Dynamic impact data was
collected from the 0.5 in. diameter instrumented impact tup during impact.
After being impacted, the domes were cut off of the bottle and the cylindrical region was
cut into 3 segments, with each segment containing a single impact site. Each segment was
then inspected via dye-penetrant enhanced x-ray radiography (1). The dye penetrant used was
a zinc iodide solution (60 g zinc iodide, 10 ml. water, 10 ml. isopropyl alcohol, 10 ml. Kodak
"Photo-Flo 200"). A small dam encircling the impact site was made using plumbers putty.
This dam was filled with the zinc iodide solution, which was allowed to seep into the specimen
for at least four hours. The dye penetrant filled those damage events (matrix cracks,
delaminations) which it could flow into. The zinc iodide thus rendered these areas more
opaque to x-rays than the surrounding undamaged regions. Three radiographs were taken of
each segment using different angles of incidence of the x-ray beam — one with an angle of
incidence of 82.5°, one with an angle of incidence of 90°, and one with an angle of incidence
of 97.5°. The same cradle used for the impact tests was used to hold the x-ray film and
segment during radiography, so that the x-ray film was wrapped around the curved segment.
The 90°, or normal incidence, x-ray provided a planform view of damage in the specimen. The
other two x-rays formed a stereo pair and, when viewed using a stereo viewer, provided a
three dimensional view of damage in the specimen (1). Using such a stereo imaging process,
it was possible to resolve the location of damage through the thickness of the specimen.
A normal incidence x-ray radiograph taken from a specimen with the standard epoxy
matrix subjected to a high energy impact is shown in Fig. 1. Note that the horizontal direction
in the radiograph corresponds to the hoop direction. Also, in an undamaged specimen, the
radiograph should have a darker tone at the left and right edges because of the curvature of
the cylindrical segment. The sharp lines that appear in the radiograph correspond to matrix
ply cracks that were decorated with dye penetrant. Such features are evident in all three of
Figure 1. X-ray radiograph of Specimen C 067-068, high energy impact.
the filament winding directions. The oval region that is centered on the actual impact site
corresponds to the delaminated area of the specimen. A stereoscopic inspection of the
damage reveals that delaminations occur at every interface, and that the overall oval geometry
results from the "superposition" of the distinct delaminations.
The delamination seen in Fig. 1 is quite extensive, covering almost the full height of the
cylindrical portion of the pressure bottle. This is typical of the specimens with the standard
epoxy matrix. Similar damage states are seen in the specimens with toughened epoxy
matrices, but the size of the damaged region is smaller in the toughened systems than in the
standard epoxy system. In addition, lower impact energies generally (but not always) yield
smaller delaminated areas.
Figure 1 also shows two heavily damaged (very dark) areas located away from the central
impact site. A close stereoscopic inspection of these regions located to the left and right of
the impact site reveals that there is fiber fracture at these locations. The fiber fracture
developed in the helical layers, especially in the innermost helical layers. The location of this
fiber fracture was apparently governed by the deflected shape assumed by the pressure bottle
during impact. While this type of fiber fracture was most common, a second type of fiber
fracture, as represented by the radiograph in Fig. 2, was also observed. This second fiber
fracture mode has fiber fracture in the exterior hoop layers emanating from the impact site.
The delaminated area is relatively small, even for a toughened epoxy, and closely follows the
line of fiber fracture. At present, the factors influencing which fiber fracture mode will
dominate are not well understood. It is believed that preexisting flaws can promote hoop
direction fiber fracture.
Figure 2. X-ray radiograph of Specimen C 113-114, medium energy impact.
The second task undertaken in the present study was to assess the feasibility of
detennining the 3-D distribution of damage using digital image processing of stereo
radiographs. In this preliminary effort, attention was focused on extracting damage
information from a single radiographic image, and representing that information in digital
form. Reconstruction of the 3-D damage state would ultimately be accomplished by
reconciling such digital information from two or more views of the composite.
To this point, efforts have focused on extracting ply crack information from radiographs.
First, the radiograph is digitized using a scanner, and stored using the Tagged Image File
Format, i.e., the digital image is stored as a TIFF file. An 8 bit digitization was used,
resulting in a 256 shade gray scale. A variety of image processing routines were written in the
Turbo C++ programming language for "enhancing" such digital images and for extracting
features from the image. In this preliminary study, the best results were obtained by first
sharpening the digitized image using an unsharp filter [4]. Then, a constant gray value (about
85% of the image average was found useful) was subtracted from the image. This eliminated
extraneous features in the largely uniform gray area surrounding the damaged zone. Finally, a
line detection routine was developed for extracting lines of a prescribed orientation from the
image. Using this line extraction routine, it was possible to isolate hoop direction, +θ
direction, or -θ direction ply cracks. The extracted lines correlated quite well with features in
the original image.
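The processing chain described above was written in Turbo C++; a compact Python approximation is sketched here, with the unsharp weighting, kernel size, and the line detector (a grey-scale opening along a short oriented segment) chosen for illustration. Only the 85% background subtraction comes from the text.

    import numpy as np
    from scipy import ndimage

    def extract_ply_cracks(image, theta_deg):
        img = image.astype(float)
        blurred = ndimage.gaussian_filter(img, sigma=2.0)
        sharp = img + 1.5 * (img - blurred)                 # unsharp filter
        sharp = np.clip(sharp - 0.85 * sharp.mean(), 0, None)
        # Build a short line-shaped footprint at the prescribed orientation;
        # an opening keeps only features that contain such a line segment.
        t, L = np.deg2rad(theta_deg), 9
        k = np.arange(-(L // 2), L // 2 + 1)
        rows = np.round(k * np.sin(t)).astype(int)
        cols = np.round(k * np.cos(t)).astype(int)
        se = np.zeros((L + 2, L + 2), bool)
        se[rows - rows.min(), cols - cols.min()] = True
        return ndimage.grey_opening(sharp, footprint=se)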
In summary, the experimental program has shown that toughened epoxy systems do
reduce the amount of matrix damage, especially delamination, that develops during impact.
Fiber fracture has been found to follow one of two modes — one mode has fiber fracture in the
interior helical layer at locations dictated by the deflected shape of the pressure bottles, and
one mode has fiber fracture in the exterior hoop layers emanating from the impact site. In
addition, a preliminary study has indicated that digital image processing techniques show
promise for extracting the 3-D damage distribution from stereo radiographs.
REFERENCES
1. Jamison, R.D., "Advanced Fatigue Damage Development in Graphite Epoxy Laminates,"
Ph.D. dissertation, Virginia Polytechnic Institute and State University, Aug. 1982.
2. Madsen, C.B., Morgan, M.E., and Nusimer, R.J., "Scaling Impact Response and Damage
in Composites. Damage Assessment for Composites - Phase I Final Report,"
AL-TR-90-037, Hercules Aerospace Co. for Astronautics Laboratory, AFSC, Edwards
AFB, CA, August 1990.
3. Morgan, M.E., Madsen, C.B., and Watson, J.O., "Damage Screening Methodology for
Design of Composite Rocket Motor Cases," JANNAF Propulsion Meeting, Indianapolis,
Feb. 1992.
4. Pratt, William K., Digital Image Processing, 2nd Ed., John Wiley and Sons, New York,
1991.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
OCTAVE: A MARSYAS POST-PROCESSOR FOR
COMPUTER-AIDED CONTROL SYSTEM DESIGN
Prepared by:
Academic Rank:
Institution and
Department:
MSFC Colleague(s):
NASA/MSFC:
Office:
Division:
Branch:
A. Scottedward Hodel, Ph. D.
Assistant Professor
Department of Electrical Engineering
Auburn University
D. P. Validly
Structures and Dynamics Laboratory
Control System Division
Mechanical Systems Control Branch
XVII
1 Introduction
MARSYAS is a computer-aided control system analysis package for the simulation and anal-
ysis of dynamic systems. In the summer of 1991 MARSYAS was updated to allow for the
analysis of sampled-data systems in terms of frequency response, stability, etc. This update
was continued during the summer of 1992 in order to extend further MARSYAS commands
to the study of sampled-data systems. Further work was done to examine the computation
of open-loop transfer functions, root loci, and w-plane frequency response plots. At the con-
clusion of the summer 1992 work it was proposed that control-system design capability be
incorporated into the MARSYAS package. It was decided at that time to develop a separate
"stand-alone" computer-aided control system design (CACSD) package. This report is a
brief description of such a package.
A popular CACSD design environment is provided with commercial versions of Matlab,
e.g., Simulink (tm) by The MathWorks. The Matlab design environment comprises (1) a
compiled main program with a command parser and necessary intrinsic functions for matrix
data manipulation, and (2) command scripts, called m-files, which may be used in a fashion
similar to Unix shell scripts in order to create an increased function set for the user. The
MathWorks has developed several "toolboxes," or sets of such m-files, for specific purposes
such as signal processing, state-space control system design, robust control, etc. Since m-files
are text-file scripts, their source code is available for viewing by the user. However, source
code for any commercial Matlab is proprietary to the vendor and is not available.
In 1992, John Eaton, a post-doc at the University of Texas, began development of a
freeware Matlab look-alike program to be made available under the same licensing terms as
those of the Free Software Foundation. That is, the program cannot be sold in whole or in
part, and its source code must be freely made available. The numerical routines in Octave
are taken from accepted FORTRAN routines in packages such as EISPACK, LINPACK, and
LAPACK, and the user interface and command execution routines are written in C++
and C. Under a follow-on grant from MSFC, work was begun at Auburn University on
preliminary versions of Octave to incorporate new functions into Octave that would aid in
the development of a control systems toolbox for this program. This work was continued
during the Summer Faculty Fellowship Program during summer of 1993; all code developed
was submitted and incorporated into the official Octave distribution. The code development
is still ongoing; however, the design environment provided by the current version (0.74.5) is
sufficiently functional that it can be used for a wide variety of applications. Version 1.0 of
Octave is expected to be released shortly (prior to the end of 1993).
The remainder of this report is organized as follows. Section 2 presents a description
of the planned MARSYAS design environment. Section 3 then presents a design example
using current MARSYAS/Octave functions. Finally, Section 4 discusses
planned enhancements to the MARSYAS/Octave system.
[Figure 1 (block diagram): the user's problem description is entered through a text editor
as a MARSYAS model description; MARSYAS, run as a batch process, produces simulation
output that is read into Octave; Octave, drawing on m-file toolboxes and user commands,
produces a MARSYAS controller description that is fed back to MARSYAS. The diagram
distinguishes automated activity from user commands.]
Figure 1: Desired MARSYAS design environment
2 Planned MARSYAS Design Environment
The desired MARSYAS design environment is shown in Figure 1. Those portions that are
under development are shown in dashed lines; those that are planned are shown in dotted
lines. The user, having determined the problem description, writes a MARSYAS model
description of the corresponding dynamic system. MARSYAS is run as a batch process;
while not currently implemented, it is planned to modify MARSYAS in order to allow the
MARSYAS analysis phase to make use of Octave. The results of the analysis and simulation
phase of MARSYAS are read into OCTAVE via m-file marsyas_in.m, which currently loads
the system linearization (A,B,C,D) for either continuous or discrete-time systems. From
within Octave, the user interactively uses m-file scripts in order to design a controller that
meets the desired design criteria, and then uses the m-file marsyas_out to store a MARSYAS
model description of the designed controller. This controller may then be verified against
the nonlinear MARSYAS model description with a subsequent MARSYAS run, and further
controller modifications may be made interactively from within Octave.
[Figure 2: schematic of a ball suspended magnetically below an electromagnet, with the
reference height y = 1 indicated.]
Figure 2: Magnetically suspended ball
3 Design example
The Octave design toolbox currently contains only one function: linear quadratic Gaus-
sian (LQG) controller design. As an example of the MARSYAS/Octave design environ-
ment, consider the magnetically suspended ball system shown in Figure 2. The corresponding
MARSYAS description module is
CONSTANT: G = 9.8$
MODEL: BALL\DYNAMICS, EQUATION$
INPUTS: IM $
OUTPUTS: X,XDOT$
EQUATION: X" = G - (IM**2)/(X**2)$
: XDOT = X' $
END$
A MARSYAS simulation was run to obtain a linearization of the above non-linear system,
and the resulting data were employed by the following Octave m-file:
[a,b,c,d] = marsyas_in()
[n,m] = size(b);
[p,m] = size(d);
disp('open loop poles:')
poles = eig(a)'
% state feedback design
[k,x,e] = lqr(a,b,eye(n),10*eye(m));
disp('closed-loop state-feedback poles are')
poles = eig(a-b*k);
% state estimator design
[l,x2,e] = lqe(a,eye(n),c,eye(n),0.01*eye(p))
bc = l';
cc = k;
dc = zeros(m,p);
ac = a - l'*c - b*cc;
marsyas_out(ac,bc,cc,dc)
The commands marsyas_in and marsyas_out are used to interact with the MARSYAS
program, and the Octave m-files lqr and lqe are Octave scripts that solve the appropriate al-
gebraic Riccati equations in order to obtain the desired controller. The MARSYAS controller
description thus obtained is
MODEL: OCTAVE, EQUATION$
INPUTS: U1, U2$
* 1: X $
* 2: XDOT $
OUTPUTS: Y1$
* 1: I\MAG $
EQUATION: X1' = -2.700417E+01 * X1 - 2.653148E+01 * X2
+ 6.831738E+00 * U1 + 1.792014E+01 * U2 $
: X2' = -5.831738E+00 * X1 - 8.184793E+00 * X2
+ 8.184793E+00 * U1 + 6.831738E+00 * U2 $
: Y1 = -1.450893E+00 * X1 - 6.276922E+00 * X2 $
END$
and is incorporated into the original simulation by adding a main model block:
MODEL: MAIN, EQUATION$
INPUTS: I\MAG $
OUTPUTS: X,XDOT$
EQUATION: IM = I\MAG - ID $
: XERR = X - 1 $
SUBMODEL: BALL\DYNAMICS; INPUTS: IM; OUTPUTS: X,XDOT $
SUBMODEL: OCTAVE; INPUTS: XERR, XDOT; OUTPUTS: ID $
END$
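As a check of the design, the separation principle can be verified from within Octave. The
fragment below is a minimal sketch, assuming the sign convention u = -cc*xc for the
controller output and that the session variables above are still in scope; the closed-loop
poles should be those of (a - b*k) together with those of the estimator (a - l'*c):

% minimal closed-loop check (assumed sign convention: u = -cc*xc)
acl = [ a,    -b*cc;
        bc*c,  ac   ];
disp('closed-loop poles:')
eig(acl)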
4 Planned Work
Planned enhancements to the MARSYAS/Octave environment include
1. advanced design options,
2. improved user documentation (on-line and off-line), and
3. absorption of the MARSYAS analysis phase into Octave.
Ultimately, it is expected that Octave will prove itself a capable production code for use in
control system design at MSFC.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
ON THE ANALYSIS OF CLEAR AIR RADAR ECHOES
SEVERELY CONTAMINATED BY CLUTTER
Prepared By:
Academic Rank:
Institution and
Department:
MSFC Colleague:
NASA/MSFC:
Laboratory:
Division:
Branch:
H. Mario Ierkic V., Ph.D.
Associate Professor
University of Puerto Rico-Mayaguez
Electrical and Computer Engineering Department
Steve Smith, Ph.D.
Space Science
Earth Science & Applications
Earth System Processes and Modeling Branch
XVIII
Introduction
Many radar systems work in environments where clutter returns overwhelm the at-
mospheric echoes, sometimes by as much as 50 dB.
At the Arecibo Observatory (AO), for example, clutter levels are conspicuously high.
This situation greatly reduces its usefulness for lower atmospheric studies. It is not
possible, in general, to observe height profiles of the vertical component of the wind velocity.
This parameter is important for understanding planetary-scale circulation, mountain and lee
waves, turbulence, troposphere-stratosphere interactions, and the vertical transport of
horizontal momentum. Moreover, to show another aspect of the problem, it has been
suggested (Gonzalez and Ierkic, 1993) that clutter returns may sometimes be mistaken for
atmospheric echoes.
There is growing interest in finding practical ways to counteract the deleterious effects of
clutter, noise, interference, and non-ideal radar equipment. Techniques that have been
proposed include Adaptive Radar Signal Processors (Farina and Studer, 1987) and Least
Squares Fitting Methods (Yamamoto et al., 1988). Of course, these techniques are not
mutually exclusive.
Few workers have recognized the importance of understanding the origin and propa-
gation characteristics of the various contaminating signals, in particular of the clutter. This
understanding can be used to formulate the rules of a Knowledge-Based System that di-
rects the Data Analysis Process (Gowrishankar and Bourbakis, 1992; Sigillito and Hutton,
1990). It is convenient that the resulting Expert System operate in the frequency domain
and that the data analysis consist of the parameterization of spectra using non-linear
fitting methods (Numerical Recipes, 1992). The analysis should yield echo intensi-
ties, average Doppler velocities, and spectral widths. Visualization methods are required
to guide the fitting process with user intervention.
Clutter propagation characteristics
Improved understanding of the various detected signals will help devise optimized
radar processors capable of compensating for propagation effects.
Measurements of fading and phase variation of microwave and optical signals have
been carried out for over three decades now. Janes et al. (1970), for example,
compared simultaneous line-of-sight signals at 9.6 and 34.5 GHz propagated over paths
close to 65 km long in Hawaii. They found that the power spectra of fading were similar
in shape at the two radio frequencies, but with higher spectral density content at 34.5 GHz
than at 9.6 GHz, particularly in the range from 0.1 to 5 Hz. On the other hand, the power
spectra of the phase variation (expressed in terms of parts-per-million change in radio path
length) are identical from 0.01 to 5 Hz and follow a power law f^(-n) with n ≈ 2.6.
It is convenient to write the detected signal c in terms of its amplitude and phase,
c = |c| exp(iψ) .   (1)
These results can be extrapolated to describe fading and phase variation characteristics
at frequencies of interest to us. For example, at 50 and 430 MHz |c| will vary appreciably
only for time scales longer than one minute. Phase changes, on the other hand, have the
same functional form at all radio frequencies and they are linearly proportional to the
probing frequency. Phase excursions will be 430/50 or 8.6 times bigger at 430 than at 50
MHz. Due to the exponential dependence in (1), reminiscent of the FM communication
mode, the bandwidth ratio at the two probing frequencies is bigger than 8.6.
Another source of clutter alteration that needs to be considered is that produced
by foliage disturbed by surface winds.
At AO these effects can be studied simultaneously at the two frequencies mentioned
previously. Moreover at 430 MHz it may be possible to detect two circular polarizations
and use the one devoid of atmospheric echoes to neutralize the clutter. Of course, this
procedure only works if both clutter polarizations are independent and proportional.
For completeness, albeit not directly related to clutter, let us mention that it is worth
looking into the evaluation of the relative contribution of propagation vis-a-vis turbulence
in the Doppler widening of the signals in the GHz range.
Knowledge-based spectral analysis system
The knowledge-based system controls data processing. It is driven by data and is
responsive to a World Model. The model is defined in terms of hypotheses and rules based
on the knowledge of specialists.
The expert system transforms the data as required using appropriate algorithms and
verifies that the results comply with the rules of the world model. It is also capable of
making inferences aimed toward conflict resolution.
Gradually, the expert system can move up the learning curve and consequently demand
less user assistance. Alternatively, it can grow to take on more complicated scattering
environments, for example precipitation, lightning, foliage, ocean clutter, etc.
It is assumed here that the radar spectra can be described by Gaussian functions.
Some of the rules that can guide the analysis process are mentioned next. Note, however,
that they are not yet complete and that they will vary from station to station.
The following rules should help verify data integrity: a) adequate System Temperature
values, b) S/N levels indicating the system is in fact operating, and c) real-time quality
flags to help document contingencies and to complement the observer's log book.
Some further rules to assist in data processing are: a) reasonable upper bounds for
the spectral widths of clutter, b) stipulation of plausible wind shears, c) acceptable time
variability in the various parameters, and d) checks for frequency aliasing.
It is worth saying that sometimes fairly simple hints (e.g., that the Doppler shift is
positive) can be valuable in the data reduction.
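As an illustration of how such rules and hints might be encoded, the Octave fragment
below sketches two checks; the variable names and the threshold are assumptions made up
for this example, not part of any existing analysis package:

% hypothetical rule checks on fitted spectral parameters
w_clutter = 0.05;  f_doppler = 0.8;   % example fitted values (Hz)
w_clutter_max = 0.2;                  % assumed upper bound on clutter width (Hz)
if (w_clutter > w_clutter_max)
  disp('rule violated: clutter spectral width implausibly large')
end
if (f_doppler <= 0)
  disp('hint violated: Doppler shift expected to be positive')
end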
Signal Analysis System
To maximize the success of the signal processing algorithms the radar hardware has to
work according to specifications and the experiments have to be well designed. At Arecibo,
for example, it is not wise to use coded pulses to monitor the troposphere, or to carry
out measurements while moving the antenna beam.
A brief description of the processing sequence is now in order.
The time series that results from coherently adding the returns needs to be examined
first in order to subtract the clutter. Early subtraction of clutter has a double purpose:
a) it reduces the distortion of the spectra of the atmospheric returns and of the noise,
b) it presents the fitting algorithms with spectral data of comparable range of values. A
sensitive issue here is the width of the notch filter to be used.
Proceed to obtain the spectra with the FFT algorithm, possibly weighting the data;
in the latter case, overlap data segments to restore their information content. Optionally,
run a median filter across the spectra to account for outliers. Estimate and subtract the
noise. Note that noise can be height dependent. Correct for coherent integrations (Farley,
1983). Display 2-D (frequency vs. range) color or gray-scale spectral profiles. To help focus
on the true velocity profile, this image can be examined with pattern recognition techniques
to reject suspect features. At user request, generate plots of spectral profiles. These plots
should be flexible enough to allow diverse representations: linear, log, normalized relative
to a peak, or normalized relative to the noise. Add a baseline value of a couple of dB to the
2-D periodograms to compensate for echo strength loss with range.
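The first steps of this sequence can be sketched in a few lines of Octave; the synthetic
series, segment length, window, overlap, and the crude noise estimator below are
illustrative assumptions only:

% sketch: windowed, overlapped periodogram with crude noise subtraction
x = randn(4096,1) + 1i*randn(4096,1);     % synthetic complex time series
N = 256;                                  % segment length (assumed)
w = 0.5 - 0.5*cos(2*pi*(0:N-1)'/(N-1));   % Hanning window
S = zeros(N,1);  nseg = 0;
for k = 1:N/2:(length(x) - N + 1)         % 50% overlap
  S = S + abs(fft(x(k:k+N-1).*w)).^2;
  nseg = nseg + 1;
end
S = S/nseg;                               % averaged periodogram
Ssort = sort(S);
noise = mean(Ssort(1:round(N/4)));        % noise floor from quietest bins
S = max(S - noise, 0);                    % subtract noise, clip at zero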
Interactively provide first guesses using the displays just described and proceed with
the parameterization of the spectra. Fitting should be done locally around the frequency
bins with spectral densities larger than the noise. Initially the fitting scheme should have
at most 7 parameters: dc (1), plus Gaussians for the clutter and atmospheric echoes (6).
Overlay the results of the parameterization on the data plots. Assess the quality of the
results using spatial (ranges above and below) and temporal (periods before and after)
consensus criteria (Wilfong et al., 1992).
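Written out, the seven-parameter model is a dc level plus one Gaussian each for the
clutter and atmospheric echoes; a sketch of the corresponding Octave model function
(parameter ordering assumed) is:

% model spectrum: p = [dc, Ac, fc, wc, Aa, fa, wa]
% (dc level; clutter amplitude/center/width; atmospheric ditto)
function S = spectrum_model (p, f)
  S = p(1) + p(2)*exp(-(f - p(3)).^2/(2*p(4)^2)) ...
           + p(5)*exp(-(f - p(6)).^2/(2*p(7)^2));
endfunction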
Accept or reject the results of the analysis. In the former case, save the parameters and
the variables used in the analysis; otherwise, restart the analysis procedure.
Gradually the expert system should control the analysis more exhaustively.
Conclusions
This work provides a framework to develop a robust, data-driven expert system to
retrieve useful results from contaminated radar data. It summarizes some of the com-
mon wisdom dispersed in the literature (e.g., Wilfong et al., 1992) and intends to engage
colleagues to contribute fresh approaches. It also constitutes the basis for a proposal for
telescope time at the AO to study the effects of clutter and the means to ameliorate them.
In order to devise a knowledge-based system it is important to have adequate under-
standing of the various signals present at the receiving end. Similarly important is the
formulation of rules whose compliance will guide the data reduction algorithms. Note that
here there are three modules intervening in the analysis: data, inference system, and the
algorithms.
It is worth stating that the verification of the rules of the expert system is a non-
trivial procedure and requires careful consideration. It is in general a difficult step to im-
plement. It may use techniques borrowed from Pattern Recognition and rely on Interactive
Visualization to permit effective user intervention.
Acknowledgement
It is a pleasure to acknowledge useful discussions with Allan Johnson formerly at
Clemson and with R. Creasey from USRA. This work was carried out under the auspices
of the SFFP of NASA/ASEE.
References
[1] Farina A., F. A. Studer, (1987) "Adaptive implementation of the optimum radar
signal processor," IEE Radar, Sonar, Navigation and Avionics Series, Peter Peregrinus
Ltd.
[2] Farley D. T., (1983) "Coherent integration," Handbook for MAP, 507.
[3] Gonzalez D. A. J., H. M. Ierkic V. (1993) "Tropospheric refraction and hard backscat-
tering in 430 MHz observations of the middle atmosphere at Arecibo," Poster presen-
tation at the CEDAR workshop in Boulder, Colorado.
[4] Gowrishankar T. R., N. G. Bourbakis (1992) "Specifications for the development of a
knowledge based image understanding system," Chapter 18 of: Artificial Intelligence
Methods and Applications, World Scientific Publishing Co., 571-589.
[5] Janes H. B., M. C. Thompson, D. Smith, A. W. Kirkpatrick (1970) "Comparison of
simultaneous line of sight signals at 9.6 and 34.5 GHz," IEEE Trans. Antennas and
Propagation, 18, 447-451.
[6] Press W. H., S. A. Teukolsky, W. T. Vetterling, B. P. Flannery (1992) "Numerical
Recipes," Cambridge University Press, 994 pp.
[7] Sigillito V. G., L. V. Hutton (1990) "Case study II: radar signal processing," Chapter
11 of: Neural Networks PC Tools, Academic Press, 235-250.
[8] Wilfong T. L., R. L. Creasey, S. A. Smith, (1992) "High temporal resolution velocity
estimates from the NASA 50 MHz wind profiler," American Institute of Aeronautics
and Astronautics, AIAA 92-0719.
[9] Yamamoto M., T. Sato, P. T. May, T. Tsuda, S. Fukao, S. Kato, (1988) "Estimation
error of spectral parameters of mesosphere stratosphere troposphere radars obtained
by least squares fitting method and its lower bound," Radio Sci., 23, 1013-1021.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA
A COMPILATION OF TECHNOLOGY SPINOFFS FROM THE U.S. SPACE
SHUTTLE PROGRAM
Prepared By:
Academic Rank:
Institution and
Department:
David Jeff Jackson, Ph.D.
Assistant Professor
The University of Alabama,
Department of Electrical Engineering
MSFC Colleagues:
Alex McCool
Jim Ellis
NASA/MSFC Office: Space Shuttle Projects Office
XIX
Introduction
As the successful transfer of NASA-developed technology is a stated mission of
NASA, the documentation of such transfer is vital in support of the program. The purpose
of this report is to document technology transfer, i.e., "spinoffs," from the U.S. Space Shuttle
Program to the commercial sector. These spinoffs have their origin in the many scientific
and engineering fields associated with the shuttle program and, as such, span many diverse
commercial applications. These applications include, but are not limited to, consumer
products, medicine, industrial productivity, manufacturing technology, public safety,
resources management, materials processing, transportation, energy, computer technology,
construction, and environmental applications.
To aid in the generation of this technology spinoff list, significant effort was made
to establish numerous and complementary sources of information. The primary sources of
information used in compiling this list include: the NASA "Spinoff" publication, NASA
Tech Briefs, the Marshall Space Flight Center (MSFC) Technology Utilization (TU) Office,
the NASA Center for Aerospace Information (CASI), the NASA COSMIC Software Center,
and MSFC laboratory and contractor personnel. A complete listing of resources may be
found in the bibliography of this report. Additionally, effort was made to ensure that the
obtained information was placed in electronic database form so that future access, and
subsequent updating, would be feasible with minimal effort.
Technology Transfer Information Resources
As stated, the spinoff compilations were obtained from several sources. A listing of
these sources including the number of items from each is given in Table 1.
Information Source                            Items
MSFC TU Office                                   15
NASA "Spinoff"                                   74
NASA Tech Briefs                                235
COSMIC Software Center                          146
Laboratory and Contractor Personnel               6
Table 1. Information Sources for Compilation of Technology Spinoffs
Although these resources are broad in their coverage of technology spinoffs, the author
believes that this listing represents only a small fragment of the actual successful technology
transfers that have taken place throughout the life of the shuttle program. The true number
of spinoffs may be impossible to document due to initially insufficient recording during early
years of the program and the natural tendency of the technology transfer process to dilute
itself.
Each information resource contributes to the overall documentation of the technology
transfers; however, the information obtained from the MSFC TU office and the NASA
"Spinoff" publication represents those spinoffs most likely to enthuse the typical
citizen about the wealth of products and services whose origins lie in the shuttle program.
The other information resources represent potential spinoffs, emerging spinoffs, or
spinoffs of a sufficiently technical nature that the reader may not recognize their origin.
Data specific to each information source is described below.
The spinoff items documented from the MSFC TU office are diverse and among the
best documented in the form of the office's Technology Transfer Reports and the TU Office
Annual Report. However, in the interest of spinoff traceability, several improvements may
be made to the form of these reports. Specifically, the inclusion of specific laboratories and
contact points within the laboratories and contractor personnel will make accountability and
traceability of the technology transfer process more complete. Additionally, contract
numbers and periods of performance, where applicable, will ensure proper credit is given to
original technology developers.
The NASA "Spinoff publication represents the broadest documentation of
technology spinoffs available. However, at this point in the transfer process, many good
examples remain undocumented. It is therefore not sufficient to rely only upon the "Spinoff
publication to document technology transfers. Tables 2, 3, and 4 give additional details
concerning spinoffs documented.
Focus Areas                         Number of Items
Industrial Productivity                          23
Public Safety                                    13
Health & Medicine                                 6
Computer Technology                               2
Energy                                            4
Transportation                                    2
Consumer/Home/Recreation                         15
Technology Demonstration                          1
Manufacturing Technology                         13
Environmental                                     3
Resources Management                              2
Construction                                      1
Table 2. Distribution of Spinoff Areas
Clearly the information contained in Table 3 indicates that additional effort is necessary in
documenting the technology transfer process. This documentation is critical to the continued
growth and visibility of the technology spinoffs.
Information Source                  Number of Items
UNKNOWN                                          30
Clipping Service                                 21
NASA Field Center                                12
Other                                            11
Table 3. Sources of Spinoff Information
Transfer Mechanism                  Number of Items
NASA Tech Brief                                  13
NASA Contract                                     4
Contractor Diversification                       18
Personnel Transfer                                4
Technology Demonstration                          3
COSMIC                                            4
UNKNOWN                                          15
Table 4. Technology Transfer Mechanisms
The NASA Tech Briefs publication represents the largest number of potential
spinoffs of all the resources documented. More than 200 published items have their origin
in, or were used and modified in, the shuttle program. Additionally, the number of requests
for information, in the form of Technical Support Packages (TSPs), is quite large. For those
items which have a TSP available through CASI, an average of approximately 200 requests
per item have been processed. If only a small percentage of these requests have resulted in a
successful technology transfer, then a large number of potential "success stories" remain
undocumented. Additional research into these requests, through information available from
CASI, is necessary to substantiate this hypothesis.
The COSMIC Software Center has documented a large number of programs whose
origins are in, or are related to, the shuttle program. Additionally, many requests for this
software or its documentation have been processed through the COSMIC Center. Approximately
600 requests for shuttle software and 1500 requests for documentation have been processed to
date. Additional research into these requests, through information available from COSMIC,
is necessary to properly document the potential technology transfers.
Technology transfer information has also been provided through contractor and
laboratory personnel at the Marshall Space Flight Center. Although not always mature, these
cases represent emerging technologies available for technology transfer. Specific
technologies which show promise for successful technology transfer include environmental
applications, new materials testing procedures including nondestructive evaluation, new
welding processes including weld seam tracking and defect minimization procedures, and
others. The research efforts at the Productivity Enhancement Complex at the Marshall Space
Flight Center are representative of these advancements and should be appropriately noted.
Additional resources for documenting technology transfer, which have not been used
but are available, include the NASA patent licensing process, additional electronic databases
(NTB Online, Spacelink, etc.), and the Technology 2000 Conference series. Each of these
resources holds promise for documenting additional technology transfer.
Conclusions and Recommendations
Although this report is viewed, by the author, as a success in initially documenting
examples of technology transfer, a number of improvements may be made to ensure
continued growth and successful documentation of the NASA spinoffs. These include: an
incorporation, expansion, and updating of existing electronic databases for documenting
technology transfer (NASA RECON, CASI databases, NTB Online, COSMIC, Spacelink,
etc.) into a single point of documentation; an updating and standardization of the technology
transfer reporting process across the NASA field center TU offices (the MSFC TU office
could be used effectively as a model for this change); and a procedure adopted to ensure new
technology development is properly documented with the information necessary to
promote new technology transfers and subsequent database documentation.
Bibliography
1. Gurney, Gene, Space Technology Spinoffs, New York: Franklin Watts, Inc., 1979.
2. Directory of Federal Technology Transfer, National Science Foundation, NSF 75-
402, 153-164, June 1975.
3. TABES90, 6th Annual Technical and Business Exhibition and Symposium, May 15-
16, 1990. Von Braun Civic Center, Huntsville, AL.
4. Grissom, Fred, Jr., and Richard Chapman, Mining the Nation's Brain Trust: How to Put
Federally-Funded Research to Work For You, Reading, Massachusetts: 1992.
5. Chapman, Richard, An Exploration of Benefits From NASA "Spinoff", June 1989.
6. Focus on the Future: Advancing Today's Technology, NASA Marshall Space Flight
Center
7. NASA Tech Briefs, (numerous issues)
8. NASA Spinoff, (numerous issues)
9. Technology 2000 Conference Proceedings, 1990
10. Technology 2001 Conference Proceedings, 1991
11. Technology 2002 Conference Proceedings, 1992
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
WELD FRACTURE CRITERIA FOR COMPUTER SIMULATION
Prepared By:
Wartan A. Jemian, Ph. D.
Academic Rank:
Professor
Institution and
Department:
Auburn University,
Materials Engineering
MSFC Colleague:
Arthur C. Nunes, Jr., Ph. D.
NASA/MSFC:
Office: Materials & Processes Laboratory
Division: Metallic Materials & Processes Division
Branch: Metallurgical Research
XX
Introduction
Due to the complexity of welding, not all of the important factors
are always properly considered and controlled. An automatic system is
required. This report outlines a simulation method and the
important considerations involved. As in many situations where a
defect or failure has occurred, it is frequently necessary to troubleshoot
the system and eventually identify those factors that were neglected.
This is expensive and time consuming. Very frequently the causes are
materials-related and might have been anticipated. Computer
simulation can automatically consider all important variables. The
major goal of this presentation is to identify the proper relationship of
design, processing, and materials variables to welding.
Welding
An arc welded structure is usually described in terms of a fusion
zone, a heat affected zone (HAZ) and the base metal. The properties of
the fusion zone are dominated by details of the solidification process
and the HAZ is a modification of the base metal by prolonged exposure
to elevated temperatures. Welding also produces changes in geometry
that are manifest in visible features.
There are three stages in the simulation. The first stage is to
determine the geometry of the welded structure, which is based on the
welder's input of part thickness, welding power and speed. Residual
stress is also a significant factor in welding and must be computed. The
simulationist, who must also understand welding, sets the parameters
for arc efficiency, partitioning between point and line source and
physical properties of the system. A grid is assigned to the weld in the
first stage and is followed throughout the simulation. Figure 1
illustrates the shape of the weld bead and its regions of microstructure.
The goal of the second stage operations is to assign a flow curve to
each element. This involves the simulation of microstructure and
properties. The width and geometry of the fusion zone and the
determination of temperature gradient in the liquid lead to a
specification of property controlling features. The changes in the HAZ
are computed from thermal exposures.
XX- 1
The final stage is the determination of fracture details. Each step
is based on the concept that the response of each element in the
structure is governed, solely, by its condition and loading. The program
uses object-oriented programming methods; see Booch (1). Thus, the
simulation of weld structure is planned as a number of source code
classes in a library organized into objects that define shape, regional
structure, operational parameters, and microstructural parameters. The
third stage of the simulation uses these objects to reach the final result.
Weld Structure
Weld structure is established by simulating the geometry of the
weld pool. The equilibrium phase diagram and other materials-specific
reference tools provide information about melting point, freezing range,
chemical partitioning and solubility. The principal operating parameter
is the energy input which is the ratio of total input power to welding
speed. Easterling (3) describes the thermal distribution in welding
which is characterized by the flow of heat away from a moving source.
The governing equation is equation 2. Equations (1) and (2) define the
information that must be provided.
q = η E I     [1]
where q is the total input power,
η is the arc efficiency,
E is the arc voltage,
and I is the beam current.
The boundary conditions for integrating equation (2) are based on the
geometry of the base metal.
∂²T/∂X² + ∂²T/∂y² + ∂²T/∂z² = -(c v / λ) ∂T/∂X     [2]
where X, y, and z are a Cartesian coordinate
system fixed to the motion of the arc along X,
T is the absolute temperature,
λ is the thermal conductivity,
c is the volumetric heat capacity,
and v is the welding speed.
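For orientation, equation (2) has the classical Rosenthal quasi-steady solution for a
point source on a thick plate, which is easily evaluated; the property values in the Octave
sketch below are illustrative (roughly aluminum-like) assumptions, not those of any
particular alloy:

% sketch: Rosenthal thick-plate point-source solution of equation (2)
lambda = 167;   rhoc = 2.43e6;   % W/(m K), J/(m^3 K) -- assumed values
a  = lambda/rhoc;                % thermal diffusivity, m^2/s
q  = 0.8*10*150;                 % eq. (1): eta = 0.8, E = 10 V, I = 150 A
v  = 5e-3;                       % welding speed, m/s
T0 = 300;                        % ambient temperature, K
xi = -2e-3;  y = 1e-3;  z = 0;   % point 2 mm behind the source
R  = sqrt(xi^2 + y^2 + z^2);
T  = T0 + q/(2*pi*lambda*R)*exp(-v*(xi + R)/(2*a))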
The size of the reinforcement depends on the base metal preparation,
distortion during welding, width of the fusion zone and amount of filler
added.
Flow Curve
The key parameters of the flow curve are the elastic slope, the
strain-hardening exponent, and the coordinates of the UTS and breaking
point. Each of the latter parameters depends on processing. Cottrell (2)
reviews the governing principles. The results of a tensile test can be
presented as an engineering stress-strain curve.
[Figure 1 (plot): engineering stress vs. strain curves for the base metal,
HAZ, and fusion zone.]
Figure 1. Stress-strain curves of typical parts of the welded
structure.
The flow curve is different at each point as shown in Figure 1.
The base metal has the optimum values of strength and ductility since it
has been heat treated to the optimum prior to welding. The alloy in the
fusion zone is completely changed with the development of a dendritic
structure. The HAZ is that part of the unmelted base metal that has
been subjected to elevated temperatures for enough time to allow
changes. Each process is represented by one or more governing
relations which are used to adjust the features of the flow curve.
XX- 3
The stress on an element varies inversely with its section area. The
initial deformation is elastic. As the loading increases, a stress is
reached at which significant plastic flow occurs, represented by the
strain-hardening exponent. At higher levels of deformation, vacancy
production becomes important. This counteracts and limits work
hardening, resulting in the UTS that is a prominent part of engineering
stress-strain curves. Each parameter of the flow curve is considered
separately.
Properties
The mechanical test is simulated in incremental steps of sample
extension as shown by the strain increments in figure 1. Every element
is evaluated at each step. The calculated stress is compared with the
failure stress of each element. Eventually there will be an element that
is the first to reach its failure stress, and this will be marked as the
point of fracture initiation. The sequence and positions of the other
elements that fail will also be recorded to describe the shape of the
failure surface. The overall extent of sample elongation determines
weld ductility.
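A minimal sketch of this incremental test, for a handful of elements each carrying its
own flow curve, is given below; the power-law flow curves and failure stresses are
illustrative assumptions, not simulated weld data:

% sketch: incremental loading; each element has its own flow curve
K     = [500 350 420];    % strength coefficients, MPa (base, FZ, HAZ)
n     = [0.25 0.15 0.20]; % strain-hardening exponents (assumed)
sfail = [480 300 380];    % failure stresses, MPa (assumed)
for strain = 0.001:0.001:0.5        % strain increments
  stress = K.*strain.^n;            % stress in each element
  idx = find(stress >= sfail);
  if (~isempty(idx))
    printf('fracture initiates in element %d at strain %.3f\n', ...
           idx(1), strain);
    break;
  end
end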
Conclusions
The integrated weld simulation system is planned to provide
information about welding with a specified alloy that is equivalent to
actually making a weld in the shop. The proposed system includes all
details of materials properties and behavior that are required in
troubleshooting and are too complex to include in most specifications. The
simulation is planned for speed and accuracy and produces reports with
lists of results, parameters used in the simulation, and approximations
that were invoked. This is more information than is usually available.
References
1. Booch, G., Object Oriented Design with Applications, The
Benjamin/Cummings Publishing Co., (1991).
2. Cottrell, A. H., The Mechanical Properties of Matter, John Wiley &
Sons., Inc., (1964).
3. Easterling, K. E., Introduction to the Physical Metallurgy of
Welding, Butterworths & Co., (1983).
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
MEASURING THE DYNAMICS OF STRUCTURAL CHANGES IN
BIOLOGICAL MACROMOLECULES FROM LIGHT SCATTERING DATA
Prepared By:
Academic Rank:
Institution and
Department:
Adriel D. Johnson, Ph.D.
Assistant Professor
University of Alabama in Huntsville
Department of Biological Sciences
MSFC Colleague:
David A. Noever, Ph.D.
NASA/MSFC:
Office:
Division:
Branch:
Space Sciences Laboratory
Microgravity Sciences and Applications
Biophysics
XXI
Examining techniques to study the dynamics of structural
changes in various molecules has been an ongoing goal of the
space program. Understanding how these phenomena occur in
biological systems is necessary for life to remain functional in
the space environment. A hierarchy of biological organization is
attained when cells join together small organic molecules to form
larger and more complex molecules. Characterizing the architecture
of a particular macromolecule helps determine how that molecule
works in the living cell and is fundamental to the diversity of
life. Understanding this arrangement involves the correlation
of the structure of macromolecules with their functions.
A light scattering photometer was developed for continuous
measurement of the angular spectrum of light scattered by
dynamically changing systems (2). The analysis of light scattered
by biological macromolecules can be used to determine
concentration, size, shape, molecular weight, and structural
changes of cells, such as erythrocytes (2). Some light scattering
photometers can collect and store 120 angular scattering spectra
per minute, with an angular resolution of 0.2 degrees, which can
be displayed with computer graphics (2). The light scattering
photometer functions to produce and detect scattered light,
determine scatter angles, and collect, store, and analyze data.
The summer project involved the theoretical development of
a system which could be used to measure the dynamic changes of
erythrocytes during ground based studies and under conditions
of low-gravity on the KC-135 research plane. Previous ground
laboratory studies and space shuttle studies have shown
differences in the kinetics and morphological aggregation of
erythrocytes from patients with specific pathophysiological
conditions (1). The erythrocyte aggregates formed in space
from these patients showed a rouleaux formation while the same
samples showed severe clumping and sludging on the ground (1).
Erythrocytes from normal individuals showed a rouleaux
formation (3) on the ground while having a random swarm-like
pattern in space (1).
Developing a system using the light scattering photometer
may provide a technique to evaluate the dynamic changes
observed in space from erythrocytes representative of various
pathophysiological conditions and different animal species. A
primary objective would be to determine the relationship of the
functional organization and the spatial arrangement of the
erythrocytes. Procedures for both ground based and space
studies need to be developed for erythrocyte collection,
preparation, and storage; incorporating the erythrocytes from
storage into the light scattering photometer; measuring the
erythrocyte angular changes and computer analyzing the data;
and collecting, preparing, and storing the erythrocytes for
histological evaluation. These developmental procedures will be
employed for both ground based studies and studies in the KC-
135 research plane. The ultimate goal will be to prepare a
system which could evaluate the dynamic changes for any
macromolecule during future space shuttle missions and for the
space station.
References
1. Dintenfass, L., Osman, P., Maguire, B. and Jedrzejczyk, H.
Experiment on aggregation of red cells under microgravity on STS
51-D, Space Research, Vol. 6, No. 5, 1986, 81-84.
2. Morris, S.J., Shultens, H.A., Hellweg, M.A., Striker, G. and
Jovin, T.M. Dynamics of structural changes in biological
particles from rapid light scattering measurements, Applied
Optics, Vol. 18, No. 3, February 1979, 303-311.
3. Tuszynski, J. A., and Kimberly Strong, E. Application of the
Frohlich theory to the modelling of rouleau formation in human
erythrocytes, Journal of Biological Physics, Vol. 17, 1989, 19-40.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA
WELD JOINT CONCEPTS FOR ON-ORBIT REPAIR OF SPACE STATION
FREEDOM FLUID SYSTEM TUBE ASSEMBLIES
Prepared By:
Academic Rank:
Institution and
Department:
MSFC Colleague(s):
NASA/MSFC:
Office:
Division:
Branch:
Steven D. Jolly, Ph.D.
Research Associate
University of Colorado at Boulder,
Department of Aerospace Engineering Sciences
Clyde S. Jones III
Carolyn K. Russell
Materials and Processes Laboratory
Metallic Materials and Processes
Metals Processes
XXII
INTRODUCTION AND BACKGROUND
Because Space Station Freedom (SSF) is an independent satellite, not
depending upon another spacecraft for power, attitude control, or thermal
regulation, it has a variety of tubular, fluid-carrying assemblies on-board. The
systems of interest in this analysis provide breathing air (oxygen and nitrogen),
working fluid (two-phase anhydrous ammonia) for thermal control, and mono-
propellant (hydrazine) for station reboost.
The tube assemblies run both internally and externally with respect to the
habitats. They are found in up to 50 ft. continuous lengths constructed of mostly
AISI 316L stainless steel tubing, but also including some Inconel 625 nickel-iron and
Monel 400 nickel-copper alloy tubing. The outer diameters (OD) of the tubes range
from 0.25 to 1.25 inches, and the wall thicknesses from 0.028 to 0.095 inches. The
system operational pressures range from 377 psi (for the thermal control system) to
3400 psi (for the high pressure oxygen and nitrogen supply lines in the ECLSS).
SSF is designed for a fifteen to thirty year mission. It is likely that the tube
assemblies (TA's) will sustain damage or fail during this lifetime such that they require repair or
replacement. The nature of the damage will be combinations of punctures, chips,
scratches, and creases, and may be cosmetic or may produce actual leaks. The causes of these
hypothetical problems are postulated to be:
1. Faulty or fatigued fluid joints — both QD's and butt-welds;
2. Micro-meteoroid impacts;
3. Collision with another man-made object; and
4. Over-pressure strain or burst (system origin).
While the current NASA baseline may be to temporarily patch the lines by
clamping metal c-sections over the defect, and then perform high pressure injection
of a sealing compound, it is clear that permanent repair of the line(s) is necessary
[Anderson 1991]. This permanent repair could be to replace the entire TA in the
segment, or perhaps the segment itself, both alternatives being extremely expensive
and risky. The former would likely require extensive EVA to release TA clamps an
pose great risk to other engineering subsystems, and the latter would require major
de-servicing of the Station.
DESIGN CONSIDERATIONS
For joining TA's in thin-walled pressure vessel applications the butt-weld is
the preferred method because the resulting tube can be considered to transmit stress
in the same manner as the original TA. The truth is, however, that when a metal is
welded both the weld and the heat affected zone (HAZ) have different material
properties than the base metal. This is true whether the application is tube welding
or plate welding, or any other welding [Davies 1984, Masubuchi 1980, ASM 1985].
XXH-1
' Mantel Span FagMCanw
Designing Weld Joints for On-Orbit Repair Requires
Consideration of AH Systems & Structures Issues
Q Vacuum/Micro-g Welding
♦ process characteristics, weld pool behavior, thermal requirements, weld quality
D Design Strength
♦ dominant stresses, concentrations, post-weld properties, margins of safety
Q Preparation of Tube Assembly
♦ removing: oils, dirt, oxidation, outgassing accretions, contaminants, residual fluid
Q Cutting
♦ burrs, bevels, chips, squareness, accuracies
Q Cleanliness
♦ purge schedules, weld contamination, system contamination, materials interactions
Q Inspection/Verification
« weld in-process, weld post-process, leak tests, system testing
D Special Issues
♦ access, jigs, gap, thermal, lighting, safety, simplicity, reliability, time, sequencing, interruption, /
^— ^~— NASAMSEE Summer Family Falloimhip Program Siuaaos
vibration
Figure 1. Issues for Design of Weld Joints for In-Space Repair
Figure 1 illustrates the drivers for the weld joint design. The conclusions of
these considerations then became the design criteria for the study.
The criteria are:
1. The weld joint design for in-space repair applications must provide much greater
compliance (with respect to cutting the TA and the replacement) than the maximum
allowable gap of the standard butt-weld (0.008 inches), perhaps on the order of 0.5
inches.
2. This compliance must be gained without surrendering weld quality and post-
weld structural performance such that positive margin exists using the standard
factor of safety for SSF.
3. The weld joint needs to be self-aligning and self-latching, as much as possible.
4. The hardware should be designed and fabricated with the astronaut's glove in
mind, i.e., as large as is feasible and easy to handle.
5. The repair procedure and associated hardware design should minimize the
required orbital support equipment.
6. If possible, the weld joint and weld procedure should minimize contact of the
weld pool with the inside diameter of the tube assembly assuming that the fluid
residuals are degrading to the weld process, or that subsequent cleaning of the TA
interior is required to return to service.
DESIGN CONCEPTS
Considering the above design criteria, the most logical, generalized weld joint
design to consider for in-space TA repair applications appears to be like that shown
in Figure 2.
[Figure 2 (briefing chart): The simple union (sleeve, coupling, ...) used for Earth
repair of low-pressure fluid systems seems ideal. Annotations: fillet or seam welds?
maximum length of union? thickened midsection to accommodate a seal recess?]
Figure 2. Family of Concepts Using Either Fillets or Seams (With or Without Seals)
The primary stresses in this concept are a result of internal pressure on a thin-
walled vessel. Commonly called hoop and axial stresses, they can be predicted with the
thin-shell theory of classical mechanics. For values below the elastic limit, Figure 3
shows a simple model for computer evaluation and allows "quick look" design
analysis.
[Figure 3 (briefing chart): Hoop stress in the structural system causes dilation of the
union, tube, and lap, each a unique function of radius and wall thickness. The chart
sketches the fillet- and seam-weld geometries with their theoretical throats and a
mathematical approximation of the weld geometry, and states that: the dilation of a
thin-walled cylinder is δ = (2 − ν)PR²/(2Et); the union can be designed to have the same
expansion at pressure P; any dilation mismatch induces shear across the weld throat; and
a mathematically optimal union wall thickness follows from equating the dilations.]
Figure 3. Stress Analysis Model of Weld-Union Concept
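The "quick look" evaluation indicated in Figure 3 can be reproduced in a few lines of
Octave; the tube size and material constants below are illustrative values within the
ranges quoted earlier, not design data:

% sketch: thin-shell "quick look" stresses and dilation for one tube size
P  = 3400;                % internal pressure, psi (high-pressure gas line)
OD = 0.5;   t = 0.035;    % outer diameter and wall, inches (example)
E  = 28e6;  nu = 0.3;     % elastic constants, ~316L stainless (psi)
R  = (OD - t)/2;          % mean radius, inches
sigma_hoop  = P*R/t       % hoop stress, psi
sigma_axial = P*R/(2*t)   % axial stress, psi
delta = (2 - nu)*P*R^2/(2*E*t)   % radial dilation, inches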
Summary and Conclusions
Overall, it is clear that a large portion of the complexity of on-orbit, permanent
repair of high pressure, thin-walled tubing is not really a function of the joint design
being utilized in the repair.
The fillet or seam welded union such as that introduced in this paper would
appear to provide the best weld joint from an all-around process perspective. The
butt-weld used for terrestrial manufacturing of the SSF hard lines is definitely
superior from a structural perspective compared to a union with T_u < T_u,optimal, but it
is a difficult in-space repair technique for TA's.
[Figure 17 (plot): margin (roughly -20,000 to 100,000 psi) vs. union length (1.0 to
2.0 inches).]
Figure 17. Analysis Yields Positive Margins for Near-Optimal Union Thicknesses
In summary, when:
1) T_u < T_u,optimal, the weld throat is shear stressed radially outward;
2) T_u = T_u,optimal, the weld throat has no shear stress (just hoop and axial stress); and
3) T_u > T_u,optimal, the weld throat is shear stressed radially inward.
ACKNOWLEDGMENT
The author would like to acknowledge the help of his NASA colleagues Chip
Jones and Carolyn Russell; Dr. Arthur Nunes, who was very helpful; and finally Mr.
Ray Anderson of MDSSC, who has been an invaluable resource of information,
documents, and all-around help.
REFERENCES
1. Anderson, R. H., "EVA/Telerobotic Fluid Line Repair Tool Development",
Welding In Space and the Construction of Space Vehicles by Welding, proceedings,
American Welding Society, 1991, Miami, FL
2. ASM, Metals Handbook. Desk Edition, American Society For Metals, 1985, OH
3. Davies, A.C., The Science and Practice of Welding, Vol. 2, Cambridge University
Press, 1984, Bath, Great Britain, p.41
4. Masubuchi, K., Analysis of Welded Structures, International Series on Materials
Science and Technology, Vol. 33, Pergamon Press, 1980, New York, N.Y.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA
DIFFUSION ON Cu SURFACES
Prepared by:
Academic Rank:
Institution:
Department:
Majid Karimi, Ph.D.
Assistant Professor
Indiana University of
Pennsylvania
Physics
NASA/MSFC
Office:
Division:
Branch:
EH22
Metallic Materials
Metallurgical & Failure
Analysis
MSFC Colleague:
Ilmars Dalins, Ph.D.
XXIII
Introduction
Understanding surface diffusion is essential in understanding surface phenomena
such as crystal growth, thin film growth, corrosion, physisorption, and chemisorption.
Because of its importance, various experimental and theoretical efforts have been directed
at understanding this phenomenon. The Field Ion Microscope (FIM) has been the major
experimental tool for studying surface diffusion, and it has been employed by various
research groups to study the surface diffusion of adatoms. Because of limitations of the
FIM, such studies are limited to only a few metals: nickel, platinum, aluminum, iridium,
tungsten, and rhodium (4, 5). From the theoretical standpoint, various atomistic
simulations have been performed to study surface diffusion. In most of these calculations
the Embedded Atom Method (EAM) of Daw and Baskes (1) along with molecular statics
(MS) simulation are utilized. The EAM is a semi-empirical approach for modeling
interatomic interactions. The MS simulation is a technique for minimizing the total
energy of a system of particles with respect to the positions of its particles.
One of the objectives of this work is to develop the EAM functions for Cu and
use them in conjunction with molecular statics (MS) simulation to study diffusion of a
Cu atom on perfect as well as stepped Cu(100) surfaces. This provides a test of the
validity of the EAM functions on the Cu(100) surface and near stepped environments. In
particular, we construct a terrace-ledge-kink (TLK) model (Figure 1) and calculate the
migration energies of an atom on a terrace, near a ledge site, near a kink site, and going
over a descending step. We have also calculated formation energies of an atom on the
bare surface, a vacancy in the surface, a stepped surface, and a stepped-kink surface.
Our results are compared with the available experimental and theoretical results.
Methodology
Pair potentials suffer from at least two major problems: the Cauchy pressure vanishes
(C12 - C44 = 0), and the single-vacancy formation energy equals the cohesive energy
(E_1v = E_c). For a real metal, C12 ≠ C44 and E_1v ≠ E_c. To overcome these and other
shortcomings, an EAM potential is developed for Cu. In the EAM, the energy of each atom
is approximated as the sum of embedding and two-body contributions,
E_i = F_i(ρ_i) + (1/2) Σ_j φ(r_ij) ,   (1)
where F_i(ρ_i) is the embedding energy of atom i, which can be interpreted as the energy
required to embed the atom into the electronic charge density created by the other atoms,
ρ_i is the charge density at site i, φ(r_ij) is the two-body potential between atoms i and j,
and r_ij is the separation distance between atoms i and j. ρ_i is approximated by the
superposition of atomic charge densities (1, 2). Functional forms are chosen for F
and φ, and their parameters are determined by fitting to the bulk properties of the
crystalline solid (1, 2).
In our calculations, we have employed two sets of EAM potentials, one developed
by us (2) and the other developed by Adams et al. (3). We have utilized the above
EAM potentials along with the MS simulation to calculate formation energies of an atom
on the surface, a vacancy on the surface, a stepped surface, and a stepped-kink surface. We
have also calculated migration energies of an atom on the bare surface, near a ledge,
near a kink, and over a descending step.
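A schematic Octave implementation of equation (1) is sketched below; the functional
forms used for F, rho, and phi are placeholders chosen only to make the sketch runnable,
not the fitted Cu functions of references (2) and (3):

% sketch of equation (1): total EAM energy of a small cluster
function E = eam_energy (x)          % x: N-by-3 atomic positions
  F   = @(p) -sqrt(p);               % placeholder embedding function
  rho = @(r) exp(-r);                % placeholder atomic charge density
  phi = @(r) exp(-2*r)./r;           % placeholder pair potential
  N = rows(x);  E = 0;
  for i = 1:N
    p = 0;
    for j = 1:N
      if (j != i)
        r = norm(x(i,:) - x(j,:));
        p = p + rho(r);              % superposed density at site i
        E = E + 0.5*phi(r);          % half pair term (avoids double count)
      end
    end
    E = E + F(p);                    % embedding contribution
  end
endfunction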
Results
a) Adatom formation and migration energies
Our lattice is a slab of 12 parallel layers with 144 atoms per layer. An atom is
placed on the surface layer and the formation and migration energies of the adatom are
calculated from the following formulas (4, 5):
E_f1a = E(N+1,1) - E(N,0) + E_s ,  (2a)
E_m1a = E_sad - E_min ,  (2b)
where E_f1a is the formation energy of an adatom, E(N+1,1) is the total minimized
energy of the lattice of N atoms and one adatom, E(N,0) is the minimized energy of the
lattice of N atoms, E_s is the sublimation energy (the negative of the cohesive energy),
E_m1a is the migration energy of an adatom, E_sad is the minimum total energy of the
system with the adatom at the saddle point, and E_min is the minimum total energy of the
system with the adatom in a lowest-energy binding site. Our results for E_f1a, E_m1a, and
the activation energy Q_1a = E_f1a + E_m1a are .71 eV, .48 eV, and 1.19 eV, respectively.
b) Vacancy formation and migration energies
A vacancy is created in the surface of the slab of part (a), and the formation E_f1v and
migration E_m1v energies of the vacancy are calculated from the following formulas
(4, 5):
E_f1v = E(N-1,1) - E(N,0) - E_s ,  (3a)
E_m1v = E_sad - E_min ,  (3b)
where E(N-1,1) is the minimized energy of the lattice of N atoms and one vacancy. Our
results for E_f1v, E_m1v, and Q_1v are .59 eV, .35 eV, and .95 eV, respectively.
c) Formation energies of steps
A step similar to the one in Figure 1 is constructed and its formation energy is
calculated using the following formula (4, 5):
E_step = E - N_l E_u + N E_s ,  (4)
where E is the total minimized energy of the system of N atoms with the step, N_l is the
total number of atoms in the upper and lower terraces, and E_u is the surface energy. Our
results for the formation energies of steps with and without a kink are .11 eV/Å and
.05 eV/Å, respectively.
d) Migration energies of an atom for various moves
Migration energies of an atom for various moves on a stepped surface (shown in
Figure 1) are calculated using formula (2b). Our results for the migration energies of
moves a, b, c, d, e, and f are .485 eV, .246 eV, .507 eV, .834 eV, .522 eV, and .355 eV,
respectively.
e) Migration energies of an atom on bare surfaces
Migration energies of an atom on Cu(100), Cu(110), and Cu(111) are calculated
using formula (2b). Our results are E_m1a(100) = .48 eV, E_m1a(110)|| = .23 eV,
E_m1a(110)⊥ = .30 eV, and E_m1a(111) = .026 eV for the (100), (110), and (111) surfaces.
Summary and conclusions
a) Vacancy diffusion is the dominant diffusion mechanism on the Cu(100) surface. This
is in agreement with other simulation results.
b) Migration energies of an adatom follow the trend
E_m1a(100) > E_m1a(110) > E_m1a(111). This is consistent with other simulations and
experiments.
c) The formation energies of an adatom, a vacancy, a step without a kink, and a step
with a kink are calculated. The trend is consistent with other simulations.
d) The migration energy of an atom along the ledge on a Cu(100) stepped surface is
smaller than its corresponding value on a bare Cu(100) surface. This is consistent with
another simulation.
e) The migration energy of an adatom over a descending step is slightly larger than
its corresponding value on a bare Cu(100) surface. This result is in qualitative agreement
with another computer simulation.
References
1. M. S. Daw and M. I. Baskes, Phys. Rev. B29, 6443 (1984).
2. M. Karimi and M. Mostoller, Phys. Rev. B45, 6289 (1992).
3. J. B. Adams, S. M. Foiles, and W. G. Wolfer, J. Mater. Res. 4, 102 (1989).
4. C. L. Liu, J. M. Cohen, J. B. Adams, and A. F. Voter, Surf. Sci. 253, 334
(1991).
5. C. L. Liu and J. B. Adams, Surf. Sci. 265, 262 (1992).
[Figure 1: schematic of the terrace-ledge-kink (TLK) model on Cu(100), showing the upper
and lower terraces, the ledge with a kink, and the adatom moves a-f referred to in the
text.]
Fig. 1
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
J-INTEGRAL PATCH FOR FINITE ELEMENT ANALYSIS OF DYNAMIC
FRACTURE DUE TO IMPACT OF PRESSURE VESSELS
Prepared By:
Academic Rank:
Institution and
Department:
MSFC Colleague:
NASA/MSFC:
Laboratory:
Division:
Branch:
Boris I. Kunin, Ph.D.
Assistant Professor
University of Alabama in Huntsville
Department of Mathematical Sciences
Rene Ortega
Structures and Dynamics
Structural Analysis
Thermostructural Analysis
XXIV
1. Introduction
Prediction of whether a pressurized cylinder will fail catastrophically
when impacted by a projectile has important applications, ranging from
perforation of an airplane's skin by a failed turbine blade to meteorite impact
on a space station habitation module. This report summarizes the
accomplishment of one task of a project whose aim is to simulate
numerically the outcome of a high velocity impact on pressure vessels. A finite
element patch covering the vicinity of a growing crack has been constructed to
estimate the J-integral (crack driving force) during the impact. Explicit
expressions for the J-integral in terms of the nodal values of displacement,
strain, and stress have been written. The patch is to be used repeatedly to
estimate the amount of crack growth during the time of the impact. The
resulting crack size is to be compared to an estimated critical crack size for the
pressurized cylinder.
A literature search produced a number of papers dealing with evaluation
of J-integral within finite element environment. Most of the research reports,
however, present the shape of the finite element mesh only, with no detail on
node locations. Such information was hard to utilize in the absence of an
automated mesh generator. As a result, the simplest mesh was chosen for the
patches, following (2). The same search turned up studies of the accuracy of
finite element J-integral evaluations as well as the effect of the choice of the
contour of integration. This provided a rational basis for the choices made in
the present work.
A complementary literature search has been done to collect data on
fracture toughness of 2219 aluminum alloys, since this material property
enters the employed crack growth criterion.
The third literature search concerned reports on high- and hypervelocity
impact studies (both experimental and theoretical) to form a basis for
comparison with the numerical simulations produced by the entire project.
Complete computational details and the three literature reviews have
been left with Rene Ortega.
2. Circumferential and Axial Patches
Both patches have the shape of a rectangle with an edge crack mapped
onto a portion of the cylinder's surface as shown in Fig 1. The finite element
mesh consists of 8-node isoparametric elements (1). Of these, only the four
which surround the crack tip are distorted; namely, the five nodes neighboring
the crack tip are placed at the quarter distance from the tip instead of being
half the distance away (see Fig 2). The formulas shown in Fig 1 permit finding
the 3D coordinates of any node.
3. J-integral expressions
The J-integral is the following contour integral:
[Figure 1: the circumferential and axial patches mapped onto the cylinder surface of
radius R, with the node-coordinate mapping formulas; Figure 2: quarter-point node
placement in the four elements surrounding the crack tip; Figure 3: integration contour
Γ beginning on one crack face and ending on the other.]
J = ∫_Γ ( w dx_2 − T_i (∂u_i/∂x_1) ds )     [1]
where w is the strain energy density, T_i is the traction, x_1 is the coordinate in the direction of the crack, and Γ is any contour that begins on one face of the crack and ends on the other (see Fig. 3). The integral has the meaning of the potential energy release per unit crack advance (known as 'the energy release rate', or 'the crack driving force').

[Figures 1-3 not reproduced: Fig. 1 shows the patch mapped onto the cylinder surface, with the formulas giving the 3D nodal coordinates and the contour paths Γ_1, ..., Γ_5; Fig. 2 shows the quarter-point node placement around the crack tip; Fig. 3 shows a contour beginning and ending on the crack faces.]
Explicit expressions for the J-integral through the nodal values of
displacement, strain, and stress have been written for the two contours shown
in Fig 1. The structure of those expressions is exemplified below for the inner
contour.
Eq. [1] is rewritten as

J = I_1 - I_2   [2]

where

I_1 = ∫_Γ w dx_2   [3]

and

I_2 = ∫_Γ σ_ij (∂u_i/∂x_1) n_j ds   [4]

The contour is split into five paths Γ_1, ..., Γ_5 (see Fig. 1), and the integrals [3, 4] become the sums of the integrals over these paths:

I_k = I_k1 + ... + I_k5,   k = 1, 2.   [5]
As examples, the expressions for I_11 and I_21 through the nodal values of u_2, ε_ij, and σ_ij are shown here:

I_11 = - (h/6) (w^229 + 4w^244 + 2w^255 + 4w^270 + w^281)   [6]

I_21 = - (h/6) (f^229 + 4f^244 + 2f^255 + 4f^270 + f^281)   [7]

where h is the mesh size, the upper indices refer to node numbers,

w = σ_ij ε_ij / 2   [8]

f = σ_11 ε_11 + σ_12 (∂u_2/∂x_1)   [9]
and, as a matter of example, the expression for ∂u_2/∂x_1 through the nodal values of u_2 is shown:

(∂u_2/∂x_1)^244 = (1/2h)(u_2^257 + u_2^231 - u_2^227 - u_2^253)
   + (1/4h)(u_2^254 + u_2^243 + u_2^228 - u_2^230 - u_2^245 - u_2^256)   [10]
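The Simpson-type structure of expressions [6] and [7] is straightforward to prototype. Below is a minimal Python sketch, not the actual computational patch left with MSFC; the nodal values and mesh size are invented placeholders.

    import numpy as np

    def simpson_path_sum(values, h):
        # Simpson-rule weighting over five equally spaced nodes on a path,
        # matching the (h/6)(v1 + 4 v2 + 2 v3 + 4 v4 + v5) form of [6]-[7].
        v1, v2, v3, v4, v5 = values
        return (h / 6.0) * (v1 + 4.0 * v2 + 2.0 * v3 + 4.0 * v4 + v5)

    # Placeholder nodal strain energy densities w = sigma_ij eps_ij / 2 at the
    # five nodes of path Gamma_1 (node numbers 229, 244, 255, 270, 281 above).
    w_nodes = np.array([1.2, 1.1, 0.9, 0.8, 0.7])
    h = 0.05  # mesh size (placeholder)

    I11 = -simpson_path_sum(w_nodes, h)  # expression [6]

Summing such path contributions over Γ_1, ..., Γ_5 as in [5] and differencing as in [2] then yields the J-integral estimate.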
4. Testing of the patch
To verify the numerical procedures, comparison has been proposed with
an existing solution for a rectangular plate with an edge crack parallel to the
clamped edges (4).
5. Discussion
The energy release rate and its J-integral representation employed in this study correspond to a static (or slowly growing) crack, whereas the crack under consideration is a fast growing one. However, it is known that the energy release rate for a moving crack is related to the static one as G_dyn = g(v) G_stat, where g(v) is a monotonically decreasing function of the crack velocity v which goes from 1 at v = 0 to 0 as v reaches the Rayleigh wave speed (3). Therefore employing G_stat overestimates the crack driving force and thus is conservative when the possibility of a catastrophic failure of the cylinder is considered.
If, nevertheless, the estimates result in unrealistically large crack sizes at the end of the duration of the impact, expressions for dynamic J-integrals and their evaluation in a finite element environment are available (see
the literature review).
Finite element models of elastic-plastic crack growth in the presence of
both small and large scale yielding are also available in the literature (see the
literature review).
Acknowledgment
The author is thankful to his MSFC colleague Rene Ortega for formulating a problem of manageable dimensions as well as for his constant
support throughout the summer. Financial support of the Summer Faculty
Fellowship Program at Marshall Space Flight Center is gratefully
acknowledged.
References
1. Barsoum, Roshdy, On the use of isoparametric finite elements in linear
fracture mechanics, Int. J. for Numerical Methods in Engineering, 10
(1976), 25-37.
2. Hurlbut, Arthur, Finite Element Modeling of Crack Growth and Failure of
Composite Laminates, Ph.D. Thesis, Clarkson University, 1985.
3. Kanninen, Melvin and Popelar, Carl, Advanced Fracture Mechanics,
Oxford University Press, New York, 1985.
4. Torvik, P. J., On the determination of stresses, displacements, and stress-
intensity factors in edge-cracked sheets with mixed boundary conditions,
Trans. ASME, Ser E, J. Appl. Mech. 46 (1979), 611-617.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
CFD SIMULATION OF COAXIAL INJECTORS
Prepared By:
Academic Rank:
Institution and
Department:
MSFC Colleagues:
NASA/MSFC:
Laboratory:
Division:
Branch:
D. Brian Landrum, Ph.D.
Assistant Professor
University of Alabama in Huntsville
Department of Mechanical and Aerospace
Engineering
Ten See Wang
P. Kevin Tucker
Structures and Dynamics Laboratory
Aerophysics
Computational Fluid Dynamics
XXV
CFD SIMULATION OF COAXIAL INJECTORS
D. Brian Landrum, Ph.D.
Assistant Professor
Department of Mechanical and
Aerospace Engineering
University of Alabama in Huntsville
INTRODUCTION
The development of improved performance models for the Space Shuttle Main Engine
(SSME) is an important, ongoing program at NASA MSFC. These models allow prediction of
overall system performance, as well as analysis of run-time anomalies which might adversely
affect engine performance or safety. Due to the complexity of the flow fields associated with
the SSME, NASA has increasingly turned to Computational Fluid Dynamics (CFD) techniques
as modeling tools.
An important component of the SSME system is the fuel preburner, which consists of a
cylindrical chamber with a plate containing 264 coaxial injector elements at one end. A fuel rich
mixture of gaseous hydrogen and liquid oxygen is injected and combusted in the chamber. This
process preheats the hydrogen fuel before it enters the main combustion chamber, powers the
hydrogen turbo-pump and provides a heat dump for nozzle cooling. Issues of interest include
the temperature and pressure fields at the turbine inlet, and the thermal compatibility between
the preburner chamber and injector plate. Performance anomalies can occur due to incomplete
combustion, blocked injector ports, etc. The performance model should include the capability
to simulate the effects of these anomalies.
The current approach to the numerical simulation of the SSME fuel preburner flow field
is to use a global model based on the MSFC sponsored FDNS code (1). This code does not have the capability to model several aspects of the problem, such as detailed modeling of the coaxial injectors. Therefore, an effort has been initiated to develop a detailed simulation of
the preburner coaxial injectors and provide gas phase boundary conditions (species
concentrations, pressures, temperatures, etc.) just downstream of the injector face as input to
the FDNS code. This simulation should include three-dimensional geometric effects such as
proximity of injectors to baffles and chamber walls and interaction between injectors.
This report describes an investigation into the numerical simulation of GH2/LOX coaxial
injectors. The following sections will discuss the physical aspects of injectors, the CFD code
employed, and present preliminary results of a simulation of a single coaxial injector for which
experimental data is available. It is hoped that this work will lay the foundation for the
development of a unique and useful tool to support the SSME program.
PHYSICAL ASPECTS OF COAXIAL INJECTORS
Liquid propellant rocket injection is a complex combination of physical processes including liquid atomization and evaporation, and chemical reactions. The complexity is increased by the
fact that at least one of the constituents exists in both the liquid and vapor phases. In order to
make the injection simulation problem numerically tractable, these physical processes are
described by sub-models. The following two sections describe the sub-models for atomization
and evaporation. The current study did not include the effects of chemical reactions and
therefore this sub-model will not be discussed.
Injection / Atomization
In a coaxial injector the core liquid propellant jet is broken into smaller droplets through
shear forces imposed by the co-flowing, high velocity, annular gas jet surrounding it. A
cursory review of current atomization modeling capabilities and the experimental validation data
base was recently presented by Liang, et al. (2). Currently, there are two primary approaches to the modeling of an atomizing liquid jet. The first approach, known as the Jet Embedding Technique (3), resolves the intact jet shape exactly with an adaptive grid. Simplified equations of motion are solved within the core to model its growth and subsequent atomization.
The second approach to atomization modeling is known as the Blob Atomization Model. This approach is based on Reitz's approximation of the surface wave dispersion equation
for a round jet (4) in conjunction with a Taylor Analogy Breakup model (5). The model
assumes that the liquid jet can be represented by injected drops which are the diameter of the
injection port. Linear stability theory is then used to model secondary breakup into smaller
drops. Atomization is a function of droplet aerodynamics, liquid surface tension, and liquid
viscosity. This approach does not allow the shape of the jet to be resolved. Numerically, the
technique can be coupled to a Volume Of Fluid (VOF) technique (6), in which the fractional
volumes of liquid, droplets and gas are tracked within each computational cell. The Blob
Atomization and VOF approaches were used in the simulation described in this report.
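To make the blob concept concrete, the sketch below (an illustration only, not the MAST implementation) injects a parent drop at the port diameter and applies a generic critical-Weber-number breakup test; every property value and the threshold of 12 are assumptions for the example.

    def weber(rho_gas, u_rel, d_drop, sigma):
        # Gas-phase Weber number: aerodynamic force over surface tension force.
        return rho_gas * u_rel**2 * d_drop / sigma

    d_port    = 2.0e-3   # m, assumed injection-port (parent blob) diameter
    sigma_lox = 0.013    # N/m, approximate LOX surface tension
    rho_h2    = 1.3      # kg/m^3, assumed gas density
    u_rel     = 150.0    # m/s, assumed gas-liquid relative velocity

    We = weber(rho_h2, u_rel, d_port, sigma_lox)
    if We > 12.0:        # generic breakup threshold, not MAST's criterion
        d_child = d_port * (12.0 / We)  # crude illustrative child size
        print(f"breakup predicted: We = {We:.0f}, child ~ {d_child:.2e} m")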
Evaporation
A sub-model is also required to simulate the effects of evaporation of the cold liquid into
the warmer surrounding gas. A vapor-liquid film model is used on the droplet surface. Quasi-
steady state diffusion and energy equations are solved for the droplet heating rate and
evaporation rate. The resultant equations used are presented by Liang and Ungewitter (see
Reference 4).
For many injector scenarios the evaporation occurs at subcritical conditions where the
droplet surface temperature is assumed to be the liquid saturation temperature. For the case of
SSME preburner LOX injection, the chamber pressure far exceeds the critical pressure. In this
situation the surface of the LOX droplet can be in a critical state while the interior of the droplet
remains below the critical temperature. A supercritical evaporation model must ultimately be
used. Reference 4 describes such a model although only subcritical evaporation was considered
in the preliminary study documented in this paper.
COMPUTATIONAL CODE AND MODIFICATIONS
The numerical simulation was based on the Multiphase All-Speed Transient (MAST) code
of Chen (7). This code uses a time accurate, temporal marching technique. The method is pressure based and also uses an operator-splitting algorithm to allow for various speed regimes
in the flow field. A stochastic particle tracking method is incorporated (8). MAST uses a VOF
technique, but simulation results indicate that this may not be totally active. The MAST code
also includes a limited capability to generate computational grids. Options to generate uniform,
exponentially stretched and mixed grids are available.
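For illustration, an exponentially stretched point distribution of the kind described can be generated as follows; the point count, extent, and growth factor are placeholders rather than actual MAST inputs.

    import numpy as np

    def stretched_grid(x0, x1, n, beta=1.1):
        # n points from x0 to x1 with spacings growing geometrically by beta,
        # clustering points near x0 (the finely resolved end of the interval).
        spacings = beta ** np.arange(n - 1)
        x = np.concatenate(([0.0], np.cumsum(spacings)))
        return x0 + (x1 - x0) * x / x[-1]

    radial = stretched_grid(0.0, 0.02, 50)  # placeholder radial extent in meters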
The MAST code was modified for this study. Although the numerical structure of the
code is generalized for arbitrary fluid constituents, many thermofluid properties in the current
version were hardwired for air. These properties had to be replaced with values representative
of hydrogen and LOX. First, various thermofluid properties for the gaseous hydrogen were
inserted. The second major task consisted of assembling a LOX data base. Required
parameters included vapor pressure, latent heat of vaporization, surface tension, and viscosity
of LOX as a function of temperature. A representation of the binary diffusion of oxygen into
hydrogen also had to be provided.
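In effect, the assembled LOX data base is a set of properties tabulated against temperature and interpolated at run time. A minimal sketch of that idea, with placeholder numbers rather than the actual table entries:

    import numpy as np

    # Temperature grid (K) with illustrative values only; the real data base
    # covered vapor pressure, latent heat, surface tension, and viscosity.
    T_tab     = np.array([80.0, 100.0, 120.0, 140.0])
    sigma_tab = np.array([0.016, 0.013, 0.010, 0.006])  # N/m, surface tension

    def lox_surface_tension(T):
        # Linear interpolation within the property table.
        return np.interp(T, T_tab, sigma_tab)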
EXPERIMENTAL DATA
A large experimental data base exists for coaxial injection using a variety of test liquids
and gases. This data base is summarized in Reference 2. A capability for simulation of coaxial
injection is currently being demonstrated at the Pennsylvania State University Propulsion
Engineering Research Center (PSU/PERC). The hardware consists of a cylindrical chamber
with an injector assembly at one end and a nozzle section at the other. The dimensions of the
injector are comparable to the fuel preburner elements used in the SSME. Details of the injector
assembly hardware are described by Pal, et al. (9). Both cold flow GN2/H2O and hot-fire GH2/LOX injection have been performed in the laboratory to date. Because of the potential of
this laboratory to produce validation data, a simulation of the PSU/PERC injector was chosen
as the test case of this study.
INJECTOR SIMULATION RESULTS
The PSU/PERC chamber was modeled with an axisymmetric computational grid shown
in Fig. 1. Only one-quarter of the length of the chamber was modeled. The upper half of the
chamber was modeled so that the first grid line is the combustion chamber axis. For this
preliminary investigation the numerically simulated injector did not include the LOX post
recess. A fine uniform grid was used in the hydrogen annulus region. The grid was
exponentially stretched from this region down to the chamber axis and upwards to the chamber
wall. The total grid was 60 axial by 50 radial points. An injection boundary condition was
applied at the hydrogen annulus and the downstream boundary condition was to fix the
pressure at the quoted value for the hot-fire tests. The chamber axis was a symmetry boundary
condition and all other surfaces were modeled with no-slip wall boundary conditions.
Consistent with the blob injection used in the MAST code, LOX droplets were created at the
i=2, j=2 grid point. These droplets could then convect or breakup in the chamber.
Several simulations were performed in order to investigate the capabilities of the MAST
code. These consisted of hydrogen injection only, LOX droplet injection only and coaxial
GH2/LOX injection. Representative results are illustrated in Figs. 2 and 3 where the location of
LOX droplet parcels in the computational domain are plotted at a time of 0.5 msec. Figure 2
shows the parcel distribution for LOX injection into static hydrogen. The droplets have
penetrated a short distance into the chamber with no significant lateral dispersion. In Fig. 3 the
LOX droplets are injected with the surrounding hydrogen jet. The axial penetration is
comparable to the LOX injection only. The significant difference is the dispersion of the
droplets laterally into the chamber. An interesting result of the simulation was that no droplet
evaporation was seen during the time simulated. This may be due to the small magnitude of the
temperature gradient between the LOX droplets (injected at 117 K) and the injected and ambient
hydrogen gases (both at 289 K). This behavior may also indicate that the code is not accurately
modeling the evaporation.
CONCLUSIONS / FUTURE WORK
A preliminary study of numerical simulation of GH2/LOX coaxial injection has been
performed. The MAST code was modified with thermofluid properties for hydrogen and
oxygen. The modeled injector was based on hardware currently being used at Penn State
University. Several aspects of the injection problem were simulated in order to evaluate the
capabilities of the MAST code. Qualitative results indicate that the effects of the annular
hydrogen jet are to disperse the LOX droplets laterally. No droplet evaporation was predicted.
This may be due to the temperature gradients simulated or indicate a failure of the code
evaporation model. Further analysis is required.
In general the MAST code was difficult to implement. Many of the thermofluid
parameters were hardwired for air and had to be changed. There is also some question as to
whether the incorporated sub-models are correctly implemented. But, this criticism must be
tempered by the fact that this is the first time that the code has been used to model a coaxial
injection case. Further investigation into the code capabilities is therefore warranted.
Future work should include incorporation of H2-02 gas chemistry into the simulation.
The capability to model supercritical evaporation should also be included in the code. Detailed
validation studies should then be performed using the Penn State GN2/H20 and GH2/LOX
data.
ACKNOWLEDGEMENTS
The author would like to acknowledge the technical assistance of Ten See Wang and
Kevin Tucker during this project. Bruce Vu answered numerous questions about computer
systems and plotting routines. Terry Jones provided word processing support. The
contributions of each of these individuals were greatly appreciated.
REFERENCES
1. Chen, Y. S., "FDNS - A General Purpose CFD Code: User's Guide," ESI-TR-93-01,
Engineering Sciences, Inc., May 1, 1993.
2. Liang, P. Y., Przekwas, A. J., and Santoro, R. J., "Propellant Injection and Atomization,"
Presented at the Combustion-Driven Flow Technology Team Meeting, NASA MSFC, July ??, 1993.
3. Przekwas, A. J., Chuech, S., and Singhal, A. K., "Numerical Modeling of Primary
Atomization of Liquid Jets," AIAA 89-0163, 1989.
4. Liang, P. Y. and Ungewitter, R., "Multi-Phase Simulations of Coaxial Injector
Combustion," AIAA 92-0345, 1992.
5. Seung, S. P., Chen, C. P., and Chen, Y. S., "Development of an Atomization
Methodology for Spray Combustion," presented at the 11th Workshop for CFD Applications
in Rocket Propulsion, NASA MSFC, April 20-22, 1993.
6. Liang, P. Y. and Schuman, M. D., "Atomization Modeling in a Multiphase Flow
Environment and Comparison with Experiments," AIAA 90-1617, 1990.
7. Chen, C. P., Jiang, Y., Kim, Y. M., and Shang, H. M., "A Computer Code for Multiphase
All-Speed Transient Flows in Complex Geometries," NASA CR (unnumbered), October,
1991.
8. Kim, Y. M., Shang, H. M., Chen, C. P., and Ziebarth, J. P., "Numerical Modeling for
Dilute and Dense Sprays," presented at the 10th Workshop for CFD Applications in Rocket
Propulsion, NASA MSFC, April 28-30, 1992.
9. Pal, S., Moser, M. D., Ryan, H. M., Foust, M. J., and Santoro, R. J., "Flowfield
Characteristics in a Liquid Propellant Rocket," AIAA 93-1882, 1993.
Fig. 1 Computational grid and boundary conditions for PSU injector simulation [grid plot not reproduced: 60 axial (i = 1 to 60) by 50 radial (j = 1 to 50) points, symmetry boundary along the chamber axis, H2 injected at the annulus]

Fig. 2 Spray parcel distribution for LOX injection only, t = 0.5 msec [plot not reproduced; axes X and Y in meters]

Fig. 3 Spray parcel distribution for GH2/LOX injection, t = 0.5 msec [plot not reproduced; axes X and Y in meters]
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
The University of Alabama
Structure in Gamma-Ray Burst Time Profiles:
Correlations with Other Observables
Prepared by:
Academic Rank:
Institution and Department:
MSFC Colleague:
NASA/MSFC:
Laboratory:
Division:
Branch:
John Patrick Lestrade
Associate Professor
Mississippi State University
Department of Physics and
Astronomy
G. J. Fishman
Space Science
Astrophysics
Gamma-Ray Astronomy
XXVI
Introduction
One of the current debates raging in the world of gamma-ray burst physics
is whether the sources of these enigmatic bursts arise from a single or from multi-
ple distributions. Several authors contend that the histograms of GRB observables
imply the latter. The two most likely candidate components are galactic and cosmological. For example, Atteia et al. (1993) claim that a dip in the V/V_max distribution
is a result of such a two-component source distribution. Lamb et al. (1993) have
used a parameter called the 'burst variability' calculated by dividing the maximum
count rate on the 64-msec timescale by that from the 1024-msec timescale to show
that a correlation of this parameter with burst brightness implies a two-component
model. Lamb's paper has met vigorous criticism.
We have developed two parameters that measure the variability or structure in
the time profiles of BATSE gamma-ray bursts. Both parameters ("structure" and
"spike height") are based on the statistics of "runs up" and "runs down" (Knuth,
1981). In short, the structure parameter is the observed number of runs (at several
lengths) minus the number expected in a chance distribution. The "spike height" is
the sum of all run heights minus the expected sum. These two are straightforward
to calculate, robust, and measure the variability over the complete profile - not just
at the peak. For a full description of the algorithm, refer to Lestrade (1993).
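A minimal sketch of the statistic's flavor, counting runs up only, is shown below; the full parameter also uses runs down, and the exact weighting and normalization are those of Lestrade (1993). The expected-count formula is the asymptotic result for ascending runs in random data quoted by Knuth (1981).

    import numpy as np
    from math import factorial

    def ascending_run_lengths(counts):
        # Lengths of the maximal ascending runs ("runs up") in a profile.
        x = np.asarray(counts, dtype=float)
        runs, length = [], 1
        for i in range(1, len(x)):
            if x[i] > x[i - 1]:
                length += 1
            else:
                runs.append(length)
                length = 1
        runs.append(length)
        return runs

    def structure_parameter(counts, max_len=6):
        # Observed minus expected number of runs up at each length, summed;
        # expected length-k runs in random data ~ n (k^2 + k - 1)/(k + 2)!.
        n = len(counts)
        observed = ascending_run_lengths(counts)
        total = 0.0
        for k in range(1, max_len + 1):
            expected = n * (k * k + k - 1) / factorial(k + 2)
            total += sum(1 for r in observed if r == k) - expected
        return total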
We have applied this algorithm to the profiles of 156 GRB's. In this paper we
present graphs of the two parameters as functions of 1) burst duration, 2) burst
hardness ratio, 3) V/V_max, 4) source galactic longitude, and 5) source galactic lat-
itude. We seek correlations as well as groupings in the data that might indicate a
multi-component source distribution.
Correlations:
1) Duration: As a measure of duration we take the values of T90 in units of 64-msec
bins. In this paper, we are considering only those bursts whose durations are longer than 12 seconds (i.e., 200 bins). As Figure 1 shows, there are no apparent groupings
nor significant correlations.
2) Hardness: For the hardness ratio, we use the value h = (chan 3 + chan 4)/(chan 1 + chan 2) from the BATSE DISCSC data. This is approximately equal to the flux above 100 keV divided by the flux below 100 keV (down to the threshold of roughly
25 keV). As before, Figure 2 shows no correlation nor any evidence of grouping.
3) V/V_max: The quantity V/V_max measures the relative distance to a burst. Distant, weak bursts have values close to unity while the brightest have values close to zero. For a homogeneous distribution of sources, the distribution should be uniform between 0 and 1. As is well documented, the ensemble of GRB's shows a paucity of weak bursts indicating a radial inhomogeneity. In effect, our instruments are seeing to the "edge" of the radial distribution.
Of course, we would expect to see a correlation between the amount of structure in a burst and the burst's distance (or V/V_max). This is seen in Figure 3, which shows that the more distant, i.e., weakest bursts show less structure because the smaller spikes are lost in the background noise. Naturally, as seen in the right part of Figure 3, the more distant bursts have spikes which are less intense.
Figure 1. Burst Duration versus Structure and Spike Heights [log-log scatter plots not reproduced; axes: Structure and Spike Height versus T90 duration]
Figure 2. Burst Hardness Ratio versus Structure and Spike Heights [scatter plots not reproduced]
Figure 3. Burst V/V_max versus Structure and Spike Heights [scatter plots not reproduced]
4) Sky Position: Finally, Figures 4 and 5 present graphs of galactic longitude and
latitude versus structure and spike height. Figure 4 shows no significant features
in galactic longitude. However, Figure 5 shows that bursts that come from high
latitudes (i.e., > 45°) show less variance in the spike height parameter than those
that come from low latitudes (i.e., within 45° of the galactic plane).
Figure 4. Burst Galactic Longitude versus Structure and Spike Heights [scatter plots not reproduced]
-90 "1 1 — ' ' "m i r
0.1 1
iiii i i i i i nun i i i uni t
10 100 1000 100
Structure
10000
Spike Height
-i — I i i ml
100000
Figure 5. Burst Galactic Latitude versus Structure and Spike Heights
Conclusion:
The result seen in the Latitude-Height graph is not expected. It is possible
that this is just a statistical anomaly. We will soon do a complete statistical analysis
to determine its significance. If the result stands up under further scrutiny, it will
certainly be adopted by the "galactic" modelers as evidence that at least some
bursts arise from neutron stars which are confined to the plane of the galaxy.
References:
1. Atteia, J.-L. and Dezalay, J.-P., Gamma-Ray Bursters in the Galactic Disk, Astron. Astrop., in press, 1993.
2. Lamb, D. Q., Graziani, C., and Smith, I. A., Evidence for Two Distinct Morphological Classes of Gamma-Ray Bursts From Their Short-Timescale Variability, Ap. J., in press, 1993.
3. Knuth, D. E., The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, 2nd ed. (Addison-Wesley, Reading, Mass., 1981), p. 65.
4. Lestrade, J. P., The Statistics of Runs Up and Down for BATSE GRB Time Profiles, Ap. J., in prep., 1993.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
SPATIAL INTERPRETATION OF
NASA'S MARSHALL SPACE FLIGHT CENTER
PAYLOAD OPERATIONS CONTROL CENTER
USING VIRTUAL REALITY TECHNOLOGY
Prepared By:
Academic Rank:
Institution and
School:
MSFC Colleague:
NASA/MSFC:
Laboratory:
Division:
Branch:
Patricia F. Lindsey
Lecturer
East Carolina University,
School of Human Environmental
Sciences
Joseph P. Hale
Missions Operations Laboratory
Operations Engineering Division
Crew Systems Engineering Branch
XXVII
SPATIAL INTERPRETATION OF NASA'S MARSHALL SPACE FLIGHT CENTER
PAYLOAD OPERATIONS CONTROL CENTER USING VIRTUAL REALITY
TECHNOLOGY
Introduction
In its search for a higher level computer interface and more realistic electronic simulation for measurement and spatial analysis in human factors design, NASA at Marshall Space Flight Center is evaluating the functionality of virtual reality (VR) technology. Virtual reality simulation generates a three-dimensional environment in which the participant appears to be enveloped (Nugent, 1991). It is a type of interactive simulation in which humans are not only involved, but included (Helsel and Roth, 1991).
The military and entertainment industries along with the
physical sciences have driven the development of computer
equipment, programming, and presentation techniques used in
the production and presentation of VR generated environments.
The development of headsets, high resolution displays, and position sensors has enabled the creation of the illusion of existing within a yet unconstructed space (Editorial, 1991).
The general purpose nature of VR technology makes it an intelligence amplifying (IA) tool, utilizing both the computer's advantage in calculation and the human's advantage in evaluation and in putting ideas into context. These advantages are augmented with the use of input gloves, body suits, and display head gear that permit the user to utilize natural movement rather than typed instruction or symbols and text picked from a menu (Rheingold, 1991).
Virtual reality technology is still in the experimental
phase but it appears to be the next logical step after com-
puter aided three-dimensional animation in transferring the
viewer from a passive to an active role in experiencing and
evaluating an environment (Eschelman and Tatchell, 1991).
There is great potential for using this new technology when
designing environments for more successful interaction, both
with the environment and with another participant in a remote
location. At the University of North Carolina, a VR simula-
tion of the planned Sitterson Hall revealed a flaw in the building's design that had not been observed during examination of the more traditional building plan simulation methods on paper and on a computer aided design (CAD) work station
(Aukstankalnis, 1991). The virtual environment enables mul-
tiple participants in remote locations to come together and
interact with one another and with the environment. Each par-
ticipant is capable of seeing himself and the other par-
ticipants and of interacting with them within the simulated
environment.
Utilization
Three areas of utilization of VR technology in human fac-
tors design covered in this study are: (a) simulation tech-
niques, (b) behavioral settings, and
(c) human/computer interaction. Simulations provide a method
of presenting the environment without necessitating on-site visits, and permit responses to the environment to be used to manipulate the prospective environment. Simulation is most useful in situa-
tions where observations or experimentation are not feasible
or ethical.
Behavioral settings are social and psychological situations in which human behavior occurs (Wicker, 1979). They are both structural and dynamic (Barker, 1968) and include time and place boundaries, duration of the setting, number of times the setting occurred over a period of time, number of participants, positions of responsibility, demographic group to which participants belong, behavior patterns of participants, and behaviors that occur in the setting (Wicker, 1979). In order to understand the behavior of individuals or groups, we must examine the opportunities and constraints encompassed in their environments.
Virtual reality enhances human/computer interaction. Interactive computer programs using VR simulation take advantage of both the computer's advantage in calculation and the human's advantage in evaluation and putting ideas into context. Virtual reality weakens the barrier between man and machine by permitting the user to use natural movement rather than symbol or word commands.
Using VR for evaluation of behavioral settings enables exploration of connections between specific environmental attributes and users' perceptions of those attributes. Components within a behavioral setting control the range of human behavior by promoting some actions and prohibiting others; therefore observation and research should clarify and supplement that which is known about relationships between physical environments and human behavior.
The Study
Virtual reality simulation is promising, but there are no studies to verify that reaction to the VR environment approximates reaction to "real world" environments. This
study compares responses of participants who viewed NASA's
Payload Operations Control Center (POCC) at Marshall Space
Flight Center with responses of the same participants who
viewed the same environment via VR simulation. This study in-
vestigates: (a) the potential for using VR to evaluate
human/environmental interaction, (b) whether observation of
environments using VR simulation provides the same information
about the characteristics of that environment as is provided
by observation of the "real world" environment, (c) the
reliability of using virtual reality to interpret the at-
tributes, deficiencies, and characteristics of an existing or
planned environment.
The study is a pretest-posttest design. The sample con-
sisted of 24 volunteers — 12 NASA employees who have worked in
POCC console positions and 12 university and community college
faculty members who have never worked in the POCC. Six from
each group were male and six were female. Responses of par-
ticipants were recorded on a forced response questionnaire,
and a semantic differential questionnaire. In addition, six
members of the sample were asked to give verbal responses to a
moderately scheduled, open ended follow up questionnaire.
Responses were recorded on audio tape. The qualitative infor-
mation gathered from the semantic differential and the follow
up questionnaire will be used to clarify the quantitative in-
formation gathered from the forced response questionnaire.
Participants were seated at two specified points in both
the "real world" and VR POCCs. Questionnaires were completed
from these two locations. The participants' seat height was
adjusted so that their eye height approximated the eye height
of a 50th percentile male at one location and a 50th percen-
tile female at the other location (NASA, 1989). After one set
of questions was completed in the virtual POCC, changes were
made to the virtual environment and the questionnaire was com-
pleted again. Responses before and after the changes will be
compared. Questions concerned distance judgment, head rota-
tion, and perception. The sequence of observation was the
same from both consoles and in the "real world" and the VR
POCCs. The semantic differential questionnaire was completed
from the center back of the POCC from a standing position.
The equipment, hardware, and software used to create the virtual POCC environment included EyePhones and a DataGlove by VPL Research, Inc., a Macintosh IIfx computer, and two Silicon Graphics computers, a 310 VGX and a 320 VGX-B. The graphics package is Swivel 3-D by VPL Research, Inc. Operator input, connected through the Body Electric visual programming language to drive the simulator, is translated by Isaac.
Since participants using VR equipment were unable to read the questionnaire or designate the answers while wearing VR gear, the questions and answer options had to be read to the participant and the answers marked by a surrogate. The researcher or a research assistant acted as surrogate. In order that conditions be as alike as possible in both settings, questions were also read and answers marked by the surrogate in the "real world" POCC.
Data from the questionnaires will be coded, entered into the computer, and verified for accuracy. Using SPSS, descriptive statistics will be generated, including frequencies, means, and percentages. Analytical statistics for all hypotheses will include a repeated measures multivariate analysis of variance to test differences between groups.
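As an illustration of the descriptive step (the actual analysis will be run in SPSS), a few lines of Python with invented column names and values suffice to produce counts and means by setting:

    import pandas as pd

    # Hypothetical coding: one row per participant per setting.
    df = pd.DataFrame({
        "participant":  [1, 1, 2, 2],
        "setting":      ["real", "VR", "real", "VR"],
        "distance_est": [9.5, 10.2, 12.0, 14.1],  # feet, invented values
    })

    print(df.groupby("setting")["distance_est"].agg(["count", "mean"]))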
Conclusion
Analysis of data has not yet begun but some anticipated
conclusions drawn from the data and from comments of par-
ticipants include a similarity in spatial analysis among
groups. Some differences are apparent between participants
who have worked at the POCC consoles and those who have not.
It appears that there is some difference in responses between
those who view the "real world" POCC first and those who view
the VR POCC first. Estimates of distances in the VR POCC appear to be similar to estimates of distances in the "real world" POCC up to a distance of about 10 feet. Beyond that, however, the estimated distances in the VR POCC are greater than those in the "real world" POCC. Overall, the estimates of distance, head rotation, and perception appear to be similar in both "worlds".
Acknowledgments
Much appreciation is due to the staff in MSFC's Summer
Faculty Fellowship Office for advice and support while this
study was in progress. Many thanks go to the members of my
dissertation committee at Virginia Polytechnic Institute and
State University (Virginia Tech), especially to Joan McLain-
Kark, committee chair. The committee was instrumental in
aiding preparation of this study. Joe Hale and his staff mem-
bers, Michael Flora, Gina Klinzak, and Peter Wang, along with
Patrick Meyer, a participant in the PIP program, have all my
gratitude for their generosity of time, knowledge, guidance,
and friendship.
References
1. Aukstankalnis, G., Virtual reality and experiential prototypes of CAD models. DesignNet. (1992, January).
2. Barker, R. G., Ecological psychology. Stanford, CA: Stanford University Press. (1968).
3. Editorial, Being and believing: ethics in virtual reality. Lancet, 338 (8762), 283-284. (1991).
4. Eshelman, P. & Tatchell, K., How beneficial a tool is computer-aided design? Forum, pp. 15-19. (1992).
5. Helsel, S. K. & Roth, J. P. (eds.), Virtual reality: Theory, practice, and promise. Westport, CT: Meckler Publishing. (1991).
6. National Aeronautics and Space Administration (NASA), Man-Systems Integration Standards, NASA-STD-3000. pp. 3-11 - 3-25. (1989).
7. Nugent, W. R., Virtual reality: Advanced imagery special effects let you roam in cyberspace. Journal of the American Society for Information Science, 42(8), 609-617. (1991).
8. Rheingold, H., Virtual reality. New York: Simon & Schuster. (1991).
9. Wicker, A. W., An introduction to ecological psychology. Belmont, CA: Wadsworth, Inc. (1979).
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
NEURAL NETWORK-BASED CONTROL USING LYAPUNOV FUNCTIONS
Prepared By:
Academic Rank:
Institution and
Department:
MSFC Colleague:
NASA/MSFC:
Laboratory:
Division:
Branch:
Leon A. Luxemburg, Ph.D.
Assistant Professor
TAMU
Marine Engineering
Henry B. Waites, Ph.D.
Structures and Dynamics Laboratory
Control Systems
Precision Pointing Systems
XXVIII
Introduction
Consider a linear nonminimum phase plant given as follows:
ẋ = Ax + Bu   (1)
y = Cx (2)
The goals of this research effort are:
1. To develop an algorithm for offline stabilization of linear and nonlinear plants
with known parameters by using a neural network controller.
2. The results of stabilization procedure must be rigorously tested mathematically.
3. The obtained controller should become a linear controller which also stabilizes the plant when linearization of the neural network is performed.
4. Tracking of step inputs must be achieved.
5. Provide unified treatment of plant and controller dynamics in terms of differential
equations rather than considering a hybrid discrete-continuous system.
To stabilize (1) we propose a neural network described by the following equations:

ż = g(z, u, y)   (3)

where the output of the net o is given by o = w_1^T y + w_2^T z and u = o + ref, where ref is the reference input.
Definition of asymptotic stability of the nonlinear system. Consider the plant-controller dynamical system given above in the phase space R^n with state vector (x^T, z^T)^T. Then this controller stabilizes the plant with region of stability U, 0 ∈ U ⊂ R^n, if and only if disconnecting the external input ref results in convergence to 0 of any trajectory in the combined plant-controller state space.
The neural network consists of three layers: an input layer, an inner layer, and an output layer, with 5, 4, and 2 nodes in these layers respectively. Sigmoid functions in the inner layer are chosen to be hyperbolic tangent functions y(x) = (exp(x) - exp(-x))/(exp(x) + exp(-x)). The layers are fully interconnected, resulting in 28 weights. Additional weights are 4 weights for the two two-dimensional vectors w_1, w_2 in the output o above, totalling 32 unknown weights. The 5 x 4 matrix of weights connecting the input to the inner layer is denoted by E and the 4 x 2 matrix of weights connecting the inner layer to the output layer is denoted by D. The total 32-dimensional weight vector is denoted by r.
To fully explain our approach we need to formulate two well known results about
Lyapunov functions:
Result A: Let ẋ = p(x), x ∈ R^n, be a differential equation on a bounded open set U ⊂ R^n and let p(0) = 0, 0 ∈ U. Let h(x) be a continuous function on U such that h(x) > 0 on U and h(0) = 0. Let ⟨∇h(x), p(x)⟩ < 0 for all x ∈ U, where ∇h(x) denotes the gradient of h and ⟨ , ⟩ denotes the scalar product in R^n. Then every trajectory of our differential equation with initial condition in U converges to 0 as t → ∞.
Result B: All the eigenvalues of a matrix T have negative real parts if and only if for any given positive definite symmetric matrix N the matrix equation T^T M + MT = -N has a unique positive definite symmetric solution M.
The basic underlying idea of the solution of the stabilization problem using a neural network controller is as follows: find a 6 x 6 matrix M and a set of weights r, with the dimension of r being 32, such that h(v) = v^T M v is a Lyapunov function in a neighborhood of 0 in the six-dimensional state-space with the state vector v = (x^T, z^T)^T. This would require that the time derivative of h, ḣ(v) = v^T (T^T M + MT) v, be a negative function on U, where T is the Jacobian of the overall plant-controller dynamical system. The function h depends altogether on 68 parameters: on the vector r and on the vector g, which is such a vector that when arranged in a 6 x 6 matrix G it will satisfy the equation G G^T = M.
Our approach then is to start with a random vector r and a random vector g and form a gradient descent equation

q̇ = -α ∘ ∂h/∂q   (4)

where q is the six-dimensional state vector q = (x^T, z^T)^T, α is not a constant but a vector, and in the formula above we consider the Hadamard product of α with the partial derivative of h by q. Also, α changes with time as the function h decreases.
While simulating the gradient descent equation we modify the vectors r and g until the function ḣ above is negative on a neighborhood of 0.
To check that we have designed a stabilizing controller for the linear plant we need only check that the eigenvalues of the matrix MT + T^T M are all negative. However, in this section we extend our method to nonlinear plants and show how to verify the stability in this case.
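A minimal NumPy sketch of this eigenvalue check, assuming the 6 x 6 closed-loop Jacobian T and the 36-vector g are already in hand (all names here are illustrative):

    import numpy as np

    def lyapunov_check(T, g):
        # M = G G^T is symmetric positive semidefinite by construction;
        # stability requires all eigenvalues of T^T M + M T to be negative.
        G = g.reshape(6, 6)
        M = G @ G.T
        S = T.T @ M + M @ T          # symmetric, so eigvalsh applies
        eigs = np.linalg.eigvalsh(S)
        return bool(np.all(eigs < 0.0)), eigs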
Algorithm for stabilization of nonlinear plants:
1. Stabilization of Jacobian at the equilibrium is done first and proceeds as in the
case of linear plants. (Here we assume that the nonlinear plant has an equilibrium
and we stabilize around this equilibrium).
2. After obtaining some open region of stability around the equilibrium as in part
1 we select points at random lying on concentric expanding spheres around this
stable equilibrium and adjust the weights of the neural net to achieve negativity of the derivative of the Lyapunov function. The Lyapunov function M is also given as a neural net.
Verification of stability of a given region for the given nonlinear plant and stabilizing neural net: Given the candidate for the stability region U and the Lyapunov function h we can derive an upper bound K on the partial derivatives of ḣ with respect to the state vector:

|∂ḣ/∂w| ≤ K   (5)

where w is an arbitrary point in U. If for every point w ∈ U we have ḣ(w) < -μ, μ > 0, then, as follows from Taylor's formula for multivariable functions, in an open ball of radius μ/K around such a point the derivative ḣ is negative. If we cover U with balls of radius μ/K then ḣ is negative on U, ensuring stability. This can also give us an estimate of the number of training points needed to achieve stability.
Definition. Given a differential equation ẋ = f(x), x ∈ R^n, a point x_0 is an equilibrium of order k, k ≤ n, if f(x_0) = 0 and the Jacobian ∂f(x_0)/∂x at x_0 is nondegenerate and has exactly k eigenvalues with positive real parts. By a stable manifold of x_0 we mean the union of all trajectories converging to x_0 as t → ∞.
Definition. Consider the dynamical system ẇ = f(w) described by the neural network-plant differential equations and having the Lyapunov function h. Let U be the maximal set such that U is connected, contains the origin of the state-space, h is positive on U, and ḣ is negative on U. Then U is called the maximal stability region.
Theorem. In the notation of the previous two definitions, let ẇ = f(w) be a differential equation describing the plant-neural network dynamical system and let U be the maximal stability region for the Lyapunov function h. Then
1. If U is bounded then on the boundary of U there are equilibria of all orders k, 0 < k ≤ n.
2. Under generic assumptions the boundary of U is the union of stable manifolds of
equilibrium points lying on the boundary.
3. Every trajectory on the boundary of U converges to an equilibrium point as t → ∞. If U is bounded then the same is true for t → -∞.
4. The point on the boundary where the minimum of h is achieved is an equilibrium
point of order 1.
Conclusions
We have successfully demonstrated how the problem of stabilization of plants can be
reduced to a problem of approximation of functions. Neural networks have been shown
to have approximating and interpolating properties. This approach is good for linear
and nonlinear plants. Software has been generated to demonstrate this approach.
Directions for further research:
1. Generate faster software to utilize parallel processing features.
2. Improve algorithms to increase the success rate for ill-conditioned plants such as the one considered. (The convergence is already successful for a random linear plant all the time.)
3. Generate efficient software for nonlinear plants stabilization and tracking.
4. Study regions of stability and phase portraits of plant-neural controller and gra-
dient descent learning differential equations.
5. Develop techniques for pole placing of linearized version of plant-neural controller
system and of shaping the stability region.
Acknowledgements
The substantial contributions to this work by Dr. Henry Waites and the help of Mark Whorton are acknowledged and appreciated.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
ACCESS TO SPACE STUDIES
Prepared by:
James A. Martin, Sc.D.
Academic Rank:
Associate Professor
Institution and
Department:
The University of Alabama,
Aerospace Engineering Department
MSFC Colleague:
Robert F. Nixon
NASA/MSFC:
Office:
Group:
Space Transportation and Exploration
Upper Stages
XXIX
Access to Space Studies
James A. Martin
University of Alabama
Introduction
The National Aeronautics and Space Administration is currently considering
possible directions in Earth-to-orbit vehicle development under a study called
"Access to Space." This agency-wide study is considering commercial launch
vehicles, human transportation, space station logistics, and other space
transportation requirements over the next 40 years. Three options are being
considered for human transportation: continued use of the Space Shuttle,
development of a small personnel carrier (personnel logistics system, PLS), or
development of an advanced vehicle such as a single-stage-to-orbit (SSTO).
Several studies related to the overall Access to Space study are reported in this
document.
Hydrogen Upper Stage for Delta
The Delta commercial launch vehicle has had a long and successful life. One
of the possibilities for extending the capability of the Delta is to replace the storable
second stage and solid third stage with a hydrogen/oxygen stage. A study was
conducted to show the payload potential of such a stage with several engine
options. The first step in the study was executing the trajectory optimization
program Opguid to find the burnout weight for each engine design point. The inert
weight of the stage was calculated from weight estimating relationships developed
for such a stage, and the payload was found by subtracting the inert weight from
the burnout weight. Several propellant weight cases were computed for each
engine case.
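The payload bookkeeping in these runs is simple differencing; a sketch with hypothetical numbers standing in for the Opguid burnout weight and the weight-estimating-relationship output:

    def payload(burnout_weight_lb, inert_weight_lb):
        # Payload = trajectory burnout weight minus stage inert weight.
        return burnout_weight_lb - inert_weight_lb

    burnout = 12100.0  # lb, hypothetical Opguid result for one engine case
    inert   = 6500.0   # lb, typical of the weight estimating relationships
    print(payload(burnout, inert))  # about 5600 lb, the order quoted below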
The RL10C, which has not been developed but is a derivative of an existing RL10 engine, was analyzed at several thrust levels and exit areas. The RL10A4, which is an existing engine, and an advanced expander were analyzed. A new engine concept called the Advanced Technology Low Cost (ATLC) engine, under consideration for development, was analyzed. It would have a low-pressure staged
combustion cycle and an uncooled chamber. The results are shown in the
enclosed figure. Because the thrust level of the RL10C could be chosen at the
optimum value for this application, it provided a somewhat better payload than the
other candidate engines.
The results of this study indicate that a hydrogen upper stage can provide a
payload increase from 4010 lb, the capability of the existing Delta, to about 5600 lb.
The inert weight calculations used in the analysis assume a stage with self-
supporting tanks with convex bulkheads. The inert weight is approximately 6500
lb. An existing stage, Centaur, has pressure-stabilized tanks and a concave lower
hydrogen tank bulkhead. With these features, it has an inert weight of about 4300
lb. Using such a stage would increase the payload to about 6800 lb, but the costs
may be greater.
Advanced SSTO Engines
A current contract with Rocketdyne is considering advanced hydrogen engines
for the SSTO vehicle option. After considering previous engine studies for SSTO
vehicles, several engine designs were selected for analysis. This analysis will
include engine calculations by Rocketdyne and vehicle analysis by NASA. Vehicle
calculations at The University of Alabama may also be included. The engines will
include full-flow staged-combustion engines, hybrid expander engines, and
SSME-type engines. Mixture ratios of 6 and 7 will be included. Initial results
indicate that the full-flow engine can reduce the vehicle dry mass from 232,000 lb
to 159,000 lb.
Expendable Hydrogen Tank SSTO
The fully reusable SSTO being considered should have considerably lower
recurring costs than the Space Shuttle or PLS options. There has been an
assumption that a fully reusable vehicle would have the lowest recurring costs. To
explore this assumption, a concept has been studied with an expendable hydrogen
tank. Initial vehicle results indicate that the vehicle gross weight drops from about
2.4 million lb for the fully reusable vehicle to under 1.8 million lb with the
expendable hydrogen tank. This is because returning the hydrogen tank for reuse
increases the size of the vehicle, increasing the thermal protection weight, the
wings, landing gear, etc. The number of SSME's is reduced from 7 to 5. The
development, production, spares, and engine costs are therefore reduced. This
reduction is balanced by the added cost of the expended tank which must be
replaced each flight. Cost estimates show that the net result is essentially no
change in the total costs, but the early costs are reduced, which would provide a
net savings if the time value of money is included in the analysis.
Orbiter instead of PLS
The PLS option studies have discovered a vehicle concept with some promise.
It uses a reusable propulsion and avionics (PA) module with expendable tanks.
Each PA module has two SSME's. With three PA modules, a 65,000 lb payload
can be launched to the space station. Six flights of this cargo vehicle per year can
provide the space station logistics. The PLS can be launched to the space station
on the same vehicle. The recurring costs are estimated to be significantly lower
than the current Space Shuttle costs, but the development costs that must be
invested to get to this system are quite high. In an attempt to reduce these costs, a
concept was developed that does not require the PLS development. The Space
Shuttle orbiter is used with a small oxygen tank in the payload bay and a small set
of expendable hydrogen tanks. This orbiter and small tank set is launched with the
vehicle with three PA modules. Weight estimates and trajectory results indicate that
a 21,700 lb payload can be delivered to the space station.
Russian Engine PA Module
There is a possibility that Russian engines could be used in a new launch
vehicle. The existing RD-170 engine has been proven to be reliable and has
excellent performance. A concept was developed which would use a PA module to
reuse one RD-170 and another PA module to reuse two SSME's. This concept
would have more payload than the concept with three PA modules with two
SSME's each, and the tank would be smaller because most of the fuel would be
kerosene rather than hydrogen. One alternative to this concept is to use two RD-180 engines, each in a PA module, instead of one RD-170. The two SSME's would still be used. The RD-180 is essentially half of an RD-170. Another alternative is to use three PA modules, each with one RD-701 engine. The RD-701 is a tripropellant derivative of the RD-170. In this alternative, no SSME's would be
needed.
Figure 1. Engine Comparisons [plot not reproduced: payload (lb, axis 4400 to 5800) versus propellant weight (lb, axis 20,000 to 60,000) for the candidate engines]
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA
Flux Measurements Using the BATSE
Spectroscopic Detectors
Prepared by:
Academic Rank:
Bernard McNamara
Professor
Institution and
Department:
New Mexico State University
Astronomy Department
NASA/MSFC
Laboratory:
Division:
Branch:
Space Sciences Laboratory
High Energy Astrophysics
Gamma-Ray and Cosmic Ray
MSFC Colleague:
B. A. Harmon
XXX
Introduction
Among the Compton Gamma-Ray Observatory instruments, the BATSE Spectroscopic
Detectors (SD) have the distinction of being able to detect photons of energies less than
about 20 keV. This is an interesting energy range for the examination of low mass X-ray
binaries (LMXBs). In fact, Sco X-1, the prototype LMXB, is easily seen even in the
raw BATSE spectroscopic data. The all-sky coverage afforded by these detectors offers
a unique opportunity to monitor this source over time periods never before possible.
The aim of this investigation was to test a number of ways in which both continuous and
discrete flux measurements can be obtained using the BATSE spectroscopic datasets.
An instrumental description of an SD can be found in the Compton Workshop of April 1989 (p. 2-39); this report will deal only with methods which can be used to analyze its datasets. Many of the items discussed below, particularly in regard to the earth occulta-
tion technique, have been developed, refined, and applied by the BATSE team to the
reduction of BATSE LAD data. Code written as part of this project utilizes portions of
that work. The following discussion will first address issues related to the reduction of
SD datasets using the earth occultation technique. It will then discuss methods for the
recovery of the flux history of strong sources while they are above the earth's limb. The
report will conclude with recommended reduction procedures.
SD Fluxes Measured Using the Earth Occultation Technique
The earth occultation technique utilizes two source flux measurements per orbit: one
obtained shortly after the source rises above the earth's limb and one shortly before the
source sets behind the earth's limb. These fluxes are subtracted from background val-
ues taken near these times but when the source is behind the earth's limb. Since the
background changes in a continuous fashion, a detailed background model is not needed
to obtain source flux measurements using this method. This is the strongest positive
attribute of the earth occultation method. The actual details of how the source and
background fluxes are measured depend upon such things as the source strength, the
presence of other sources, and the time over which the measurement takes place. These
are discussed below.
Item 1): The source strength
The main complicating factor here occurs when the source is strong and exhibits random, short period variations. Such sources are not common in SD datasets. In fact, only one celestial source, Sco X-1, has been observed to show this type of activity. In this case
the slope of the least squares line on either side of the occultation step is normally quite
different. If the same slope is assumed, an inaccurate estimate of the step size can result.
In addition, if the source variability timescale is less than but comparable to the interval
being fit, then the least squares slope and intercept can be influenced by activity some-
what removed in time from the step. This will also result in a difference in the source
strength, depending on the integration time.
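The two-sided fit can be sketched in a few lines; this illustrates the idea rather than the BATSE team's code, and the array names are assumptions.

    import numpy as np

    def occultation_step(t, flux, t_step):
        # Fit independent straight lines to the flux before and after the
        # occultation time and difference them at t_step, letting the two
        # sides carry different slopes (cf. Item 1).
        pre, post = t < t_step, t >= t_step
        p_pre  = np.polyfit(t[pre],  flux[pre],  1)
        p_post = np.polyfit(t[post], flux[post], 1)
        return np.polyval(p_post, t_step) - np.polyval(p_pre, t_step)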
Item 2): Least squares based estimates of the background flux
The background flux reflects an approximate sinusoidal pattern governed by the amount
of earth blockage as seen by the detector. Over time intervals exceeding a few hundred
seconds the background flux can change in a nonlinear fashion. Incorporating a quadratic
term into the background model to account for this departure can produce a better fit.
An undesirable effect of this is that as one includes times further and further from the
step, flux changes which occur close to the step have less and less of an impact on the
model. This makes the estimate of the background flux located at the step suspect.
A second problem is that as a wider time interval is included, other rising and setting
sources may affect the background fit in undesirable ways.
Item 3): Using background fluxes measured close to the step
This might appear to solve the problem raised above. Unfortunately it also has problems.
Generally the background level is not constant with time. One must therefore somehow
correct the computed background flux to the value it would have had at the time when
the (step + background) flux is measured. If the time interval over which the background
is measured is short, the resultant flux level will be sensitive to noise fluctuations since it
will be based on relatively few points.
Item 4): Dealing with very noisy data
The data collected using a gain setting of 8X is normally quite noisy compared to that
at 4X. To lessen the impact of noise, two types of filters can be employed. The first
removes large cosmic ray spikes from the data. This can be accomplished by passing the
data through a filter which removes datapoints that deviate from prior points by a user-
defined number of standard deviations. A second filter which removes high frequency
noise (such as a Butterworth filter) can then be applied. This procedure was tested with
BATSE SD data and appears to work quite well. The selection of filter parameters involves
a subjective decision, but reasonable variations in their values only change the step sizes
by small amounts, i.e., 1-2 counts/sec.
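The two-stage filtering just described can be summarized in a short sketch. The C fragment below is illustrative only, not the BATSE reduction code; the noise level sigma, the clipping threshold nsigma, and the smoothing constant alpha are hypothetical tuning parameters, and a first-order recursive filter stands in for the Butterworth filter mentioned above.

    #include <math.h>

    /* Pass 1: replace any point deviating from the previous accepted
       point by more than nsigma standard deviations (cosmic ray spikes). */
    void despike(double *x, int n, double sigma, double nsigma)
    {
        for (int i = 1; i < n; i++)
            if (fabs(x[i] - x[i - 1]) > nsigma * sigma)
                x[i] = x[i - 1];
    }

    /* Pass 2: first-order recursive low-pass filter; a smaller alpha
       removes more of the high frequency noise. */
    void lowpass(double *x, int n, double alpha)
    {
        for (int i = 1; i < n; i++)
            x[i] = alpha * x[i] + (1.0 - alpha) * x[i - 1];
    }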
SD Light Curves Obtained During a Single Orbit
In many cases it is desirable to obtain the entire light curve of a source while it is above
the earth's limb. To do this one must have a model which accounts for the background
during this entire time period. Two models have been developed and tested which, at
least to first order, allow this to be done. The first fits the background to a second order
polynomial in terms of the cosine of the earth angle. The second model attempts to re-
move the background by subtracting a nearby orbit which includes not only background
but also the primary and/or other sources. This latter model assumes that the secondary
sources have an identical level of activity in the reference and program orbit. These two
models are more fully described below.
Model 1: Background Removal Using a Polynomial Earth Angle Fit
For this model to work one must have background data from a substantial portion of
an orbit. Lack of TDRSS communication, SAA passages, and the short term decay of
radioactive isotopes all combine to make this condition difficult to meet. It is also not in-
tended to track subtle changes in the background. Increasing the order of the polynomial
to account for these changes generally results in a poorer overall background fit. A much
more detailed, physically based, background model is currently being developed by the
BATSE team but is not yet available. A second complicating feature which this model
does not address is the presence of multiple sources. In the case of Sco X-1 the galactic
center region can rise and set shortly after Sco X-1. Obviously, when this situation occurs,
results based upon this simple model will be incorrect.
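As an illustration, the following C sketch fits a background series b to a second order polynomial in u = cos(earth angle) by solving the 3x3 normal equations with Cramer's rule. It is a minimal stand-in for the actual reduction code, and the function name and interface are assumptions.

    #include <math.h>
    #include <string.h>

    static double det3(double m[3][3])
    {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    }

    /* Fit b[i] ~ c[0] + c[1]*u[i] + c[2]*u[i]^2, with u = cos(earth angle).
       Solves the 3x3 normal equations; returns 0 on success. */
    int fit_background(const double *u, const double *b, int n, double c[3])
    {
        double S[5] = {0}, T[3] = {0};   /* S[k] = sum u^k, T[k] = sum b*u^k */
        for (int i = 0; i < n; i++) {
            double p = 1.0;
            for (int k = 0; k < 5; k++) {
                S[k] += p;
                if (k < 3) T[k] += b[i] * p;
                p *= u[i];
            }
        }
        double A[3][3], Aj[3][3];
        for (int r = 0; r < 3; r++)
            for (int q = 0; q < 3; q++)
                A[r][q] = S[r + q];
        double d = det3(A);
        if (fabs(d) < 1e-12) return -1;  /* degenerate geometry, no fit */
        for (int j = 0; j < 3; j++) {    /* Cramer's rule, column j */
            memcpy(Aj, A, sizeof Aj);
            for (int r = 0; r < 3; r++) Aj[r][j] = T[r];
            c[j] = det3(Aj) / d;
        }
        return 0;
    }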
Model 2: Subtraction of an Inactive Nearby Orbit
This technique assumes that orbits exist in which the presence of a source can be treated
as a constant additive term to the background. Orbits which appear to meet this con-
dition occur often in the SD datasets of Sco X-1. This type of behavior is associated
with Sco X-1 when it is located on its normal branch in a two-color x-ray diagram.
Even when this source is active, orbits which show a constant level of activity are not
uncommon. The equality of step sizes at earth rise and set can be used to help locate
orbits of constant activity as can a visual inspection of a flux versus time plot of the
data. The subtraction of two orbits which meet these criteria can also be used to re-
veal subtle, longer term variations that are difficult to see in the unprocessed data. The
advantage of this technique is that other sources, which exhibit constant emission over
a few orbits, are subtracted out of the signal. The disadvantage of the technique is
that slight trends may be introduced into the orbit of interest from the reference orbit.
In cases where high precision is needed, the presence of these trends can be determined
by differencing the reference orbit to another nearby orbit which meets the above criteria.
Adopted Analysis Techniques
Earth Occultation Method
A compromise between the various issues raised above which appears to work well is to
model the background with a linear least squares fit extending 100-150 seconds prior to
the step. Longer time periods run the danger of 1) incorporating other sources, 2) violat-
ing the linear assumption, and 3) not adequately modeling the region close to the step.
For measurement of the source, two different approaches are used. The first measures
the average (source + background) flux over a time period of 40-60 seconds immediately
following (at earth rise) or preceding (at earth set) the step. The 60 sec interval yields a
slightly smaller step error.
The second approach models this region with a linear least squares fit. In both cases
the background flux is extrapolated to the time of the step. If the source is relatively
inactive, both methods give, to within the step error, identical results. If the source is
active, the average value is believed to give a better value of the instantaneous step size.
The computer programs written to perform these tasks were tested by running a LAD
dataset and then comparing the step sizes with those obtained with the BATSE LAD
earth occultation software. The LAD step sizes from both programs were found to be in
agreement.
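As a concrete sketch of this procedure (the function names and calling interface are assumptions, not the project code), the C fragment below fits a least squares line to the pre-step background window, averages the flux over the source window, and differences the two at the step time.

    /* Least squares line y = a + b*t over n points. */
    static void linfit(const double *t, const double *y, int n,
                       double *a, double *b)
    {
        double st = 0, sy = 0, stt = 0, sty = 0;
        for (int i = 0; i < n; i++) {
            st += t[i]; sy += y[i];
            stt += t[i] * t[i]; sty += t[i] * y[i];
        }
        *b = (n * sty - st * sy) / (n * stt - st * st);
        *a = (sy - *b * st) / n;
    }

    /* Step size: mean (source + background) flux over the source window
       minus the background line extrapolated to the step time t_step. */
    double step_size(const double *tb, const double *fb, int nb,  /* background */
                     const double *fs, int ns,                    /* source     */
                     double t_step)
    {
        double a, b, mean = 0.0;
        linfit(tb, fb, nb, &a, &b);              /* background model */
        for (int i = 0; i < ns; i++) mean += fs[i];
        mean /= ns;
        return mean - (a + b * t_step);          /* extrapolate to the step */
    }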
SD datasets collected using gain settings near 8X are very noisy. A significant improve-
ment in the value of a step size can result with the aid of the filtering techniques mentioned
earlier. The application of these filters may be a necessary condition in order to obtain
meaningful results with a gain setting of 8X. Depending on the source energy distribution
and strength, some additional higher energy information may be available from channel
2 data when the gain is set at either 4X or 8X. The sensitivity of an SD increases by a
factor of about 2.5 from 16 to 40 keV. In the case of Sco X-1 this helps compensate for
the fact that the flux emitted by this source drops off steeply above 10 keV.
Orbital Light Curves
At the present time I would recommend the subtraction of a quiescent orbit from a nearby
orbit to obtain an orbital light curve. The main assumption inherent in this technique is
that occasionally one can find orbits where the source emission is relatively constant. A
second but less severe assumption is that the earth modulated x-ray background is also
repeatable over at least a few orbits. The former assumption can be tested by viewing
the raw orbital data and by comparing the step sizes at earth rise and set for each orbit.
If the source is indeed stable during an orbit, its rise and set step sizes should be equal.
In the case of Sco X-1, periods of activity are easily distinguishable even in the raw data.
The assumption dealing with the repeatable nature of the background was tested by
computing its least squares determined slope near a Sco X-1 step during the course of a
day. The slope was found to be unchanged over time intervals of approximately 30,000
sec. This implies that the earth modulated x-ray background changes slowly, over time
frames of many hours. A significant advantage that the orbital subtraction model enjoys
over that discussed above is that it automatically accounts for other sources that have
constant emission over this time period.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
INTEGRATION AND EVALUATION OF A SIMULATOR DESIGNED TO BE USED
WITHIN A DYNAMIC PROTOTYPING ENVIRONMENT
Prepared by: Loretta A. Moore
Academic Rank: Assistant Professor
Institution and Department: Auburn University, Department of Computer Science
MSFC Colleague: Joseph P. Hale
NASA/MSFC:
Laboratory: Mission Operations
Division: Operations Engineering
Branch: Crew Systems Engineering
Introduction
The Human Computer Interface (HCI) prototyping environment is designed to allow
developers to rapidly prototype systems so that the interface and functionality of a system
can be evaluated and iteratively refined early in the development process. This keeps
development costs down by modifying the interface during the requirements definition phase,
thus minimizing changes that need to be made during and after flight code development.
Problems occur within a system when the user interface is not adequately developed and
when designers and developers have an incomplete understanding of the system requirements.
A process has been developed for prototyping on-board payload displays for Space
Station Freedom (Moore, 1992). This prototyping process consists of five phases:
identification of known requirements, analysis of the requirements, development of a formal
design representation and specification, development of the prototype, and evaluation of the
prototype. The actual development of the prototype involves prototyping the displays,
developing a low fidelity simulator, building of an interface (or communication) between the
displays and the simulator, integration of these components, and testing to ensure that the
interface does what the developer expects.
This research integrates and evaluates a software tool which has been developed to
serve as a simulator within the prototyping environment. The tool is being evaluated to
determine whether or not it meets the basic requirements which are needed for a low fidelity
simulator within this environment. In order to evaluate the architecture and its components, a
human computer interface for and a simulator of an automobile have been developed as a
prototype. The individual components (i.e., the interface and simulator) have been developed
(Moore, 1993), and the current research was designed to integrate and test the complete
working system within the prototyping environment. The following sections will describe
the architecture and components of the rapid prototyping environment, the development of a
system to assess the environment, and the integration and evaluation of PERCNET.
Architecture of the Environment
The architecture for building prototypes of systems consists of four major
components: an interface development tool, a test and evaluation simulator development tool,
a dynamic, interactive interface which links the interface and the simulator, and an embedded
evaluation capability. The interface development tool allows the designer to dynamically
develop graphical displays. The test and evaluation simulator development tool will allow
the functionality of the system to be implemented and will act as a driver for the displays. The
dynamic, interactive interface will handle communication between the HCI prototyping tool
and the simulation environment. This component consists of a server which sends and
receives messages between the other components. The embedded evaluation capability will
collect data while the user is interacting with the system and will evaluate the adequacy of an
HCI based on a user's performance.
Human Computer Interface Development Tool. Sammi by Kinesix has been
chosen as the Human Computer Interface (HCI) development tool. Sammi is a graphical user
interface environment which allows user interfaces to be built which can manage networked
information graphically. Sammi combines the functions of a graphical user interface with full
network communication support. Within Sammi the user interface and the networked data
access can be defined independently of the actual data source or application. This will allow
an interface developed under Sammi and communicating with the low fidelity simulator to
later be connected to a high fidelity simulator such as those in the Payload Crew Training
Complex (PCTC), and later to the actual on-board flight software. Sammi has a distributed
architecture which means that the user interface and the application code are separate, that is,
the user interface is no longer embedded within the application code. With this separation
users can easily create and modify the human computer interface without affecting the
data source, and vice versa. This will allow concurrent development of the application and
the interface. Sammi developed applications can use remote procedure calls to access
information from a variety of nodes and servers on an Ethernet network.
Simulator Development Tool. A simulator is a computer program that models a
system or process in order to enable people to study it. The simulator development tool
should provide the capability to develop a low fidelity simulation of a system or process.
The development of a simulator has two important functions. First, the simulator helps the
developer to identify and define basic system requirements. Second, potential users can
evaluate both the look (in terms of the screen layout, color, objects, etc.) and feel (in terms of
operations and actions which need to be performed) of a system. During the requirements
definition phase, a high-fidelity simulation of the system will not yet have been developed, so
it is important to build a low fidelity simulator, so that the iterative cycle of refining the
human computer interface based upon a user's interactions can proceed.
For a piece of software to function as a simulator within this environment, there are
several requirements which must be met in addition to it being a simulation tool. These
requirements include: the ability to communicate with UNIX processes using the TCP/IP
protocol; real-time simulation execution (the execution engine must be tied to a real-time
clock to assure that simulation timing and data collection are accurate); an option for a
variable communications mode during execution (i.e., with and without external
communication); real-time communication with Sammi on a separate platform, via Ethernet;
the ability to receive data from Sammi to dynamically control scenario events, modify
blackboard variables, trigger scenario events, and track operator actions for post-hoc analysis;
the ability to specify and send commands and data to Sammi; and the ability to receive data
and commands from multiple Sammi applications/stations. The multiple Sammi stations may
include one or more display prototype stations and a monitoring station. A Simulator
Director should be able to send commands to this software from a monitoring station (e.g.,
start simulation, trigger scenario event). Sammi subroutines must be provided that have been
developed for the Simulator-Sammi communication and the software must be tested and
validated with documentation provided. PERCNET is designed to be used as a knowledge-
based graphical simulation environment for modeling and analyzing human-machine tasks.
Within PERCNET, task models are developed using modified Petri nets, a combination of
Petri nets, frames, and rules.
it met the basic requirements which were listed above.
Dynamic, Interactive Interface. This interface will handle communication between
the HCI prototyping tool and the simulation environment during execution. This interface is
a server which has been developed using the Sammi Application Programmer's Interface
(API). It will be a peer-to-peer or asynchronous server which means that messages and
commands can be sent and received both ways between Sammi and the application. Once the
embedded evaluation tool has been developed, the server can also service requests from this
process providing information as to which functions the user has used, errors which have
been made, and so forth.
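Because the actual server is built with the proprietary Sammi Application Programmer's Interface, which is not reproduced here, the following C sketch only illustrates the general idea of a peer-to-peer message server using ordinary BSD sockets: each connected process (for example, a display station, a monitoring station, or the evaluation process) may send a message at any time, and the server relays it to the other peers. The port number and peer limit are hypothetical, and error handling is omitted for brevity.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);                 /* hypothetical port */
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 5);

        int conn[8], nconn = 0, maxfd = lfd;
        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(lfd, &fds);
            for (int i = 0; i < nconn; i++) FD_SET(conn[i], &fds);
            select(maxfd + 1, &fds, NULL, NULL, NULL);

            if (FD_ISSET(lfd, &fds) && nconn < 8) {  /* new peer connects */
                conn[nconn] = accept(lfd, NULL, NULL);
                if (conn[nconn] > maxfd) maxfd = conn[nconn];
                nconn++;
            }
            for (int i = 0; i < nconn; i++) {        /* relay peer messages */
                char buf[256];
                if (!FD_ISSET(conn[i], &fds)) continue;
                ssize_t n = read(conn[i], buf, sizeof buf);
                if (n <= 0) continue;
                for (int j = 0; j < nconn; j++)
                    if (j != i) write(conn[j], buf, (size_t)n);
            }
        }
    }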
Embedded Evaluation Capability. The embedded evaluation capability will include
a capture/playback component and an analysis component. The Capture feature will capture
a user's session and save this information to a log. This log can later be "played" back or
analyzed. The analysis component will analyze the user's session and provide guidelines for
the redesign of the system. Some of the measures will include: frequency of use of specified
features, task completion time, error counts, requests for help, amount of work/errors per
unit time, and response time to different activities and events.
Development of a prototype within the Architecture
In order to assess the individual components of the architecture a system was chosen
and developed (Moore, 1993). The system chosen for pathfinding and initial empirical
evaluation of the project was an automobile. An automobile has sufficient complexity and
subsystems' interdependencies to provide a moderate level of operational workload. Further,
potential subjects in the empirical studies would have a working understanding of an
automobile's functionality, thus minimizing pre-experiment training requirements. There
were four basic tasks which were completed: (1) requirements were developed for the
automobile simulator, (2) the automobile simulator was developed using PERCNET, (3) a
human computer interface for operating the automobile simulator was developed using
Sammi, and (4) evaluation criteria for the operation of the automobile simulator were
developed (Moore, 1993).
Integration and Evaluation of PERCNET
The initial design provided by Perceptronics presented a potential problem. The
dynamic, interactive interface component was designed to be embedded within the
PERCNET process. This would allow Sammi and PERCNET to communicate; however,
there would be no way for other processes to communicate with Sammi and PERCNET.
This was a real problem within our environment because the embedded evaluation capability
would be a separate process that needed to send messages and receive information from this
process during the execution. Once this problem was identified and the importance of this
function was understood, the developers from Perceptronics changed the architecture.
With the changes made in its architecture, PERCNET provides the basic functionality of a
tool which can act as a simulator. However, there are some remaining issues which need
be addressed and major problems with the current system which need to be fixed. One
problem concerns the system running out of swap space and exiting because it can no longer
allocate memory. A minimal configuration of this tool needs to be presented, and the
system should be able to run with this configuration without exiting. A second
problem involves the tendency of the system to core dump, sometimes in response to
specific features (such as trying to use an option from the menu which has not been
implemented or is not currently working) and sometimes randomly. A third problem is that
the screen and the keyboard lock up and the system has to be rebooted. It is not clear
whether the problem can be attributed to PERCNET or to the second screen (a plasma
display) which is attached to the SunSPARC station on which we are running PERCNET.
This item needs further investigation. There have been other problems with several features
of the system, and most of these have been fixed by the developers at Perceptronics.
However, there are several functions of the system which have not yet been evaluated, such
as communication across the network, having multiple Sammi displays communicate with a
single PERCNET model, and being able to start and stop the simulation from the second
Sammi window.
Conclusions and Future Work
PERCNET has been integrated within the human computer interface prototyping
environment; however, it is recommended that further testing and evaluation be conducted
using the automobile interface and simulator to resolve the issues previously discussed.
Most requirements have been met but there needs to be a more thorough evaluation of the
simulator tool and the architecture of the environment.
Following the automobile prototype development, a second system, based on a
Spacelab/Space Station payload should be developed for further evaluation of the
environment. This should involve development of the payload simulator requirements from
existing experiment simulator requirement documents, development of the payload simulator
using PERCNET, development of an interface for the payload using Sammi, and integration
and testing of the payload simulator and interface.
References
Moore, L. A. (1993). Assessment of a Human Computer Interface Prototyping Environment
(Contract No. NAS8-39131). MSFC, AL: NASA, George C. Marshall Space Flight Center.
Moore, L. A. (1992). A Process for Prototyping Onboard Payload Displays for Space
Station Freedom. In M. Freeman, R. Chappell, F. Six, & G. Karr (Eds.), Research Reports -
1992 NASA/ASEE Summer Faculty Fellowship Program (Report No. NASA-CR-184505,
pp. XXXVI.1 - XXXVI.4). MSFC, AL: NASA, George C. Marshall Space Flight Center.
Perceptronics User's Manual. (1992). Woodland Hills, California: Perceptronics, Inc.
Sammi User's Guide. (1992). Houston, Texas: Kinesix Corporation.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
EVALUATION OF OVOSTATIN AND OVOSTATIN ASSAY
Prepared By: Debra M. Moriarity, Ph.D.
Academic Rank: Associate Professor
Institution and Department: University of Alabama in Huntsville, Department of Biological Sciences
MSFC Colleague: Marc L. Pusey, Ph.D.
NASA/MSFC:
Office: ES76
Division: Microgravity Science and Applications
Branch: Biophysics
INTRODUCTION
Ovostatin is a 780,000 MW protein, originally isolated
from chicken egg white, which is active as a protease
inhibitor (1). Structural studies indicate that the protein
is a tetramer of identical subunits of 165,000 MW which can be
separated upon reduction with β-mercaptoethanol. Chicken
ovostatin is an inhibitor of metalloproteases such as
collagenase and thermolysin, and of acid proteases such as
pepsin and rennin (2). Ovostatin isolated from duck eggs (3)
and from crocodile eggs (4) appears to be similar to chicken
egg ovostatin, but with significant differences in structure
and function. Duck ovostatin contains a reactive thiol ester
which is not found in the chicken protein, and duck and
crocodile ovostatin inhibit serine proteases such as trypsin
and chymotrypsin, while chicken ovostatin does not. Electron
microscopy (4,5) of ovostatin indicates that two subunits
associate near the middle of each polypeptide to form a dimer
with four arms. Two of these dimers then associate to produce
a tetramer with eight arms, with the protease binding site
near the center of the molecule. Upon binding of the
protease, a conformational change causes all eight arms to
curl toward the center of the molecule, effectively trapping
the protease and sterically hindering access of the substrates
to its active site. The structural organization and mechanism
of action proposed for ovostatin are nearly identical to those
proposed for α2-macroglobulin, a serum protease inhibitor (6)
which may play an important role in the regulation of proteases
in animal tissues.
Although the general arrangement of subunits appears to
be the same for all ovostatins studied, some differences have
been observed, with chicken ovostatin more closely resembling
reptilian ovostatin than the duck protein. This is a
surprising result, given the evolutionary relatedness of
chickens and ducks. It is possible that the difference in
structures may be due to deformed subunit arrangements which
occur during the processing and fixing necessary for electron
microscopy (4). Examination of the native structure of these
proteins using X-ray crystallography would help clarify these
discrepancies.
BODY
Obviously, it is necessary to have good quality crystals
of ovostatin if x-ray crystallography is to be performed. Such
crystals could also be used as a model system to study and
understand numerous aspects of crystal growth for such a large
protein. For these reasons, attempts have been made at MSFC
to prepare crystals of chicken egg white ovostatin. Ovostatin
has been purified using slight modifications of published
procedures. SDS-gel electrophoresis under reducing conditions
indicated a large band of MW 165,000 and a smaller band at MW
88,000. This smaller band has been reported to be a fragment
produced by action of the bound protease on the ovostatin (7)
and has also been found to occur due to autolytic degradation
of duck ovostatin. Such autolytic degradation had not
previously been observed for chicken ovostatin (7). Attempts
to crystallize the ovostatin preparations have had limited
success, with reasonable size crystals only occurring on a few
occasions. For this reason, it was deemed necessary to
investigate the protease inhibitory activity of the ovostatin
preparations to determine if native, active molecules were in
fact being purified.
One assay for ovostatin employs the metalloprotease
thermolysin and uses azocasein as its substrate in a reaction
carried out at 23°C. Nagase et al. (1) have reported that
using this assay, they have observed a 1:1 stoichiometric
relationship between thermolysin and ovostatin. Thus, when
there is a molar ratio of ovostatin:thermolysin of 0.5, one
should observe 50% inhibition of the protease. Initial trials
using this assay at MSFC resulted in absorbance differences
between the blanks and the positive controls of only 0.3 - 0.6
absorbance units. Also, the azocasein substrate gave higher
readings with increasing storage time at 4°C. Hemoglobin was
tried as an alternate substrate for the thermolysin, but was
not a good substrate for the enzyme. After several
preparations of new azocasein solutions it was found that
storing the azocasein solution at -20°C gave more stable, low
blank values for the assay. Increasing the assay temperature
from 23°C to 37°C increased the activity of the thermolysin
and hence, the absorbance readings, as expected. However, it
was observed that ovostatin inhibition of thermolysin was
decreased at molar ratios of ovostatin:thermolysin less than
1.0. The observed temperature dependence of the assay is
shown in Figure 1. Since ovostatin is expected to be a
physiologically important inhibitor of bacterial proteases in
the egg at the normal chicken body temperature of 42°C, these
results are curious and warrant further investigation.
Figure 1. Ovostatin Inhibition of Thermolysin - Temperature Dependence
[Plot of inhibition versus ovostatin:thermolysin (O/T) molar ratio (0.5 and 1.0) at 25°C, 37°C, and 42°C]
Several ovostatin preparations were assayed and found to
yield less than a 50% inhibition of the thermolysin when used
at a 0.5 molar ratio of ovostatin:thermolysin. These
preparations were analyzed by SDS-polyacrylamide
electrophoresis, and all but one appeared to be quite pure,
except that the 88K MW degradation product was visible in
nearly all the lyophilized, stored preparations. Assay and
gel electrophoresis were then performed on freshly prepared
ovostatin at several key steps during the purification
procedure. The preparation did not have much of the 88K band
present and seemed to be nominally active through the ion
exchange column portion of the isolation procedure. At this
point it was also observed that the ovostatin solutions stored
at 4°C appeared to lose activity with time. Thus,
preparations of ovostatin that required more than 5-6 days to
complete could be becoming less active during the isolation.
Many of the blood coagulation factors are proteases and
it was of interest to determine whether ovostatin might
inhibit one or more of these. Thrombin, which acts near the
end of the blood clotting cascade, is readily available
commercially, so ovostatin was examined for its ability to
inhibit the action of thrombin on fibrinogen and the
subsequent formation of a fibrin clot. Assays at 37 °C with up
to a 2 fold molar excess of ovostatin over thrombin did not
indicate any inhibition. Native polyacrylamide gels of
ovostatin incubated with thermolysin or with thrombin
indicated that the thermolysin bound to ovostatin and changed
its electrophoretic mobility, but the thrombin did not.
Assays of ovostatin performed at both high (1.0 mg/ml)
and low (0.025 mg/ml) concentrations gave conflicting and
irreproducible results. It was thought that perhaps there was
either an as yet unreported requirement for some cation for
ovostatin activity, or that some cation could inactivate the
ovostatin. To test this hypothesis, ovostatin was incubated
with 1 mM EDTA prior to incubating it with thermolysin. The
results of this experiment indicated that this treatment may
have produced a slight increase in the activity of the
ovostatin when assayed at a molar ratio of 0.5. However,
incubation of ovostatin with 5 mM EDTA resulted in the
opposite effect, decreasing the ovostatin activity at a 0.5
molar ratio.
Several attempts were made to crystallize different
ovostatin preparations that had been stored lyophilized at
-20°C, but none were successful.
CONCLUSIONS
As is often the case in science, these results have
raised more questions than they have answered. While it
appears that ovostatin prepared at MSFC has some inhibitory
activity towards thermolysin, it may not have optimal
activity. This may or may not be the reason for the
difficulty in crystallizing these preparations. Although the
crystallization problem was not solved, several important
observations were made:
1) Azocasein solutions must be stored at -20°C.
2) Thermolysin solutions should be made up as concentrated
solutions in 50% glycerol, stored at -20°C and diluted
to the appropriate concentration immediately before use.
3) Hemoglobin is not a good substrate for this assay.
4) Chicken ovostatin does not inhibit thrombin.
5) The inhibition of thermolysin by ovostatin is temperature
dependent at low ovostatin:thermolysin ratios, and
decreases as one approaches physiological temperatures.
6) It appears that there are as yet undefined variables in
the purification of active chicken ovostatin.
More work needs to be done to identify the reason for the
appearance of the 88K MW band in the ovostatin preparations
and to discern the appropriate conditions to produce ovostatin
crystals.
REFERENCES
1. Nagase, H. and Harris, E.D., Jr. (1983) J. Biol. Chem. 258,
7481-7489
2. Kato, A., Kanemitsu, T. and Kobayashi, K. (1991) J. Agric.
Food Chem. 39, 41-43
3. Nagase, H., Harris, E.D., Jr. and Brew, K. (1986) J. Biol.
Chem. 261, 1421-1426
4. Ikai, A., Kikuchi, M. and Nishigai, M. (1990) J. Biol.
Chem. 265, 8280-8284
5. Ruben, G.C., Harris, E.D., Jr. and Nagase, H. (1988) J.
Biol. Chem. 263, 2861-2869
6. Sjoberg, B. and Sarolta, P. (1989) J. Biol. Chem. 264,
14686-14690
7. Nagase, H. and Harris, E.D., Jr. (1983) J. Biol. Chem. 258,
7490-7498
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
EVALUATION OF COMPUTER-AIDED INSTRUCTION TECHNIQUES
FOR THE CREW INTERFACE COORDINATOR POSITION
Prepared By: Gary P. Moynihan
Academic Rank: Assistant Professor
Institution and Department: The University of Alabama, Department of Industrial Engineering
MSFC Colleague: Beth Skidmore
NASA/MSFC: Mission Operations Laboratory
Division: Crew Training and Support
The Crew Interface Coordinator (CIC) is responsible for real-time voice
and procedural communication between the payload crew on the orbiter
and the payload operations team on the ground. This function is dedicated
to science activities and operations, and may also include some
responsibilities for crew training. CIC training at Marshall Space Flight
Center (MSFC) consists of mission-independent training, mission
simulations, and line-organization training. As identified by Schneider, the
program provides very good generic training; however, position-specific
training may be obtained in a very unstructured way. (4) A computer-
based training system, identified as Mac CIC, is currently under
development to address this issue. Mac CIC is intended to provide an
intermediate level of training in order to prepare the CIC for the more
intensive mission simulations. Although originally intended as an
Intelligent Tutoring System, Mac CIC currently exists as a hypertext-based
application. The objectives of this research are to evaluate the current
system and to provide both recommendations and a detailed plan for Mac
CIC's evolution into an Intelligent Tutoring System.
The goal of the Mac CIC system is to provide training on integrating
CIC-specific knowledge and skills in an interactive environment. The
system is executed on a Macintosh Ilci microcomputer and utilizes text,
graphics, video and digitized audio to present information to the user. The
initial system design identified the following major modules: (4)
1) "Teach Me About" - provides a library of CIC-specific knowledge, ■
including: Payload Operations Overview, Communications, Mission
Timeline, Documentation and CIC Overview,
2) "Skills" (also referred to as "Practice") - allows the trainee to practice
CIC-specific skills one topic at a time. It is intended to provide tutoring
capabilities in addition to conventional Question/ Answer drills.
3) "Scenarios" - provide a means for the CIC trainee to practice making
decisions which require integrated knowledge and skills.
At the time of this writing, the major portion of the Teach Me About
module has been constructed using SuperCard (Version 1.6). Little
programming has been done regarding the remaining modules. The initial
design envisioned the utilization of the NEXPERT OBJECT expert system
shell as a platform for the Skills and Scenario modules. NEXPERT would be
linked to the SuperCard application via its HyperBridge facility.
According to Dumslaff and Ebert, the three primary methodologies of
computer-based training systems are traditional computer-assisted
instruction, hypertext and intelligent tutors. (1) A large variety of
hypertext-based training systems have been developed, and the present
trend appears to favor this approach over the highly structured computer-
assisted instruction. (2) The decision to utilize SuperCard as the basis for
the Teach Me About module is consistent with current work in the field.
Intelligent tutoring systems differ from the other methods of
computer-based instruction by incorporating artificial intelligence
techniques. The utilization of expert systems is a well-established means of
doing this. (3) Although symbolic languages (e.g. LISP, PROLOG) or even
conventional languages (e.g. C, PASCAL) may be used to develop an expert
system, the selection of an expert system shell for the Mac CIC project was
a correct decision. Expert system shells are pre-packaged inferencing
mechanisms with auxiliary features so as to facilitate systems
development. Essentially, they are expert systems without the domain-
specific knowledgebase. The advantage of this approach is that it allows
the project team to focus effort on establishing the knowledgebase, and not
on constructing supporting software facilities. NEXPERT OBJECT is a
multiparadigm expert system shell capable of using both objects and rules.
It also provides both forward and backward search mechanisms along its
inference net. NEXPERT's hybrid method of chaining tends to make it an
extremely efficient processor, as is found in most true expert system
environments. Selection of NEXPERT OBJECT provided the best balance of
cost versus capabilities for this project. It is important to note that
NEXPERT is a complicated application, and as with most other
environments, training is not trivial. (3)
Although the overall design approach to Mac CIC appears to be correct,
considerable work remains regarding the existing module and those still to
be developed. The initial step in the development of these recommenda-
tions was to obtain feedback from actual CICs. A preliminary review of the
existing Mac CIC system was conducted from June 7 to 9. The group
included both experienced and novice CICs, thus providing a broad
perspective. Suggestions were reviewed, and many form the foundation of
the subsequent recommendations in this report.
It is envisioned that aspects of the Mac CIC system could be migrated in
order to support the training of other POCC positions (e.g., Data
Management Coordinator (DMC), Operations Controller (OC), Payload
Activities Planner (PAP)). Analysis of the system indicates that most of the
Teach Me About module is suitable for migration to these POCC positions.
The CIC Overview and the still-to-be-developed CIC Golden Rules, however,
are position-specific, as will be the Skills and Scenario modules. The approach
taken for each of these, however, can be used for migration. This would
essentially provide the framework around which domain-specific
knowledge could be applied. This is particularly true if the recommended
modular approach (separation of domain-dependent rules from
instructionally oriented ones) is used for the construction of the
knowledgebases.
The underlying strategy behind this development plan is to deploy an
initial version of Mac CIC as soon as possible. Subsequent versions, each
with additional functionality, would be phased in. This incremental
approach is strongly recommended in the literature. While permitting the
earliest possible deployment, this approach also allows post-
implementation feedback from the students to be incorporated into later
versions.
Implicit in the plan is the need to focus effort on a prioritized work list,
based on what is directly applicable to the CIC function. Early in Phase 1, a
management decision on these priorities is scheduled. This decision would
be based on a review of the documented SuperCard linkages and the
omissions identified. It is recommended that any further work on the
Documentation and CIC Golden Rules components be deferred. Review of
these indicates that much of this material has already been incorporated
into other module components. The priorities should then list actual
system corrections first, then modifications to existing functionality, again
within the perspective of what is relevant to the CIC. The prioritized list
would then be worked within the 4 1/2 week window allocated to
reprogramming.
New facilities would then be developed for the Teach Me About module.
The query capability would simply be a series of questions that would test
the trainee's understanding of the material. The debriefing facility would
provide both a series of questions, and a free-form display for eliciting the
student's comments regarding the Mac CIC method of instruction. The
preliminary student model would be an individualized file for maintaining
a history of the student's comments and test answers. File update would be
provided by the XCMD function resident in SuperCard. After undergoing
verification and validation, the Teach Me About module would be available
for student use. Post-implementation documentation of any changes to the
SuperCard linkages would then follow.
Knowledge acquisition may begin upon completion of Phase 1. Since
knowledge will be drawn from mission-specific videotapes and documents,
these sources need to be made available by this date. Identification of the
Specific Behavioral Objectives (SBO), i.e. the trainee learning goals, should
occur early in the knowledge acquisition process. A Functional System
Design of the module can then be derived based upon these goals.
Programming, verification, validation and implementation of the module
follow, based upon the agreed design. Teach Me About module test cases
are rerun at this point to ensure that there are no unforeseen implications
of installing the new module. Documentation of the SuperCard linkages is
then updated to reflect integration with the NEXPERT knowledgebase.
Phase 3, development of the Scenario module, follows the same
sequence of activities as Phase 2. The duration of Phase 3 is anticipated to
be significantly less than Phase 2, since it primarily integrates knowledge
previously acquired, and functions previously programmed. The scenarios
developed initially would be "canned", i.e., all trainees would execute
them. As a history of student responses is built up, the student model can
be progressively refined and validated. Future iterations of the Mac CIC
scenarios would be intelligently selected by the system based on the
specific levels of proficiency, and the specific problems indicated in the
enhanced student model.
REFERENCES
1) Dumslaff, U. and Ebert, J., "Structuring the Subject Matter", in
Proceedings of the Fourth International Conference on Computers and
Learning, Wolfville, Nova Scotia, Canada, June 17-20, 1992, pp. 174-186.
2) Farrow, M., "Knowledge Engineering Using HyperCard: A Learning
Strategy for Tertiary Education", Journal of Computer-Based Instruction,
Vol. 20, No. 1, Winter 1993, pp. 9-14.
3) Ignizio, J.P., Introduction to Expert Systems, McGraw-Hill, New York,
1991.
4) Schneider, M.P., "An Intelligent Position-Specific Training System for
Mission Operations", NASA Technical Memorandum 108381, October 1992.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
ERROR CODING SIMULATIONS
Prepared by: Viveca K. Noble
Academic Rank: Instructor
Institution and Department: Tuskegee University
MSFC Colleague(s): Bernd Seiler, Helen L. Thomas
NASA/MSFC:
Office: Astrionics Laboratory
Division: Computers and Communications
Branch: Flight Data Systems
Introduction
There are various elements, such as radio frequency interference (RFI), which may induce
errors in data being transmitted via a satellite communication link. When a transmission is
affected by interference or other error-causing elements, the transmitted data becomes
indecipherable. It becomes necessary to implement techniques to recover from these
disturbances. The objective of this research is to develop software which simulates error
control circuits and evaluate the performance of these modules in various bit error rate
environments. The results of the evaluation provide the engineer with information which helps
determine the optimal error control scheme.
The Consultative Committee for Space Data Systems (CCSDS) recommends the use of
Reed-Solomon (RS) and Convolutional encoders and Viterbi and RS decoders for error
correction (Reference [2]). The use of forward error correction techniques greatly reduces the
received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated
coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding
gain. The 16-bit Cyclic Redundancy Check (CRC) code is recommended by CCSDS for error
detection (Reference [2]).
Evaluation and Implementation
The initial development phase of the simulator required evaluation of custom error
correction software generated for Goddard Space Flight Center (GSFC) to determine what
modules were applicable to Marshall Space Flight Center's (MSFC) planned laboratory
capabilities as illustrated in Figure 1. A block diagram which illustrates the operation of the
GSFC software is shown in Figure 2.
[Figure 1 - Block diagram of MSFC's planned laboratory capability, linking a Random File
Generator, Data Compression, CCSDS Formatting, Reed-Solomon Encoder, Convolutional
Encoder, and CRC Encoder with an Error Generator, CRC Decoder, Viterbi Decoder, RS
Decoder, and Error Statistics module]

Figure 1

[Figure 2 - Block diagram of the GSFC software: a zero input sequence passes through an
Error Generator, Viterbi Decoder, and Reed-Solomon Decoder to an Error Statistics module]

Figure 2
Since the software assumes an all-zero input sequence, there is no need for an encoder
because the encoded sequence will still be all zeros. This makes the task of determining the
error rate a matter of only determining the percentage of non-zero decoder outputs (Reference
[5]). Since MSFC's desired system requires random or user-specific data, the software from
Goddard is unusable in its present form. In order to provide error control capabilities for the
Solar X-ray Imager (SXI), the remaining modules of the CCSDS telemetry system simulator
were developed. These modules include a multiplicative congruential random number
generator (RNG), a random error generator, a CCSDS formatter and a CCSDS-recommended
CRC error detection encoder/decoder. The error statistics generator is currently being
developed.
The RNG uses Equation 1 (Reference [3]):

X_{n+1} = X_n · p (mod 2^k)    [1]

where X_0 = 11, p = 37 and k = 15. These variables may be assigned any value, but X_0 and
p must be odd. The RNG produces 8968 (8920 bits, the maximum transfer frame length, plus
48 bits, the primary header length) decimal values ranging from 0 to 8191 with a period of
2^(k-2). Binary values are generated by dividing the decimal values by 4000 and assigning 1
to resulting values greater than 0.5 and 0 to resulting values less than or equal to 0.5. The
binary values are used as the random input data and the decimal values are used to access
elements in the CRC-encoded message to generate errors in random order.
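A minimal C sketch of this generator and bit mapping follows; p = 37 and k = 15 are the values quoted above, while the odd seed of 11 is an assumed value (any odd seed satisfies the stated requirement).

    #include <stdio.h>

    int main(void)
    {
        const unsigned p = 37;                /* odd multiplier, per Equation 1 */
        const unsigned mask = (1u << 15) - 1; /* reduction mod 2^k with k = 15  */
        unsigned x = 11;                      /* odd seed (assumed value)       */

        for (int n = 0; n < 16; n++) {
            x = (x * p) & mask;               /* X_{n+1} = X_n * p mod 2^15     */
            int bit = ((double)x / 4000.0 > 0.5) ? 1 : 0;  /* report's mapping  */
            printf("%6u -> %d\n", x, bit);
        }
        return 0;
    }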
The CCSDS formatter inserts the sync marker 1ACFFC1D hex (Reference [1]) at the
beginning of the binary data file to conform to the CCSDS transfer frame format shown in
Figure 3 (Reference [2]).
[Figure 3 - CCSDS transfer frame format: a 32-bit attached sync marker followed by the
transfer frame primary header, consisting of the frame identification field (version number,
spacecraft ID, virtual channel ID, operational control field flag), the master channel frame
count, the virtual channel frame count, and the frame data field status (secondary header
flag, sync flag, packet order flag, segment length ID, first header pointer)]

Figure 3
The CRC encoder looks for the 32-bit sync marker, encodes the remaining information bits
after synchronization is established and stores the first forty-eight (48) bits of the remaining
bits in a header array. The error detection encoder module is the software implementation of the
circuit in Figure 4 (Reference [2]).
Figure 4
This procedure generates an (n, n-16) code, where n is the number of bits in the encoded
message and n-16 is the number of bits in the unencoded message. Equation 2 gives the
16-bit Frame Check Sequence (FCS):

FCS = [X^16 · M(X) + X^(n-16) · L(X)] modulo G(X)    [2]
where M(X) is the unencoded message in the form of a polynomial, L(X) is the polynomial
used to set the 16-bit register to the all ones state, given by Equation 3:

L(X) = X^15 + X^14 + ... + X + 1    [3]

and G(X) is the generating polynomial, given by Equation 4:

G(X) = X^16 + X^12 + X^5 + 1    [4]
The generator polynomial has a Hamming distance of 4; it is therefore guaranteed to detect
error sequences composed of one, two or three bit errors (Reference [4]). When this code is
applied to a block of less than 32768 (2^15) bits, it also has the capability to detect all odd
numbers of bit errors, to detect all combinations of at most two bit errors, and to detect all
single burst errors with a length of 16 bits or less (as long as there are no other errors in the
block), and it has an undetected error probability of 2^-15 (or 3 x 10^-5) for a random error
sequence containing an even number of bit errors greater than or equal to 4.
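For illustration, a bit-serial C sketch of this CRC (generator polynomial 0x1021 with the register preset to all ones, per G(X) and L(X) above) is given below; it is a generic rendering of the algorithm, not the FORTRAN 77 module described in this report.

    #include <stddef.h>
    #include <stdint.h>

    /* 16-bit CRC with G(X) = X^16 + X^12 + X^5 + 1 (0x1021) and the
       shift register preset to all ones, as specified by L(X). */
    uint16_t crc16(const uint8_t *msg, size_t len)
    {
        uint16_t crc = 0xFFFF;                      /* L(X): all-ones preset */
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)(msg[i] << 8);         /* feed next 8 message bits */
            for (int b = 0; b < 8; b++)             /* one shift per bit */
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;                                 /* the FCS of Equation 2 */
    }

With this configuration, recomputing the CRC over a received block with its FCS appended yields zero when no error is present, which mirrors the syndrome test of Equation 5 below.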
The error detection decoder module is the software implementation of Figure 5 (Reference
[2]).
Figure 5
Equation 5 gives the error detection syndrome:

S(X) = [X^16 · C*(X) + X^n · L(X)] modulo G(X)    [5]
where C*(X) is the received block in polynomial form and S(X) is the syndrome polynomial.
The 16-bit register will contain all zeros if no error is detected and will contain non-zero values
if an error is detected. The decoder also attempts to establish synchronization, but if a sync
marker error occurs, a message will be generated to indicate this occurrence and zeros will
appear in the syndrome polynomial to reflect this error.
The decoder's performance has been verified for up to 3 random errors. Tests will be
performed to verify the additional performance characteristics. In generating statistics on the
error detection capability, various bit error rate environments will be created and decoded for a
number of successive runs. The error statistics generator will assign a one for each non-zero
syndrome and a zero for each zero syndrome. It will determine the error statistics based on the
percentage of non-zero terms.
Conclusion and Future Tasks
All of the previously discussed software is written in FORTRAN 77. Due to the inflexible
nature of this language (e.g., input data arrays must be given a declared size), it is recommended
that the code be converted to C and that all future code be written in C. Appropriate error
distributions must be determined so that customized error control environments may be
developed. The current error correction portion of the system must be written for use with
random data and user specific data. Convolutional and RS encoders and a more refined and
flexible error generator must be developed. Data compression modules need to be added for the
handling of "housekeeping" data. Testing of the code for various bit error rates must be
continued in order to gather statistical data on the performance of the code. The process
presented above provides a modular, inexpensive error control environment. Its use will allow
an engineer to create an optimal error control environment for a given error distribution prior to
implementing the procedure in hardware.
References
[1] Telemetry Channel Coding, Recommendation CCSDS 101.0-B-3, Issue 3, Blue
Book, Consultative Committee for Space Data Systems, May 1992 or later issue,
p. 5-1.
[2] Telemetry, Recommendation CCSDS 100.0-G-1, Issue 1, Green Book, Consultative
Committee for Space Data Systems, December 1987 or later issue, pp. 3-19 - 3-20
and pp. D-1 - D-4.
[3] Hamming, R. W., Numerical Methods for Scientists and Engineers, Dover
Publications, Inc., New York, 1986.
[4] Jain, Raj, "Error Characteristics of Fiber Distributed Data Interface (FDDI)", IEEE
Transactions on Communications, Vol. 38, No. 8, August 1990, p. 1249.
[5] Odenwalder, Joseph P., Error Control Coding Handbook, Final Report, Contract
F44620-76-C-0056, July 15, 1976, p. 4 and p. 122.
1993
NASA/ ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
SIMULATION OF CRYOGENIC TURBOPUMP ANNULAR SEALS
Prepared By: Alan B. Palazzolo, Ph.D., P.E.
Academic Rank: Associate Professor
Institution and Department: Texas A&M University, Mechanical Engineering Department
MSFC Colleague(s): Dr. Steve Ryan, Donald P. Vallely
NASA/MSFC:
Office: Structures & Dynamics Laboratory
Division: Control Systems Division
Branch: Mechanical Systems Control Branch
In Reference (1), San Andres employs the NBS software package MIPROPS to
account for density's dependence on pressure in the simulation of liquid annular
seals. His example on a LH2 seal showed a significant change in the mass coefficient
compared to a constant density model. San Andres, Yang and Childs (2,3) extended
this analysis by including the pressure and temperature dependence of density,
specific heat, viscosity, volumetric expansion and thermal conductivity in a coupled
solution of the energy, momentum and continuity equations. Their example showed
very significant changes in stiffness and inertia for a high speed (38,000 rpm), large
L/D ratio (0.5) LOX seal, as compared to their constant temperature results.
The current research rederived the San Andres-Yang-Childs (SYC) analysis
and extended it to include not only the Moody friction model of SYC but also the
Hirs friction model. The derivation begins with obtaining the local differential
equations of continuity, momentum and energy conservation in the seal. These
equations are averaged across the film thickness to obtain the resulting "bulk flow"
differential equations. Shear stress and convective heat loss through the stator (seal)
and rotor are related to the Moody and Hirs friction factor models. The Holman
analogy is employed to relate heat conduction in or out of the fluid film's boundary
layer to the friction induced shear stress.
The steady state problem (d/dt = 0) was solved using a shooting algorithm for
the two-point boundary value problem. This requires a simultaneous integration of
the two momentum equations and the continuity and energy equations. The results
for the temperature increase through the seal show excellent agreement with the SYC
model results, as shown in Figure 1. The SYC papers also describe an approximate
solution algorithm which assumes constant properties and friction factors along the
length of a concentric, straight seal. This model was deciphered and programmed
and shows excellent agreement with the published SYC approximate solution
results, a comparison of which is shown in Figure 1.
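The shooting approach itself can be shown with a generic sketch: guess the unknown inlet value, integrate to the seal exit with a fourth-order Runge-Kutta scheme, and correct the guess by secant iteration until the exit boundary condition is met. The single placeholder ODE below stands in for the coupled bulk-flow system, and all numerical values are hypothetical.

    #include <math.h>
    #include <stdio.h>

    /* Placeholder right-hand side dy/dz = f(z, y); in the seal problem this
       would be the coupled momentum, continuity and energy system. */
    static double f(double z, double y) { return -y + z; }

    /* Integrate from z = 0 to z = 1 with RK4; return the exit value y(1). */
    static double shoot(double y0)
    {
        double y = y0, z = 0.0, h = 0.01;
        for (int i = 0; i < 100; i++) {
            double k1 = f(z, y);
            double k2 = f(z + h / 2, y + h * k1 / 2);
            double k3 = f(z + h / 2, y + h * k2 / 2);
            double k4 = f(z + h, y + h * k3);
            y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0;
            z += h;
        }
        return y;
    }

    int main(void)
    {
        double target = 0.5;              /* required exit condition */
        double a = 0.0, b = 2.0;          /* two starting guesses */
        double fa = shoot(a) - target;
        for (int it = 0; it < 20; it++) { /* secant correction of the guess */
            double fb = shoot(b) - target;
            if (fabs(fb) < 1e-10) break;
            double c = b - fb * (b - a) / (fb - fa);
            a = b; fa = fb; b = c;
        }
        printf("converged inlet value: %.6f\n", b);
        return 0;
    }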
The linearization coefficient expressions were derived to solve the first order
(perturbation) problem for the dynamic coefficients. This linearization procedure
was performed for both the Hirs and Moody models and revealed two errors in the
SYC linearization coefficients for the viscosity and density in the circumferential
momentum equations, and a missing convective heat flux term in the energy
equation. The results showed that the Hirs model linearization coefficients were
quite different from their Moody counterparts, while maintaining a similar form as
regards programming.
The non-dimensional equations employed in the preceding analysis were
used to derive similarity conditions and expressions to infer LOX seal characteristics
from those of a similar water seal. The branch is currently developing such a water
seal tester, and this analysis supplies the required sizing information along with
equations which relate the characteristics of the two seals. The similarity analysis
was confirmed by running the TAMUSEAL code for a LOX seal and for its "similar"
water seal. The results of these two runs showed
nearly perfect agreement with those predicted by the similarity equations. This
numerical check was performed for both a Hirs and a Moody model type seal. The
same study identified non-dimensional dynamic coefficients which remain invariant
for seals that are mutually similar, i.e., obey the same conditions of similarity.
The detailed analysis and results of this work may be found in the 430 page
report, "Thermal and Similarity Studies for Cryogenic Liquid Annular Seals," issued
by the Summer Faculty Fellow to the Mechanical Systems Control Branch. Future
work includes programming the first order solution to the thermohydrodynamic
problem to obtain the resulting dynamic coefficients, including seal housing
flexibility, and extending the bulk flow model to include impeller forces.
The Fellow also planned an installation of an impact damper on the TTB-
ATD-HPOTP. The proposed location of the impact damper is shown in Figure 2.
This device will consist of 12-20 specially designed, cylindrical impactors contained
in a ring type fixture. This type of damper has been successfully employed in LN2
at Texas A&M. Testing of the impact damper may begin as early as Summer '94 if
approved by the TTB Review Panel.
REFERENCES
1. San Andres, L.A., "Analysis of Variable Fluid Properties, Turbulent Annular
Seals," ASME Journal of Tribology, Vol. 113, October 1991, pp. 694-702.
2. San Andres, L.A., "Thermal Effects in Cryogenic Liquid Annular Seals - Part II:
Numerical Solution and Results," ASME/STLE Joint Tribology Conference, Paper
No. 92-Trib-5, pp. 1-8.
3. Yang, Z., San Andres, L., and Childs, D., "Thermal Effects in Cryogenic Liquid
Annular Seals - Part I: Theory and Approximate Solution," ASME/STLE Joint
Tribology Conference, Paper No. 92-Trib-4, pp. 1-10.
4. Yang, Z., San Andres, L.A., and Childs, D., "Importance of Heat Transfer from
Fluid Film to Stator in Turbulent Flow Annular Seals," WEAR, Vol. 160, 1993, pp.
269-277.
[Plot of temperature rise through the seal versus rotor speed (5000-40000 rpm), comparing
the current analysis with the published SYC results]
Figure 1 - Comparison Between the Exact and Approximate
Temperature Rises
SSME-ATD HPOTP
Figure 2 - Proposed Location of the SSME-ATD-HPOTP
Impact Damper
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
CONTROLLER MODELING AND EVALUATION FOR PCV ELECTRO-MECHANICAL
ACTUATOR
Prepared By: Joey K. Parker
Academic Rank: Associate Professor
Institution and Department: The University of Alabama, Department of Mechanical Engineering
MSFC Colleagues: Martha Cash, Charles Cornelius
NASA/MSFC:
Laboratory: Propulsion
Division: Component Development
Branch: Control Mechanisms
Background
Hydraulic actuators are currently used to operate the propellant control valves (PCV)
for the Space Shuttle Main Engine (SSME) and other rocket engines. These actuators are
characterized by large power-to-weight ratios, large force capabilities, and rapid accelerations,
which favor their use in control valve applications. However, hydraulic systems are also
characterized by susceptibility to contamination, which leads to frequent maintenance
requirements. The Control Mechanisms Branch (EP34) of the Component Development
Division of the Propulsion Laboratory at the Marshall Space Flight Center (MSFC) has been
investigating the application of electro-mechanical actuators as replacements for the hydraulic
units in PCVs over the last few years. This report deals with some testing and analysis of a
PCV electro-mechanical actuator (EMA) designed and fabricated by HR Textron, Inc. This
prototype actuator has undergone extensive testing by EP34 personnel since early 1993. At
this time, the performance of the HR Textron PCV EMA does not meet requirements for
position tracking.
Hardware
Dual 14 hp brushless DC motors are mounted to a common valve shaft. Two motors
are used to provide redundancy, but only one motor operates at any given time. A single
rotary variable differential transformer (RVDT) is used for shaft position sensing, while dual
resolvers are used for motor position sensing. A triple pass gear arrangement with an overall
ratio of 85:1 couples the motor shaft to the valve. A pneumatic cylinder backup system is also
provided to close the valve completely in case of control system failure.
A combined analog/digital electronic controller board is used to operate the brushless
DC motors. The HR Textron EMA controller sequences the current flow to the coils through
three insulated gate bipolar transistors (IGBT's). A resolver-to-digital interface chip uses the
resolver position feedback to determine which IGBT and coil to energize next. The resolver-
to-digital chip also provides an analog voltage proportional to the motor velocity, which is
used as an additional feedback signal in the controller circuitry. The output signal from the
RVDT is used to provide a conventional position control loop as well. The controller board is
designed to be a "drop-in" replacement for the current hydraulic PCV actuator controllers.
The interface is designed to be transparent to the Honeywell SSME engine controller, i.e., the
engine controller is unchanged and operates as if a hydraulic actuator were in place.
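As a rough illustration of the commutation scheme just described, the short sketch below maps a resolver angle to the next coil (and hence IGBT) to energize. The three-sector table, angles, and function names are illustrative assumptions for this report, not the HR Textron implementation.

# Minimal sketch of resolver-driven commutation, assuming a simple
# three-coil, 120-degree sector table (illustrative, not the actual design).
NUM_COILS = 3  # one IGBT per coil

def select_coil(resolver_angle_deg: float) -> int:
    """Map an electrical rotor angle (degrees) to the coil/IGBT to energize."""
    return int(resolver_angle_deg % 360.0 // (360.0 / NUM_COILS))

# Example: 200 degrees falls in the second 120-degree sector,
# so coil/IGBT 1 would be energized next.
print(select_coil(200.0))  # -> 1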
Objectives
In its current state, the PCV EMA actuator and controller are not able to meet the
desired position tracking performance. To address this problem, the goals and objectives of
this summer's project were:
a) develop an analytical model to predict PCV EMA performance,
b) verify the model with experimental results,
c) modify the modeled controller to reduce tracking errors,
d) incorporate controller changes in prototype hardware, and
e) test the modified controller for acceptable performance.
The remainder of this report will focus primarily on the first two items, with some discussion
of the last three.
xxxvi -1
PCV EMA Controller Model
A simplified model (shown below in Figure 1) was developed for the PCV EMA
which assumed a conventional permanent magnet DC motor and a lumped inertia due to the
motor shaft, gearbox, and valve. This model uses the same controller structure as the
prototype hardware, including the position and velocity feedbacks and both voltage and
current limits. The final version of the model was developed by adjusting parameter values to
fit the experimental results.
Most of the parameter values were developed from a step response of the prototype.
The initial slope of the step response gives the maximum acceleration capability of the
system, which is determined by ω̇_max = K_t i_a,max / J = 2600 rad/sec². Since i_a,max is
assumed to be known, the values for K_t and J were adjusted to give the appropriate values.
With an ideal DC motor, the torque constant is related to the back EMF constant, so these
two values were adjusted together to give the maximum velocity shown in the step response.
The motor resistance and inductance were adjusted to give approximately the same "curved"
response near the maximum velocity.
[Block diagram: position gain K_p; internal gains of 66.7; voltage threshold ±270 V; motor resistance and inductance; torque constant K_t and total inertia J (K_t/Js); current limit ±30 A; back EMF constant; velocity feedback gain K_vfb; RVDT gain of 10 V per π/2 rad; gear ratio 1/85; motor position θ_m and actuator position output]
Figure 1 - Simplified Model for PCV EMA
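A minimal numerical sketch of this simplified model is given below: a permanent magnet DC motor in a position loop with velocity feedback, a ±270 V voltage threshold, and a ±30 A current limit, integrated with forward Euler. Only the nominal position gain of 5.8, the 85:1 gear ratio, the RVDT gain, and the limits come from this report; the motor constants are illustrative placeholders, not the identified prototype values.

import math

Kp = 5.8                        # position loop gain (nominal value from the report)
Kvfb = 0.01                     # velocity feedback gain (assumed)
Kt = 0.5                        # torque constant, N*m/A (assumed)
Kb = 0.5                        # back EMF constant, V/(rad/s) (ideal motor: Kb ~ Kt)
R, L = 1.0, 0.005               # motor resistance (ohm) and inductance (H) (assumed)
J = 0.006                       # lumped inertia of motor, gearbox, and valve (assumed)
N = 85.0                        # gear ratio (from the report)
K_RVDT = 10.0 / (math.pi / 2)   # RVDT gain: 10 V per pi/2 rad of valve travel
V_LIM, I_LIM = 270.0, 30.0      # voltage threshold and current limit

def step_response(theta_cmd, t_end=0.5, dt=1e-5):
    """Integrate the loop with forward Euler; return the valve angle history."""
    i = w = th_m = 0.0                            # motor current, speed, position
    hist = []
    for _ in range(int(t_end / dt)):
        th_valve = th_m / N
        v = Kp * K_RVDT * (theta_cmd - th_valve) - Kvfb * w
        v = max(-V_LIM, min(V_LIM, v))            # voltage threshold
        di = (v - R * i - Kb * w) / L
        i = max(-I_LIM, min(I_LIM, i + di * dt))  # current limit
        w += (Kt * i / J) * dt                    # torque on lumped inertia
        th_m += w * dt
        hist.append(th_valve)
    return hist

resp = step_response(math.radians(30))
print(f"final valve angle: {math.degrees(resp[-1]):.1f} deg")

A sketch of this kind can be used to repeat the gain study reported below by varying Kp and the step size.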
Model Performance and Results
Experimental and simulation results are available for the nominal position gain of 5.8
as well as gains of 4.8 and 6.8. Space limitations prevent their display in this report. Note
that all testing and simulation of the PCV EMA system was done in the unloaded state. The
simulation results closely match the experimental output, particularly while the valve is
opening (position increasing). Frequency response tests for both the simulation and
experimental hardware were also conducted. The analytical or simulation results were
XXXVI -2
obtained by applying discrete sine wave inputs to the model and continuing until steady-state
was reached. The experimental results were obtained from a sine sweep (from a function
generator) applied to the hardware. Although the data for the two curves (simulation and
experimental) were obtained differently, the general trends appear to match. The close match
between the simulation and experimental results indicates that the model is a reasonable
representation of the experimental system. The modeled controller can be easily modified for
improvements in tracking error which could be tested later on the prototype hardware.
Controller Improvements
From the simplified controller model, the steady-state error for a ramp input is given
by the following equation:

Tracking Error = [2 N K_vfb / (K_p K_RVDT)] × (Ramp Magnitude)

where K_RVDT is the fixed gain of the RVDT position transducer, and the other terms are
defined below. Since the gear ratio N is also fixed, the tracking error for a constant ramp
magnitude can be reduced in one of three ways: increasing the position gain, K_p; decreasing
the velocity gain, K_vfb; or adding a compensator (integrator, phase lag/lead, etc.).
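The sketch below evaluates this relation for several position gains; the velocity feedback gain and ramp magnitude are assumed values chosen for illustration, while the 85:1 gear ratio and the gains of 4.8-20 come from this report.

import math

N = 85.0                        # gear ratio (fixed)
K_RVDT = 10.0 / (math.pi / 2)   # RVDT gain, V/rad (fixed)
Kvfb = 0.01                     # velocity feedback gain (assumed)

def tracking_error(ramp_magnitude, Kp):
    """Steady-state ramp tracking error from the relation above."""
    return 2.0 * N * Kvfb / (Kp * K_RVDT) * ramp_magnitude

ramp = math.radians(10)         # 10 deg/sec commanded ramp (assumed)
for Kp in (4.8, 5.8, 6.8, 15.0, 20.0):
    err = math.degrees(tracking_error(ramp, Kp))
    print(f"Kp = {Kp:5.1f}: tracking error = {err:.3f} deg")

As expected, the steady-state error falls in inverse proportion to the position gain.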
Step responses of the model with different position loop gains K_p were determined.
With a small step of ± 1 degree applied, no overshoot was apparent, even at the large gains of
15 and 20. With a larger step of ± 5 degrees applied, the response associated with a gain of
20 showed a pronounced overshoot, while the remaining gains did not. Finally, the responses
to a ± 30 degree step were found. Essentially all of the gains (except the nominal value of
5.8) cause some overshoot. The overshoot responses would be a problem if the PCV were
operated near one of the position limits (approximately 0 and 85 degrees). However, the
Honeywell SSME engine controller reportedly limits its outputs to 3% of full stroke per 20
millisecond sampling period. This would prevent the system from requesting large step
changes in the PCV.
The analytical model indicates that increasing the position control gain to 15-20 is a
simple means of improving the PCV EMA controller performance. However, excessive
overshoot occurs for large step inputs (which do not occur with the Honeywell engine
controller). Unfortunately, attempts to verify the analytical results led to an electrical failure
in the prototype controller. Two of the three IGBT power transistors were "blown" during a
test with large (+/- 30 degree) step inputs. Several other circuit components associated with
the IGBT drivers were also destroyed during the mishap. Since only a single PCV EMA
controller circuit board exists, a repair effort was begun.
Controller Debugging
The prototype EMA PCV controller board was difficult to repair for a variety of
reasons, including inconsistent documentation, inaccurate circuit diagrams, and uncommon (or
not readily available) circuit components. For example, the written documentation which
accompanied the PCV EMA hardware was evidently for an earlier version of the controller
which had since been changed. The latest set of circuit schematics was in general agreement
with the actual hardware, but many significant differences existed. Finally, many of the
electronic components on the controller board were not readily available from NASA sources.
Some damaged components were replaced with the nearest equivalent part which was
available. For example, the original Toshiba #MG100J2YS9 IGBT's were replaced with
Powerex #CM100DY-12E models which were of similar, but not identical rating.
Instrumentation and technical assistance from EB24 personnel (particularly Justino
Montenegro) was invaluable in repairing the damaged controller board.
The efforts to "debug" the PCV EMA controller board were undertaken for two
reasons: to repair the system so testing could continue, and to determine the cause of failure.
Since the original failure occurred during large (± 30 degree) step inputs, early speculation
was that voltage spikes on the power lines caused the IGBT's to fail. However, testing during
the first week of August indicated that the existing system maintains voltages of less than 300
volts (with a nominal voltage of 270 volts). Since the IGBT's are rated at 600 volts and the
system does not suffer from voltage spikes, it is unlikely that this is the source of the system
failure, or that additional "snubber" networks would prevent future failures.
The most likely cause of the system failure was the electrical design and/or the power
dissipation capability of the IGBT's themselves. The safe operating area for the Toshiba
#MG100J2YS9 IGBT's depends on both collector current (which goes to the motor coils) and
the collector-emitter voltage. Although these IGBT's are "rated" at 600 volts and 100 amps,
clearly these two values do not apply simultaneously. The operating level for the current PCV
EMA controller appears to be marginal for continuous operation over a 0.25 second period.
If the power dissipation capabilities of the IGBT did not cause the system failure, then the
most likely cause is the physical construction of the prototype circuit board. The overall
appearance of the controller gives it an experimental "look" which does not inspire confidence
in its performance or longevity.
Conclusions
1) A simple analytical model which treats the brushless DC motor as a conventional
permanent magnet DC motor has been developed which matches the prototype PCV EMA
performance. A computer program is available for simulating this model's performance
with a variety of commanded inputs.
2) The simulations and initial testing results indicate that increasing the position gain to the
level of 15-20 should provide acceptable performance for typical ramp type inputs.
Excessive overshoot will be a problem at these gain levels if large step inputs (of ± 5
degrees or more) are applied.
3) It is unlikely that additional "snubber" networks placed on the IGBT's of the prototype
controller board would prevent system failure if large step inputs were applied.
4) The power dissipation capability of the IGBT is the most likely cause of the system failure.
Large step inputs cause an excessively long series of relatively long duration (100-200
μsec) pulses to be applied to the IGBT's. Manufacturer's data indicates that these pulses
may cause the IGBT's to operate outside their safety margin.
Acknowledgements
The author would like to thank Martha Cash, Brad Messer, Rae Ann Weir, and
Charles Cornelius of the Component Development Division of the Propulsion Laboratory
for their time and efforts, as well as for the opportunity to participate in this program this summer.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
THE MEASUREMENT AND ANALYSIS OF LEAF SPECTRAL REFLECTANCE
OF TWO STANDS OF LOBLOLLY PINE POPULATIONS
Prepared By: Anthony D. Paul
Academic Rank: Assistant Professor
Institution and Department: Oakwood College, Biology Department
MSFC Colleague: Jeff Luvall
NASA/MSFC:
Office: Space Science Laboratory
Division: Earth Science & Applications
Branch: Earth System Processes & Modeling
XXXVII
My research was under the mentorship of Dr. Jeff Luvall. I worked at
Marshall from June 1 through August 6, 1993. My proposal was titled "The
Measurement and Analysis of Leaf Spectral Reflectance of Two Stands of
Loblolly Pine Populations." The populations for this study were chosen from a
larger population of 31 families managed by the International Forest Seed
Company, Odenville, Alabama. The technology for mobile ground-based
spectral detection is new, and therefore the majority of the time this summer,
June 2 through July 9, was spent on learning the techniques of the Spectrometer
II spectroradiometer used in the gathering of spectral information. The
activities included in the learning process were as follows:
• calibration of the equipment
• programming the associated computer for data management
• operation of the spectral devices
• input and output of data
From July 12 through August 3 the time was spent on learning the
'STATGRAP' computer software. This software will be used in the analysis of
the data retrieved by the Spectrometer II spectroradiometer.
Dr. Greg Carter, at Stennis, a colleague of Dr. Luvall, has been conducting
similar work with different instruments and procedures and has agreed to host
us for a training session on data gathering and analysis. This visit, which was
previously planned for July 9, 1993, but had to be postponed because of
schedule conflicts, is now confirmed for August 18-22, 1993. This trip to
Stennis will provide the knowledge for conducting the field operations in my
study, i.e., gathering of data and file conversions.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
LRAT: LIGHTNING RADIATIVE TRANSFER
Prepared by: Dieudonne D. Phanord
Academic Rank: Assistant Professor
Institution and Department: University of Alabama in Huntsville, Department of Mathematical Sciences
MSFC Colleagues: William Koshak, Ph.D., Richard Blakeslee, Ph.D., Hugh Christian, Ph.D.
NASA/MSFC:
Laboratory: Space Science
Division: Earth Sciences/Applications
Branch: Remote Sensing
XXXVIII
I. INTRODUCTION
In this report, we extend to cloud physics the work done in (5-9) for single and
multiple scattering of electromagnetic waves. We consider the scattering of light (visible
or infrared) by a spherical cloud represented by a statistically homogeneous ensemble
of configurations of N identical spherical water droplets whose centers are uniformly
distributed in its volume V. The ensemble is specified as in (8) by the average number
ρ of scatterers per unit volume, and by ρf(R), with f(R) as the distribution function
for separation R of pairs. The incident light, φ = â₀e^{ik₀·r}, a plane electromagnetic
wave with harmonic time dependence, is from outside the cloud. The propagation
parameter k₀ and the index of refraction η₀ determine physically the medium outside
the distribution of scatterers.
We solve the interior problem separately to obtain the bulk parameters for the
scatterer equivalent to the ensemble of spherical droplets (2-5). With the interior
solution or the equivalent medium approach, the multiple scattering problem is reduced
to that of an equivalent single scatterer excited from outside illumination. A dispersion
relation which determines the bulk propagation parameter K and the bulk index of
refraction η of the cloud is given in terms of the vector equivalent scattering amplitude
𝒢 and the dyadic scattering amplitude g̃ of the single object in isolation.
Based on this transfer model we will have the ability to consider clouds composed
of inhomogeneous distributions of water and/or ice particles, and we will be able to take
into account particle size distributions within the cloud. We will also be able to study
the effects of cloud composition (i.e., particle shape, size, composition, orientation,
location) on the polarization of the single or the multiple scattered waves. Finally, this
study will provide a new starting point for studying the problem of lightning radiative
transfer (3-4).
In general, we work in spherical coordinates. We use bold face or an arrow to denote
a vector or a vector operator. A circumflex indicates a vector of unit magnitude. A
tilde on the top of a letter denotes a dyadic (second rank tensor). For brevity, we use
[5:4] for equation 4 of Ref. (5), etc.
II. MATHEMATICAL MODELING/SOLUTION INSIDE THE CLOUD
FOR OUTSIDE INCIDENCE
The solution inside the cloud for outside illumination corresponds to the multiple
scattering of a plane electromagnetic wave by an ensemble of configurations of N iden-
tical spherical water droplets. To obtain the solution inside the cloud, we consider first
the single scatterer in isolation, second a fixed configuration of N identical scatterers,
and third an ensemble of the above-mentioned configurations.
For an incident plane electromagnetic wave φ = â₁e^{ik₁·r}, with k₁ = κ₁η₁ and η₁ being the
complex relative index of refraction for the host medium inside the cloud but outside
each droplet, the total outside solution for the single scatterer in isolation (outside
the single water droplet but still inside the host medium) Ψ = φ + u satisfies the
following differential equation obtained from Maxwell's equations after suppressing the
harmonic time dependence factor e^{−iωt}:

[∇×∇× − k₁²]Ψ = 0,  ∇·Ψ = 0.  [1]
The solution inside the single spherical water droplet in isolation Ψ₂ satisfies [1]
with k₁ replaced by k₂. Here, k₂ = κ₁η₂, with η₂ being the complex relative
index of refraction for the medium inside the spherical water droplet. The propagation
parameters k₁ and k₂ correspond (within the distribution of identical spheres) to the
media outside and inside the water droplet respectively.
Similar to Twersky (7), we have

Ψ = â₁e^{ik₁·r} + {h̃(κ₁|r−r′|), u(r′)},  u(r) = {Ψ, u} =
−(1/4π) ∫_S [(n̂ × h̃)·(∇′ × u) − (∇′ × h̃)·(n̂ × u)] dS(r′).  [2]

Here, h̃ = [Ĩ + ∇∇/κ₁²] h(κ₁|r − r′|), h(x) = e^{ix}/ix, and Ĩ is the identity dyadic. It
is important to note that r and r′ denote the observation point and a point on the
surface S or in the volume v of the water droplet respectively.
Asymptotically (κ₁r ≫ 1) we can write

u(r) = h(κ₁r) g(r̂, κ̂₁ : â₁),  g(r̂) = Ĩ_t · g(r̂).  [3]

Here, Ĩ_t = [Ĩ − r̂r̂] is the transverse identity dyadic and â₁·k̂₁ = 0. The spectral
representation of u is

u(r) = (1/4π) ∫_c e^{iκ₁c·r_>} g(r̂_c) dΩ(θ_c, φ_c),  r_> = (r − r′),  κ₁c = κ₁r̂_c(θ_c, φ_c),  [4]

and the single scattering amplitude g(r̂, κ̂₁ : â₁) = {Ĩ_t e^{−iκ₁r̂·r′}, u(r′)} can also be
evaluated from Mie scattering theory.
Now, we consider a fixed configuration of N identical scatterers with centers located
by r_m (m = 1, 2, 3, …, N). The total outside field is

Ψ(r) = φ(r) + Σ_{m=1}^{N} U_m(r − r_m),  U_m(r − r_m) ~ h(κ₁|r − r_m|) G_m,  |r − r_m| → ∞.  [5]

Equivalently, for the scatterer located at r_t, we use the self-consistent approach of
(6-7) to obtain the total outside configurational field

Ψ_t(r) = φ(r) + Σ′ U_m(r − r_m) + U_t(r − r_t),  Σ′ = Σ_{m≠t}.  [6]
Using [6] and the general reciprocity relation for any arbitrary direction of incidence
and polarization, we derive as in (2) the self-consistent integral equation
for the multiple configurational scattering amplitude

G_t(r̂) = g_t(r̂, κ̂₁)·â₁ e^{iκ₁·r_t} + Σ′ ∫_c g(r̂, k̂_c)·G_m(k̂_c) e^{iκ₁c·R_tm},  [7]

with R_tm = r_t − r_m, ∫_c = (κ₁/4πi) ∫ dΩ_c, g(r̂, κ̂₁)·â₁ = g(r̂, κ̂₁ : â₁), and the magnitude of
the separation distance |R_tm| bounded above by the diameter D of the cloud.
We take the ensemble average of [7], use the quasi-crystalline approximation of Lax
(2), the equivalent medium approach, and Green's theorems to obtain (7) the dispersion
relation determining the bulk parameters

[(K² − κ₁²)Ĩ + (ρ/C) g(r̂, K̂)]·𝒢(K̂ | κ̂₁) = ρ ∫_{V_b−v} [f(R) − 1] U dR,  [8]

where dR denotes volume integration over (V_b − v). Here, 𝒢 is the equivalent scattering
amplitude and U is a radiative function defined by U = ∫_c g(r̂, r̂_c)·𝒢(κ̂₁c | K̂) e^{iκ₁c·R}, and
C = κ₁/4πi. The bulk propagation parameter K = κ₁η with η being the bulk index of
refraction, and {[f, g]} = ∫_S [f ∂_n g − g ∂_n f] dS is the Green surface operator with outward
unit normal from v. Equation [8] solves formally the interior problem for the cloud
with outside illumination.
III. BULK PARAMETERS AND LEADING TERM APPROXIMATIONS
To simplify [8], we force the model to neglect all phase transition effects (1) and
to take into account only pair interactions due to central forces. If the inter-droplet
potential is negligible, we can choose f(R) to be always equal to unity. Hence, [8]
is reduced to

[(K² − κ₁²)Ĩ + (ρ/C) g(r̂, K̂)]·𝒢(K̂ | κ̂₁) = 0.  [9]

In [9], let r̂ = K̂. In addition, because optical scattering from a cloud is highly
forward peaked (7), we neglect back scattering and reduce [9] to

[(K² − κ₁²)Ĩ_t(K̂) + (ρ/C) g(K̂, K̂)]·𝒢(K̂ | κ̂₁) = 0.  [10]

If the scatterers preserve the incident polarization (7:68), we have from [10]

(K² − κ₁²) = −(ρ/C) g(K̂, K̂),  η² − 1 = −(ρ/(Cκ₁²)) g(K̂, K̂).  [11]
Equation [11] determines the bulk propagation parameter K and the bulk index of
refraction η of the equivalent medium for the bounded distribution of spherical
water droplets.
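As a numerical illustration, the sketch below evaluates [11] for an assumed droplet population. The Rayleigh (small-sphere) forward amplitude is used here as a stand-in for the full Mie amplitude, and every input value is an assumption chosen only to exercise the formula.

import cmath
from math import pi

kappa1 = 2 * pi / 0.55e-6   # host propagation parameter, 0.55 um wavelength (assumed)
a = 0.01e-6                 # droplet radius, m (assumed; Rayleigh form needs kappa1*a << 1)
m = 1.33 + 0j               # relative index of refraction of a water droplet (assumed)
rho = 1e18                  # droplets per unit volume, 1/m^3 (assumed)

C = kappa1 / (4j * pi)      # C = kappa1 / (4*pi*i), as defined above

# Rayleigh forward amplitude in the usual e^{ikr}/r convention, converted to
# the h(x) = e^{ix}/(ix) convention used here via g = i * kappa1 * f.
f_fwd = kappa1**2 * a**3 * (m**2 - 1) / (m**2 + 2)
g_fwd = 1j * kappa1 * f_fwd

K = cmath.sqrt(kappa1**2 - (rho / C) * g_fwd)   # equation [11]
eta = K / kappa1                                # bulk index of refraction
print(f"bulk K = {K:.6e} 1/m, bulk index eta = {eta:.8f}")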
IV. CONCLUSION
The multiple scattering problem has been reduced to that of a single equivalent
scatterer in isolation. Formulae are given for the bulk propagation parameter K and
the bulk index of refraction η of the equivalent medium. The results are quite general
in nature and can be extended to non-spherical geometries. Also, they can be applied
immediately to the problem of pulsating optical point sources arbitrarily distributed
throughout a scattering medium. When f(R) − 1 ≠ 0, [8] can be approximated or
solved numerically.
ACKNOWLEDGMENT
The author expresses his appreciation to William Koshak, Richard Blakeslee,
Hugh Christian, and Richard Solakiewicz for their time, help, and ideas during his
appointment as a NASA/ASEE Summer Faculty Fellow. The financial support of
the NASA/ASEE Summer Faculty Fellowship Program and the assistance of Gerald
R. Karr and Michael Freeman, Directors, and Frank Six, Administrator, are gratefully
acknowledged.
References
1. Eyring, H., Henderson, D., and Jost, W., Physical Chemistry: An Advanced Treatise,
Volume VIIIA (Academic Press, New York, 1971).
2. Lax, M., "Multiple Scattering of Waves," Rev. Modern Phys., 23, (1951), 287-310.
3. Solakiewicz, R., "Electromagnetic Scattering in Clouds," NASA-MSFC, Summer
(1992), XLVIII.
4. Thomason, L. W., and Krider, E. P., "The effects of clouds on the light produced by
lightning," J. Atmos. Sci., 39, (1982), 2051-2065.
5. Twersky, V., "On a General Class of Scattering Problems," J. Math. Phys., 3, 4,
(1962), 716-723.
6. Twersky, V., "Coherent scalar field in pair-correlated random distributions of aligned
scatterers," J. Math. Phys., 18, 12, (1977), 2468-2486.
7. Twersky, V., "Coherent electromagnetic waves in pair-correlated random distributions
of aligned scatterers," J. Math. Phys., 19, 1, (1978), 215-230.
8. Twersky, V., "Multiple Scattering of Waves by Correlated Distributions," in
Mathematical Methods and Applications of Scattering Theory (Springer-Verlag, New York,
1980).
9. Twersky, V., "Propagation in correlated distributions of large-spaced scatterers," J.
Opt. Soc. Am., 73, (1983), 313-320.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
SPACE SHUTTLE MAIN ENGINE PERFORMANCE ANALYSIS
Prepared By: L. Michael Santi, Ph.D.
Academic Rank: Associate Professor
Institution and Department: Christian Brothers University, Mechanical Engineering Department
MSFC Colleague: John P. Butas
NASA/MSFC:
Laboratory: Propulsion
Division: Motor Systems
Branch: Performance Analysis
XXXIX
I. BACKGROUND
For a number of years, NASA has relied primarily upon periodically updated versions
of Rocketdyne's Power Balance Model (PBM) to provide Space Shuttle Main Engine (SSME)
steady-state performance prediction. A recent computational study (1) indicated that PBM
predictions do not satisfy fundamental energy conservation principles. More recently, SSME
test results provided by the Technology Test Bed (TTB) program have indicated significant
discrepancies between PBM flow and temperature predictions and TTB observations (2).
Results of these investigations have diminished confidence in the predictions provided by
PBM, and motivated the development of new computational tools for supporting SSME
performance analysis.
A multivariate least squares regression algorithm was developed and implemented
during this effort in order to efficiently characterize TTB data. This procedure, called the
"gains model" , was used to approximate the variation of SSME performance parameters such
as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms
of six assumed independent influences. These six influences were engine power level,
mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and
temperature. A BFGS optimization algorithm (3) provided the base procedure for
determining regression coefficients for both linear and full quadratic approximations of
parameter variation. Statistical information relative to data deviation from regression derived
relations was also computed.
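The short sketch below illustrates the gains-model idea on synthetic data: a performance parameter is regressed on six influences using a constant-plus-linear design and a full quadratic design. Ordinary least squares (numpy) stands in here for the BFGS-based procedure; the data and coefficients are fabricated purely for illustration.

import numpy as np

# rows: 59 test-data time slices; columns: power level, mixture ratio,
# fuel inlet P and T, oxidizer inlet P and T (synthetic stand-in data)
rng = np.random.default_rng(0)
X = rng.normal(size=(59, 6))
y = 150.0 + X @ np.array([2.0, 1.5, 0.3, 0.2, 0.4, 0.1]) + 0.2 * rng.normal(size=59)

def design_matrix(X, quadratic=False):
    """Constant + linear columns; optionally all squares and cross products."""
    cols = [np.ones(len(X))] + [X[:, j] for j in range(X.shape[1])]
    if quadratic:
        n = X.shape[1]
        cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack(cols)

for quad in (False, True):
    A = design_matrix(X, quadratic=quad)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sigma = (y - A @ coef).std()
    print("quadratic" if quad else "linear", "fit: sigma =", round(sigma, 4))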
A new strategy for integrating test data with theoretical performance prediction was
also investigated. The current integration procedure employed by PBM treats test data as
pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance.
Within PBM, this integration procedure is called "data reduction". By contrast, the new data
integration procedure, termed "reconciliation", uses mathematical optimization techniques,
and requires both measurement and balance uncertainty estimates. The reconciler attempts
to select operational parameters that minimize the difference between theoretical prediction
and observation. Selected values are further constrained to fall within measurement
uncertainty limits and to satisfy fundamental physical relations (mass conservation, energy
conservation, pressure drop relations, etc.) within uncertainty estimates for all SSME
subsystems. The parameter selection problem described above is a traditional nonlinear
programming problem. The reconciler employs a mixed penalty method to determine
optimum values of SSME operating parameters associated with this problem formulation.
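The toy sketch below illustrates the reconciliation formulation: reconciled values are pulled toward the measurements, weighted by their uncertainties, while a penalty term of increasing weight enforces a balance relation, in the spirit of a mixed penalty method. The balance function, weights, and numbers are illustrative, not the actual SSME subsystem relations.

import numpy as np
from scipy.optimize import minimize

meas = np.array([100.0, 52.0])      # observed flows (illustrative units)
sigma = np.array([2.0, 1.0])        # measurement uncertainty estimates

def balance(x):
    # toy mass-conservation residual: inflow minus twice the branch flow
    return x[0] - 2.0 * x[1]

def objective(x, mu):
    mismatch = np.sum(((x - meas) / sigma) ** 2)  # stay near the observations
    return mismatch + mu * balance(x) ** 2        # penalize physical imbalance

x = meas.copy()
for mu in (1.0, 10.0, 100.0, 1000.0):             # increasing penalty weight
    x = minimize(objective, x, args=(mu,)).x
print("reconciled parameters:", x, "balance residual:", balance(x))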
The new data reconciliation procedure was used to analyze performance
characteristics of two SSME subsystems, the high pressure fuel turbopump and fuel
preburner subsystem (HPFTP), and the high pressure oxidizer turbopump and oxidizer
preburner subsystem (HPOTP). Reconciliation results for these subsystems were compared
to data from TTB test sequence 25 and to PBM data reduction analysis predictions. Typical
comparison results are presented in the next section of this report.
II. ANALYSIS RESULTS
Gains model regression analyses were performed using HPFTP data from TTB-25,
a 205 second duration SSME firing. Data from 59 time slices were used to obtain both
linear and quadratic fits to operating parameter variation. Results for three such parameters
are plotted relative to data slice start time in Figures 1 through 3. Multivariate linear fits
provided excellent agreement with both high pressure fuel turbine flow and discharge
temperature data as exhibited in Figures 1 and 2. For these parameters, the standard
deviation of data from functional fit was 0.23 lb/sec and 3.81 degrees Rankine respectively.
A multivariate quadratic fit accurately (σ = 0.0018 mru) described fuel preburner O₂/H₂
mixture ratio as shown in Figure 3. The gains model used in this study was uniformly
efficient and reliable in identifying performance influences for all test data examined.
Comparisons of TTB-25 test data, PBM reduction analysis predictions, and
reconciliation analysis results are presented in Figures 4 through 6. Regarding high pressure
oxidizer turbine flow, alarming differences, both in magnitude and trend, exist between PBM
prediction and TTB-25 data as displayed in Figure 4. Reconciliation results for HPOT flow
are seen to agree well with TTB-25 data. Large differences, on the order of 100-160
degrees R, are observed between PBM prediction and TTB-25 data for the oxygen preburner
combustion temperature, as displayed in Figure 5. Reconciliation analysis results are seen
to lie between test data and PBM predictions, approximately 60-100 degrees greater than
PBM predictions. TTB-25 data for high pressure oxidizer turbine temperature drop are
significantly greater than both PBM and reconciliation predictions as displayed in Figure 6.
In general, the reconciliation procedure appears to provide a reasonable integration of flow
thermo-physics and test data. In addition, it provides a logical scheme for indicating test
data integrity.
III. RECOMMENDATIONS
1. Gains model regression fits should be extended to a larger range of engine operating
conditions and/or multiple engine tests to determine range and order limitations.
2. The gains model should be expanded to support decisions regarding the health and
operation of the SSME.
3. Development of the reconciliation strategy should be continued.
4. Assumptions underlying PBM predictions should be evaluated.
IV. REFERENCES
1. Santi, L. M., "Validation of the Space Shuttle Main Engine Steady State Performance
Model," NASA Contractor Report CR-18404-XLI, October 1990.
2. "Technology Test Bed Program - Engine 3001 - with Instrumented Turbopumps -
First Test Series Test Report," NASA Report TTB-DEV-EP93-001, January 15, 1993.
3. Fletcher, R., "A New Approach to Variable Metric Algorithms,"
Comput. J., Vol. 13, 1970, pp. 317-322.
[Plot: HPFT flow (lb/sec), 144-158, versus slice start time (sec), 25-200; test data and 1st order gains fit]
FIGURE 1. HPFT FLOW FROM TTB-25
[Plot: HPFT average discharge temperature (deg R), 1800-1925, versus slice start time (sec); test data and 1st order gains fit]
FIGURE 2. HPFT DISCHARGE TEMPERATURE - AVG FROM TTB-25
[Plot: FPB mixture ratio, 0.97-1.04, versus slice start time (sec); test data and 2nd order gains fit]
FIGURE 3. FPB MIXTURE RATIO FROM TTB-25
[Plot: HPOT flow (lb/sec), 50-110, versus slice start time (sec); test data, PBM reduction, and reconciliation]
FIGURE 4. HPOT FLOW FROM TTB-25
[Plot: OPB combustion temperature (deg R), 1200-1800, versus slice start time (sec); test data, PBM reduction, and reconciliation]
FIGURE 5. OPB COMBUSTION TEMPERATURE FROM TTB-25
[Plot: HPOT temperature drop (deg R), 75-300, versus slice start time (sec); test data, PBM reduction, and reconciliation]
FIGURE 6. HPOT TEMPERATURE DROP FROM TTB-25
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
EVALUATION OF THE EFFICIENCY AND FAULT DENSITY OF SOFTWARE
GENERATED BY CODE GENERATORS
Prepared by: Barbara Schreur, Ph.D.
Academic Rank: Associate Professor
Institution and Department: Texas A&I University, Department of Electrical Engineering and Computer Science
MSFC Colleague: Kenneth S. Williamson
NASA/MSFC:
Office: Astrionics Laboratory
Division: Software Division
Branch: Systems Engineering
XL
Introduction
Flight computers and flight software are used for GN&C (Guidance,
Navigation and Control), Engine Controllers and Avionics during
missions. The software development requires the generation of a
considerable amount of code. The engineers who generate the code make
mistakes and the generation of a large body of code with high
reliability requires considerable time.
Computer-Aided Software Engineering (CASE) Tools are available
which generate code automatically with inputs through graphical
interfaces. These tools are referred to as code generators. In theory,
code generators could write highly reliable code quickly and
inexpensively. The various code generators offer different levels of
reliability checking. Some check only the finished product while some
allow checking of individual modules and combined sets of modules as
well. Considering NASA's requirement for reliability, an in-house
comparison of the reliability of automatically generated code and of
manually generated code is needed.
Furthermore, automatically generated code is reputed to be as
efficient as the best manually generated code when executed (2).
In-house verification is warranted.
Evaluation of CASE Tools
A software project of suitable complexity has yet to be provided
for evaluation. When delivered, in the form of hardware and software
requirements, this project will lead to a segment of software with
1. a length of at least 2000 lines.
2. a minimum of three levels of hierarchy.
3. one level having a minimum of two routines.
4. minimal complexity.
The plan is to develop the software package using two developers,
each using a CASE Tool and standard methods (4). Two candidate CASE
Tools are ASTER and MATRIXx.
CASE Tools are rigid in how they generate programs. They may, for
instance, make extensive use of nested ifs rather than case statements.
In some applications, this rigidity may produce inefficient code
outright or may not mesh well with the characteristics of the compiler
thereby causing inefficient execution. The generated code will be
examined for such characteristics and the effects of any such
characteristics will be investigated.
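As a toy illustration of the kind of structural rigidity meant here, the fragment below contrasts a generated nested-if style with an equivalent table lookup a hand coder might write; both are illustrative, not output from any actual code generator.

# Generated-style nested ifs (the pattern some generators emit):
def mode_gain_nested(mode: str) -> float:
    if mode == "coarse":
        return 1.0
    else:
        if mode == "fine":
            return 0.1
        else:
            return 0.0

# Equivalent table-driven form:
MODE_GAINS = {"coarse": 1.0, "fine": 0.1}

def mode_gain_table(mode: str) -> float:
    return MODE_GAINS.get(mode, 0.0)

assert mode_gain_nested("fine") == mode_gain_table("fine")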
The spiral model of the software process is characteristic of CASE
Tools. They also allow program changes without using patches because
the code is regenerated as an internally consistent whole (1).
Additionally, the blocks of code in the CASE Tool libraries are
reputedly highly reliable. The principal question is whether a
combination of many such blocks retains the high reliability or whether
the way they interact is capable of producing faults (2). The generated
code will be tested for the existence of faults as the modules are
completed, if that is allowed by the CASE Tool. This will be followed
by testing of the completed segment.
The metrics selected are those contained in MM 8075.1A (3), which
may be tailored. A database will be developed to serve as a collector
of the measures. These measures will be provided by metrics generating
tools available in the public domain and by tools to be acquired for
this project. The metrics will include the following:
1. Software size: The number of lines of code that must be
maintained.
2. Software Staffing: The number of software engineers and
immediate supervisor involved in the development.
3. Requirements Stability: The total number of requirements
that must be implemented.
4. Development Progress: The number of successfully completed
modules.
5. Computer Resource Utilization: Percent utilization of CPU,
disk, and I/O channel.
6. Test Case Completion: Percent of successfully completed test
cases.
7. Discrepancy Report Open Duration: The time between the
report of a problem and the resolution of the problem.
8. Fault Density: The number of open Discrepancy Reports and
the total defect density normalized by the software size
over time.
9. Test Focus: Percentage of problem reports resolved through
software solutions.
10. Software Reliability: Probability that the software works
under specified conditions for a specified time.
11. Design complexity: Number of modules that have a complexity
greater than a predetermined number.
12. Ada Instantiations: Size and number of generic subprograms
developed and the number of times they are used. (For C++,
the number of object invocations.)
In addition to the metrics, the effectiveness of the CASE tools
will be evaluated using the following criteria:
1. The languages available for code generation.
2. The ability to test modules as they are developed both
individually and as part of the system.
3. The language the code generator is written in.
4. The libraries, including icons, that are available.
5. The ability to import code from other files and/or projects.
6. The ability to trace variables through the code and
determine the effects they have.
7. The documentation of the software created by the code
generator.
8. The ability of the tool to "reverse engineer" a
section of code for reusability.
A requirements document and test procedures will be developed for
typical flight modules.
The original plan was to begin training on ASTER starting with
week five. ASTER has not yet been delivered. When it became apparent
that ASTER would not be delivered, training was started on MATRIXx.
Training in MATRIXx is progressing and should be completed by week ten.
Draper Labs will conduct a two-week training session on ASTER in
October 1993, so training on ASTER cannot begin until then.
Future Analysis
Recommendations for future work include the following:
1. The use of at least three Code Generators using non-trivial
complex GN&C source code or the equivalent.
2. Analyzing the source code with respect to McCabe complexity,
fault density (per 1000 lines of code), and efficiency (see
the sketch following this list).
3. Performing Software Verification and Validation (V&V).
4. Recommending V&V Methodology and Work-Arounds for Software
Source Code Generators.
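As a small illustration of two of the measures named above, the sketch below computes fault density per 1000 lines of code and a crude decision-count estimate of McCabe cyclomatic complexity. The keyword list and numbers are illustrative only.

# Fault density and a rough McCabe estimate (illustrative sketch).
DECISION_KEYWORDS = ("if", "elif", "for", "while", "case", "and", "or")

def fault_density(open_reports: int, lines_of_code: int) -> float:
    """Open discrepancy reports normalized per 1000 lines of code."""
    return 1000.0 * open_reports / lines_of_code

def mccabe_estimate(source: str) -> int:
    """Cyclomatic complexity ~ decision points + 1, for a single module."""
    tokens = source.split()
    return 1 + sum(tokens.count(k) for k in DECISION_KEYWORDS)

code = "if x > 0 : y = 1 elif x < 0 : y = -1 while y : y -= 1"
print(fault_density(open_reports=7, lines_of_code=2000))  # -> 3.5 per KLOC
print(mccabe_estimate(code))                              # -> 4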
Conclusion
The project is ambitious. Training is required with several tools
as they become available. This report is a delineation of the project
and a substantial portion of the training. It is true that a great deal
about CASE Tools and metrics has been learned by this Summer Fellow.
Whether this work is continued by this Fellow or another, this report
provides the basis for an evaluation of the CASE Tools.
References
1. Billmann, L., Mirab, H., and Winkler, U., "CACSD-CASE Tools,"
Measurement and Control, Vol. 25, June 1992, pp. 137-143.
2. Dellen, C. and Liebner, G., "Automated Code Generation from
Graphical, Reusable Templates," 10th IEEE/AIAA Digital Avionics
Systems Conference Proceedings, IEEE, 1991, pp. 299-304.
3. MSFC, "MSFC Software Management and Development Requirements
Manual," MM 8075.1A, NASA, August 1993.
4. Williamson, K., "The ASTER Code Generator CASE Tool Evaluation,"
Internal Report, MSFC, May 12, 1993.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
MICROMECHANICAL SIMULATION OF DAMAGE PROGRESSION IN CARBON
PHENOLIC COMPOSITES
Prepared By: Kerry T. Slattery, Ph.D.
Academic Rank: Assistant Professor
Institution and Department: Washington University in St. Louis, Department of Civil Engineering
MSFC Colleagues: Raymond G. Clinton, Ph.D., Roy M. Sullivan, Ph.D.
NASA/MSFC:
Office: Materials and Processes Laboratory
Division: Nonmetallic Materials
Branch: Ceramics and Coatings
XLI
INTRODUCTION
Carbon/phenolic composites are used extensively as ablative insulating materials in the
nozzle region of solid rocket motors. The current solid rocket motor (RSRM) on the space
shuttle is fabricated from woven rayon cloth which is carbonized and then impregnated with
the phenolic resin. These plies are laid up in the desired configuration and cured to form the
finished part. During firing, the surface of the carbon/phenolic insulation is exposed to
5000°F gases from the rocket exhaust. The resin pyrolizes and the material chars to a depth
which progresses with time. The rate of charring and erosion are generally predictable, and
the insulation depth is designed to allow adequate safety margins over the firing time of the
motor. However, anomalies in the properties and response of the carbon/phenolic materials
can lead to severe material damage which may decrease safety margins to unacceptable levels.
Three macro damage modes which have been observed in fired nozzles are: ply lift, "wedge
out", and pocketing erosion. Ply lift occurs in materials with plies oriented nearly parallel to
the surface. The damage occurs in a region below the charred material where material
temperatures are relatively low — about 500°F. Wedge out occurs at the intersection of
nozzle components whose plies are oriented at about 45°. The corner of the block of material
breaks off along a ply interface. Pocketing erosion occurs in materials with plies oriented
normal to the surface. Thermal expansion is restrained in two directions resulting in large
tensile strains and material failure normal to the surface. When a large section of material is
removed as a result of damage, the insulation thickness is reduced which may lead to failure of
the nozzle due to excessive heating of critical components. If these damage events cannot be
prevented with certainty, the designer must increase the thickness of the insulator thus adding
to both weight and cost.
One of the difficulties in developing a full understanding of these macro damage
mechanisms is that the loading environment and the material response to that environment are
extremely complex. These types of damage are usually only observed in actual motor firings.
Therefore, it is difficult and expensive to evaluate the reliability of new materials. Standard
material tests which measure mechanical and thermal properties of test specimens can only
provide a partial picture of how the material will respond in the service environment. The
development of the ANALOG test procedure (2) which can combine high heating rates and
mechanical loads on a specimen will improve the understanding of the interactive effects of
the various loads on the system. But a mechanistic model of material response which can
account for the heterogeneity of the material, the progression of various micromechanical
damage mechanisms, and the interaction of mechanical and thermal stresses on the material is
required to accurately correlate material tests with response to service environments. A
model based on fundamental damage mechanisms which is calibrated and verified under a
variety of loading conditions will provide a general tool for predicting the response of rocket
nozzles. The development of a micromechanical simulation technique has been initiated and
demonstrated to be effective for studying across-ply tensile failure of carbon/phenolic
composites.
APPROACH
The finite element method is used to simulate the progression of micromechanical
damage mechanisms in the carbon/phenolic material. Two damage mechanisms are
considered: fiber/matrix interface debonding and matrix cracking. The failure process in
across-ply tension appears to initiate at the fiber/matrix interface and progress to adjacent
fibers. A crack eventually reaches the interface between two plies and propagates along that
interface resulting in specimen rupture. Fiber breakage is observed where yarns are severely
kinked, but this damage mode is assumed to occur after the development of a critical flaw and
is not currently accounted for in the model.
A two-dimensional finite element model is created to simulate the failure of a section
of the composite. A typical model consists of one yarn end along with parts of the
surrounding in-plane yarns. A sketch of a typical model is shown in Fig. 1. The model
consists of three types of finite elements: out-of-plane fiber (OPF), in-plane fiber (IPF), and
matrix (MAT). The elements are square with four nodes and eight degrees of freedom. The
OPF element represents a fiber end surrounded by a small amount of matrix. The IPF element
has the same dimensions and represents a composite oriented at the yarn angle at the element
location. The MAT element is pure matrix and is placed in resin-rich areas. The OPF, IPF,
and MAT are "superelements" whose properties are determined from detailed finite element
analyses of the constituent materials. Stiffness, thermal expansion, and crack-tip displacement
properties are tabulated for many possible damage states for each superelement type. For
example, damage in the OPF element is characterized by the location and length of debonds
along the fiber/matrix interface. Finite element models are generated and analyzed for
approximately 1000 different debond configurations. The results are stored and used to
determine superelement properties in the simulation based on the initial interface flaws and the
progression of those flaws. This method allows efficient simulation of micromechanical
damage progression on models of significant sections of composite.
[Sketch labels: in-plane fibers, yarn end, resin-rich area]
Figure 1. Micromechanical Simulation Model of Woven Carbon/Phenolic Composite
The damage growth model is based upon fracture mechanics principles. A simple
model for initial flaws is assumed at the beginning of the simulation. All initial flaws are on
the fiber/matrix interface. In the detailed finite element model of the OPF, there are 32 nodes
on the interface. An interface flaw is modeled by "disconnecting" the fiber from the matrix at
a node. A large debond is formed when several adjacent nodes are disconnected. Flaw
distribution schemes are usually random. The simulation method allows the flexibility to
investigate a variety of flaw configurations. The two used in this work were placing a fixed
length debond (e.g., 45 degrees) on some percentage of randomly selected fibers and
specifying a percentage of disconnected nodes on the fiber/matrix interface. Flaw growth is
determined using the crack closure method. This method has been used to study failure
modes in metal matrix composites (1). Each existing flaw in a superelement is analyzed in
several possible propagated states given the current nodal displacement. Tabulated data on
crack tip displacements are used to determine the distance between the nodes at the current
crack tip and the displacement caused by a unit force at those nodes. The amount of work
required to close the crack to its current state from the assumed, propagated state is
calculated and compared with the amount of energy required to create the new surface. The
crack propagates if the work exceeds the surface energy. The model is idealized since the
fibers, which are modeled as circular, actually have irregular shapes, and since the quality of
the bond between the fiber and matrix also varies around and along the fiber; however, the
interface model should provide sufficient flexibility to adequately match the response of the
interface by varying the surface energy and the flaw distribution.
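A minimal sketch of this crack-closure test is given below: the work required to close the crack tip over one increment is compared with the energy of the newly created surface, and the flaw is advanced when the released work is larger. All numerical values are illustrative placeholders, not the tabulated superelement data described above.

def closure_work(nodal_force: float, opening_displacement: float) -> float:
    """Work to close the crack over one node spacing (linear elastic): F*d/2."""
    return 0.5 * nodal_force * opening_displacement

def propagates(nodal_force, opening_displacement, surface_energy, new_area):
    """Advance the flaw when the released work exceeds the new surface energy."""
    return closure_work(nodal_force, opening_displacement) >= surface_energy * new_area

# one candidate propagated state at an OPF fiber/matrix interface node
print(propagates(nodal_force=2.0,            # N, from unit-force tabulated data (assumed)
                 opening_displacement=1e-6,  # m, crack-tip opening (assumed)
                 surface_energy=0.4,         # J/m^2, interface energy (assumed)
                 new_area=2e-6))             # m^2, newly created debond surface (assumed)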
A material configuration is selected based on photomicrographs of the composite. A
simple mesh generation subroutine is written to define the distribution of the three types of
elements and the direction of the IPF elements in the finite element simulation. The nodes on
the bottom of the model are fixed, and a uniform tensile stress is applied to the opposite face.
The stress level is increased in small increments, and the model is analyzed. After each load
step, the properties of each element are updated based upon the crack propagation models.
The simulation continues until a maximum stress is reached and severe damage occurs in the
model.
RESULTS
Figure 2 shows the progression of damage in a simple section of out-of-plane fibers with
some pure matrix elements. Initial flaws on the fiber/matrix interface are represented by thick
lines in Fig. 2a. These initial flaws begin to progress at about 0.07% strain as shown in Fig.
2b. The interface flaws propagate to adjacent fibers and eventually coalesce to form a critical
flaw which leads to specimen rupture as shown in Fig. 2c. The technique was also applied to
a more complex model such as that shown in Fig. 1. The results of many simulations using a
range of values for various parameters demonstrated that the response of carbon/phenolic
materials can be simulated effectively using this technique.
[Finite element meshes at three damage states (a), (b), (c): initial interface flaws, flaw growth, and coalescence into a critical flaw]
Figure 2. Damage Progression in Composite Loaded in Transverse Tension
CONCLUSIONS
A technique to perform micromechanical simulations of damage progression in
carbon/phenolic composites has been developed. The technique is effective at modeling
across-ply tensile response although additional calibration and verification based on damage in
tested specimens must be performed to refine the estimates of critical parameters. Thermal
loads can also be applied in the simulation, and preliminary results demonstrate that cracking
during post-cure cooldown can be predicted using this technique. Given values for the three
principal model parameters: fiber/matrix interface surface energy, interface flaw distribution,
and matrix surface energy, along with standard material properties for the constituent
materials, any loading condition can be easily simulated. Of course, some of these properties
cannot be measured directly, so the simulation technique can aid in determining these values
by performing simulations of the material response under a variety of loads and finding the
optimum values for the parameters which yield the best results for most conditions. This
method can also be extended to three dimensions if extensive computer resources are
available, but two-dimensional simulations can provide substantial new insights into the
behavior of carbon/phenolic composites.
REFERENCES
1. Mital, S.K., Caruso, J. J., and Chamis, C.C., "Metal Matrix Composites Microfracture:
Computational Simulation," Computers & Structures, Vol. 37, No. 2, February, 1990, pp.
141-150.
2. Poteat, R.M., Ohler, H.C., Koenig, J.R., Wendel, G.M., Crose, J.G., and Marx, D.A.,
"Nozzle Ablative Simulation Apparatus Development," Proceedings of the JANNAF Rocket
Nozzle Technology Subcommittee Meeting, December 1992.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
A CHEMICAL SENSOR AND BIOSENSOR BASED TOTALLY AUTOMATED WATER
QUALITY MONITOR FOR EXTENDED SPACE FLIGHT: STEP ONE
Prepared by: Robert S. Smith, Ph.D.
Academic Rank: Assistant Professor
Institution: St. John Fisher College
Department: Chemistry Department
MSFC Colleague: Layne Carter
NASA/MSFC:
Laboratory: Structures and Dynamics
Division: Thermal Engineering and Life Support
Branch: Life Support Systems
XLII
This report is the result of a literature search to
consider what technologies should be represented in a totally
automated water quality monitor for extended space flight. It
is the result of the first summer in a three year JOVE
project.
The next step will be to build a test platform at the
author's school, St. John Fisher College. This will involve
undergraduates in NASA related research. The test flow
injection analysis system will be used to test the detection
limit of sensors and the performance of sensors in groups.
Sensor companies and research groups will be encouraged to
produce sensors which are not currently available and are
needed for this project.
A ground-based water lab follows standard methods (4). As
technology evolves there is a lag time incorporating the new
technologies into standard methods, since new methods must be
validated and approved by the appropriate government agencies.
The priorities for method development for a ground-based
system vs. a space system are almost diametrically opposed;
e.g., throughput is a major concern for a ground-based system,
but the sample load will be relatively small in the extended
flight system.
A totally automated water quality monitor for extended
space flight, e.g., use on the Space Station Freedom, needs to
meet the criteria shown in Table 1. It must have sufficient
detection limits to analyze for the parameters listed in Table
2 to NASA specifications. Design of a system is aided if an
exact list of Organic Toxicants is given rather than general
categories, e.g., organic acids. NASA performs evaluations of
all materials used in spacecraft to determine candidate
compounds, e.g., plasticizer offgases.
Table 1
Water Quality Monitor Criteria
Total automation for routine operation
Minimal maintenance requirements
Low power usage
Low weight
Low space requirement
Low use of expendable items
Low use of reagents
Minimal sample size
Work in Microgravity
Withstand Launch
Meet NASA material limitations
Meet NASA safety criteria
Provide data directly to main computer system
Analyze for parameters listed in Table 2
Table 2
Parameters to be Analyzed
pH
Conductivity
Color
Bactericide
Turbidity
Dissolved Gas
Free Gas
Inorganic Anions
Inorganic Cations
Total Organic Carbon
Organic Toxicants
Until recently, the development of a totally automated
water quality monitor would have been built around the same
instruments found in earth-based analytical laboratories.
Methods would evolve around separation-based instruments,
e.g., liquid and gas chromatography, which use non-specific
detectors unless hyphenated systems are used, such as gas
chromatography-mass spectrometry, where the separation is
performed by the first instrument and specific peak
identification is done by the second. These instruments are
complex, heavy, have relatively high power requirements, and
require a moderate amount of skill to service and maintain.
Figure 1 shows the revolution in water quality related sensor
research that has occurred in the late 80's and early 90's.
Figure 1
[Chart showing the rapid growth of water quality sensor research in the late 1980's and early 1990's]
The chemical sensor or biosensor is a link between a
chemical system and a computer. The computer handles only
numbers in its digital world. Information in the analog world
must be converted from voltages to numbers. The chemical
sensor provides a link between analyte concentration and a
voltage. This completes the chain to get from changing
analyte concentration to changing numbers in the computer.
The transducer in a sensor may be potentiometric,
amperometric, conductimetric, impedimetric, optical,
calorimetric, acoustic, or mechanical (3). A biosensor links
one or more of these with a biological material that may be,
for example, organisms, tissues, cells, organelles, membranes,
enzymes, receptors, antibodies, or nucleic acids. Polymeric
materials play an important role in the mating of biomaterials
and transducers. They play structural roles as well as
active roles in time release of materials and conduction of
signals.
Some examples of sensors are ion sensitive electrodes,
enzyme electrodes, immunosensors, quartz crystal microbalance,
chemically sensitive field-effect transistors, fiber optic,
slab waveguide, bioluminescence, and electrochemical. Many
variations of sensors have been reported (1).
Ion sensitive electrodes may be used for the inorganic
anions, non-metal cations and dissolved gas. The metals can
be determined using potentiometric stripping analysis. A
diode array spectrometer can determine color, bactericide,
turbidity, and free gas. A conductivity cell will be used for
conductivity determination. TOC can be determined by a
commercially available TOC detector. Organic Toxicants can be
determined by immunosensors and enzyme based sensors (2).
An extensive list of literature references of sensors for
water quality management is available from the author via
internet at rss@sjfc.edu. A macro written in Microsoft Word
was used to prepare the output from STN searches for entry
into Borland's Paradox database program. This allowed offline
searches and sorting of the reference material.
The ultimate flow injection system can be envisioned with
a backplane for power, signals, reagents, and sample.
Ultimately electronic components and sensors will be
fabricated on the same wafers to the extent that the output of
the sensor package will be network compatible. Sensor modules
would plug into this backplane to receive their input needs
and give their output on the computer network. The modules
could contain their own diagnostics and notify ground control
or the astronauts when they need replacing. The astronauts
would simply unplug a module that might be the size of a 35mm
slide and plug in a new one.
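A minimal sketch of that plug-in module idea follows; the class, field, and status names are illustrative assumptions, not a proposed standard.

from dataclasses import dataclass

@dataclass
class SensorModule:
    name: str        # e.g., "pH", "conductivity", "TOC"
    reading: float   # converted analog value (sensor voltage -> number)
    in_spec: bool    # result of the module's own diagnostics

    def status(self) -> str:
        return "OK" if self.in_spec else "REPLACE MODULE"

modules = [SensorModule("pH", 6.8, True),
           SensorModule("conductivity", 2.1, True),
           SensorModule("TOC", 0.4, False)]
for m in modules:
    print(f"{m.name}: {m.reading} [{m.status()}]")  # reported over the network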
This system would make an ideal candidate for a
technology reinvestment or transfer program to be developed as
a water quality monitor for home/industrial use. As sensors
useful for water quality monitoring are mass produced, their
cost should drop dramatically. The system could monitor raw
water quality to a house and direct the water to in-house
purification on an as-needed basis. It could also monitor the
performance of the in-house water purification system. A
version of the system could be used for those using unfamiliar
water, e.g., travelers, campers, hikers, etc.
Acknowledgement
The author wishes to thank NASA, ASEE, St. John Fisher
College, and Dr. Wayne Lewis, Physics Department, St. John
Fisher College for the opportunity to participate in this
program. Special thanks goes to Mr. Layne Carter for serving
as the author's NASA colleague at Marshall Space Flight
Center.
References
1. Biosensors and Chemical Sensors, ACS Symposium Series 487;
Edelman, P.G.; Wang, J.W., Eds.; American Chemical Society:
Washington, D.C., 1992.
2. Bonting, S.L.; "Utilization of Biosensors and Chemical
Sensors for Space Applications"; Biosensors and
Bioelectronics; 7(8), 1992; 535-548.
3. Carstens, J.R.; Electrical Sensors and Transducers;
Regents/Prentice Hall: Englewood Cliffs, N.J., 1993.
4. Standard Methods for the Examination of Water and
Wastewater; Greenberg, A.E.; Clesceri, L.S.; Eaton, A.D.,
Eds.; American Public Health Assoc.: Washington, D.C., 1992;
18th Ed.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
MICROSTRUCTURAL ANALYSIS OF
THE 2195 ALUMINUM-LITHIUM ALLOY WELDS
Prepared by:
Academic Rank:
Institution and
Department :
MSFC Colleague:
NASA/MSFC:
Office:
Division:
Branch:
George E. Talia, Ph.D.
Associate Professor
The Wichita State University
Department of Mechanical Engr.
Arthur C. Nunes, Jr., Ph.D.
Materials & Processes Laboratory
Metallic Materials & Processes
Metallurgical Research
XLIII
Introduction
The principal objective of this research was to explain the
tendency of 2195 Al-Li alloy to crack at elevated temperature
during welding. Therefore, a study was made of the effect of
welding and thermal treatment on the microstructure of Al-Li
Alloy 2195. The critical roles of precipitates, boundaries,
phases, and other features of the microstructure were inferred
from the crack propagation paths and the morphology of fracture
surfaces of the alloy with different microstructures. Particular
emphasis was placed on the microstructures generated by the
welding process and the mechanisms of crack propagation in such
structures. Variation of the welding parameters and thermal
treatments were used to alter the micro/macro structures, and
they were characterized by optical and scanning electron micros-
copy. A theoretical model is proposed to explain changes in the
microstructure of welded material. This model proposes a
chemical reaction in which gases from the air (i.e., nitrogen)
release hydrogen inside the alloy. Such a reaction could
generate large internal stresses capable of inducing porosity
and crack-like delamination in the material.
Experimental Procedures
2195 Al-Li alloy plates were produced by the Reynolds Metals
Company; one-pass (root pass) and two-pass (root pass and cover
pass) welds were performed at the Marshall Space Flight Center.
Transverse and longitudinal sections of the welds were analyzed
by optical micrographic techniques. Each metallographic sample
was prepared for examination using standard polishing preparation
techniques and etched with Keller's reagent. Optical microscopy
observations were performed using a Nikon inverted microscope.
One-pass autogenous welds were selected for further thermal
processing, i.e., heat treatment at different temperatures in
vacuum, air, or helium atmosphere.
Results
Optical micrographs of the fusion zone of single-pass and
two-pass welds in 2195 Al-Li alloy are shown in Figure 1. The
initial metallographic analysis of the single-pass weld revealed
a well-formed grain structure with a small amount of porosity.
This porosity is comparable to the initial porosity of the parent
metal. See Figure 1-a. For two-pass welds a large amount of
porosity is observed in the first-pass fusion zone (but not in
the second or cover pass), and some of the pores take a crack-like
shape, as shown in Figure 1-b.
To separate the temperature effects from the stress effects
generated by the second weld pass, some of the single-pass welded
material was furnace heat treated at 450°C for one minute in air
and in vacuum. A comparison of the different structures is made
Figure 1.- Optical micrographs of 2195 Al-Li alloy subjected to
(a) a fusion pass weld and (b) a fusion pass plus a heating (but
not melting) cover pass. (Scale bar: 0.5 mm.)
in Figure 2. Figure 2-a presents a microstructure similar to that
of the as-welded material. In contrast, the air-heated Al-Li alloy
shows evidence of a dendritic or grain boundary reaction. See
Figure 2-b. In addition to the solid state boundary reaction, an
increase in porosity was observed in the air-heated material.
Furthermore, 1.2% nitrogen contamination of the helium
shield gas of a weld pass was observed to generate a large amount
of porosity while, in contrast, electron beam (EB) welds performed
in vacuum and welds thermally treated in helium exhibit porosity
similar to that of the parent metal. All these results support
nitrogen as a cause of the porosity observed in welds in the
Al-Li alloy.
Discussion
Chemical analysis of Al-Li alloy 2195 base and weld metal
indicated hydrogen contamination at levels much higher than
expected for Alloy 2219, which lacks lithium. It is conjectured
that the hydrogen is present in the form of a lithium compound.
When the welds are heated in air, nitrogen penetrates rapidly
into the material along dendritic boundaries. Then it begins to
diffuse into the solid metal. When it encounters a hydrogen-
lithium compound, it displaces the hydrogen and releases it as a
gas. At elevated temperatures the high gas pressure forms porosity
and promotes cracking.
Conclusions and Recommendations
Initial results have led to the following tentative conclu-
sions:
a) Reheating (e.g., by a cover pass) generates both the round
porosity and the crack-like porosity observed in 2195 Al-Li alloy
welds.
b) A tentative model has been developed to predict and
understand the porosity formation.
c) Additional work is necessary to verify the proposed
model and the mechanical properties of the 2195 Al-Li welds.
Microhardness tests at room temperature should be employed to
characterize the mechanical properties of the different features
observed in the microstructure, especially in the welding zone.
Hot tensile tests should also be performed to evaluate the weld-
ing zone strength and the effect of the temperature variation on
the integrity of the welding.
Acknowledgments
The authors are extremely grateful to Dr. J. Singh for
helpful discussions and experimental assistance.
ai"" '<^J" ' ~^"
(a)
25 /urn
(b)
•"'iSff* 1
Figure 2.- Micrographs showing the effect of the heating at
450 C for a minute in vacuum (a) and in air (b) .
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA
TORQUE EQUILIBRIUM ATTITUDES
FOR THE SPACE STATION
Prepared by:
Academic Rank:
Institution and
Department:
MSFC Colleague:
NASA/MSFC:
Office:
Division:
Branch:
Roger C. Thompson, Ph.D.
Assistant Professor
The Pennsylvania State University
Department of Aerospace Engineering
Connie Carrington, Ph.D.
Program Development
Subsystems Design
Guidance, Navigation, and Control
XLIV
Introduction
All spacecraft orbiting in a low earth orbit (LEO) experience external torques due to envi-
ronmental effects. Examples of these torques include those induced by aerodynamic, gravity-
gradient, and solar forces. It is the gravity-gradient and aerodynamic torques that produce the
greatest disturbances to the attitude of a spacecraft in LEO, and large asymmetric spacecraft,
such as the space station, are affected to a greater degree because the magnitude of the torques
will, in general, be larger in proportion to the moments of inertia. If left unchecked, these
torques would cause the attitude of the space station to oscillate in a complex manner and the
resulting motion would destroy the micro-gravity environment as well as prohibit the orbiter
from docking. The application of control torques will maintain the proper attitude, but the
controllers have limited momentum capacity. When any controller reaches its limit, propellant
must then be used while the device is reset to a zero or negatively-biased momentum state.
Consequently, the rate at which momentum is accumulated is a significant factor in the
amount of propellant used and the frequency of resupply necessary to operate the station.
A torque profile in which the area under the curve for a positive torque is not equal to the
area under the curve for a negative torque is "biased," and the consequent momentum build-up
about that axis is defined as secular momentum because it continues to grow with time. Con-
versely, when the areas are equal, the momentum is cyclic and bounded. A Torque Equilib-
rium Attitude (TEA) is thus defined as an attitude at which the external torques "balance"
each other as much as possible, and which will result in lower momentum growth in the con-
trollers. Ideally, the positive and negative external moments experienced by a spacecraft at the
TEA would exactly cancel each other out and small cyclic control torques would be required
only for precise attitude control. Over time, the only momentum build-up in the controllers
would be due to electro-mechanical losses within the device. However, the atmospheric
torques are proportional to the density of the atmosphere and the density varies with the
orbital position, time of day, time of year, and the solar cycle. In addition, there are unmodeled
disturbances and uncertainties in the mass and inertias. Therefore, there is no constant attitude
that will completely balance the environmental torques and the dynamic TEA cannot be
solved in closed form. The objective of this research was to determine a method to calculate a
dynamic TEA such that the rate of momentum build-up in the controllers would be minimized
and to implement this method in the MATRIXx simulation software by Integrated Systems,
Inc.
Description of Research
Previous methods for calculating TEAs have relied upon approximations of the atmo-
spheric density and have assumed that the atmosphere was constant with respect to the orbital
path of the spacecraft. The TEA calculation was reduced to a quasi-closed-form method in
which the approximate torques were substituted into the equations of motion, and the result-
ing system was solved numerically. It was decided to research the possibility of determining
dynamic TEAs for the space station while using accurate models of the atmosphere and
including all six of the rigid-body degrees-of-freedom (DOF) in the numerical simulations.
A TEA is essentially the "optimal" attitude where the moments required of the controllers
are zero-biased, and the research focused on formulating the optimization problem. Although
MATRIXx has an optimization module available, but this feature was not included in the license
of the Program Development Office. Consequently, minimization routines for single and mul-
tiple variables were adapted from Fortran codes collected by Press et al. (4). The appropriate
algorithms were then translated into MATRIXx executable files.
To determine the feasibility of the optimization approach, a one DOF model was the first
case to be tested. The inertia, aerodynamic moment, and gravity-gradient moment coefficients
used in the model were taken from space station data so that the numerical results would be of
the same order. The aerodynamic moment was given the form
\[
M_{aero} = (a - \varepsilon \sin\omega t)\,\theta \qquad [1]
\]

to simulate the variable atmosphere. The equation of motion for this system is essentially
Mathieu's equation (3) with a constant forcing function and has the form

\[
I\ddot{\theta} + \big[(mgr - a) + \varepsilon \sin\omega t\big]\,\theta = mgr\,\theta_0 \qquad [2]
\]

where I is the inertia, mgr θ₀ is the gravity-gradient moment, and θ₀ is the angle at which
the gravity-gradient moment is zero. The cost function used in the optimization algorithm was

\[
J = \left| \int M \, dt \right| \qquad [3]
\]

where M is the sum of the environmental torques. This cost function allows the positive
moments to cancel the negative moments, but returns a positive-definite value for all possible
solutions.
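As an illustration of this formulation, the sketch below (Python with NumPy/SciPy, standing
in for the Fortran routines and MATRIXx files actually used) evaluates the cost of Eq. [3]
for a candidate attitude that is simply held fixed, then minimizes it in one dimension; all
coefficient values are illustrative placeholders, not space station data.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize_scalar

    # Illustrative coefficients only; the study used space-station-derived values.
    mgr, a, eps = 4.0e4, 1.0e4, 2.0e3   # gravity-gradient and aerodynamic terms
    omega = 2.0 * np.pi / 5400.0        # orbital rate for a 90-minute orbit, rad/s
    theta0 = 0.05                       # angle (rad) where the g-g moment vanishes

    def net_torque(t, theta):
        """Environmental torque at a held attitude theta."""
        m_aero = (a - eps * np.sin(omega * t)) * theta   # Eq. [1]
        m_gg = -mgr * (theta - theta0)                   # gravity-gradient moment
        return m_aero + m_gg

    def cost(theta, t_end=10 * 5400.0):
        """Eq. [3]: J = |integral of M dt|, zero for an unbiased torque profile."""
        sol = solve_ivp(lambda t, h: [net_torque(t, theta)],
                        (0.0, t_end), [0.0], rtol=1e-8, atol=1e-6)
        return abs(sol.y[0, -1])

    res = minimize_scalar(cost, bracket=(0.0, 0.1))   # Brent's method by default
    print(f"TEA estimate: {res.x:.4f} rad (residual bias {res.fun:.3e})")

For these placeholder numbers the torque bias vanishes at θ = mgr θ₀/(mgr − a) ≈ 0.0667 rad,
which the one-dimensional search recovers.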
Because this problem can be solved in closed form, the solution from the optimization
algorithm could be compared to the analytical solution; the results were very good but also
quite surprising. The TEA was successfully calculated with negligible error, but the unex-
pected result was in the torque profile. A very strong beat phenomenon was displayed where
the low frequency component had a period of 20 orbits and the high frequency occurred at the
orbital period. Further investigations indicated that the beat is very sensitive to the interaction
between the forcing term (the gravity-gradient null position) and the amplitude of the time-
varying component of the aerodynamic torques. The beat occurred only when the parameters
had a certain proportional value and the range of the proportional constant at which the beat
occurred was very small. However, this would seem to indicate that a given spacecraft config-
uration would exhibit this kind of motion at a certain atmospheric density and this subject will
be investigated further.
The next test case was a three DOF model in which the attitude equations were imple-
mented with the simplified gravity-gradient and aerodynamic torques. The environmental
torques about each axis had different magnitudes and were completely independent of each
other. The attitude dynamics, however, were coupled through Euler's Equations and the equa-
tions of motion governing the attitude of the spacecraft (1). With this model, the multi-vari-
able optimization algorithm could be tested with the coupled, nonlinear attitude dynamics but
without the complexity of the six DOF simulations. This system could not be solved in closed
form, but the attitude at which the torques about each axis are statically balanced could be
determined and the TEA would be expected to be somewhere in the neighborhood of this atti-
tude.
The cost function and the MATRIXx simulation for this system were substantially differ-
ent from the simple form used in the previous case. The attitude of a spacecraft will vary as
the spacecraft reacts to the external torques, but to maintain the micro-gravity environment, a
fixed attitude (the TEA) is desired. Therefore, when the actual attitude and the fixed attitude
coincide, no control torques are required even though the spacecraft is experiencing external
torques at that attitude. When the actual attitude differs from the fixed attitude, the corre-
sponding external torques will differ, and it is this difference that should be zero-biased. The
simulation must therefore simultaneously integrate the motion of a spacecraft flying at a fixed
attitude and a spacecraft allowed to react to the external moments. The moments are calcu-
lated for each spacecraft and the difference is the integrand of the cost function. The cost func-
tion is the magnitude of the vector resulting from the integration and is represented
mathematically by
\[
J = \left[ \sum_{i=1}^{3} \left( \int \Delta M^{\,i} \, dt \right)^{2} \right]^{1/2} \qquad [4]
\]

where ΔM is the torque difference between the free and fixed spacecraft and the superscript
indicates the i-th element of the moment vector.
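The evaluation of this cost from the two simultaneously integrated torque histories can be
sketched as follows (Python; the arrays are placeholders for the MATRIXx simulation outputs):

    import numpy as np

    def tea_cost(t, m_free, m_fixed):
        """Eq. [4]: magnitude of the time-integrated torque difference.

        t       -- (n,) simulation time samples
        m_free  -- (n, 3) torques on the spacecraft reacting to the environment
        m_fixed -- (n, 3) torques on the spacecraft held at the candidate TEA
        """
        delta = m_free - m_fixed            # the integrand of the cost function
        h = np.trapz(delta, t, axis=0)      # (3,) integrated moment vector
        return np.sqrt(np.sum(h**2))        # J, zero only if every axis is unbiased

The multivariable minimization routine then drives this scalar toward zero over the three
attitude angles.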
The optimization algorithm was able to find a TEA that drove the cost function to zero and
this TEA was indeed very close to the static equilibrium attitude in the pitch and yaw axes, but
differed significantly in the roll axis as shown in Table 1. Additional calculations proved that
there was no other TEA in the neighborhood of the static equilibrium attitude and the large
roll angle, necessary to obtain the zero-biased torques, was a consequence of the coupling
between the axes. The beat phenomenon was again clearly displayed in the torque profiles.
Table 1: TEA for the 3 DOF model (angles in rad)

                                  Yaw      Pitch    Roll
  Torques balance statically      0.1048   0.1746   0.0499
  TEA from optimization           0.0994   0.1761   0.1899
The next stage of the research was to implement this method of calculating TEAs in the
space station simulations. The procedure is essentially the same as that used in the three DOF
example. The simulations were changed such that a fixed attitude model was integrated simul-
taneously with a free-flying model, but the simulations now included all six rigid-body
degrees-of-freedom, an accurate atmospheric density model, and detailed atmospheric drag/
moment calculations. The cost function remained exactly the same as used in the three DOF
model. Examples of the Human Tended Configuration (HTC) and the International Human
Tended Configuration (IHTC) were completed.
The results for both configurations were unexpected and were thought, at first, to be in
error. Neither configuration had a TEA that resulted in zero-biased torques, and in both cases,
the yaw torque was the only one that did not reduce to a zero bias. Additional calculations
proved that the result returned from the optimization algorithm was indeed the minimum of
the cost function. The explanation for this result lies in the coupling between the axes; an
arbitrary body may have an equilibrium condition in which a biased torque about one axis is
necessary to produce a zero-biased stable attitude about the other two. The yaw axis is the
biased axis because the gravity-gradient torque about the yaw axis is extremely weak.
This type of behavior has been observed in previous studies (2) where yaw-biasing was
necessary to provide a stable attitude. Previous attempts to determine the proper yaw bias
were accomplished through trial and error methods. A yaw bias was chosen, the optimal atti-
tude was determined for the roll and pitch axes, and the total momentum was calculated. The
procedure was repeated for several different yaw angles and the momentum was plotted as a
function of the yaw angle. The yaw bias was finally chosen at the point where the momentum
was minimized. The calculation of TEAs using the method developed in this research seeks a
solution in which the external torques are zero-biased. If such a solution does not exist, how-
ever, the optimization algorithm still seeks the minimum bias which in most cases will be the
yaw biased attitude.
Conclusions
Calculating TEAs by minimizing the bias of the external torques was shown to be
very promising. The method has distinct advantages over the quasi-closed-form approaches used
in the past because no assumptions about the mathematical behavior of the torques are required.
The numerical simulations may contain any degree of complexity in the nonlinear dynamics
and calculation of the external torques. The method is very robust and, with the proper optimi-
zation routine, can incorporate equality and inequality constraints. Finally, the method will
find the zero-bias TEA if such a solution exists, or reduce to the yaw-biased solution. The
method was tested on two simple models and several of the space station configurations with
excellent results returned in all cases.
References
1. Hughes, Peter C., "Spacecraft Attitude Dynamics," John Wiley and Sons, New York, New
York, 1986.
2. Kelly, J. J., "Optimum Yaw-Biasing for Arrow Mode," Memorandum A95-J845-M-
9102083, 15 May 1991.
3. Pearson, Carl E., ed., "Handbook of Applied Mathematics, 2nd Ed.," Van Nostrand Rein-
hold Co., New York, New York, 1983, pp. 712-717.
4. Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling,
"Numerical Recipes: The Art of Scientific Computing," Cambridge University Press, New
York, New York, 1986, pp. 274-301.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
PROPERTIES AND PROCESSING CHARACTERISTICS
OF LOW DENSITY CARBON CLOTH
PHENOLIC COMPOSITES
Prepared By:
C. JEFF WANG
Academic Rank:
Assistant Professor
Institution
and Department:
Tuskegee University, AL,
Chemical Engineering Department
MSFC Colleague:
Corky Clinton, Ph.D.
NASA/MSFC
Office:
Materials and Processes Laboratory
Division:
Non-Metallic Materials
Branch:
Ceramics and Coatings
XLV
I. INTRODUCTION
Ply-lift and pocketing are two critical anomalies of carbon cloth phenolic composites
(CCPC) in rocket nozzle applications [1]. Ply lift occurs at low temperatures when the A/P
and in-plane permeabilities of the composite materials are still very low and in-plane porous
paths are blocked. Pocketing occurs at elevated temperatures when in-plane permeability is
reduced by the A/P compressive stress. The thermostructural response of CCPC in a rapid
heating environment involves simultaneous heat, mass, and momentum transfers along with
the degradation of phenolic resin in a multiphase system with temperature- and time-
dependent material properties as well as dynamic processing conditions [2]. Three
temperature regions represent the consequent chemical reactions, material transformations,
and property transitions, and provide a quick qualitative method for characterizing the
thermostructural behavior of a CCPC.
In order to optimize the FM5939 LDCCP (low density carbon cloth phenolic) for the
nozzle performance required in the Advanced Solid Rocket Motor (ASRM) program, a
fundamental study on LDCCP materials has been conducted [3]. The cured composite has a
density of 1.0 ± 0.05 gm/cc, which includes 10 to 25% void volume. The weight percent of
carbon microballoons is low (7-15%). However, they account for approximately one third of
the volume, and historically their percentages have not been controlled very tightly. In
addition, the composite properties show no correlation with microballoon weight % or fiber
properties (e.g., fiber density or fiber moisture adsorption capacity). Test results concerning
the ply-lift anomaly in the MNASA motor firings were [3]:
- Steeper ply angle (shorter path length) designs minimized/eliminated ply lifting
- Material with higher void volume ply lifted less frequently
- Materials with high (>9%) microballoon content had a higher rate of ply lifting
- LDCCP materials failed at microballoon-resin interfaces.
The objectives of this project are:
1. To investigate the effects of carbon microballoon and cabosil fillers as well as fiber heat
treatment on plylift-related mechanical properties.
2. To develop a science-based thermostructural process model for the carbon phenolics.
The model can be used in the future for the selection of the improved ASRM materials.
3. To develop the micro-failure mechanisms for the ply-lift initiation and propagation
processes during the thermoelastic region of phenolic degradation, i.e. postcuring and
devolatilization.
II. FILLER-RESIN INTERACTION AND FIBER HEAT TREATMENT
Six lots of LDCCP (Table 1) were fabricated by varying the fiber heat treatment
condition, type of carbon microballoon, and the use of silica filler. Parameters governing the
across-ply tensile properties, interlaminar shear strength, and plylift failure modes will be
examined. The effects of the resin-filler interaction on gas permeability and thermal
expansion behavior will also be investigated.
Table 1. Material Description

  Prepreg Material       Fabric      Resin            Microballoon   Cabosil
  FM5939 LDC 1722        BP CCA-8+   Ironsides 91LD   T              No
  FM5939 LDC-X1 1723     BP CCA-8+   Ironsides 91LD   T              Yes
  FM5055 LDC 1724        BP CCA-8    Ironsides 91LD   A              Yes
  FM5055 LDC-X2 1725     BP CCA-8    Ironsides 91LD   T              Yes
  FM5055 LDC-X3 1726     BP CCA-8    Ironsides 91LD   T              No
  FM5939 LDC-X1 1727     BP CCA-8+   Ironsides 91LD   T              Yes
The FM5055 LDC material, fabricated with a carbon microballoon type A, is a
"historical" LDCCP material, and FM5939 LDC, with a CCA-8 + carbon fabric and carbon
microballoon type T, is under development for the ASRM program. The effects of
microballoon type and the presence of cabosil on the specific gravity and volatile content are
shown in Table 2.
Table 2. Composite density and volatile content, preliminary data

  Prepreg material        Specific gravity (1)   Residual volatile, % (2)
  FM 5939 LDC 1722        1.076                  1.684
  FM 5939 LDC X1 1723     1.071                  1.850
  FM 5939 LDC X1 1727     1.063                  1.834
  FM 5055 LDC 1724        1.034                  2.405
  FM 5055 LDC X2 1725     1.064                  2.387
  FM 5055 LDC X3 1726     1.073                  2.260

(1) ASTM D 792: Standard Test Methods for Specific Gravity (Relative Density) and
Density of Plastics by Displacement.
(2) Thiokol Specification for RSRM, STW 5-2845E: Nozzle Reinforced Plastic Component
Testing and Accepting Criteria.
III. Polymer Degradation Model: An Initial Model Framework
Following the work published in wood pyrolysis [4], a one-dimensional material
balance equation for the gases generated in the composite is given as:
\[
\frac{\partial(\epsilon\rho_g)}{\partial t} + \frac{\partial(\rho_g u)}{\partial x} = R_g \qquad (1)
\]

where ρ_g = density of gas, u = superficial gas velocity, ε = porosity, and R_g = gas
generation rate. Using Darcy's law for a porous medium, the momentum balance equation
on gas permeation can be expressed as:

\[
u + \frac{K}{\mu}\frac{\partial p}{\partial x} = 0 \qquad (2)
\]

where K = permeability and μ = viscosity of gas. By defining an effective thermal
conductivity, the energy balance equation on the solid phase is:

\[
(1-\epsilon)\rho_s C_s \frac{\partial T}{\partial t} =
\frac{\partial}{\partial x}\!\left(k^{*}\frac{\partial T}{\partial x}\right)
- \rho_g u C_g \frac{\partial T}{\partial x} + h_R R_s \qquad (3)
\]

where k* = (1-ε)k_s + εk_g, C_s = heat capacity of solid, C_g = heat capacity of gas,
h_R = heat of reaction, and R_s = solid generation rate (= -R_g). In the material
balance equation, the rate of polymer degradation is defined as:

\[
R_s = \frac{\partial}{\partial t}\big[(1-\epsilon)\rho_s\big]
    = -\frac{\Delta M_R}{V}\frac{d\alpha}{dt} \qquad (4)
\]

\[
\frac{d\alpha}{dt} = \sum_i w_i \frac{d\alpha_i}{dt} \qquad (5)
\]

where R_s = solid generation rate, w_i = weight fractions of volatile, pyrolysis gases,
and carbon char, respectively, and α_i = degree of degradation for devolatilization,
pyrolysis, and charring, respectively.
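As one possible realization of Eq. (5), the sketch below (Python) combines three
first-order Arrhenius sub-reactions for devolatilization, pyrolysis, and charring; the
rate parameters are hypothetical placeholders, since the actual rates are to be
determined from experimental data, as noted below.

    import numpy as np

    R_GAS = 8.314  # J/(mol K)

    # Hypothetical Arrhenius parameters for devolatilization, pyrolysis, charring.
    A = np.array([1.0e4, 1.0e7, 1.0e5])   # pre-exponential factors, 1/s
    E = np.array([6.0e4, 1.2e5, 1.5e5])   # activation energies, J/mol
    w = np.array([0.05, 0.25, 0.70])      # weight fractions w_i (sum to 1)

    def degradation_rates(alpha_i, T):
        """Sub-process rates d(alpha_i)/dt, each assumed first order in the
        remaining reactant, plus the overall rate of Eq. (5)."""
        rates = A * np.exp(-E / (R_GAS * T)) * (1.0 - alpha_i)
        return rates, w @ rates

    # Explicit Euler march of the degrees of degradation at a constant 800 K.
    alpha_i, dt = np.zeros(3), 0.01
    for _ in range(2000):
        rates, _ = degradation_rates(alpha_i, 800.0)
        alpha_i += rates * dt
    print("overall degree of degradation:", w @ alpha_i)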
In the above equations, the material properties, e.g. thermal conductivity, heat
capacity, and permeability, have to be estimated as functions of temperature, degree of
degradation, and fiber or resin volume fraction. The rate of degree of degradation, da/dt,
will be determined from the experimental data.
IV. Microscopic Analysis of Residual Thermal Stress in a Single Fiber-Matrix System
Efforts have been made to develop analytical tools for predicting stresses and internal
pressure created when the composite is heated rapidly [1]. These models provide good
insight into the thermal and mechanical responses of composites. However, the fracture
mechanics of these models was based on macro-mechanics. In this section, a model
framework for polymer thermal degradation and composite micro-mechanics will be
presented.
Consider a single fiber embedded in a matrix, with the system cooled by ΔT. Due
to the differential thermal contraction, a contact pressure, p, is developed at the
fiber-matrix interface. The fiber is subjected to an external pressure, p, at r_f (the
radius of the fiber), and the resin is subjected to an internal pressure, p. Based on
this thick cylinder model, the radial displacements and residual thermal stress can be
calculated by the following equations [4].
\[
u_f = -\frac{(1-\nu_f)}{E_f}\, r_f\, p \qquad (6)
\]

\[
u_m = \frac{r_f\, p}{E_m}\left(\frac{r_f^2 + r_m^2}{r_m^2 - r_f^2} + \nu_m\right) \qquad (7)
\]

where u_f, u_m are the radial displacements of fiber and matrix, E_f, E_m are the
elastic moduli of fiber and matrix, r_m is the radius of the matrix, and ν_f, ν_m are
the Poisson's ratios of fiber and matrix. Compatibility at the fiber-matrix interface
requires that

\[
u_m - u_f = (\alpha_m - \alpha_f)\, r_f\, \Delta T \qquad (8)
\]

Combining Eqs. (6)-(8), the residual thermal stress at the microscopic level is:

\[
\frac{p}{\Delta T} =
\frac{\alpha_m - \alpha_f}
{\dfrac{1}{E_m}\left(\dfrac{r_f^2 + r_m^2}{r_m^2 - r_f^2} + \nu_m\right)
 + \dfrac{1-\nu_f}{E_f}} \qquad (9)
\]

The typical values of E_m and α_m for phenolic resin are 5 GPa and 70 × 10⁻⁶/°C, and
E_f and α_f for medium-modulus carbon fibers are 270 GPa and 3.5 × 10⁻⁶/°C,
respectively. In the case of a fiber volume fraction of 0.6 (r_f²/r_m² = 0.6), the value
of p/ΔT in Eq. (9) is around 70 kPa/°C. When the phenolic composite is cooled from a
curing temperature of 160°C to a room temperature of 25°C, the micro-level residual
thermal stress is -9.4 MPa.
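A quick numerical check of Eq. (9) with the property values quoted above (Python; the
Poisson's ratios are assumed values, since only the moduli and expansion coefficients
are given in the text):

    # Evaluate Eq. (9) for the quoted phenolic resin / carbon fiber properties.
    Em, Ef = 5.0e9, 270.0e9      # matrix and fiber elastic moduli, Pa
    am, af = 70.0e-6, 3.5e-6     # thermal expansion coefficients, 1/degC
    num, nuf = 0.35, 0.30        # Poisson's ratios (assumed, not from the report)
    ratio = 0.6                  # rf^2 / rm^2, i.e. the fiber volume fraction

    geom = (1.0 + ratio) / (1.0 - ratio)   # (rf^2 + rm^2)/(rm^2 - rf^2) = 4.0
    p_per_dT = (am - af) / ((geom + num) / Em + (1.0 - nuf) / Ef)
    print(p_per_dT)                   # ~7.6e4 Pa/degC, i.e. around 70 kPa/degC
    print(p_per_dT * (25.0 - 160.0))  # ~ -1.0e7 Pa on cooldown from cure, close
                                      # to the -9.4 MPa quoted (depends on the nu's)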
V. Acknowledgements
Technical support provided for this project by Dr. Raymond G. Clinton of the Ceramics
and Coatings Branch, Materials and Processes Laboratory at MSFC is gratefully acknowledged. Special
thanks are addressed to Mr. John R. Koenig and Mr. Eric H. Stokes, Southern Research
Institute, Birmingham, AL, as well as Prof. Bor Z. Jang, Auburn University, AL for their
valuable discussion and encouragement throughout this project.
REFERENCES
1. R. M. Sullivan and N. J. Salamon, "A Finite Element Method for the Thermochemical
Decomposition of Polymeric Materials - II. Carbon Phenolic Composites," Int. J. Engng
Sci. 30, 939, 1992.
2. M. R. Tant and J. B. Henderson, "Thermochemical Expansion of Polymer Composites,"
Handbook of Ceramics and Composites . Vol. 1, Chap. 13, 1990.
3. A. Canfield, R. G. Clinton, S. Brown, and J. Koenig, "Fundamental Understanding of
LDC Materials/Ply Lifting," JANNAF/RNTS Meeting, December 1992.
4. E. J. Kansa, H. E. Perlee, and R. F. Chaiken, "Mathematical Model of Wood Pyrolysis
Including Internal Forced Convection," Combustion and Flame 29, 311 (1977).
5. L.-R. Hwang, "Processing-Structure-Property Relationships of Ceramic Fiber Reinforced
Si-C-O Matrix Composites," Ph.D. Dissertation, Auburn University, AL, 1991.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
Effects of Thermal-Solutal Convection on Temperature
and Solutal Fields under Various Gravitational Orientations
Prepared By:
Academic Rank:
Institution and
Department :
MSFC Colleagues:
NASA/MSFC:
Office:
Division:
Branch:
Jai-Ching Wang, Ph. D.
Associate Professor
Alabama A&M University
Department of Physics
Sandor L. Lehoczky, Ph. D.
Dale Watring
Frank Szofran, Ph. D.
Space Science Laboratory
Microgravity Science & Application Division/ES 71
Electronic & Photonic Materials Branch/ES 75
XLVI
Introduction
Semiconductor crystals such as Hg1-xCdxTe grown by the unidirectional
solidification (Bridgman) method have shown compositional segregation
in both the axial and radial directions (Lehoczky et al., 1980, 1981,
1983). Due to the wide separation between the liquidus and the solidus
of its pseudobinary phase diagram (Lehoczky and Szofran 1981), a
diffusion layer of higher HgTe content builds up in the melt near the
melt-solid interface, which gives a solute concentration gradient in
the axial direction. The value of the effective diffusion coefficient
calculated from fitting the data to a 1D model varies with growth
conditions (Szofran et al. 1984). This indicates that the growth of
Hg1-xCdxTe is not purely diffusion controlled. Because the thermal
conductivity of the melt is higher than that of the crystal in the
growth system, there is thermal leakage through the fused silica
crucible wall near the melt-solid interface. This gives a thermal
gradient in the radial direction. Hart (1971) and Thorpe, Hutt and
Soulsby (1969) have shown that under such conditions a fluid will
become convectively unstable as a result of the different diffusivities
of temperature and solute. It is quite important to understand the
effects of this thermosolutal convection on the compositional
segregation in both the axial and radial directions in unidirectionally
solidified crystals under various gravitational directions. To reach
this goal, we start with a simplified problem to study the effects of
thermal-solutal convection on the temperature and solutal fields under
various gravitational orientations. We begin by reviewing the model
governing equations.
Governing Equations
In this study we adopt the Boussinesq approximation: the equation
of state takes the form that density is constant, except that in the
presence of the gravitational field a buoyancy force exists due to
density variations caused by the temperature and concentration
variations in the melt.

Under the Boussinesq approximation and axisymmetric boundary
conditions, the governing equations in cylindrical coordinates for
incompressible fluid flow of the system are:
\[
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial r} + w\frac{\partial u}{\partial z}
= -\frac{1}{\rho}\frac{\partial p}{\partial r}
+ \nu\left(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}
+ \frac{\partial^2 u}{\partial z^2} - \frac{u}{r^2}\right) \qquad [1]
\]

\[
\frac{\partial w}{\partial t} + u\frac{\partial w}{\partial r} + w\frac{\partial w}{\partial z}
= -\frac{1}{\rho}\frac{\partial p}{\partial z}
+ \nu\left(\frac{\partial^2 w}{\partial r^2} + \frac{1}{r}\frac{\partial w}{\partial r}
+ \frac{\partial^2 w}{\partial z^2}\right)
+ g\big(\beta_T (T - T_0) + \beta_C (C - C_0)\big) \qquad [2]
\]

\[
\frac{1}{r}\frac{\partial (r u)}{\partial r} + \frac{\partial w}{\partial z} = 0 \qquad [3]
\]

\[
\frac{\partial T}{\partial t} + u\frac{\partial T}{\partial r} + w\frac{\partial T}{\partial z}
= \kappa_i\left(\frac{\partial^2 T}{\partial r^2} + \frac{1}{r}\frac{\partial T}{\partial r}
+ \frac{\partial^2 T}{\partial z^2}\right),
\quad \kappa_i = \kappa_l \ \text{(melt) or } \kappa_s \ \text{(solid)} \qquad [4]
\]

\[
\frac{\partial C}{\partial t} + u\frac{\partial C}{\partial r} + w\frac{\partial C}{\partial z}
= D\left(\frac{\partial^2 C}{\partial r^2} + \frac{1}{r}\frac{\partial C}{\partial r}
+ \frac{\partial^2 C}{\partial z^2}\right) \qquad [5]
\]

where u and w are the radial and axial velocity components.
Scaling the dimensional variables

The equations are nondimensionalized by scaling each variable by a factor F, i.e.,
V = F V*: length is scaled by R_c, velocity by ν/R_c, time by R_c²/ν, and pressure by
ρν²/R_c². Temperature is nondimensionalized by setting θ = (T − T_m)/ΔT and solute
concentration by setting C* = (C − C_0)/ΔC. After the scaling, and dropping all the
asterisks, the dimensionless equations become:

\[
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial r} + w\frac{\partial u}{\partial z}
= -\frac{\partial p}{\partial r}
+ \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}
+ \frac{\partial^2 u}{\partial z^2} - \frac{u}{r^2} \qquad [6]
\]

\[
\frac{\partial w}{\partial t} + u\frac{\partial w}{\partial r} + w\frac{\partial w}{\partial z}
= -\frac{\partial p}{\partial z}
+ \frac{\partial^2 w}{\partial r^2} + \frac{1}{r}\frac{\partial w}{\partial r}
+ \frac{\partial^2 w}{\partial z^2}
+ Gr_T\,\theta + Gr_C\,C \qquad [7]
\]

\[
Pr\left(\frac{\partial \theta}{\partial t} + u\frac{\partial \theta}{\partial r}
+ w\frac{\partial \theta}{\partial z}\right)
= K\left(\frac{\partial^2 \theta}{\partial r^2} + \frac{1}{r}\frac{\partial \theta}{\partial r}
+ \frac{\partial^2 \theta}{\partial z^2}\right),
\qquad K = k_i/k_s,\ i = \text{melt, solid} \qquad [8]
\]

\[
\frac{\partial C}{\partial t} + u\frac{\partial C}{\partial r} + w\frac{\partial C}{\partial z}
= \frac{1}{Sc}\left(\frac{\partial^2 C}{\partial r^2} + \frac{1}{r}\frac{\partial C}{\partial r}
+ \frac{\partial^2 C}{\partial z^2}\right) \qquad [9]
\]

where the thermal and solutal Grashof numbers, Gr_T and Gr_C, are defined by

\[
Gr_T = \frac{g \beta_T \Delta T R_c^3}{\nu^2}, \qquad
Gr_C = \frac{g \beta_C \Delta C R_c^3}{\nu^2} \qquad [10]
\]

The Prandtl number, Pr, and Schmidt number, Sc, are defined by

\[
Sc = \nu / D \quad \text{and} \quad Pr = \nu / \kappa. \qquad [11]
\]
Comparing the nondimensionalized equations with the FIDAP equations, we use the
following inputs to FIDAP for the strongly coupled equations:

  FIDAP input                   Setting                          Value used
  Density                       1                                1
  Viscosity                     1                                1
  Specific heat, C_p            Pr = ν/κ                         0.233
  Conductivity                  K_l/K_s or 1                     2
  Capacity                      1                                1
  Diffusivity, D                1/Sc, Sc = ν/D                   0.0143
  Thermal volume expansion      Gr_T = g β_T ΔT R_c³/ν²          (set per run)
  Solutal volume expansion      Gr_C = g β_C ΔC R_c³/ν²          (set per run)
These governing equations show that the flow characteristics are determined uniquely
by Gr_T, Gr_C, Pr, and Sc. These equations have been solved with the FIDAP program
developed by Fluid Dynamics International, Inc. The boundary conditions on the
velocity field are no slip at all walls. The boundary conditions on the solute field
are a constant concentration at the top of the melt and the segregation condition at
the growth interface.
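For orientation, a short sketch (Python) of how the four dimensionless groups follow
from melt properties; every property value below is an assumed placeholder, chosen
only so that Pr and 1/Sc reproduce the FIDAP table entries above.

    # Assumed melt properties -- placeholders for demonstration only.
    g      = 9.81         # gravitational acceleration, m/s^2
    nu     = 1.0e-7       # kinematic viscosity, m^2/s
    kappa  = nu / 0.233   # thermal diffusivity chosen so that Pr = 0.233
    D      = nu * 0.0143  # solute diffusivity chosen so that 1/Sc = 0.0143
    beta_T = 1.0e-4       # thermal expansion coefficient, 1/K
    beta_C = 0.1          # solutal expansion coefficient
    dT, dC = 10.0, 0.01   # characteristic temperature/composition differences
    Rc     = 0.005        # crucible radius, m

    Gr_T = g * beta_T * dT * Rc**3 / nu**2   # thermal Grashof number, Eq. [10]
    Gr_C = g * beta_C * dC * Rc**3 / nu**2   # solutal Grashof number, Eq. [10]
    print(f"Pr={nu/kappa:.3f}  Sc={nu/D:.1f}  Gr_T={Gr_T:.3g}  Gr_C={Gr_C:.3g}")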
Conclusions
Preliminary simulation results for the input values listed above and Gr_C = 0 with
Gr_T = 0 reveal that the CdTe compositional profile under 1D diffusion-controlled
growth conditions agrees well with the result obtained by Han et al. (1992). A
fixed-grid simulation for Gr_C = 0 with Gr_T = 10^4 has also been obtained. The
results indicate that the CdTe concentration profiles are affected by convection due
to horizontal thermal gradients (Fig. 2). Although a great effort has been applied,
the steady-state simulations of the effects on concentration profiles under deformed
grids have never converged. The planned studies will be continued with transient
simulations.
Fig. 1. Gr_C = 0, Gr_T = 0. Fig. 2. Gr_C = 0, Gr_T = 10^4.
Acknowledgment
I would like to express my sincere appreciation to the NASA/ASEE
Summer Faculty Fellowship Program Administrators Drs. Gerald Karr and
Frank Six for providing me the opportunity to participate in this
program. The seminars and the Education Retreat were very helpful.
Special thanks go to my NASA counterparts Dr. Sandor L. Lehoczky, Mr.
Dale Watring and Dr. Frank Szofran for their suggestions and guidance
and technical consultations on the use of the FIDAP program. I would
also like to extend my sincere appreciation to Dr. Ching-Hua Su for his
valuable discussions. The hospitality and friendships of all the
members in the Electronic & Photonic Materials Branch has made this
summer very enjoyable for me.
REFERENCES
1. Han, J. C., S. Motakef and P. Becla, "Residual Convection During Directional
Solidification of II-VI Pseudo-Binary Semiconductors," in 30th Aerospace Sciences
Meeting & Exhibit, Jan. 6-9, 1992, Reno, NV.
2. Kim, D. H. and R. A. Brown, "Models for Convection and Segregation in the Growth
of HgCdTe by the Vertical Bridgman Method," J. Crystal Growth 96 (1989) 609-627.
3. Lehoczky, S. L. and F. R. Szofran, "Directional Solidification and
Characterization of Hg1-xCdxTe Alloys," in Materials Research Society Symposium
Proceedings, Materials Processing in the Reduced Gravity Environment of Space, ed.
Guy E. Rindone (Elsevier, New York), 409 (1983).
4. Lehoczky, S. L. and F. R. Szofran, "Advanced Methods for Preparation and
Characterization of Infrared Detector Materials," NASA report, NAS8-33107
(September 1981).
5. Lehoczky, S. L., F. R. Szofran, and B. G. Martin, "Advanced Methods for
Preparation and Characterization of Infrared Detector Materials," NASA report,
NAS8-33107 (July 1980).
6. Hart, J. E., "On Sideways Diffusive Instability," J. Fluid Mech. 49 (1971),
pp. 279-298.
7. Thorpe, S. A., P. E. Hutt and R. Soulsby, J. Fluid Mech. 38 (1969), 375-400.
8. Szofran, F. R., D. Chandra, J. C. Wang, E. K. Cothran and S. L. Lehoczky, "Effect
of Growth Parameters on Compositional Variations in Directionally Solidified HgCdTe
Alloys," J. Crystal Growth 70 (1984), pp. 343-348.
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
USING NEURAL NETWORKS TO ASSIST IN OPAD DATA ANALYSIS
Prepared By:
Academic Rank:
Kevin W. Whitaker, Ph.D.
Assistant Professor
Institution and Department:
The University of Alabama
Department of Aerospace Engineering
MSFC Colleagues:
W. T. Powers
Anita E. Cooper
NASA/MSFC:
Laboratory:
Division:
Branch:
Astrionics
Instrumentation and Control
Instrumentation
XLVII
INTRODUCTION
Plume emission spectroscopy can be applied to rocket engine testing by treating the
engine plume as a precisely-controlled laboratory flame for chemical analysis. Test stand or
remotely-mounted telescopes can collect engine plume emissions and direct the light, via a
grating spectrometer system, onto a linear array of silicon photodetectors. In a quantitative
manner, light from many wavelengths of interest can be compared to identify elements,
ratioed to recognize alloys, or monitored as a function of time to establish trends and the
onset of significant material erosion.
The space shuttle main engine (SSME) became the subject of plume emission
spectroscopy in 1986 when researchers from NASA-Marshall Space Flight Center (MSFC),
Arnold Engineering Development Center (AEDC), and Rocketdyne went to the SSME test
stands at the NASA-Stennis Space Center and at Rocketdyne's Santa Susana Field
Laboratory to optically observe the plume. Since then, plume spectral acquisitions have
recorded many nominal tests and the qualitative spectral features of the SSME plume are
now well established. Significant discoveries made with both wide-band and narrow-band
plume emission spectroscopy systems led MSFC to promote the Optical Plume Anomaly
Detection (OPAD) program with a goal of instrumenting all SSME test stands with
customized spectrometer systems.
A prototype OPAD system is now installed on the SSME Technology Test Bed
(TTB) at MSFC. The OPAD system instrumentation consists of a broad-band optical
multiple-channel analyzer (OMA) and a narrow-band device called a polychromator. The
OMA is a high-resolution (1.5-2.0 Angstroms) "super-spectrometer" covering the near-
ultraviolet to near-infrared waveband (2800-7400 Angstroms), providing two scans per
second. The polychromator consists of sixteen narrow-band radiometers: fourteen
monitoring discrete wavelengths of health and condition monitoring elements and two
dedicated to monitoring background emissions. All sixteen channels are capable of
providing 500 samples per second. To date, the prototype OPAD system has been used
during 43 SSME firings on the TTB, collecting well over 250 megabytes of plume spectral
data.
One goal of OPAD data analysis is to determine how much of an element is present in
the SSME plume. Currently these element concentrations are determined iteratively with the
help of a computer code, SPECTRA4, developed at AEDC. Experience has shown that
iteration with SPECTRA4 is an incredibly labor intensive task and not one to be performed
by hand. What is really needed is the "inverse" of SPECTRA4 but the mathematical model
for this inverse mapping is tenuous at best. However, the robustness of SPECTRA4 run in
the "forward" direction means that accurate input/output mappings can be obtained. If the
mappings were inverted (i.e., input becomes output and output becomes input) then an
"inverse" of SPECTRA4 would be at hand but the "model" would be specific to the data
utilized and would in no way be general. Building a generalized model based upon known
input/output mappings while ignoring the details of the governing physical model is possible
through the use of a neural network.
The research investigation described in this report involves the development of a
neural network to provide a generalized "inverse" of SPECTRA4. The objectives of the
research were to design an appropriate neural network architecture, train the network, and
then evaluate its performance.
NEURAL NETWORK MODEL OF SPECTRA4
The computer code SPECTRA4 generates spectra (intensity versus wavelength plots)
based on concentrations of fourteen elements in the SSME plume. The goal of the current
research project was to quickly and accurately predict these concentrations from a given
spectrum using a neural network. To that end, an optimally connected neural network
architecture was selected for study because of its fast training and subsequent execution
speed. In contrast, a traditional neural network is usually fully-connected, requiring more
training and slightly longer execution times. Also, by locating and removing all redundant
connections, the resulting optimally connected network will be more robust and efficient.
SPECTRA4 generates spectra for wavelengths ranging from 3092 A to 7000 A for a
given set of element concentrations. These concentrations are values ranging anywhere from
0.01 ppm to 100 ppm. Past experience with OPAD data analysis has revealed that the
region of primary interest in any spectrum lies in the wavelength band of 3300 A to 4330 A.
In order to discretize a spectrum, this region was broken into 42 subintervals of 25 A each.
The maximum intensity in each of these subintervals was then used as a neural network
input, resulting in a network with 42 input neurons. The corresponding element
concentrations which produced the spectrum in question were used as desired outputs,
dictating a network with 14 output neurons. With the number of input and output neurons
specified, the network was then trained for varying numbers of hidden neurons.
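This discretization is easy to sketch (Python); note that the quoted band edges
(3300 A to 4330 A) and 42 subintervals of 25 A do not quite agree, so the sketch
assumes 42 bins of 25 A starting at 3300 A.

    import numpy as np

    def discretize_spectrum(wavelengths, intensities,
                            lo=3300.0, nbins=42, width=25.0):
        """Reduce a SPECTRA4 spectrum to the network inputs: the maximum
        intensity in each 25-Angstrom subinterval of the band of interest."""
        edges = lo + width * np.arange(nbins + 1)    # 43 edges -> 42 bins
        features = np.zeros(nbins)
        idx = np.digitize(wavelengths, edges) - 1    # bin index for each sample
        for i in range(nbins):
            in_bin = intensities[idx == i]
            features[i] = in_bin.max() if in_bin.size else 0.0
        return features                              # values for 42 input neurons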
The design and training of an optimally connected neural network consists of two
distinct phases. In the first phase, all connections between neurons in the network are fully
established. Random numbers are assigned as interconnection weights. Then a genetic
algorithm 1 optimizes the connections, de-linking all those found to be unnecessary. In the
second phase, backpropagation of error is used to adjust the remaining weights.
Backpropagation is a supervised mode of learning wherein the partial derivatives of the error
with respect to the weights are used to adjust the weights until a minimum error is reached. 2
Once training is completed, the neural network with optimized connections and weights can
be used to predict element concentrations given intensity versus wavelength information.
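A minimal sketch of this two-phase scheme (Python/NumPy), using the 42-input,
14-output architecture described above with 60 hidden neurons; the genetic operators
and learning parameters are illustrative assumptions, not those of the cited
implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, N_OUT = 42, 60, 14      # inputs, hidden neurons, outputs

    def forward(X, W1, W2, m1, m2):
        H = np.tanh(X @ (W1 * m1))       # masked input-to-hidden connections
        return H @ (W2 * m2), H          # masked hidden-to-output connections

    def train(X, Y, m1, m2, lr=0.05, epochs=200):
        """Phase 2: backpropagation adjusts only the surviving connections."""
        W1 = rng.normal(0.0, 0.1, (N_IN, N_HID))
        W2 = rng.normal(0.0, 0.1, (N_HID, N_OUT))
        for _ in range(epochs):
            out, H = forward(X, W1, W2, m1, m2)
            d_out = 2.0 * (out - Y) / len(X)                # dE/d(out) for MSE
            d_hid = (d_out @ (W2 * m2).T) * (1.0 - H**2)    # back through tanh
            W2 -= lr * (H.T @ d_out) * m2                   # masked gradients
            W1 -= lr * (X.T @ d_hid) * m1
        return W1, W2, np.mean((forward(X, W1, W2, m1, m2)[0] - Y) ** 2)

    def evolve_masks(X, Y, pop=10, gens=5, p_keep=0.8, p_mut=0.01):
        """Phase 1: a toy genetic algorithm over 0/1 connection masks; fitness
        is the error of a briefly trained network, de-linking useless links."""
        masks = [((rng.random((N_IN, N_HID)) < p_keep) * 1.0,
                  (rng.random((N_HID, N_OUT)) < p_keep) * 1.0)
                 for _ in range(pop)]
        for _ in range(gens):
            masks.sort(key=lambda m: train(X, Y, *m, epochs=50)[2])
            parents, children = masks[: pop // 2], []
            while len(parents) + len(children) < pop:
                i, j = rng.choice(len(parents), size=2, replace=False)
                child = []
                for ma, mb in zip(parents[i], parents[j]):
                    c = np.where(rng.random(ma.shape) < 0.5, ma, mb)  # crossover
                    flips = rng.random(c.shape) < p_mut               # mutation
                    child.append(np.abs(c - flips))
                children.append(tuple(child))
            masks = parents + children
        return masks[0]   # best-ranked connection pattern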
RESULTS
Once a neural network was trained, it was tested against randomly generated spectra.
Typical results for a network with 60 hidden neurons and a training sample consisting of 50
data sets can be seen in Figure 1. The prediction error for some elements is very small while
for others it is quite large. This suggests that the error criterion or the
discretization of the spectra during training was not correct. However, it does appear that an optimally
connected network is capable of modeling the "inverse" of SPECTRA4.
A study was also carried out to determine how the number of hidden neurons in a
network affects the prediction error. Three networks with 30, 60 and 90 hidden neurons
were considered. The total prediction error dependence upon the number of hidden neurons
is presented in Figure 2. What is readily apparent is that blindly increasing the number of
hidden neurons in a network does not guarantee increased prediction accuracy. This
suggests that after a point the network is memorizing patterns rather than learning the
relationships between them. An optimum number of neurons exists and must be determined.
SPECTRA4 SENSITIVITY STUDY
Another aspect of the current investigation was the sensitivity of the SPECTRA4
code. Since the concentrations of all the fourteen elements could vary between 0.01 ppm
and 100 ppm, network training became extremely time-consuming. Also, the mapping space
was found to be very large and noisy. In order to address these concerns, a sensitivity study
of SPECTRA4 was initiated. To obtain a robust neural network, training data must be
chosen from those regions of the mapping space for which the concentrations of elements
are most sensitive.
Some preliminary results from the sensitivity study currently underway are available.
They show that perturbing the concentrations of elements such as copper, sodium,
lithium or magnesium does not cause any change in the values of the intensity peaks of any of the
subintervals in a discretized spectrum. Other elements, such as calcium, manganese, silver or
aluminum cause a change in only a few subintervals. Elements such as iron, molybdenum,
cobalt and nickel were found to be extremely sensitive as they cause a change in the intensity
peaks of almost all intervals. These preliminary results are very interesting but more study is
required to substantiate them.
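The perturbation test behind these observations can be sketched as follows (Python;
spectra4_peaks is a hypothetical wrapper that runs SPECTRA4 and returns the 42
discretized intensity peaks for a 14-element concentration vector):

    import numpy as np

    def sensitive_bins(spectra4_peaks, base_conc, element, delta=0.10, tol=1e-6):
        """Indices of the subinterval peaks that change when one element's
        concentration is perturbed by a fractional amount delta."""
        base = spectra4_peaks(base_conc)
        pert = np.array(base_conc, dtype=float)
        pert[element] *= 1.0 + delta                  # e.g., a +10% perturbation
        changed = np.abs(spectra4_peaks(pert) - base) > tol * np.abs(base).max()
        return np.flatnonzero(changed)                # empty for insensitive elements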
CLOSING REMARKS
Optimally connected neural networks have been developed to grossly model the
"inverse" of SPECTRA4. They will certainly aid in the analysis of OP AD data by
eliminating some of the time-consuming iteration currently utilized. However, in order for
the networks developed to be useful, the prediction error must be reduced for all elements
and the robustness of the network demonstrated. These aspects are currently under study.
REFERENCES
1. Goldberg, D. E., Genetic Algorithms in Search, Optimization, and Machine Learning,
Addison- Wesley Publishing Co., Inc., 1989.
2. Werbos, P., "Beyond Regression: New Tools for Prediction and Analysis in the
Behavioral Sciences," Ph.D. Dissertation, Committee on Applied Mathematics, Harvard
University, Nov. 1974.
Figure 1. A comparison of predicted and actual element concentrations for a
network with 60 hidden neurons.
Figure 2. Relationship between prediction error and number of hidden neurons
(prediction error versus 20-100 hidden units).
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
THE FAR ULTRAVIOLET (FUV) AURORAL IMAGER FOR
THE INNER MAGNETOSPHERIC IMAGER (IMI) MISSION: OPTIONS
Prepared By:
Academic Rank:
Institution and
Department:
MSFC Colleagues:
NASA/MSFC:
Office:
Division:
Branch:
Gordon R. Wilson, Ph.D.
Assistant Research Professor
The University of Alabama in Huntsville
Department of Physics, and
Center for Space Plasma and Aeronomic Research
Les Johnson
Dennis Gallagher, Ph.D.
Program Development
Payload & Orbital Systems
SP Science and Applications
XLVIII
Introduction
The change from an intermediate-class mission (cost ceiling of $300 million) to a solar-terrestrial
probe class mission (cost ceiling of $150 million) will require some major changes in the configuration
of the IMI mission. One option being considered is to move to a small spin-stabilized spacecraft (with
no despun platform) which could be launched with a smaller Taurus or Conestoga class booster. Such a
change in spacecraft type would not present any fundamental problems (other than restrictions on mass
and power) for the He+ 304 A plasmasphere imager, the high and low energy neutral atom imagers,
and the geocoronal imager, but would present a challenge for the FUV auroral imager, since the original
plan called for this instrument to operate from a despun platform. Since the FUV instrument is part
of the core payload, it cannot be dropped from the instrument complement without jeopardizing the
science goals of the mission. A way must be found to keep this instrument and to allow it to accomplish
most, if not all, of its science objectives. One of the subjects discussed here is options for building an
FUV instrument for a spinning spacecraft. Since a number of spinning spacecraft have carried auroral
imagers, a range of techniques exists. In addition, the option of flying the FUV imager on a separate
micro-satellite, launched with the main IMI spacecraft or with a separate Pegasus launch, has been
considered and will be discussed here.
Instrument Requirements
In order to accomplish its mission, and be at least current with the state of the art in auroral
imaging, the FUV auroral imager will need to have the following characteristics (as identified by the
science working group for the original baseline design):
1. A large field-of-view of 30° x 30°.
2. A small angular resolution of 0.03° x 0.03°.
3. Ability to obtain separate images of the auroral oval at 1304 A, 1356 A and in the LBH band
(1200-1800 A).
4. High time resolution; image repetition rate of one minute or less.
Despinning the Image
If the FUV imager is carried on a spinning spacecraft, then one task it must perform is the despinning
of the image. Several auroral imaging instruments have flown on spinning spacecraft in the past and
have performed the despinning task in three different ways. (1) The Scanning Auroral Imager (SAI),
which flew on the DE 1 spacecraft [3], used the spacecraft's rotation to scan a small instantaneous
field-of-view (0.32°) across the sky in one dimension. Scanning in the perpendicular direction was
accomplished by a movable mirror. This technique gave long image construction times (12 min) and
short image exposure times (4 ms). (2) The second technique was used on the V5 instrument flown
on the Swedish Viking satellite. This instrument had a large instantaneous field-of-view (20° x 25°)
through which the image would sweep each spacecraft rotation [1]. To compensate for rotation, the
accumulated charge in the CCD rows was stepped across the detector at the same rate the image swept
across the field-of-view [6]. With this system an image was obtained each spacecraft rotation (20 s)
with an exposure time of 1.2 s. (3) The third technique was used by the ATV instrument flown on
the Japanese satellite EXOS-D (Akebono). This instrument used a despun mirror, which spun opposite
to the direction the spacecraft was spinning, to compensate for image motion [7].
Telescopes
One way to get the large total field-of-view that the FUV instrument will need is to build it up from
successive scans, as was done by the SAI instrument which flew on DE 1. The alternative is to use an
optical system with a large instantaneous field-of-view. There exist a number of space-flown (or soon
to be flown) telescope designs which have large instantaneous fields-of-view. These include: (1) the
VIKING V5 instrument [1], an inverse Cassegrain, Burch-type telescope with a field-of-view of 20° x 25°,
a focal length of 22.4 mm (f/1), and an angular resolution of 0.077° x 0.077°; (2) the NUVIEWS
astronomical instrument [2,10], a three mirror anastigmat (TMA) off-axis imager with a 20° x 40°
field-of-view, a focal length of 90 mm (f/3), and an angular resolution of 0.058°; and (3) the POLAR
VIS Earth Camera [1], also a three mirror anastigmat (TMA) off-axis system with a 20° x 20°
field-of-view and an angular resolution of 0.08°.
Among this list the NUVIEWS telescope comes closest to meeting the requirements for the FUV
instrument. As originally designed the NUVIEWS instrument had a 40° x 40° field-of-view. Down sizing
to a telescope with a 30° x 30° field-of-view would not present a problem. It would have the added benefit
of increasing the angular resolution (to less than 0.058°) and reducing various aberrations (spherical,
coma, astigmatism) which affect image quality and resolution.
Filtering The Image
All instruments designed to image the aurora in the VUV have had to filter the incoming light so
as to remove scattered sunlight in the visible and near ultraviolet. The SAI instrument on DE 1, the V5
instrument on VIKING, the ATV instrument on EXOS-D, and the VIS Earth Camera on POLAR all
use fairly broadband (150-500 A FWHM) filters, which would be inadequate for the FUV instrument on
IMI. The filtering system to be used on the POLAR UVI instrument was designed for spectral resolution
close to the IMI requirements. It is based on the use of specifically designed multilayer reflection and
transmission filters [9]. Each of the five filters is a small optical system with three flat mirrors and a
transmission filter. The bandwidths of the five filters are: 1304 A, 30 A; 1356 A, 50 A; LBHs, 80 A;
LBHl, 90 A; and Solar Spectrum, 100 A [8]. The FUVIM instrument proposed for the IMAP small
explorer [5] would use a diffraction grating, in place of transmission filters, to spectrally separate the
incoming light. Since FUVIM will be a line-scanning instrument, it will be an imaging diffractometer.
The position of the diffraction grating (moved by a stepper motor) will determine which part of the
spectrum, from the imaged slit, falls on the detector. With the characteristics of the diffraction grating
(3600 lines/mm, blaze angle of 13.5°), the internal geometry of the instrument, and the size of the
detector, FUVIM will have a FWHM pass band of 34 A at any desired wavelength.
Detectors
Imagers which do single-pixel or line imaging (such as the SAI instrument on DE 1) can use simple
detectors that do not require special cooling. Imagers which do instantaneous two-dimensional imaging
require more sophisticated detectors. There are two basic types which can be used. One involves
an image intensifier coupled to a charge-coupled device (CCD), and the other involves a microchannel
plate (MCP) connected to a position-sensitive anode. The CCD-based detector is the detector of choice
because the MCP/anode detector is a single-event detector. That is, it counts one photon at a time, and
while the anode electronics is reading out the results of one photon event it cannot see another which
might arrive in the meantime. The total number of counts per second which such a detector can see
before performance is degraded depends on the speed of the anode readout electronics. Current
performance for these detectors is low enough that they will be saturated by auroral VUV light
intensities. CCD detectors do not have this problem, since each pixel in the array can count photons
independently of whether the other pixels are also currently counting photons.
Instrument Sensitivity
One of the most important criteria for measuring an imaging instrument's performance is its
sensitivity S, which can be expressed as

\[
S = \frac{F}{4\pi}\, A\, r^{n}\, F_r\, \Omega_p\, T_g\, Q_c\, C_m\, T_e
\]

where F is the flux of photons (photons/cm²/s), 4π is the number of steradians in a full sphere, A is
the aperture area of the imager, r is the reflectivity of the mirrors in the optical system, n is the number
of such mirrors, F_r is the filter response, Ω_p is the solid angle of the pixel, T_g is the transmission of
the detector's glass window, Q_c is the quantum efficiency of the photocathode material, C_m is the
collection efficiency of the microchannel plate, and T_e is the exposure time. The units of S are
counts/kR/pixel/Ip, where kR is kiloRayleighs and Ip is the integration period. S depends on the
wavelength of the photons, since r, F_r, T_g, and Q_c are all wavelength dependent. As an example of
the use of this equation, the SAI-derived instrument planned for the MARIE mission had the following
values for each factor (at 1304 A): A = 20.3 cm², r = 0.95, n = 4, F_r = 0.3, Ω_p = 1.9 × 10⁻⁵ sr,
T_g = 0.95, Q_c = 0.13 electrons/photon, C_m = 0.85, and T_e = 0.004 s. With a flux of 1 kiloRayleigh
(F = 10⁹ photons/cm²/s), S = 3.2 counts/kR/pixel/Ip. This sensitivity is small enough that some of the
weaker, but important, signals would not be seen by this instrument. The main thing that can be done
to increase S is to increase the exposure time T_e, but this value cannot be larger than the desired time
resolution. Another option is to increase Ω_p, but this degrades the angular resolution of the instrument,
which is undesirable. Achieving high sensitivity is always a trade-off against achieving small angular
and temporal resolution.
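The quoted value is easy to verify (a minimal Python check, using the MARIE factors listed above):

    import math

    # Factors quoted for the SAI-derived MARIE instrument at 1304 A.
    F  = 1.0e9      # photon flux for 1 kR, photons/cm^2/s
    A  = 20.3       # aperture area, cm^2
    r, n = 0.95, 4  # mirror reflectivity and number of mirrors
    Fr = 0.3        # filter response
    Op = 1.9e-5     # pixel solid angle, sr
    Tg = 0.95       # detector window transmission
    Qc = 0.13       # photocathode quantum efficiency, electrons/photon
    Cm = 0.85       # microchannel plate collection efficiency
    Te = 0.004      # exposure time, s

    S = (F / (4.0 * math.pi)) * A * r**n * Fr * Op * Tg * Qc * Cm * Te
    print(f"S = {S:.1f} counts/kR/pixel/Ip")   # ~3.2, matching the text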
Possible Configurations for an FUV Auroral Imager
Option 1. The first option for the IMI FUV auroral imager would be to use the Far Ultraviolet
Imaging Monochromator (FUVIM), as proposed for the IMAP small explorer, as it is. Advantages of
using the FUVIM instrument are: (1) it is small, with low mass, small power needs, and a low data rate;
(2) the design has over twenty years of flight heritage; (3) the FUVIM uses detectors which do not require
special cooling; (4) FUVIM can also perform the task of geocoronal imaging; and (5) it does not place
extreme requirements on the spin axis stability of the spacecraft. Disadvantages of this option include:
(1) the angular resolution, at 0.25° x 0.25°, is not very small; and (2) it may lack the sensitivity to
produce images with statistically significant count levels for the 1356 A and LBH images.
Option 2. For the second option one could use four or five VIKING V5 cameras, where each camera
is optimized for a desired wavelength. The transmission filter at the front entrance and the reflection
filter coatings applied to the two mirrors in each camera would be designed after the Zukic method [9].
During one minute of elapsed time, images of the aurora at each of four or five passbands (1216 A, 1304 A,
1356 A, 1400-1600 A, and 1600-1800 A) could be obtained with an exposure time of 4 s (assuming an
instrument field-of-view of 25° x 25°). For the weaker features, longer exposure times could be used
without sacrificing one-minute, or shorter, time resolution for the stronger features. Estimates of the
sensitivity of each camera, using a CsI photocathode and the angular resolution of the V5 instrument,
give values of 150 (1304 A), 274 (1356 A), 223 (1500 A), and 100 (1700 A) counts/kR/pixel/Ip. There
also appears to be sufficient out-of-band rejection to separate these four features from hydrogen Lyman-α,
although the 1356 A feature will be partially contaminated by 1304 A radiation.
Advantages of this approach include: (1) small total instrument mass (< 20 kg); (2) the basic camera
design has about 4-5 years of flight heritage; (3) the instrument could perform the task of geocoronal
imaging; (4) the instrument could obtain all of the separate auroral images, at different wavelengths,
simultaneously; and (5) image motion is compensated for by electronic scanning, eliminating the need
for moving mirrors. Disadvantages of this option include: (1) the angular resolution (0.076° × 0.076°)
is coarser than the IMI requirements; (2) the original V5 camera design had problems with stray light
which may persist; (3) using the full temporal and spectral resolution which this instrument concept
could provide would require a fairly large data rate; (4) additional cooling for the detectors would be
needed; and (5) the spacecraft spin axis would be required to remain stable to about 0.08°/min.
Option 3. For this option one could use a single imaging head with an optical system based
on the NUVIEWS telescope, modified to have a 30° × 30° field-of-view with an angular resolution of
0.03° × 0.03° (or as close to that as possible). The instrument would stare out the side of the IMI
spacecraft (perpendicular to the spacecraft's spin axis) and use electronic sweeping of the CCD array to
provide longer integration times of about 5 s in a one minute period. The filter system would be that
designed for the POLAR UVI instrument, with the possible inclusion of a filter designed for hydrogen
Lyman-α at 1216 Å. In operation this camera could sum images gained in successive revolutions until
the one minute period was reached or sufficient counts had been obtained. The detector would be an
image intensifier/CCD combination using a large format CCD array (1000 × 1000 pixels). Estimates
of the sensitivity of such an instrument, based on the POLAR UVI sensitivities scaled for the shorter
integration time, are: 27 (1304 Å), 46 (1356 Å), 76 (1500 Å), and 24 (1700 Å) counts/kR/pixel/Ip.
Advantages of this approach include: (1) small total instrument mass (~ 22 kg); (2) this instrument
could perform the task of geocoronal imaging; and (3) image motion is compensated for by electronic
scanning, eliminating the need for moving mirrors. Disadvantages of this option include: (1) the angular
resolution may not reach the IMI goal (it would be at least 0.05° × 0.05°); (2) the design may not have
sufficient sensitivity; (3) the CCD detectors would need to be cooled to at least -55° C; and (4) a stable
spacecraft spin axis is required (0.05°/min).
Option 4. In this design one could use the imager described in option 3 above, but instead of
mounting the instrument so that it looks out the side of the spacecraft perpendicular to the spin axis,
it would be positioned to look out one end of the spacecraft parallel to the spin axis and into a
despun mirror tilted at 45° so as to stare continuously at the earth. This would allow much longer
integration times and increase the instrument sensitivity. Estimates of such sensitivities, based on the
POLAR UVI values with a 30 s integration time, are: 163 (1304 Å), 277 (1356 Å), 456 (1500 Å), and 144
(1700 Å) counts/kR/pixel/Ip. (These sensitivities assume a 0.03° × 0.03° angular resolution and the
aperture size, mirror reflectivity, filter response, and detector response of the POLAR UVI instrument.)
These sensitivities would allow the possibility of achieving the IMI goals of angular and temporal
resolution for the FUV instrument.
Disadvantages of this option include: (1) the angular resolution may not reach the IMI goal (it
would be at least 0.05° × 0.05°); (2) the despun mirror would add complexity and cost to the instrument
design; (3) the design would not allow the possibility of geocoronal imaging; (4) the CCD detectors
would need to be cooled to at least -55° C; and (5) a stable spacecraft spin axis would be required
(0.08°/min).
Option 5. This last option would take the instrument from option 3 and place it on a nadir viewing
three-axis stabilized micro-satellite. This approach would provide the high sensitivities of option 4
without the need for the complexity of a despun mirror. There would also be no need for electronic
scanning of the image for motion compensation. It may also eliminate some of the pressure on the
resources of the spinning satellite portion of IMI. The added complexity of a second spacecraft would
have to be evaluated carefully to see if it was worth these potential gains.
Advantages of this approach include: (1) much higher sensitivities would be possible, comparable
to those in option 4; and (2) the instrument would be simpler, since it would not need a despun mirror.
Disadvantages of this option include: (1) the angular resolution may not reach the IMI goal (it would
be at least 0.05° × 0.05°); (2) the micro-sat might not be able to provide the pointing stability, accuracy,
or knowledge without excessive cost; (3) adding a second spacecraft would add to the overall management
and operations cost of the mission; and (4) the CCD detectors would need to be cooled to at least
-55° C.
From this list of options one can conclude that an FUV imaging instrument can be carried on a small
spinning spacecraft. Options 4 and 5 illustrate ways that such an instrument could meet, or come close
to meeting, the IMI requirements. If option 4 or 5 is ruled out because of cost or some other factor, then
fallback positions exist which are still fairly attractive. They would, however, require the sacrifice of
some of the original goals for the IMI FUV instrument.
References
1. Anger, C. D., et al., An ultraviolet imager for the Viking spacecraft, Geophys. Res. Lett., 14 (1987)
387-391.
2. Fleischman, J. R., C. Martin, and P. G. Friedman, Rocket survey instrument to map diffuse C IV, H2
fluorescence, and far UV continuum in the galaxy, Bull. Am. Astron. Soc., 24 (1992) 1281.
3. Frank, L. A., Craven, J. D., Ackerson, K. L., English, M. R., Eather, R. H., and Carovillano, R. L.,
Global auroral imaging instrumentation for the dynamics explorer mission, Space Sci. Instr., 5
(1981) 369-393.
4. Frank, L. A., J. B. Sigwarth, R. L. Brechwald, S. M. Cash, T. L. Clausen, J. D. Craven, J. P. Cravens,
J. S. Dolan, M. R. Dvorsky, J. D. Harvey, P. K. Hardebeck, D. W. Muller, H. R. Peltz, and P.
S. Reilly, The visible imaging instrument for the POLAR spacecraft, GGS PI Instrument Manual,
(1992).
5. Frank, L. A., D. J. Williams, and E. C. Roelof, Imagers for the magnetosphere, aurora and
plasmasphere (IMAP), SPIE, 2008 (1993) 11-34.
6. Murphree, J. S., and L. L. Cogger, The application of CCD detectors to UV imaging from a spinning
spacecraft, SPIE, 932 (1988) 42-49.
7. Oguti, T., E. Kaneda, M. Ejiri, S. Sasaki, A. K. Kadokura, T. Yamamoto, K. Hayashi, R. Fujii,
and K. Makita, Studies of aurora dynamics by aurora-TV on the Akebono (EXOS-D) satellite, J.
Geomag. Geoelectr., 42 (1990) 555-564.
8. Torr, M. R., D. G. Torr, M. Zukic, J. Spann, and R. B. Johnson, An ultraviolet imager for the
international solar-terrestrial physics mission, submitted, (1993).
9. Zukic, M., D. G. Torr, J. Kim, J. F. Spann, and M. R. Torr, Far ultraviolet filters for the ISTP UV
imager, SPIE, 1745 (1992) 99-107.
10. Zukic, M., D. G. Torr, J. Kim, J. R. Fleischman, and C. Martin, Wide field of view 83.4 nm
self-filtering camera, SPIE, in press, (1993).
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
EVALUATION OF ADVANCED MATERIALS THROUGH
EXPERIMENTAL MECHANICS AND MODELLING
Prepared By: Yii-Ching Yang, Ph.D.
Academic Rank: Assistant Professor
Institution and Department: Tuskegee University, Aerospace Science Engineering
MSFC Colleague: Samuel Russell, Ph.D.
NASA/MSFC:
Laboratory: Materials and Processes
Division: Engineering Physics
Branch: Non-Destructive Evaluation
INTRODUCTION
Composite materials have been used frequently in aerospace
vehicles. Very often they contain defects inherited during
manufacture or damage sustained during construction and service. It
becomes critical to understand the mechanical behavior of such a
composite structure before it can be used further. One good
example of these composite structures is the cylindrical bottle
of a solid rocket motor case with accidental impact damage.
Since the replacement of this cylindrical bottle is expensive,
it is valuable to know how the damage affects the material and
how it can be repaired. To reach this goal, the damage must be
characterized and the stress/strain field must be carefully
analyzed.
First, the damage area due to impact is surveyed and
identified with a shearography technique, which uses the
principle of speckle shearing interferometry to measure
displacement gradients (1). Within the damage area of a composite
laminate, such as the bottle of a solid rocket motor case, all
layers are considered to be degraded. Once a lamina is degraded,
its stiffness as well as its strength decreases drastically, and
it becomes a critical area of failure for the whole bottle. Hence
the stress/strain field within and around a damage should be
accurately evaluated for failure prediction.
To investigate the stress/strain field around damages, a
Hybrid-Numerical method which combines experimental measurement
and finite element analysis is used. It is known that the stress
or strain at a singular point cannot be accurately measured by an
experimental technique. Nevertheless, at locations far away from
the singular spot, the displacement can be found accurately.
Since the measured displacement reflects the true displacement
field locally, regardless of the boundary conditions, it is
excellent input data for a finite element analysis, replacing the
usually assumed boundary conditions. Therefore, the Hybrid-Numerical
method is chosen to avoid the difficulty and to take advantage
of both the experimental technique and finite element analysis.
Experimentally, the digital image correlation technique (2-4)
is employed to measure the displacement field. This is done by
comparing two digitized images, taken before and after loading.
Numerically, the finite element program ABAQUS (version 5.2) (5)
is used to analyze the stress and strain fields. It takes
advantage of the high speed and large memory of a modern
supercomputer, the CRAY Y-MP, at NASA Marshall Space Flight Center.
DIGITAL IMAGE CORRELATION
Digital image correlation is based on the comparison
between two digital images. The system uses a standard CCD
video camera attached to a video digitizer card to acquire digital
images. The digitizer transforms an image into a 512 × 512 set of
numbers representing the image. Each number represents the
intensity of light impinging on a small area of the camera sensor,
which is called a pixel. The value of each pixel ranges from 0
to 255, with the lowest value representing black, the highest value
representing white, and values in between representing different
shades of gray. Image processing software in a personal
computer is then used to compare subsets of numbers between the
two digital images. To measure how well the subsets match, a
correlation function is used. By minimizing the correlation
factor, the values of displacement and strain at any location
of the image can then be determined.
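As an illustration of the subset-matching step, the sketch below performs a brute-force search for the
integer-pixel displacement that minimizes a sum-of-squared-differences correlation between a subset of
the reference image and the deformed image. It is a minimal sketch, not the program used in this study;
the image arrays, subset size, and search window are hypothetical.

    import numpy as np

    def match_subset(ref, cur, x, y, size=21, search=10):
        """Find the integer-pixel displacement (u, v) of the subset
        centered at (x, y) in `ref`, by minimizing a sum-of-squared-
        differences correlation factor over a +/- `search` pixel window."""
        h = size // 2
        sub = ref[y - h:y + h + 1, x - h:x + h + 1].astype(float)
        best, best_uv = np.inf, (0, 0)
        for v in range(-search, search + 1):
            for u in range(-search, search + 1):
                cand = cur[y + v - h:y + v + h + 1,
                           x + u - h:x + u + h + 1].astype(float)
                c = np.sum((sub - cand) ** 2)   # correlation factor to minimize
                if c < best:
                    best, best_uv = c, (u, v)
        return best_uv

In practice the integer-pixel match is refined to subpixel accuracy, for example with the
Newton-Raphson method of reference (4).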
FINITE ELEMENT ANALYSIS
Finite element analysis for the stress/strain of a structure is
based on the following equation of equilibrium:

    [K] {q} = {F}    [1]

which results from minimizing the potential energy of the whole
structure. Here {q}, {F}, and [K] represent the nodal deformation,
nodal load, and structural stiffness matrices, respectively.
Each member of the {q} matrix is a degree of freedom and
corresponds to a nodal force or moment in the same direction.
For the static linear elastic problem, a degree of freedom is
either unknown or known by fact or assumption. In the latter
case, the corresponding nodal force is unknown and is solved for
as a reaction. In the Hybrid-Numerical approach, some parts of
the {q} matrix are filled with the displacements measured by
digital image correlation, in addition to the regular assumed
boundary conditions. Given the stiffness matrix of the structure,
[K], the unknowns in both {q} and {F} can be solved for with a
high speed computer.
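This solve can be written compactly by partitioning [K] into blocks associated with the free and the
prescribed degrees of freedom. The sketch below is illustrative only; it assumes a symmetric [K] and
load vector F assembled elsewhere, and it is not the ABAQUS procedure used in the study.

    import numpy as np

    def solve_with_prescribed(K, F, fixed, q_fixed):
        """Solve [K]{q} = {F} when some DOFs (indices in `fixed`) are
        prescribed, e.g. from digital image correlation measurements.
        Returns the full displacement vector and the reactions at the
        prescribed DOFs."""
        n = K.shape[0]
        free = np.setdiff1d(np.arange(n), fixed)
        q = np.zeros(n)
        q[fixed] = q_fixed
        # Condense out the prescribed DOFs: K_ff q_f = F_f - K_fp q_p
        rhs = F[free] - K[np.ix_(free, fixed)] @ q_fixed
        q[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
        reactions = K[fixed] @ q - F[fixed]
        return q, reactions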
The stiffness matrix of a structure, [K], is assembled
from the stiffness matrices of the elements. Each member of the [K]
matrix relates a degree of freedom to an associated nodal force or
moment. The value of each member is determined by the geometry
and the material properties of the associated elements. Since
composite laminates are used as examples, the stiffness matrix of
each layer, [Q], must first be formed in the structural
coordinate system, or loading directions. The load-displacement
relation is then constructed in the following form (6):

    {N}   [ [A] [B] ] {ε°}
    {M} = [ [B] [D] ] {κ}     (2)

where [A], [B], and [D] are determined by integrating the
stiffness of all layers, {N} and {M} are the in-plane force and
moment resultants, {ε°} are the mid-plane strains, and {κ} are the
curvatures. Using the above equations as the constitutive
equations of thin shell elements, the stiffness matrices of
elements made of composite laminate can be formed.
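For reference, [A], [B], and [D] are the standard thickness integrals of the transformed layer
stiffnesses in classical lamination theory (Jones, reference 6). A minimal sketch of their assembly,
assuming each layer's transformed 3 × 3 stiffness matrix Qbar and the ply interface coordinates z are
already known from the material properties and the winding layout:

    import numpy as np

    def abd_matrices(Qbars, z):
        """Classical lamination theory: integrate the transformed layer
        stiffnesses Qbar_k through the thickness.  `Qbars` is a list of
        3x3 arrays (one per layer, bottom to top); `z` holds the N+1
        interface coordinates measured from the laminate mid-plane."""
        A = sum(Q * (z[k+1] - z[k]) for k, Q in enumerate(Qbars))
        B = sum(Q * (z[k+1]**2 - z[k]**2) / 2.0 for k, Q in enumerate(Qbars))
        D = sum(Q * (z[k+1]**3 - z[k]**3) / 3.0 for k, Q in enumerate(Qbars))
        return A, B, D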
The stiffness matrices of the elements can differ
depending on the material properties of the individual element. In
this study, a degraded material has been assigned to the damage
areas. The elastic constants related to the transverse
direction of a degraded lamina are assumed to be decreased by a
degradation factor. Using these constants, the load-displacement
relations of a damaged lamina can be found, and hence the
stiffness matrix of the damaged elements.
FAILURE ANALYSIS
As described above, the combination of experimental
technique and finite element analysis provides more accurate
results for the stress and strain in the singular zone. Assuming
the composite material responds linearly under a given load, the
output stress from the finite element analysis can be used to
predict the loading level of lamina and laminate failure. The
Tsai-Wu tensor theory (7) is chosen to determine the stress level
of failure, since it is the most widely adopted criterion for a
polymer composite lamina. According to this theory, a lamina will
have an initial crack in the polymer matrix, and hence be
degraded, if its stress state fails to satisfy the following
inequality:

    F_ij σ_i σ_j + F_i σ_i < 1    [3]
Furthermore, since linear elasticity has been assumed, the
ratio R of the stress state at failure to that under the given
load can be calculated from the following equation:

    (F_ij σ_i σ_j) R² + (F_i σ_i) R = 1    [4]
This ratio can be interpreted as the multiple of the given load
that would cause a lamina to degrade. Once a lamina is degraded,
the stresses in every layer are redistributed, so that the next
lamina may degrade at a higher loading level. The loading
level at which all laminae have degraded is referred to as the Last
Ply Failure of the laminate. At this stage, intense acoustic
events from fiber breakage may be heard experimentally.
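Equation [4] is a quadratic in R and can be solved directly. The sketch below uses one common
plane-stress form of the Tsai-Wu coefficients (tensile and compressive strengths X_t, X_c, Y_t, Y_c,
shear strength S, and the interaction term F_12 approximated as -0.5 sqrt(F_11 F_22)); these conventions
are assumptions for illustration, since the report does not list the coefficients used.

    import math

    def tsai_wu_ratio(s1, s2, s6, Xt, Xc, Yt, Yc, S):
        """Solve (F_ij s_i s_j) R^2 + (F_i s_i) R = 1 for the strength
        ratio R (equation [4]).  Compressive strengths Xc, Yc are given
        as positive magnitudes; s1, s2, s6 are the lamina stresses."""
        F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
        F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
        F12 = -0.5 * math.sqrt(F11 * F22)        # common empirical estimate
        a = F11*s1**2 + F22*s2**2 + F66*s6**2 + 2*F12*s1*s2   # quadratic term
        b = F1*s1 + F2*s2                                      # linear term
        return (-b + math.sqrt(b**2 + 4*a)) / (2*a)            # positive root

    # R > 1 means the lamina survives the given load; R = 2 means
    # degradation would occur at twice the given load.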
EXAMPLES AND CALCULATION
In this study, cylindrical rocket motor cases are
investigated. They are cylindrical pressure vessels made of
IM7/Epoxy with a winding layout of [78.5/-78.5/0/0]_2 from the
inside out, where the winding angles are measured relative to the
circumferential direction. Each vessel is about 5.75 inches in
diameter and 4 inches long (not counting the hemispherical domes
at both ends). Every bottle has been subjected to a low speed
impact test at one of three impact energy levels, 3, 5, and
7 foot-pounds, applied at the middle of the bottle and
perpendicular to the composite laminate skin. The size of the
damage areas has been measured with the shearography technique.
The smallest damage is scattered within a 1" × 1" area, and the
largest within 3" × 3". Based on the identified pattern of damage,
the associated elements in the finite element analysis are
assigned to the degraded material group.
During the burst test of each pressure vessel, two images
were taken, one at no load and the other at a pressure level of
1000 psi. The digital image correlation calculation runs over
about 300 by 300 pixels, covering an area of the composite
laminate about 1.90" by 1.61". The resulting displacements are
then input as boundary conditions in the finite element analysis.
A mesh with 20 by 20 rectangular thin shell elements is
constructed. Using the computer code ABAQUS, the stresses and
strains of the shell elements are calculated. The stresses are
then checked against the Tsai-Wu tensor theory to predict the
pressure level at the Last Ply Failure of the cylindrical bottle
skin. The preliminary results agree with the acoustic
observations.
REMARK AND FUTURE WORK
Due to the complexity of the test and a shortage of facilities
and manpower, only a few pressure vessels have been burst.
Although the preliminary results are promising, more vessels
should be tested and more analyses must be done before a firm
conclusion can be reached. By then it may be better understood
how an impact affects the rocket motor cases and how to repair
them if necessary.
REFERENCES
1. Toh, S.L., Shang, H.M., Chau, F.S., and Tay, C.J., "Flaw
Detection in Composites Using Time-Average Shearography,"
Optics & Laser Technology, 23 (1991)
2. Peters, W.H. and Ranson, W.F., "Digital Imaging Techniques
in Experimental Stress Analysis," Opt. Eng., 21 (1982)
427-431
3. Sutton, M.A., Cheng, M., Peters, W.H., Chao, Y.J., and
McNeill, S.R., "Application of an Optimized Digital
Correlation Method to Planar Deformation Analysis," Image
and Vision Computing, 4 (1986) 143-150
4. Bruck, H.A., McNeill, S.R., Sutton, M.A. and Peters, W.H.,
"Digital Image Correlation Using Newton-Raphson Method of
Partial Differential Correction," Experimental Mechanics,
29 (1989) 261-267
5. ABAQUS version 5.2, Hibbitt, Karlsson & Sorensen, Inc.
(1993)
6. Jones, R.M., Mechanics of Composite Materials, McGraw-Hill
Book Company (1975)
7. Tsai, S.W., and Wu, E.M., "A General Theory of Strength for
Anisotropic Materials," Journal of Composite Materials,
January (1971) 58-80
1993
NASA/ASEE SUMMER FACULTY FELLOWSHIP PROGRAM
MARSHALL SPACE FLIGHT CENTER
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
USING CONTOUR MAPS TO SEARCH FOR
RED-SHIFTED 511 keV FEATURES
IN BATSE GRB SPECTRA
Prepared By: Peter G. Varmette
Academic Rank: Graduate Student
Institution and Department: Mississippi State University, Department of Physics and Astronomy
MSFC Colleague: Gerald Fishman, Ph.D.
NASA/MSFC:
Laboratory: Space Science
Division: Astrophysics
Branch: Gamma-Ray Astronomy
Since their discovery twenty years ago, gamma-ray bursts (GRB's) have remained an
intriguing mystery. The quest to understand these objects has given rise to a plethora
of competing theories: several suggest that GRB's are galactic in origin, while others
suggest that they are cosmological (Harding 1993).
One key to understanding the origin of GRB's may be whether or not spectral emission
and absorption features exist in burst spectra. If such features exist and can be
attributed either to cyclotron lines or to red-shifted 511 keV annihilation lines, then
credence would be given to those theories that support a galactic origin, i.e. near
neutron stars (Barat 1984, Mazets 1980, Mitrofanov 1984, Nolan 1984).
A method of searching for spectral features in burst spectra (BATSE HER data) is
outlined in this paper. The method was used to investigate the energy range from
approximately 350 keV to 600 keV. This energy range was chosen because previous
experiments have reported emission features in gamma-ray bursts around 400 keV to
500 keV. These features have been interpreted as gravitationally red-shifted 511 keV
annihilation radiation produced near a neutron star (Barat 1984, Mazets 1980,
Mitrofanov 1984, Nolan 1984).
The first step was to calculate a background model representing the ambient back-
ground radiation. The model was used to separate the burst spectrum from that of
the background. Next, we construct the incident "photon" spectrum from the record-
ed "count" spectrum. Doing this involves convolution with matrices that contain
information on the detector's efficiency as a function of energy and of the angle of
incidence of the radiation, as well as the detector's sensitivity to the fraction of the
incident radiation caused by scattering off the Earth's atmosphere. The combination of
all of these is called the detector response matrix (DRM), shown in Figure 1.
The BATSE HER data for a single burst can be binned into different time intervals,
and each interval forms a spectrum. Burst 1B 911221 was binned into 8 spectra, each
lasting approximately 9 seconds. A fit of the spectrum that ranged in time from 9.7
seconds to 18.2 seconds produced the best fit results. Figure 2 shows the fit that was
made to this spectrum using a Broken Power Law, the form of which can be seen in
Equation 1.
[Figure 1: A detector response matrix (count energy E_out versus photon energy E_in). Figure 2: A fit
to burst number 1200 using a Broken Power Law.]

    f(E) = A (E/E_pivot)^λ1,                           if E < E_break
                                                                          [1]
    f(E) = A (E_break/E_pivot)^λ1 (E/E_break)^λ2,      if E > E_break
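Equation 1 can be transcribed directly into code; in this minimal sketch the parameter names follow
the equation, A and the indices λ1, λ2 are the fit parameters, and the 100 keV default pivot energy is
a hypothetical value, not one taken from the fits described here.

    def broken_power_law(E, A, lam1, lam2, E_break, E_pivot=100.0):
        """Broken power law of Equation 1: a single power law of index
        lam1 below E_break, continuing with index lam2 above it.
        Energies in keV; the pivot only sets the normalization scale."""
        if E < E_break:
            return A * (E / E_pivot) ** lam1
        return A * (E_break / E_pivot) ** lam1 * (E / E_break) ** lam2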
The fit shown in Figure 2 produced a χ² of 23.4 with 22 degrees of freedom. After
the initial fit was made to this spectrum, a batch fit was made to the other 7 spectra by
adjusting the parameters of the first fit to find the best fit for each of the others.
The batch fits form the basis of a continuum model, which was then subtracted from
the data. The residuals were then divided by the standard deviation, σ, associated
with each energy value. Contour maps of the residuals plotted against energy and time
were then generated. Figure 3 shows the contour map that was generated for burst
1B 911221.
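A schematic version of this residual-significance map follows; it is a sketch only, assuming 2-D arrays
of data, model, and sigma (time bins by energy bins) built from the batch fits described above.

    import matplotlib.pyplot as plt

    def residual_map(data, model, sigma, energies, times):
        """Contour the fit residuals in units of sigma, as in Figure 3."""
        excess = (data - model) / sigma
        plt.contour(energies, times, excess, levels=[2, 3, 4, 5])
        plt.xlabel("Energy (keV)")
        plt.ylabel("Seconds since burst trigger")
        plt.title("Photon excess (sigma)")
        plt.show()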
The contour lines are displayed for values of 2σ, 3σ, 4σ, and 5σ. When examining
the structure in contour plots, the resolution of the detector at the particular energy
must be considered in order to determine whether the structure is real or not.
Equation 2 gives the resolution of the detector as a function of energy.
[Figure 3: Contour map generated for burst 1B 911221, showing photon excess (σ) versus energy and
seconds since the burst trigger.]

    Res = 0.079 E (E/511)^(-0.42)    [2]

where E and Res are in keV.
At 545 keV the resolution is 42 keV. Therefore, the structure seen at 545 keV
between 36 seconds and 57 seconds is probably a detector anomaly. The detector
resolution at 490 keV is 39 keV. The observed structure ranges from 480 keV to 510
keV, so the feature is probably not real, but further investigation is warranted.
Figure 4 shows a plot over a larger energy range, chosen to show the features at
490 keV in the context of the larger continuum. The figure shows that, in the energy
range of 480 keV to 510 keV, there are no significant features.
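Evaluating Equation 2 reproduces the resolutions quoted above (a minimal sketch of the reconstructed
formula):

    def detector_resolution(E_keV):
        """Detector energy resolution in keV, from Equation 2."""
        return 0.079 * E_keV * (E_keV / 511.0) ** -0.42

    print(detector_resolution(545))   # about 42 keV
    print(detector_resolution(490))   # about 39 keV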
The feature searching method described above provides a means of searching
through a vast amount of data, looking for regions which warrant further and more
thorough searches.
The new searching method also allows us to evaluate our background subtraction
and fitting routines. For instance, if there were a lot of structure around 511 keV, it
might indicate that the background subtraction routines were not working properly.
Future work will be done to improve and enhance this searching method while
analyzing GRB's for spectral emission features.
[Figure 4: Fit of a broken power law to burst number 1200 over the energy range 170 keV to 900 keV.]
Acknowledgment:
I would like to thank the members of the BATSE group for unselfishly aiding me
with my research, especially G. J. Fishman, C. A. Meegan, M. S. Briggs, G. N.
Pendleton, W. S. Paciesas, R. D. Preece, and M. N. Brock.
1. Barat, C., et al., Possible Short Annihilation Flashes in the 1978 November 4
Gamma-ray Burst, The Astrophysical Journal, 286:L11-L13, November 1, 1984.
2. Harding, A. K., Gamma-ray burst theory: back to the drawing board, ApJ
Supplement, January 11-15, 1993.
3. Mazets, E. P., et al., Lines in the Energy Spectra of Gamma-ray Bursts, Pis'ma
Astron. Zh., 6, 706-711, November 1980.
4. Mitrofanov, I. G., et al., Rapid Spectral Variability of Cosmic Gamma-ray Bursts,
Astron. Zh., 61, 939-943, September-October, 1984.
5. Nolan, P. L., et al., Spectral Feature of 31 December 1981 γ-ray Burst not Confirmed,
Nature, 311, September 27, 1984.