
AIR FORCE

19.A SMALL BUSINESS TECHNOLOGY TRANSFER (STTR)


PROPOSAL PREPARATION INSTRUCTIONS

The Air Force (AF) proposal submission instructions are intended to clarify the Department of Defense
(DoD) instructions as they apply to AF specific requirements. Firms must ensure their proposal meets
all requirements of the Broad Agency Announcement currently posted on the DoD website at the
time the solicitation closes.

The AF Program Manager is Mr. David Shahady. The AF SBIR/STTR Program Office can be contacted
at afsbirsttr-info@us.af.mil. For general inquiries or problems with the electronic submission, contact the
DoD SBIR/STTR Help Desk via email at sbirhelp@bytecubed.com (9:00 a.m. to 6:00 p.m. ET, Monday
through Friday). For technical questions about the topics during the pre-announcement period ( 28 Nov
2018 through 7 Jan 2019), contact the Topic Authors listed for each topic on the Web site. For
information on obtaining answers to your technical questions during the formal announcement period (8
Jan 2019 through 6 Feb 2019), go to https://sbir.defensebusiness.org/.

General information related to the AF Small Business Program can be found at the AF Small Business
website, http://www.airforcesmallbiz.af.mil/. The site contains information related to contracting
opportunities within the AF, as well as business information and upcoming outreach/conference events.
Other informative sites include those for the Small Business Administration (SBA), www.sba.gov, and
the Procurement Technical Assistance Centers, http://www.aptac-us.org. These centers provide
Government contracting assistance and guidance to small businesses, generally at no cost.

PHASE I PROPOSAL SUBMISSION

Read the DoD program announcement at https://sbir.defensebusiness.org/ for program requirements.


When you prepare your proposal, keep in mind that Phase I should address the feasibility of a solution to
the topic. For the AF, the contract period of performance for Phase I shall be nine (9) months, and the
award shall not exceed $150,000. We will accept only one Cost Volume per Topic Proposal and it must
address the entire nine-month contract period of performance.

The Phase I awardees must accomplish the majority of their primary research during the first six months
of the contract with the additional three months of effort to be used for generating final reports. Each AF
organization may request Phase II proposals prior to the completion of the first six months of the contract
based upon an evaluation of the contractor’s technical progress and review by the AF technical point of
contact utilizing the criteria in section 6.0 of the DoD announcement. The last three months of the nine-
month Phase I contract will provide project continuity for all Phase II awardees (see “Phase II Proposal
Submissions” below); no modification to the Phase I contract should be necessary.

Limitations on Length of Proposal

The Phase I Technical Volume has a 20-page limit (excluding the Cover Sheet, Cost Volume, Cost
Volume Itemized Listing (a-j), and Company Commercialization Report). The Technical Volume must be
in type no smaller than 10-point on standard 8-1/2" x 11" paper with one (1) inch margins. Only the
Technical Volume and any enclosures or attachments count toward the 20-page limit. In the interest of
equity, pages in excess of the 20-page limitation will not be considered for review or award. The
documents required for upload into Volume 5 using “Other” category do not count towards the 20-page
limit.

NOTE: The Fraud, Waste and Abuse Certificate of Training (Volume 6) is required to be completed prior
to proposal submission. More information concerning this requirement is provided below under
“PHASE I PROPOSAL SUBMISSION CHECKLIST”.

Phase I Proposal Format

Proposal Cover Sheet: If your proposal is selected for award, the technical abstract and discussion of
anticipated benefits will be publicly released on the Internet. Therefore, DO NOT include proprietary
information in these sections.

Technical Volume: The Technical Volume should include all graphics and attachments but should not
include the Cover Sheet or Company Commercialization Report (as these items are completed
separately). Most proposals will be printed out on black and white printers so make sure all graphics are
distinguishable in black and white. To verify that your proposal has been received, click on the “Check
Upload” icon to view your proposal. Typically, your uploaded file will be virus checked and converted to
a .pdf document within the hour. However, if your proposal does not appear after an hour, please contact
the DoD SBIR/STTR Help Desk via email at sbirhelp@bytecubed.com (9:00 am to 6:00 pm ET Monday
through Friday).

Key Personnel: Identify in the Technical Volume all key personnel who will be involved in this project;
include information on directly related education, experience, and citizenship. A technical resume of the
principal investigator, including a list of publications, if any, must be part of that information. Concise
technical resumes for subcontractors and consultants, if any, are also useful. You must identify all U.S.
permanent residents to be involved in the project as direct employees, subcontractors, or consultants. You
must also identify all non-U.S. citizens expected to be involved in the project as direct employees,
subcontractors, or consultants. For all non-U.S. citizens, in addition to technical resumes, please provide
countries of origin, the type of visa or work permit under which they are performing and an explanation
of their anticipated level of involvement on this project, as appropriate. You may be asked to provide
additional information during negotiations in order to verify the foreign citizen’s eligibility to participate
on a contract issued as a result of this announcement.

Phase I Work Plan Outline

NOTE: THE AF USES THE WORK PLAN OUTLINE AS THE INITIAL DRAFT OF THE
PHASE I STATEMENT OF WORK (SOW). THEREFORE, DO NOT INCLUDE
PROPRIETARY INFORMATION IN THE WORK PLAN OUTLINE. TO DO SO WILL
NECESSITATE A REQUEST FOR REVISION AND MAY DELAY CONTRACT AWARD.

At the beginning of your proposal work plan section, include an outline of the work plan in the following
format:

1) Scope: List the major requirements and specifications of the effort.


2) Task Outline: Provide a brief outline of the work to be accomplished over the span of the Phase I
effort.
3) Milestone Schedule
4) Deliverables
a. Kickoff meeting within 30 days of contract start
b. Progress reports
c. Technical review within 6 months

d. Final report with SF 298

Cost Volume

Cost Volume information should be provided by completing the on-line Cost Volume and including the
Cost Volume Itemized Listing (a-j) specified below. The Cost Volume information must be at a level of
detail that would enable Air Force personnel to determine the purpose, necessity, and reasonableness of each
cost element. Provide sufficient information on how funds will be used if the contract is awarded. The
on-line Cost Volume and Itemized Cost Volume Information will not count against the 20-page limit.
The itemized listing may be placed in the “Explanatory Material” section of the on-line Cost Volume (if
enough room), or may be submitted in Volume 5 under the “Other” dropdown option. (Note: Only
one file can be uploaded to the DoD Submission Site). Ensure that this file includes your complete
Technical Volume and the information below.

a. Special Tooling and Test Equipment and Material: The inclusion of equipment and materials will
be carefully reviewed relative to need and appropriateness of the work proposed. The purchase of special
tooling and test equipment must, in the opinion of the Contracting Officer, be advantageous to the
government and relate directly to the specific effort. They may include such items as innovative
instrumentation and/or automatic test equipment.

b. Materials: Justify costs for materials, parts, and supplies with an itemized list containing types,
quantities, and prices and, where appropriate, purposes.

c. Other Direct Costs: This category includes specialized services such as machining or milling, special
testing or analysis, and costs incurred in obtaining temporary use of specialized equipment. Proposals that
include leased hardware must provide an adequate lease vs. purchase justification or rationale.

d. Direct Labor: Identify key personnel by name if possible or by labor category if specific names are
not available. The number of hours, labor overhead and/or fringe benefits and actual hourly rates for each
individual are also necessary.

e. Travel: Travel costs must relate to the needs of the project. Break out travel cost by trip, with the
number of travelers, airfare, per diem, lodging, etc. The number of trips required, as well as the
destination and purpose of each trip, should be reflected. Recommend budgeting at least one (1) trip to the
Air Force location managing the contract.

f. Cost Sharing: If proposing cost share arrangements, please note that each Phase I contract's total value
may not exceed $150,000, while Phase II contracts shall have an initial Not to Exceed value of
$750,000. Please note cost share contracts or portions of contracts do not allow fee. NOTE: Subcontract
arrangements involving provision of Independent Research and Development (IR&D) support are
prohibited in accordance with Under Secretary of Defense (USD) memorandum “Contractor Cost Share”,
dated 16 May 2001, as implemented by SAF/AQ memorandum, same title, dated 11 July 2001.

g. Subcontracts: Involvement of a research institution is required in the project. Involvement of other
subcontractors or consultants may also be desired. Describe in detail the tasks to be performed in the
Technical Volume and include information in the Cost Volume for the research institution and any other
subcontractors/consultants. The proposed total of all consultant fees, facility leases or usage fees, and
other subcontract or purchase agreements may not exceed 60 percent of the total contract price or cost,
unless otherwise approved in writing by the Contracting Officer. The STTR offeror's involvement must
equate to not less than 40 percent of the overall effort, and the research institution's involvement must
equate to not less than 30 percent.

Support subcontract costs with copies of the subcontract agreements. The supporting agreement
documents must adequately describe the work to be performed, i.e., Cost Volume. At a minimum, an
offeror must include a Statement of Work (SOW) with a corresponding detailed cost proposal for each
planned subcontract.

h. Consultants: Provide a separate agreement letter for each consultant. The letter should briefly state
what service or assistance will be provided, the number of hours required, and hourly rate.

i. Any exceptions to the model Phase I purchase order (P.O.) found at
http://www.afsbirsttr.af.mil/Program/Overview/ should be discussed with the Phase I Contracting Officer
during negotiations.

NOTE: If no exceptions are taken to an offeror’s proposal, the Government may award a contract without
discussions (except clarifications as described in FAR 15.306(a)). Therefore, the offeror’s initial proposal
should contain the offeror’s best terms from a cost or price and technical standpoint. Full text for the
clauses included in the P.O. may be found at http://farsite.hill.af.mil. Please note, the posted P.O. template
is for the Small Business Innovation Research (SBIR) Program. While P.O.s for STTR awards are very
similar, if selected for award, the contract or P.O. document received by your firm may vary in
format/content. If there are questions regarding the award document, contact the Phase I Contracting
Officer listed on the selection notification. (See item i under the “Cost Volume” section above.) The
Government reserves the right to conduct discussions if the Contracting Officer later determines them to
be necessary.

j. DD Form 2345: For proposals submitted under export-controlled topics (either International Traffic
in Arms Regulations (ITAR) or Export Administration Regulations (EAR)), a copy of the certified DD Form 2345,
Militarily Critical Technical Data Agreement, or evidence of application submission must be included.
The form, instructions, and FAQs may be found at the United States/Canada Joint Certification Program
website,
http://www.dla.mil/HQ/InformationOperations/Offers/Products/LogisticsApplications/JCP/DD2345Instructions.aspx.
Approval of the DD Form 2345 will be verified if the proposal is chosen for award.

NOTE: Restrictive notices notwithstanding, proposals may be handled, for administrative purposes
only, by support contractors: Bytecubed, Oasis Systems, Riverside Research, Peerless Technologies,
and/or Stealth Entry LLC. In addition, only Government employees and technical personnel from
Federally Funded Research and Development Centers (FFRDCs) MITRE and Aerospace
Corporations working under contract to provide technical support to AF Life Cycle Management
Center and Space and Missiles Centers may evaluate proposals. All support contractors are bound
by appropriate non-disclosure agreements. If you have concerns about any of these contractors, you
should contact the AF SBIR/STTR Contracting Officer, Michele Tritt, michele.tritt@us.af.mil.

k. The Air Force does not participate in the Discretionary Technical Assistance Program. Contractors
should not submit proposals that include Discretionary Technical Assistance.

PHASE I PROPOSAL SUBMISSION CHECKLIST

NOTE: If you are not registered in the System for Award Management, https://www.sam.gov/, you will
not be eligible for an award.

1) The Air Force Phase I proposal shall be a nine-month effort, and the cost shall not exceed $150,000.

2) The Air Force will accept only those proposals submitted electronically via the DoD SBIR Web site
(https://sbir.defensebusiness.org/).

3) You must submit your Company Commercialization Report electronically via the DoD SBIR website
(https://sbir.defensebusiness.org/).

It is mandatory that the complete proposal submission -- DoD Proposal Cover Sheet, Technical Volume
with any appendices, Cost Volume, Itemized Cost Volume Information, Fraud, Waste and Abuse
Certificate of Training Completion and the Company Commercialization Report -- be submitted
electronically through the DoD SBIR website at https://sbir.defensebusiness.org/. Each of these
documents is to be submitted through the Website.

Please note that the Fraud, Waste and Abuse Training shall be completed prior to submission of your
proposal. The Fraud, Waste and Abuse Certificate of Training website can be found under Section 3.9 of
the DoD 19.A STTR BAA Instructions. When the training has been completed and certified, the DoD
Submission Website will indicate this in the proposal which will complete the Volume 6 requirement. If
the training has not been completed, you will receive an error message. Your proposal cannot be
submitted until this training has been completed. Your complete proposal must be submitted via the
submissions site on or before the 8:00 pm ET, 6 February 2019 deadline. A hardcopy will not be
accepted.

The AF recommends that you complete your submission early, as computer traffic gets heavy near
solicitation close and could slow down the system. Do not wait until the last minute. The AF will
not be responsible for proposals being denied due to servers being “down” or inaccessible. Please
ensure the e-mail address listed in your proposal is current and accurate. The AF is not responsible
for ensuring notifications are received by firms that change their mailing address, e-mail address, or
company points of contact after proposal submission without properly notifying the AF. Changes of
this nature that occur after proposal submission or award (if selected) for Phase I or II shall be sent
to the Air Force SBIR/STTR Program Office at afsbirsttr-info@us.af.mil.

AIR FORCE PROPOSAL EVALUATIONS

The AF will utilize the Phase I proposal evaluation criteria in section 6.0 of the DoD announcement in
descending order of importance with technical merit being most important, followed by the qualifications
of the principal investigator (and team), followed by the Commercialization Plan.

The AF will utilize Phase II evaluation criteria in section 8.0 of the DoD announcement in descending
order of importance with technical merit being most important, followed by the potential for
commercialization as documented in the Commercialization Plan, followed by the qualifications of the
principal investigator (and team).

The proposer's record of commercializing its prior SBIR and STTR projects, as shown in its Company
Commercialization Report, will be used as a portion of the Commercialization Plan evaluation. If the
"Commercialization Achievement Index (CAI)”, shown on the first page of the report, is at the 20th
percentile or below, the proposer will receive no more than half of the evaluation points available under
evaluation criterion (c) in Section 6 of the DoD 19.A STTR instructions. This information supersedes
Paragraph 4, Section 5.4e, of the DoD 19.A STTR instructions.

A Company Commercialization Report showing the proposing firm has no prior Phase II awards will not
affect the firm's ability to win an award. Such a firm's proposal will be evaluated for commercial
potential based on its commercialization strategy.

Proposal Status and Debriefings

The Principal Investigator (PI) and Corporate Official (CO) indicated on the Proposal Cover Sheet will be
notified by e-mail regarding proposal selection or non-selection. Small businesses will receive a
notification for each proposal submitted. Please read each notification carefully and note the Proposal
Number and Topic Number referenced. If changes occur to the company mail or email address(es) or
company points of contact after proposal submission, the information shall be provided to the AF
at afsbirsttr-info@us.af.mil.

As is consistent with the DoD SBIR/STTR announcement, any debriefing requests must be submitted in
writing within 30 days after receipt of notification of non-selection. Written requests for debrief must be
submitted via www.afsbirsttr.af.mil through the SBIR system. Requests for debrief should include
the company name and the telephone number/e-mail address for a specific point of contact, as well as an
alternate. Also include the topic number under which the proposal(s) was submitted, and the proposal
number(s). Debrief requests received more than 30 days after receipt of notification of non-selection will
be fulfilled at the Contracting Officers' discretion. Unsuccessful offerors are entitled to no more than one
debriefing for each proposal.

IMPORTANT: Proposals submitted to the AF are received and evaluated by different offices within the
Air Force and handled on a Topic-by-Topic basis. Each office operates within its own schedule for
proposal evaluation and selection. Updates and notification timeframes will vary by office and Topic. If
your company is contacted regarding a proposal submission, it is not necessary to contact the AF to
inquire about additional submissions. Additional notifications regarding your other submissions will be
forthcoming.

We anticipate having all the proposals evaluated and our Phase I contract decisions made within approximately
three months of proposal receipt. All questions concerning the status of a proposal or debriefing should
be directed to the local awarding organization SBIR/STTR Program Manager.

PHASE II PROPOSAL SUBMISSIONS

Phase II is the demonstration of the technology found feasible in Phase I. Only Phase I awardees are
eligible to submit a Phase II proposal. All Phase I awardees will be sent a notification with the Phase II
proposal submittal date and a link to detailed Phase II proposal preparation instructions. If the mail or
email address(es) or firm points of contact have changed since submission of the Phase I proposal, correct
information shall be sent to the AF at afsbirsttr-info@us.af.mil. Phase II efforts are typically 27 months
in duration (24 months technical performance, with 3 additional months for final reporting) with an initial
value not to exceed $750,000.

NOTE: Phase II awardees should have a Defense Contract Audit Agency (DCAA) approved accounting
system. It is strongly urged that an approved accounting system be in place prior to the AF Phase II award
timeframe. If you have questions regarding this matter, please discuss with your Phase I Contracting
Officer.

All proposals must be submitted electronically at https://sbir.defensebusiness.org/ by the date indicated in
the notification. The technical proposal is limited to 50 pages (unless a different number is specified in the
preparation instructions). The Commercialization Report, any advocacy letters, and the additional Cost
Volume itemized listing (a-j) will not count against the 50-page limitation and should be placed as the last
pages of the Topic Proposal file uploaded. (Note: For Phase II applications, only one file can be uploaded
to the DoD submission site. Ensure this single file includes your complete Technical Volume and the
additional Cost Volume information.) The preferred format for submission of proposals is Portable
Document Format (.pdf). Graphics must be distinguishable in black and white. Please virus-check your
submissions.

AIR FORCE STTR PROGRAM MANAGEMENT IMPROVEMENTS

The Air Force reserves the right to modify the Phase II submission requirements. Should the requirements
change, all Phase I awardees will be notified. The Air Force also reserves the right to change any
administrative procedures at any time to improve management of the Air Force STTR Program.

AIR FORCE SUBMISSION OF FINAL REPORTS

All Final Reports will be submitted to the awarding AF organization in accordance with the Contract.
Companies will not submit Final Reports directly to the Defense Technical Information Center (DTIC).

AIR FORCE STTR 19.A Topic Index

AF19A-T001 Maintaining Human-Machine Shared Awareness in Distributed Operations with Degraded
Communications
AF19A-T002 3D-Bioprinted Living System for Sensor Development
AF19A-T003 Remote cardiopulmonary sensing
AF19A-T004 Intelligent Robot Path Planning System for Grinding of Aircraft Propeller Blades
AF19A-T005 3D imaging for tracking and aim-point maintenance in the presence of target-pose changes
AF19A-T006 Vibration imaging for the characterization of extended, non-cooperative targets
AF19A-T007 Synthetic Scene Generation for Wide Application including High Performance Computing
Environments
AF19A-T008 Optimization of Sodium Guide Star Return using Polarization and/or Modulation Control
AF19A-T009 Autonomous Decision Making via Hierarchical Brain Emulation
AF19A-T010 Virtual Reality for Multi-INT Deep Learning (VR-MDL)
AF19A-T011 Diagnostics for Performance Quantification and Combustion Characterization in Rotational
Detonation Rocket Engine (RDRE)
AF19A-T012 Machine Learning Methods to Catalog Sources from Diverse, Widely Distributed Sensors
AF19A-T013 Software-Performed Segregation of Data and Processes within a Real-Time Embedded
System
AF19A-T014 Next Generation Energy Storage Devices Capable of 400 Wh/kg and Long Life
AF19A-T015 Space-Based Computational Imaging Systems
AF19A-T016 Multifunctional Integrated Sensing Cargo Pocket UAS
AF19A-T017 Tunable bioinspired spatially-varying random photonic crystals
AF19A-T018 Hardware-in-the-loop test bed for magnetic field navigation
AF19A-T019 Efficient numerical methods for mesoscale modeling of energetic materials
AF19A-T020 Guided Automation of Molecular Beam Epitaxy for Swift Training to Optimize Performance
(GAMESTOP) of New Materials
AF19A-T021 Carbon-Carbon Manufacturing Process Modeling-Aeroshells

AIR FORCE STTR 19.A Topic Descriptions

AF19A-T001 TITLE: Maintaining Human-Machine Shared Awareness in Distributed Operations with
Degraded Communications

TECHNOLOGY AREA(S): Human Systems

ACQUISITION PROGRAM: --

OBJECTIVE: Develop and evaluate controls, displays, and/or decision aids that help maintain human-machine
shared situation awareness during distributed operations conducted with manned and unmanned vehicles under
possibly contested and degraded conditions.

DESCRIPTION: The importance of autonomy for realizing Air Force employment of multiple manned and
unmanned teamed sensor platforms in future warfighting is well recognized. These new mixed-initiative interactive
systems must enable human-machine collaboration and combat teaming that pairs a human’s pattern recognition and
judgement capabilities with recent machine advances in artificial intelligence and autonomy to facilitate
synchronized tactical operations using heterogeneous manned and unmanned systems. Agility in tactical decision-
making, mission management, and control is also a requirement given anticipated complex, ambiguous, and time-
challenging warfare conditions. For example, shifts from a centralized to a decentralized control structure, especially
when communication links between the human and machine team members degrade, are plausible given that unmanned
vehicles will have onboard computational resources to flexibly serve as autonomous and capable teammates that can
complete needed tasks. The envisioned distributed and networked operations will complicate human-machine
coordination, especially when communications are intermittent, degraded, and/or delayed. This is in addition to the
challenges of achieving multi-domain situational awareness/command and control.

Current control station interface designs do not support information sharing and coordination to synchronize human-
machine awareness whenever communications are restored. Improvements are necessary to realize effective human-
machine teaming performance in mission operations, especially when alternating between centralized and
decentralized control modes. Controls, displays, and decision support services are needed for the human operator to
efficiently retrieve integrated contextual data that helps rapid restoration and maintenance of a shared understanding
of relevant information that supports human-machine joint problem-solving, effective decision making, and ultimate
task/workload balancing. This will require agent-assisted methods to identify critical mission events and associated
information gaps, as well as intuitive interfaces by which the human and machine can rapidly gain shared situation
awareness and dynamically coordinate any adjustments needed in the vehicles’ operations, with respect to the
temporal, spatial, and mission relevant demands. Supplementing the control station with interfaces and services to
restore and maintain situation awareness will result in more resilient operations. In sum, the controls and displays in
the operator’s control station need to support human-machine shared awareness during distributed, disaggregated
operations under a variety of communication conditions.

Completion of this effort will involve identifying control and display requirements to support human-machine
teamwork (i.e., cooperative tasking) for agile, efficient mission execution. This should include an analysis of
requirements for a variety of communication conditions, as the design approach likely is situation dependent. For
example, relevant questions include: What techniques can be employed to help keep the operator in-the-loop during
communication loss? How best should the interfaces identify information gaps, present the machines’ actions during
lost communications, and cue evolving collaboration/cooperation opportunities? How should communications be
prioritized for re-establishing common ground after communications resume? What interaction modes/strategies are
useful for supporting subsequent human-machine joint decision-making and task planning/execution? What
mechanisms are best to specify alternatives, perhaps proactively, for different communication states/mission events?

This effort addresses the design and evaluation of interfaces that support human-machine shared situation
awareness. Aside from addressing (simulated or real) human and autonomy team members completing multiple
tasks under varying communication conditions, the proposer can choose systems/tasks/mission(s) to utilize, as long
as the effort considers at least two air vehicles (manned and unmanned). (Any simulated or representative system
employed should maintain data at an unclassified level. Proposers should not require government equipment or
facilities.)

PHASE I: Design/evaluate displays, controls, and/or decision aids to support operator-machine teaming to maintain
shared awareness of manned and unmanned air vehicles operations with limited communications. Generate final
report describing solution(s), evaluation results, and an experimental plan to establish usability improvements in
Phase II. A feasibility demonstration is desirable, but not required.

PHASE II: Perform iterative test/refine cycles on the Phase I design, culminating in a proof-of-concept
interface/decision support system. Using high-fidelity simulations, evaluate the prototype's effectiveness in maintaining
human-machine shared awareness during distributed, disaggregated operations under a variety of communication
conditions. Required Phase II deliverables include a final report and the software/hardware required to demonstrate the
interface concept as a stand-alone capability and/or in a USAF simulation that is mutually
agreeable to the contractor and AFRL.

PHASE III DUAL USE APPLICATIONS: Applications include planning and executing any military or commercial
(e.g., law enforcement) plan using highly autonomous unmanned vehicles in decentralized operations with limited
communications. Some interfaces and methodologies will be applicable to other human-machine teaming
applications.

REFERENCES:
1. United States Air Force. (2015), Air Force Future Operating Concept: A View of the Air Force in 2035.
Available at: http://www.af.mil/Portals/1/images/airpower/AFFOC.pdf.

2. United States Air Force. (2015), Autonomous Horizons: System Autonomy in the Air Force – A Path to the
Future, Volume 1: Human-Autonomy Teaming. USAF Office of the Chief Scientist, AF/ST-TR-15-01.

3. Patzek, M., Rothwell, C., Bearden, G., Ausdenmoore, B., and Rowe, A (2013). Supervisory control state
diagrams to depict autonomous activity. In Proceedings of the 2013 International Symposium on Aviation
Psychology, Dayton, OH.

4. Draper, M., Calhoun, G., Hansen, M., Douglass, S., Spriggs, S., Patzek, M., Rowe, A., Evans, D., Ruff, H.,
Behymer, K., Howard, M., Bearden, G., Frost, E. (2017). Intelligent multi-unmanned vehicle planner with adaptive
collaborative control technologies (IMPACT). International Symposium of Aviation Psychology.

KEYWORDS: unmanned vehicle, human-machine interface, situation awareness, decision support, intelligent agent,
communication, distributed operations, human-machine teaming

TPOC-1: Lt Tyler Goodman


Phone: 937-713-7150
Email: tyler.goodman.3@us.af.mil

AF19A-T002 TITLE: 3D-Bioprinted Living System for Sensor Development

TECHNOLOGY AREA(S): Human Systems

ACQUISITION PROGRAM: --

OBJECTIVE: The objective of this program is to develop a three dimensional (3D) bioprinted tissue or organ that
recapitulates and simulates human-level architectures, microstructures, and physiological conditions. Successful
development of a platform that integrates the functionality of human organ(s) would allow for robust analysis
of human performance with ease and flexibility in a small, lightweight, and transportable device. The first
phase is to develop a 3D-bioprinted tissue with human cells that simulates complex multi-cell
functions. The second phase will require integrated, real-time biosensors using chip or other microelectronic
technology for sensing and analysis of kinetic biological signals of stress and resiliency. The resultant platform
would ultimately provide a capability to respond in a physiologically-relevant manner and continually monitor
unique biosignatures from physical stressors (such as extreme temperature or hypoxic environments) or
environmental exposures (such as chemicals, particles, or radiation).

DESCRIPTION: The development of microfluidic technologies has catalyzed the merging of sensors, fabrication,
and tissue engineering on the micro- and nanometer size regime. For example, the organ-on-chip construct allows
for the ex vivo design of organ level architectures, microstructures, and physical conditions to bring life-relevant
functionality of human organs packaged in devices commonly the size of a quarter. Over the last 5 years, the
prevalence of microfluidic manuscripts has skyrocketed, mainly for the purpose of developing sensing
applications. One aspect of photolithography constructed microdevices is that they are prepared layer-by-layer and
require sealing, interfacing, and aligning small channels to create passages for cell and matrix seeding, perfusion,
and solution delivery. Therefore, due to this planar development process, the configuration of 3D features or cellular
structures has been traditionally more difficult. However, using a bioprinter, complex 3D highly organized tissue
structures can be rapidly created and integrated with precisely sized channels. The second component of this
integrated platform will involve a sensor system that provides “real-time,” physiologically relevant alerts due to
threats from various environmental stressors.

PHASE I: Create an integrated platform encompassing a select organ(s). The platform must have the ability to
detect, collect, and display information after exposure to environmental stressors. The design concept can include,
but is not limited to, microdevices with channels, wells and/or connections that create passages for cell seeding,
perfusion, and experimental solution delivery. The resultant integrated platform must be an innovative concept and
be associated with a theoretical algorithm or software. The human cells used must be specific to the unique
perfusion and cell types of the selected organ(s). This Phase will demonstrate the feasibility of producing a model
capable of simulating an organ with key components; the model must be connected physically and fluidically to
external stimuli and must produce data that will allow for an understanding of exposure and the potential
biosignatures of interest.

PHASE II: The second phase will require integration of a sensor system onto a microprocessor (such as a chip) to
provide real-time, continual monitoring of biologically-derived signatures. The sensor system will provide electrical,
biochemical, photo, and/or physical monitoring outputs in response to stimuli. The components of this sensor
system need to be fully integrated into the tissue model in a small, 3D configuration in order to ensure portability
and ease of use. The biosignatures should be relevant for detecting exposure, threat level, and/or resiliency
parameters to maintain Airmen health, performance, and/or cognition. A test-bed validation of known biosignatures
within the tissue or organ is necessary for product translation.

PHASE III DUAL USE APPLICATIONS: The portable device will be fully operational for up to 48 hours, with the
ability to detect the health effects of physical stressors or environmental exposures through known biosignatures in
real-time.

REFERENCES:
1. Jinah Jang, Hee-Gyeong Yi, and Dong-Woo Cho (2016) : 3D Printed Tissue Models: Present and Future; ACS
Biomater. Sci. Eng., Publication Date (Web): April 30, 2016 DOI: 10.1021/acsbiomaterials.6b00129

2. 3D Bioprinting for Tissue and Organ Fabrication. (2016) Zhang YS, et al Ann Biomed Eng. 2016 Apr 28. [Epub
ahead of print]

3. Label-Free and Regenerative Electrochemical Microfluidic Biosensors for Continual Monitoring of Cell
Secretomes. (2017) Shin SR et al., Adv Sci (Weinh). 6;4(5):1600522. doi: 10.1002/advs.201600522. eCollection

KEYWORDS: 3D Bioprinting, Sensor Development, Biosignatures, organ on chip

TPOC-1: Saber Hussain


Phone: 937-904-9517
Email: saber.hussain@us.af.mil

AF19A-T003 TITLE: Remote cardiopulmonary sensing

TECHNOLOGY AREA(S): Biomedical

ACQUISITION PROGRAM: --

OBJECTIVE: Develop a non-contact sensor for cardiopulmonary vital sign monitoring. This sensor should remotely
measure physiology and demonstrate real-time functionality. Government-furnished materials, equipment, and
facilities will not be provided.

DESCRIPTION: The U.S. Air Force is interested in developing non-invasive sensor systems for use in special
operations, command and control (e.g., human-machine teaming), and adaptive training, as examples. Such a system
would monitor vital and physiological information of an Airman and provide that information for medical or
cognitive interpretation. In some situations, it may be preferred to have these sensors be remote from the end-user’s
perspective, maximizing utility and minimizing effects on day-to-day operations. While the application of this
technology to medical monitoring is clear, proof-of-concept demonstrations have shown how physiological
information can be used to estimate current cognitive capacity in the context of human-machine teaming, and this
could be extended to facilitate individualized, adaptive training. However, currently-available sensors are not
conducive to operational environments for reasons including but not limited to invasiveness, security, privacy, and
daily use in a job performance context.

Current state-of-the-practice in persistent cardiopulmonary vital sign monitoring requires physically-worn sensors.
For example, standard clinical practice for pulse rate measurement requires adhesive electrodes that can irritate the
skin with repeated use or bulky optical sensors that impede normal behavior (e.g., for pulse oximetry). In the
commercial sector, sensor concepts have evolved to be integrated into numerous wrist- and abdomen-worn devices.
However, these wearable devices are commonly known to have less-than-accurate characterizations of one’s
physiology. Common characteristics of all wearable devices that may be less desirable in military-relevant settings
include: limited battery life, required transmission of data through wireless protocols, short product life-cycles,
single-user limitations, inability to access raw data, and fragility in operational conditions.

The focus of this topic is to develop and demonstrate cardiopulmonary sensor solutions that are more appropriate for
these operational use cases. Ideal characteristics of such a sensor would provide remote, non-invasive measurement
of the Airman’s vital sign physiology. In this context, we consider a sensor to be sufficiently ‘remote’ when the
sensor-to-Airman distance is on the order of meters (e.g., sensors placed within a desk or workstation environment),
at minimum. Recent work in the area of imaging photoplethysmography (iPPG) shows great promise in providing
such a capability, although there are many other options as well, some of which are mentioned in the included
references.
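
The following minimal sketch is offered for illustration only (assuming Python with NumPy/SciPy; the function
name and inputs are hypothetical and not part of any required approach). It shows the basic iPPG signal chain
noted above: band-pass filter the spatially averaged skin-pixel trace and take the dominant spectral peak as the
pulse-rate estimate.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_pulse_rate_bpm(roi_means, fps):
        # roi_means: 1-D sequence of mean skin-ROI pixel values, one per video frame
        # fps: camera frame rate in frames per second (assumed > 8 here)
        x = np.asarray(roi_means, dtype=float)
        x = x - x.mean()                                # remove the DC illumination level
        # Band-pass 0.7-4.0 Hz (42-240 beats per minute), the typical cardiac band
        b, a = butter(3, [0.7 / (fps / 2.0), 4.0 / (fps / 2.0)], btype="band")
        x = filtfilt(b, a, x)
        # Locate the dominant spectral peak and convert to beats per minute
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
        return 60.0 * freqs[np.argmax(spectrum)]

For example, a 30 fps trace containing a clean 1 Hz cardiac component would return approximately 60 beats per
minute; robust operational solutions would additionally need motion-artifact handling, as the references discuss.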

PHASE I: Design a concept for remote, noninvasive cardiopulmonary vital sign measurement that meets operational
transition requirements. From this conceptual design, develop a breadboard system to determine technical
feasibility. Establish performance goals for breadboard system and perform an analysis of prototype system
performance in a laboratory environment.

PHASE II: Develop, demonstrate, and validate complete prototype system for remote, cardiopulmonary vital sign
measurement, in real-time, with limited a priori calibration. Construct and demonstrate operation of the prototype in
a relevant, simulated operational environment. Establish performance parameters through laboratory
characterization experiments. Provide a practical implementation of the sensor given operational constraints. Deliver
prototype system(s) to the government customer.

PHASE III DUAL USE APPLICATIONS: Technology transition facilitated from non-SBIR/STTR government
sources (military or private sector). The end-state vision is a system, meeting operational requirements, that can
be used to research and develop the aforementioned target applications. Pursue dual-use commercial applications
(e.g., clinical).

REFERENCES:
1. Kranjec, J., Beguš, S., Geršak, G., & Drnovšek, J. (2014). Non-contact heart rate and heart rate variability
measurements: A review. Biomedical Signal Processing and Control, 13, 102-112.

2. McDuff, D. J., Estepp, J. R., Piasecki, A. M., & Blackford, E. B. (2015, August). A survey of remote optical
photoplethysmographic imaging methods. In Engineering in Medicine and Biology Society (EMBC), 2015 37th
Annual International Conference of the

3. Teichmann, D., Brüser, C., Eilebrecht, B., Abbas, A., Blanik, N., & Leonhardt, S. (2012, August). Non-contact
monitoring techniques-principles and applications. In Engineering in Medicine and Biology Society (EMBC), 2012
Annual International Conference of the IEEE (pp. 1302-1305). IEEE.

4. Estepp, J. R., Blackford, E. B., & Meier, C. M. (2014, October). Recovering pulse rate during motion artifact
with a multi-imager array for non-contact imaging photoplethysmography. In Systems, Man and Cybernetics
(SMC), 2014 IEEE International Conference on (pp. 1462-1469). IEEE.

KEYWORDS: heart rate, physiology, remote, non-contact, sensor, patient monitoring, vital signs, imaging

TPOC-1: Justin R. Estepp


Phone: 937-938-3602
Email: justin.estepp@us.af.mil

AF19A-T004 TITLE: Intelligent Robot Path Planning System for Grinding of Aircraft Propeller Blades

TECHNOLOGY AREA(S): Materials/Processes

ACQUISITION PROGRAM: --

OBJECTIVE: Develop intelligent robot path planning process for grinding of aircraft propeller blades.

DESCRIPTION: Repair operations on aircraft propellers, such as those on the C-130, are carried out by robotic
grinding operations. The current process uses a fixed robot path for each blade geometry. An opportunity exists to
intelligently customize the robot path for the specific defects found on a part.

The proposed research would use coordinate measurement machine probing and 3D scanning techniques to build a
model of the blade and then automatically generate customized robot grinding tool paths specific to the defects of
the blade. The scanning should be automated as well, including automated filtering of scan data and conversion to a
suitable format that serves as the starting-point 3D model for path planning. The scanned model and the original CAD
design of the part should be registered and aligned properly for path planning. The automatically generated path will
then correct defects by removing material so that the scanned part moves into tolerance with the original CAD design.
Additionally, the path planning system should focus on usability, flexibility, and supportability. Minimal training
and learning curve should be required to use the path planning process. The software user interface should provide
enough control to react to different blade geometries and defect types. The software should be easy to upgrade and
allow modest refinements by users via plugins, scripts, and other end user software customizations.
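
As a hedged illustration of the kind of path customization described above (a minimal Python sketch; the grid
layout, tolerance value, and function name are hypothetical and not drawn from the topic), one simple strategy is
to compare registered scan heights against the nominal CAD surface and emit serpentine grinding waypoints only
where excess material exceeds tolerance:

    import numpy as np

    def z_correction_passes(scan_z, nominal_z, xy_grid, tol_mm=0.05):
        # scan_z, nominal_z: registered height maps on a common XY grid (same shape)
        # xy_grid: array of shape (rows, cols, 2) giving the XY location of each cell
        excess = scan_z - nominal_z              # positive values = material to remove
        waypoints = []
        rows, cols = excess.shape
        for i in range(rows):                    # serpentine raster across the blade surface
            cols_in_order = range(cols) if i % 2 == 0 else range(cols - 1, -1, -1)
            for j in cols_in_order:
                if excess[i, j] > tol_mm:
                    x, y = xy_grid[i, j]
                    waypoints.append((x, y, float(nominal_z[i, j])))  # grind down to nominal
        return waypoints

A deployable planner would, as the topic notes, extend this to full 3D corrections, tool orientation, and
material-removal-rate constraints.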

PHASE I: Develop a proof of concept path planning system prototype. In this phase, the process will demonstrate
making a robot path plan from 3D scan data. Customization of the path plan might be limited to only Z height
corrections. The prototyping in this phase will provide key input to specifying and defining the path planning
software to be delivered in Phase II.

PHASE II: Develop the path planning system to a deployment-ready state. Greater ability to make corrections in
full 3D will be implemented. The path plans produced by the process will be verified on target robot systems. Ease
of use will be evaluated using novice users. The goal of Phase II will be working robot grinding software that
results in measurable improvements in the rate of success of propeller blade repair.

PHASE III DUAL USE APPLICATIONS: Robotic grinding has many commercial applications. A successful
system could be marketed to commercial aerospace industry as well as other defense customers. Additional markets
might include the construction, automotive, and shipbuilding industries.

REFERENCES:
1. Wang, W. and Choa, Y. “A Path Planning Method for Robotic Belt Surface Grinding”, Chinese Journal of
Aeronautics, Vol. 24., Issue 4, August 2011.

2. Li, S. Xie, X. and Yin, L. “Research on Robotic Trajectory Automatic Generation Method for Complex Surface
Grinding and Polishing”, ICIRA 2014: Intelligent Robotics and Applications, 2014.

3. Sufian, M., Chen, X. Yu, D., “Investigating the Capability of Precision in Robotic Grinding”, Automation and
Computing (ICAC), October 2017.

KEYWORDS: grinding, robotic, path, intelligent

TPOC-1: Shane Groves


Phone: 478-222-4066
Email: shane.groves@us.af.mil

AF19A-T005 TITLE: 3D imaging for tracking and aim-point maintenance in the presence of target-pose
changes

TECHNOLOGY AREA(S): Weapons

ACQUISITION PROGRAM: --

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: Develop a 3D imaging approach that meets the spatial and temporal requirements needed for tracking
and aim-point maintenance in the presence of target-pose changes for directed-energy (DE) missions.

DESCRIPTION: Target-pose changes tend to be the “Achilles’ heel” to modern tracking and aim-point maintenance
solutions for realistic DE missions. Provided well-characterized targets, there are approaches that perform well
(e.g., centroid and correlation tracking [1]); however, for realistic DE missions, tracking and aim-point maintenance
techniques must function with uncharacterized targets that inevitably change pose. Such engineering constraints
necessitate the development of a 3D imaging approach that can characterize targets (through target-depth
information [2]) and perform tracking and aim-point maintenance functions in the presence of target-pose changes.
A recent dissertation effort, for instance, developed a 3D imaging approach [3] using spatial heterodyne [4]. This
STTR topic looks to develop a 3D imaging approach which meets the spatial and temporal requirements needed for
integration into DE systems. For realistic DE missions, the associated laser-target interaction does not provide a
mirror-like reflection and, in the presence of distributed-volume aberrations, results in speckle and scintillation, in
addition to anisoplanatism, at the receiver. The identified approach must also be robust against low signal-to-noise
ratios; size, weight, and power constraints; and latency in the tracking loop.

The end goal of this STTR topic is to develop (Phase I and II) and demonstrate (Phase III) a 3D imaging approach
that can characterize targets and perform tracking and aim-point maintenance functions in the presence of target-
pose changes for realistic DE missions. As such, a Phase I effort shall develop a 3D imaging approach via detailed
theoretical and numerical studies that verify wave-optics calculations for a variety of ranges and resolutions. A
Phase II effort shall then develop experiments that verify the wave-optics calculations. For this purpose, facilities at
AFRL could provide the scaled-laboratory environment needed to explore a variety of ranges and resolutions. A
Phase III effort could then demonstrate 3D imaging at distances greater than 1 km in a field environment with
moving targets. Such testing shall ensure commercialization of the developed approach.
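
As one hedged illustration of a building block commonly used in such wave-optics calculations (a generic
angular-spectrum propagation step in Python/NumPy; it is not the topic's prescribed method, and the sampling
parameters are the proposer's to choose), the sketch below propagates a sampled complex field over a distance z:

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        # field: square 2-D complex array sampled on a uniform grid with spacing dx [m]
        # wavelength, z: meters; evanescent components are suppressed
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies [cycles/m]
        FX, FY = np.meshgrid(fx, fx, indexing="ij")
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
        H *= (arg > 0)                                    # zero out evanescent terms
        return np.fft.ifft2(np.fft.fft2(field) * H)

Sweeping z, dx, and the grid size then supports the range/resolution scalability study described above, provided
the sampled transfer function remains adequately resolved.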

PHASE I: To achieve the identified Phase II objectives, a Phase I effort shall focus on the following deliverables.

• Performing wave-optics calculations for a variety of ranges and resolutions. These calculations shall
identify scalability and include the relationship between the aperture, the distributed-volume aberrations, and the 3D
targets of interest. This step shall ensure that the developed approach is ready for a Phase II effort.

PHASE II: To achieve the identified Phase III objectives, a Phase II effort shall focus on the following deliverables.

• Performing scaled-laboratory experiments (potentially at AFRL) in order to verify the wave-optics
calculations performed in a Phase I effort. This step shall ensure that the developed approach is ready for a Phase III
effort.

PHASE III DUAL USE APPLICATIONS: Military application: Demonstrating the developed approach in a field
environment at distances greater than 1 km with moving targets. This step shall ensure that the developed approach
is ready for realistic DE missions.

Commercial Application: The successfully demonstrated 3D imaging approach shall translate into a high-fidelity
solution that is available to the DoD.

REFERENCES:
1. P. Merritt and M. Spencer, Beam Control for Laser Systems 2nd Edition, Directed Energy Professional Society,
Albuquerque, NM (2012).

2. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-
held Plenoptic Camera,” Stanford Tech Report CTSR 2005-02, 1-11 (2005).

3. J. W. Stafford, B. D. Duncan, and D. J. Rabb, “Phase gradient algorithm method for three-dimensional
holographic ladar imaging,” Appl. Opt. 55(17), 4611-4620 (2016).

4. M. F. Spencer, “Spatial Heterodyne,” Encyclopedia of Modern Optics II Volume 4, 369-400 (2018).

KEYWORDS: 3D imaging, tracking, aim-point maintenance, beam control, adaptive optics

TPOC-1: Dr. Jonathan Stohs


Phone: 505-846-3769
Email: jonathan.stohs@us.af.mil

AF19A-T006 TITLE: Vibration imaging for the characterization of extended, non-cooperative targets

TECHNOLOGY AREA(S): Sensors

ACQUISITION PROGRAM: --

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: Develop a vibration imaging approach that meets the spatial and temporal requirements needed to
perform high-fidelity characterizations of extended, non-cooperative targets (aka combat identification) at extended
standoffs with an aperture that has dual purpose for both directed-energy (DE) and intelligence, surveillance, and
reconnaissance (ISR) missions.

DESCRIPTION: Vibration imaging offers a distinct way forward with respect to combat identification at extended
standoffs for both DE and ISR missions. In practice, vibration imaging offers many advantages over a single-pixel
laser Doppler vibrometer [1]. This is because vibration imaging simultaneously acquires and resolves velocity
data over an extended spatial area. Given a single-pixel laser Doppler vibrometer, speckle is a dominant noise
source, since the signal fades caused by speckle lead to velocity estimates which have spikes and an elevated noise
floor. Via simultaneous data acquisition with a focal-plane array, vibration imaging offers the ability to spatially
average which ultimately enables speckle-noise mitigation.

Vibration imaging works via the use of doublet-pulse vibrometry [2]. Here, the collection of multiplexed digital-
holography data enables us to simultaneously measure the complex-optical field associated with two laser pulses
separated in time [3, 4]. By estimating the phase difference between the two received pulses, we can then measure
the target’s velocity [2]. In turn, vibration imaging enables us to perform target identification at extended standoffs.
Such functionality offers promise for both DE and ISR missions, where it is important to know the characteristics of
extended, non-cooperative targets. This added functionality will ultimately enable future DE and ISR assets to
determine, for example, whether the engine is running or not (at a distance that is safe for inspection). It will also
enable us to tell how fast the speckle is changing, which is currently an unknown. With this in mind, many DE and
ISR solutions assume that the received speckle is either correlated or uncorrelated frame to frame; thus, it is
important that we characterize this phenomenon in the near future, so that we can move forward with the development
of future DE and ISR assets.
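
For orientation, the standard pulse-pair relation underlying doublet-pulse vibrometry (restated here as a textbook
result rather than quoted from reference [2]) recovers the line-of-sight surface velocity from the measured
interpulse phase difference. In LaTeX notation, with wavelength \lambda, interpulse separation T, and phase
difference \Delta\phi,

    v_{LOS} = \frac{\lambda \, \Delta\phi}{4 \pi T}, \qquad |v_{LOS}| < \frac{\lambda}{4T} \ \text{(unambiguous interval)},

so shorter pulse separations widen the unambiguous velocity interval, while longer separations improve velocity
resolution for a given phase-noise floor.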

The end goal of this STTR topic is to design (Phase I and II) and demonstrate (Phase III) a vibration imaging
approach that meets the spatial and temporal requirements needed to perform high-fidelity characterizations of
extended, non-cooperative targets at extended standoffs. As such, during a Phase I effort, a detailed theoretical and
numerical analysis shall be performed to explicitly verify wave-optics calculations for a variety of ranges and
resolutions. A Phase II effort shall then develop experiments that verify the wave-optics calculations. For this
purpose, facilities at AFRL could provide the scaled-laboratory environment needed to explore a variety of ranges
and resolutions. A Phase III effort could then demonstrate vibration imaging at distances greater than 1 km in a field
environment with moving targets. Such testing shall ensure commercialization of the developed approach.

PHASE I: To achieve the identified Phase II objectives, a Phase I effort shall focus on the following deliverables.

• Performing wave-optics calculations for a variety of ranges and resolutions. These calculations shall
identify scalability and include the relationship between the aperture and the extended, non-cooperative targets of
interest while at extended standoffs. This step shall ensure that the developed approach is ready for a Phase II effort.

PHASE II: To achieve the identified Phase III objectives, a Phase II effort shall focus on the following deliverables.

• Performing scaled-laboratory experiments (potentially at AFRL) in order to verify the wave-optics
calculations performed in a Phase I effort. This step shall ensure that the developed approach is ready for a Phase III
effort.

PHASE III DUAL USE APPLICATIONS: Military application: Demonstrating the developed approach in a field
environment at distances greater than 1 km with moving targets. This step shall ensure that the developed approach
is ready for both DE and ISR missions.

Commercial Application: The successfully demonstrated vibration imaging approach shall translate into a high-
fidelity solution that is available to the DoD.

REFERENCES:
1. P. Castellini, G. M. Revel, and E. P. Tomasini, “Laser Doppler Vibrometry,” An Introduction to Optoelectronic
Sensors, 216-229 (2009)

2. P. Gatt et al., “Phased Array Science and Engineering Research (PHASER) Program Final Report,” TR CDRL-
A001-01, Lockheed Martin Coherent Technologies (2015) [Dist. C, Export Controlled].

3. S. T. Thurman and A. Bratcher, “Multiplexed synthetic-aperture digital holography,” Appl. Opt. 54(3), 559-568
(2015).

4. M. F. Spencer, “Spatial Heterodyne,” Encyclopedia of Modern Optics II Volume 4, 369-400 (2018).

KEYWORDS: vibration imaging, vibrometry, combat identification, beam control, digital holography, spatial
heterodyne

TPOC-1: Mark Spencer


Phone: 505-853-1607
Email: mark.spencer.6@us.af.mil

AF19A-T007 TITLE: Synthetic Scene Generation for Wide Application including High Performance
Computing Environments

TECHNOLOGY AREA(S): Weapons

ACQUISITION PROGRAM: --

OBJECTIVE: Develop a synthetic scene generation package that meets the baseline technical requirements,
connects to other simulation tools, and operates efficiently in high performance computing (HPC) environments.

DESCRIPTION: Synthetic scene generation software produces images of distant targets within their background.
Such synthetic images reduce our dependence on expensive field tests. Some applications in the area of directed
energy include testing tracking algorithms or real-time tracking hardware, informing a beam control system in a
broader high energy laser simulation, and studying target acquisition and aimpoint identification [1,2]. Applications
outside of directed energy include remote sensing, laser radar, night vision, munitions targeting, and space
situational awareness [3,4]. Unfortunately, the community uses a large number of different scene generation codes,
each with its own advantages and disadvantages. While some codes meet most of the baseline technical
requirements, they lack critical interfaces, documentation, and compatibility with high performance computing
(HPC) environments; or vice versa. Thus, programs often waste significant funds by creating their own scene
generation code or by developing single-use interfaces and modifications to existing software.

This STTR topic will enhance the practical and technical capabilities of one of those codes in three ways in order to
create a product which is more broadly useful. First, this topic will modify the code for efficient execution and
cross-code communication in HPC environments. This change will allow us to quickly exercise high-fidelity models
of laser systems by leveraging HPC resources. Second, it will modularize and document the code, and improve
interfaces, allowing connections to diverse laser physics models and hardware testbeds. Third, it will improve the
technical capabilities to meet the baseline requirements of high energy laser system modeling.

The end goal of this STTR topic is to develop a scene generation package which is useful for many applications. As
such, a Phase I effort shall produce a software development plan that will meet the requirements for technical
capabilities, interfaces, documentation, and HPC-compatibility. It will also conduct proof of concept tests on an
HPC system. A Phase II effort shall execute the software development plan and demonstrate full capability in
relevant HPC environments. A Phase III effort could then focus on advanced, research-grade capabilities which
would make the final product state-of-the-art in a number of areas. Such capabilities shall ensure commercial
success of the end product.

PHASE I: To achieve the identified Phase II capabilities, a Phase I effort shall focus on the following deliverables:
• Perform an interface requirements analysis.
• Create a software development plan.
• Develop a plan for execution on HPC assets.
• Conduct proof of concept tests on relevant HPC systems.

PHASE II: A Phase II effort shall create a scene generation product which is useful for a broad range of
applications.
• Convert all modules for execution on HPCs.
• Enhance the technical capabilities as needed.
• Include software-to-software interfacing (i.e. an API) with Matlab and other codes.
• Document the modules and their interfaces.
• Perform a demonstration of full capability in a relevant HPC environment.

PHASE III DUAL USE APPLICATIONS: A Phase III effort shall develop the advanced technical capabilities
needed by future programs.
• Target heating and damage
• Depth-resolved imaging
• Earth shine and sky glow
• Clutter, horizon, cloud structure, and water surface structure
• Astronomical backgrounds
• Multiple illuminators with backscatter

REFERENCES:
1. M. A. Owens, M. B. Cole, M. R. Laine, “Integration of Irma tactical scene generator into directed-energy weapon
system simulation,” Proc. SPIE 5097 (2003).

2. N. R. Van Zandt, J. E. McCrae, and S. T. Fiorino, “PITBUL: a physics-based modeling package for imaging and
tracking of airborne targets for HEL applications including active illumination,” Proc. SPIE 8732, 87320H (2013).

3. J. F. Riker, G. A. Crockett, and R. L. Brunson, “The time-domain analysis simulation for advanced tracking
(TASAT),” Proc. SPIE 1697, 297-309 (1992).

4. D. Crow, C. Coker, and W. Keen, “Fast line-of-sight imagery for target and exhaust-plume signatures (FLITES)
scene generation program,” Proc. SPIE 6208, 62080J (2006).

KEYWORDS: Synthetic scene generation, image synthesis, rendering, computer graphics, target tracking, beam
control, high performance computing, application programming interface (API)

TPOC-1: Noah Van Zandt


Phone: 505-853-2914
Email: noah.van_zandt.1@us.af.mil

AF19A-T008 TITLE: Optimization of Sodium Guide Star Return using Polarization and/or Modulation
Control

TECHNOLOGY AREA(S): Sensors

ACQUISITION PROGRAM: --

OBJECTIVE: Deliver analysis, design, optical components, and controls for a system that optimally couples light from
a Sodium Guide Star laser (FASOR, Toptica) into the sodium layer by manipulation of the beam's polarization state,
modulation of the laser, or a combination of the two.

DESCRIPTION: Coupling of Sodium Guide Star light into the sodium layer depends upon the characteristics of the
light illuminating the layer and the alignment of the sodium atoms in the layer. The sodium atoms tend to align their
dipoles with the magnetic field of the Earth. This results in returns that vary with the geographic location of
the guide star laser and the geometry of the illumination relative to the magnetic field. Strong returns from the sodium
layer, independent of geographic location and illumination geometry, are needed to enable responsive space situational
awareness capabilities for imaging dim LEO satellites and detecting proximity objects in GEO. Controlling the
polarization of the Guide Star laser relative to the sodium layer has potential to improve returns. Techniques to
modulate the polarization to match Larmor precession of the sodium atoms could allow significant improvement in
Guide Star efficiency. Other techniques for modulating the Guide Star beam and controlling polarization with the
end goal of optimizing returns are of interest.
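
As an illustrative sizing note (an approximation provided for context, not a requirement of this topic), the polarization-switching rate implied by matching Larmor precession scales with the local geomagnetic field B as

f_L = \frac{g_F \mu_B B}{h} \approx 0.70\ \mathrm{MHz/G}\times B,

so for geomagnetic fields of roughly 0.25-0.65 G, f_L falls in the approximate range of 0.2-0.5 MHz; candidate modulation hardware would therefore need to operate at hundreds of kHz.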

PHASE I: 1) Develop concepts to optimize Sodium Guide Star laser return through modulation and/or polarization
control.
2) Work with sponsoring facility to determine initial proof of concept and testing methods.

PHASE II: 1) Refine design of optical components based on testing in Phase I.
2) Fabricate improved optical components and control system based on sponsoring facility requirements.
3) Deploy optical components and conduct on-sky testing to determine operational performance of the system.

PHASE III DUAL USE APPLICATIONS: 1) Demonstrate assembled prototype unit ready to deliver to field along
with an estimate of the life cycle cost.
2) Deliver, install, and test at least one prototype at Air Force operated electro-optic tracking facility.

REFERENCES:
1. “Improving sodium laser guide star brightness by polarization switching,” Tingwei Fan, Tianhua Zhou, & Yan
Feng, Scientific Reports, 22 January 2016 (https://www.nature.com/articles/srep19859.pdf)

2. “Sodium Laser Guide Star Brightness, Spotsize, and Sodium Layer Abundance,” Jian Ge, et al., GE ’98
(http://www.oir.caltech.edu/twiki_oir/pub/Palomar/PalmLGS/LgsLinks/ge98.pdf)

3. “Sodium Guidestar Radiometry Results from the SOR's 50W Fasor,” Jack Drummond, Steve Novotny, Craig
Denman, Paul Hillman, John Telle, Gerald Moore, AMOS Technical Conference, 2006
(https://amostech.com/TechnicalPapers/2006/Lasers/Drummond.pdf)

KEYWORDS: Guide Star Laser, Sodium Layer, Polarization, SSA

TPOC-1: Ryan Swindle


Phone: 808-891-7746
Email: thomas.swindle@us.af.mil

AF19A-T009 TITLE: Autonomous Decision Making via Hierarchical Brain Emulation

TECHNOLOGY AREA(S): Information Systems

ACQUISITION PROGRAM: --

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: To develop methods that produce “human-like” decision capabilities. This methodology will be used
to improve computer interpretation of visual scenes without human intervention.

DESCRIPTION: Human decision making is an area that has been studied extensively but with little consensus on
how it is actually implemented in the brain. Current attempts to correlate neural activity in the brain with externally
observed human responses seem to indicate that the Bayesian paradigm is a key component of that neural
processing. For example, “Studies of human cue integration, both within modality (e.g. stereo and texture) and
across modality (e.g. sight and touch or sight and sound), consistently find cue weights that vary in the manner
predicted by Bayesian theory” [1], [2], [3], [4], [5]. In addition to lending support to the Bayesian paradigm,
understanding how humans integrate various modalities of sensor inputs will lend considerable insight into how
computers can integrate heterogeneous data sources such as infrared, visual, and RF, to name just a few. In
conjunction with the Bayesian hypothesis of neural coding, it is of key interest to determine how populations of
neurons encode uncertainty [6]. As other researchers have reported, human decision making appears to have the
unique ability to glean information from visual scenes by reducing the uncertainty, or entropy, of the scene [7],
[8]. In the latter reference, some progress has been made in applying the entropy idea to object localization in
images.
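
For context, the Bayesian cue-integration result cited above can be stated compactly (a textbook relation included here for clarity, not a deliverable of this topic): for independent, approximately Gaussian cue estimates s_i with variances \sigma_i^2, the optimal combined estimate weights each cue by its inverse variance,

\hat{s} = \sum_i w_i s_i, \qquad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}, \qquad \mathrm{Var}(\hat{s}) = \Big(\sum_i 1/\sigma_i^2\Big)^{-1},

which is the weighting pattern reported in the human cue-integration studies referenced above.
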
Recent attempts to apply human decision making ideas have centered on neural networks. These approaches
have not focused on the perceived qualities of the brain in decision making, and they are somewhat restricted for
surveillance applications due to their limited ability to accommodate environments that differ from those for which
they were trained. It is desired to emulate human decision making using more holistic approaches. These include, but are not
limited to, the availability of multi-modal data, the use of hierarchical decision making, confidence factors
associated with intermediary decisions, and feedback mechanisms to detect and correct erroneous intermediary
decisions.

PHASE I: Identify statistical approaches that emulate human decision making for surveillance applications based on
partial information obtained from multiple sensors to achieve objectives (e.g. detection, localization and tracking).
Quantify performance gains relative to conventional algorithms. Use synthetic data sets to demonstrate
effectiveness. A baseline approach in common use should be used for performance comparisons.

PHASE II: Further refine and develop the statistical models and algorithms for radar signal processing tasks.
Conduct high-fidelity demonstration/validation of algorithm performance based on finer-grained simulations.
Develop a baseline embedded computing approach for meeting tactical timeline requirements for chosen
applications. Quantify performance gains relative to conventional algorithms.

PHASE III DUAL USE APPLICATIONS: Military applications may include: improved detection, localization, and
tracking of various emitters in diverse military scenarios, having potential applicability across the entire DOD ISR
enterprise.
Commercial applications may include fields such as law enforcement, medicine, and the automotive industry.

REFERENCES:
1. Zhou, F., et al, “Affective parameter shaping in user experience prospect evaluation based on hierarchical
Bayesian estimation”, Expert Systems with Applications, Elsevier, Jul. 2017.

2. Knill, D.C., A. Pouget, “The Bayesian Brain: the Role of Uncertainty in Neural Coding and Computation”,
Trends in Neurosciences, Dec. 2004.

3. Korn, C.W., D.R. Bach, “Heuristic and optimal policy computations in the brain during sequential decision-
making”, Nature Comm., Jan. 2018.

4. Dayan, P., L.F. Abbott, Theoretical Neuroscience, MIT Press, 2001.

KEYWORDS: autonomous vehicles, human decision making, hierarchical statistical models

TPOC-1: Dan Stevens


Phone: 315-330-2416
Email: daniel.stevens.7@us.af.mil

AF19A-T010 TITLE: Virtual Reality for Multi-INT Deep Learning (VR-MDL)

TECHNOLOGY AREA(S): Information Systems

ACQUISITION PROGRAM: --

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: Develop a new high-fidelity modeling and simulation (M&S) framework that addresses the need for
voluminous, high-quality Multi-INT training data for deep learning networks, data that would be too expensive or
infeasible to obtain through field experiments. The focus of this effort is on multiphysics-based modeling of radio
frequency (RF) signals in realistic physical and contested environments. Effective training methods for deep
learning systems are essential for improving the performance of autonomous systems.

DESCRIPTION: A key enabler for improved system performance while reducing operator workload is autonomy
powered by next generation machine intelligence. However, for such deep learning systems to be effective, massive
amounts of realistic and relevant “training data” are required [1]. Given the highly sensitive and variable nature of RF
collection and exploitation, it is simply not possible to conduct field experiments capable of meeting these
requirements. Fortunately, commercially available next generation high-fidelity physics-based M&S tools have been
developed that can form the basis for an RF “virtual reality” Multi-INT Deep Learning (VR-MDL) environment (see
for example [2]). Thus, the goal of this project is to develop an M&S environment to address the robust training
needs of deep learning networks and other machine intelligence and cognitive systems such as DARPA/AFRL
KASSPER and CoFAR projects [3-5]. The VR-MDL environment should be capable of supporting all physical
elements of the RF collection process, from raw multichannel, multiplatform in-phase and quadrature (I&Q) signals,
through the various stages of the RF processing chain (e.g., mixing, amplification, analog-to-digital conversion). The output of
this effort should be a general-purpose M&S environment that is agnostic to the particular machine learning
algorithm or architecture.
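
For illustration only, the following minimal Python sketch (with entirely notional parameters) shows the kind of labeled, complex-baseband I&Q snapshot such an environment would mass-produce as training data; it is not the solicited VR-MDL tool and omits the platform, propagation, and RF-chain physics described above.

import numpy as np

def make_iq_snapshot(fs=1e6, n=4096, f_doppler=5e3, snr_db=10.0, seed=None):
    # One noisy complex-baseband snapshot: a unit-power tone (notional target return) in white noise.
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    tone = np.exp(2j * np.pi * f_doppler * t)
    noise_power = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(noise_power / 2.0) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return tone + noise

# Labeled examples for a notional classifier: (I&Q snapshot, Doppler label).
dataset = [(make_iq_snapshot(f_doppler=fd, seed=k), fd) for k, fd in enumerate((1e3, 5e3, 10e3))]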

PHASE I: Develop a baseline design for a VR-MDL environment. The design should have the potential of achieving
the aforementioned goals of producing massive amounts of high-fidelity, physics-based, RF training data generated
via realistic CONOPS. Quantitative analyses and experiments shall be conducted that establish the scalability of the
proposed VR-MDL approach. Basic M&S examples shall be conducted in Phase I that establish the viability of the
training data generated by comparing it with actual collected data with the same type of sensor and scenario being
modeled by the VR-MDL M&S tool.

PHASE II: Further refine and develop the VR-MDL design from Phase I, and enhance the complexity and
sophistication of the VR scenarios conducted. The output of Phase II should be a VR-MDL tool that is ready to enter
low-rate initial production (LRIP) at the beginning of Phase III.

PHASE III DUAL USE APPLICATIONS: The proposer will identify potential commercial and dual use
applications such as non-military applications of deep learning techniques. These could include training for
autonomous systems such as self-driving cars and unmanned air systems (UAS) operating in civilian airspace.

REFERENCES:
1. X.-W. Chen and X. Lin, "Big data deep learning: challenges and perspectives," IEEE access, vol. 2, pp. 514-525,
2014.

2. RFView(TM). Available: http://rfview.islinc.com

3. J. R. Guerci, R. M. Guerci, M. Rangaswamy, J. S. Bergin, and M. C. Wicks, "CoFAR: Cognitive fully adaptive
radar," presented at the IEEE Radar Conference, Cincinnati, OH, 2014.

4. K. L. Bell, C. J. Baker, G. E. Smith, J. T. Johnson, and M. Rangaswamy, "Cognitive radar framework for target
detection and tracking," IEEE Journal of Selected Topics in Signal Processing, 9(8), 1427-1439, 2015.

KEYWORDS: Multi-INT, Deep Learning, Autonomous Systems, Cognitive Systems, Sensors, Electronics,
Modeling and Simulation

TPOC-1: Dan Stevens


Phone: 315-330-2416
Email: daniel.stevens.7@us.af.mil

AF19A-T011 TITLE: Diagnostics for Performance Quantification and Combustion Characterization in Rotational Detonation Rocket Engine (RDRE)

TECHNOLOGY AREA(S): Space Platforms

ACQUISITION PROGRAM: N/A

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: The objective of this STTR is to transition diagnostic techniques for characterization of the fuel-
oxidizer detonation from laboratory usage to Air Force rotating detonation rocket engine (RDRE) applications in
order to increase understanding of critical physics, support engineering development, and provide validation data for
simulations. In this STTR, advanced diagnostic measurement suites are sought to quantify the performance and
characterize the detonation combustion process of RDREs, including but not limited to intrusive and non-intrusive
pressure, temperature, species, and surface heat-transfer measurements in both time-averaged and temporally
resolved manners. The sought diagnostic suite is to be integrated into existing experimental/testing RDRE rigs. It is
highly desired that the deployed diagnostic sensors in the suite withstand the high-pressure, high-temperature, and
high-acceleration environment of detonative combustion within actual RDRE operation, with temporal resolution on
the order of 1 µs and a dynamic range of a factor of 100 in pressure and temperature.

DESCRIPTION: The Rotating Detonation Rocket Engine (RDRE) operates in a quasi-steady-state mode in which
one or more detonation waves travel around a closed circuit, transverse to the propellant injection, in an annular
channel. The incoming propellant mixture from the oxidizer and fuel plenums is periodically combusted by
a detonation wave traveling at approximately the Chapman-Jouguet velocity, DCJ. In laboratory devices, DCJ
typically ranges between 1 and 3 km/s, resulting in wave passage frequencies of 4-50 kHz depending on the annulus
diameter and the number of transverse detonation waves. The detonation waves increase pressures of the injected propellants by a factor of 10-
20, resulting in a higher energy release efficiency. Immediately behind the transverse detonation wave, the high-
pressure combustion products expand and accelerate. In the high-pressure regions immediately following the
transverse detonation wave, the localized high pressure of the combustion products temporarily halts propellant
injection. As a result, injection, combustion, and combustion product acceleration are all closely linked in these
devices.
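
For reference, the wave passage frequency quoted above follows directly from the wave speed and the annulus geometry; as an illustrative calculation with notional values (not requirements),

f = \frac{N\, D_{CJ}}{\pi\, d_{ann}}, \qquad \mathrm{e.g.}\ \ N = 1,\ D_{CJ} = 2\ \mathrm{km/s},\ d_{ann} = 0.1\ \mathrm{m}\ \Rightarrow\ f \approx 6.4\ \mathrm{kHz},

where N is the number of co-rotating waves and d_ann is the mean annulus diameter; small annuli supporting multiple waves push f toward the upper end of the 4-50 kHz range, which in turn drives the approximately 1 µs temporal resolution sought for the diagnostics.
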
Constant volume, or detonative, combustion theoretically extracts more energy from the fuel-oxidizer mix, thereby
increasing energy conversion system performance. In rocket propulsion, devices such as RDREs use a series of
traveling detonation waves to realize this potentially more efficient energy extraction scheme. The detonation wave
time separation over a single injector has been measured to vary between 20 and 250 µs. The fast, repeating
detonation waves, coupled with the harsh combustion environment, produce a highly dynamic pressure and
temperature field that is beyond the capability of the combustion diagnostic techniques presently employed in
conventional deflagration combustion research.

This solicitation seeks to develop an advanced diagnostic suite to quantify the performance and characterize the
detonation combustion process of the rotating detonation rocket engine (RDRE), including but not limited to
intrusive and non-intrusive pressure, temperature, species, and surface heat-transfer measurements in both time-
averaged and temporally resolved fashion. The sought diagnostic suite is to be integrated into actual
experimental/testing RDRE rigs. It is highly desired that the deployed diagnostic sensors in the suite withstand the
high-pressure, high-temperature, and high-acceleration environment of actual RDRE operation, with temporal
resolution on the order of 1 µs and a dynamic range of a factor of 100 in pressure and temperature.

PHASE I: In Phase I, an offeror shall provide a credible design of such a suite, based on a typical experimental
testing device relevant to Air Force RDRE development, with sufficient substantiation. Key components in Phase I
include: (1) a conceptual design for quantifying the performance and characterizing the detonation combustion
process of the RDRE with scientific substantiation, leading to the conceptual design review (CoDR); (2)
demonstration of selected sensors in the conceptual design, leading to the sensor substantiation review (CSR); (3) a
preliminary design of such an advanced suite integrated into an Air Force relevant RDRE device with engineering
substantiation and additional needed scientific substantiation, leading to the preliminary design review (PDR) and
a Phase II proposal. In the Phase I proposal, offerors shall use a publicly available RDRE configuration to present
their diagnostic approach/strategy and scientific logic for achieving the measurement objective mentioned above. At
the beginning of Phase I execution, information on an Air Force relevant baseline RDRE experimental/testing rig will be
provided to winning offerors.

PHASE II: In Phase II, the winning offeror shall (1) complete the critical design, leading to the critical design review
(CDR), and (2) build the designed advanced diagnostic suite, integrate the suite into the provided Air Force RDRE
rig, and demonstrate the suite’s capability to achieve the measurement objective listed above.

PHASE III DUAL USE APPLICATIONS: In Phase III, based on progress in Phase II and RDRE development
needs, further enhance and extend the advanced diagnostic suite developed in Phase II to an expanded range of
measurement objectives.

REFERENCES:
1. B. R. Bigler, E. J. Paulson, and W. A. Hargus, Jr., “Idealized Efficiency Calculations For Rotating Detonation
Rocket Engine Applications,” AIAA-2017-5011, AIAA Propulsion and Energy Forum, Joint Propulsion
Conference, 10-12 July 2017, Atlanta, GA.

2. F. A. Bykovskii, S. A. Zhdan, and E. F. Vedernikov, “Continuous Spin Detonations,” Journal of Propulsion and
Power, Vol. 22, No. 6, pp. 1204–1221, Nov-Dec 2006.

3. F. K. Lu and E. M. Braun, “Rotating Detonation Wave Propulsion: Experimental Challenges, Modeling and
Engine Concepts,” Journal of Propulsion and Power, Vol. 30, No. 5, pp. 1125–1142, Sep-Oct 2014.

4. M. L. Fotia, F. Schauer, T. Kaemming, and J. Hoke, “Experimental Study of The Performance of A Rotating
Detonation Engine With Nozzle,” Journal of Propulsion and Power, Vol. 32, No. 3, pp 674–681, May-Jun 2016.

KEYWORDS: Diagnostics, Rotating Detonation Rocket Engine, Detonation, Time Resolved Combustion
Measurements, Isochoric Combustion, High Pressure

TPOC-1: William Hargus


Phone: 661-275-6799
Email: william.hargus@us.af.mil

AF19A-T012 TITLE: Machine Learning Methods to Catalog Sources from Diverse, Widely Distributed
Sensors

TECHNOLOGY AREA(S): Sensors

ACQUISITION PROGRAM: --

OBJECTIVE: Develop algorithms to automate the processing of incoming data streams from a diverse and
dynamically changing set of sensors to detect, locate, and classify seismic events and flag suspicious events (i.e.
possible nuclear tests) for human analysts at extremely low miss rates and the lowest possible false alarm rates.

DESCRIPTION: The automatic system the Air Force uses to monitor the globe for nuclear test explosions processes
incoming data streams from hundreds of seismic stations and arrays, including those of the United Nations’
Comprehensive Test Ban Treaty Organization’s (CTBTO) International Monitoring System (IMS), among others. In
near real time the system creates catalogs of seismic sources, and identifies and flags suspicious events for human
analysts. The system can be overwhelmed when large earthquakes are followed by thousands or tens of thousands of
aftershocks. Similarly, current systems can neither process the orders of magnitude more signal detections nor
dynamically incorporate the hundreds to thousands of new sensors that will be required to push detection and
classification thresholds down to meet mission requirements. New algorithmic approaches are needed to meet this
challenge. The most promising approaches are machine learning (ML) methods. The seismic data volumes involved
are appropriate for ML methods and the availability of HPC resources makes their processing feasible.

The first challenge is to apply ML methods to rapidly and accurately automate the recognition of similar signals in
very large data sets (e.g. Yoon, et al., 2015). Most observed seismic signals are repeated (repeating earthquakes,
aftershocks, and mining explosions) and their recognition will speed processing by simplifying the association
problem. That is, associating all signals across the network with the set of hypothesized events that are most likely to
have generated the signals is NP-hard (e.g. Arora et al., 2017; Benz et al., 2017). Any signals identified as similar to
signals associated with a previously identified event can immediately be associated with a similar repeat event. In
addition, the system will need to autonomously identify sets of common nuisance signals generated near stations
(e.g. ice quakes at high latitudes, sonic booms near military airfields) and use those to cull signals that are not of
monitoring interest.
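
As a concrete but deliberately toy illustration of the similarity-search step (not the scalable methods of Yoon et al., 2015), normalized cross-correlation of a known event template against a continuous trace flags candidate repeats; the threshold and variable names below are notional.

import numpy as np

def normalized_xcorr(template, trace):
    # Sliding normalized cross-correlation of a short template against a longer trace; values lie in [-1, 1].
    m = len(template)
    t = (template - template.mean()) / (template.std() * m)
    out = np.empty(len(trace) - m + 1)
    for i in range(out.size):
        w = trace[i:i + m]
        out[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-12))
    return out

# Candidate repeating events: samples where correlation exceeds a notional 0.8 threshold.
# cc = normalized_xcorr(template_waveform, continuous_trace); hits = np.flatnonzero(cc > 0.8)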

Upon detection of new signals that are not similar to previous signals, the system must then distinguish the signal
type (e.g. identify the seismic phase), determine the event location most likely to have generated that signal and
other signals recorded across the network (determining which signals are associated with each other, and with a
single hypothesized event, is the NP hard problem), and discriminate the source type. The method must also robustly
adapt to changes in network configuration and components. Instruments will range from permanent high fidelity 3-
component seismometer arrays dedicated to and designed for nuclear explosion monitoring to individual sensors
operating for other purposes (e.g. seismic hazard monitoring) that can be opportunistically added “on-the-fly” in
regions of interest.

PHASE I: Deliver a final report that 1) evaluates the performance of existing machine learning algorithms applied to
components of the network processing problem, including signal detection, identification of repeated similar events,
signal classification, event building (i.e. determination of the likeliest set of seismic events that could be the source
of the detected signals), and event location and classification, and 2) lays out a plan and rationale for further
algorithm refinement, and incorporation of the algorithms into a system that will efficiently process incoming data
streams from a dynamic network to accurately identify signals of interest (i.e. possible nuclear tests).

PHASE II: Develop an end-to-end system that incorporates refined versions of algorithms tested in phase 1.
Demonstrate substantially improved performance with respect to signal detection and classification, and event
formation, location, and classification, of the new system over that of existing state-of-the-art systems (e.g. those of
the CTBTO’s or the US Geological Survey’s National Earthquake Information Center [NEIC]. The catalogs of both
and the data used are available). The system should be validated with real data streams (e.g. from the IMS or the
NEIC networks). Performance metrics should include miss rates and false alarm rates relative to catalogs that have
been vetted by human analysts. The system’s ability to adapt on the fly to new data sources should be validated by
the addition and deletion of data streams from stations not typically used for monitoring, such as from regional
hazard monitoring networks.

PHASE III DUAL USE APPLICATIONS: In coordination with scientists and engineers from AFRL and AFRL’s
operational customer, the Air Force Technical Applications Center (AFTAC), transition the system to AFTAC for
further evaluation and testing with AFTAC’s data. Delivery must include thorough documentation, users’ manual,
case examples, support, and training to ensure effective transition. The system will also have commercial application
in regional and national networks used to monitor seismic hazards, volcano monitoring, and induced seismicity (e.g.
from mining, geothermal, and fracking).

REFERENCES:
1. Yoon, C. E., O. O’Reilly, K. Bergen, and G. Beroza, Earthquake detection through computationally efficient
similarity search, Science Advances, 04 Dec 2015, Vol. 1, no. 11, DOI: 10.1126/sciadv.1501057

2. N. S. Arora, S. Russell, and E. Sudderth, NET-VISA: Network Processing Vertically Integrated Seismic Analysis,
Bulletin of the Seismological Society of America, Vol. 103, No. 2a, doi: 10.1785/0120120107

3. Benz, H., C. E. Johnson, J. M. Patton, N. D. McMahon, P. S. Earle, GLASS 2.0: An Operational, Multimodal,
Bayesian Earthquake Data Association Engine, American Geophysical Union, Fall Meeting 2015, abstract id. S21B-2687

KEYWORDS: Nuclear explosion monitoring, machine learning, similarity search

TPOC-1: Glenn Eli Baker


Phone: 505-846-6070
Email: glenn.baker.3@us.af.mil

AF19A-T013 TITLE: Software-Performed Segregation of Data and Processes within a Real-Time Embedded System

TECHNOLOGY AREA(S): Space Platforms

ACQUISITION PROGRAM: AEHF - Advanced Extremely High Frequency (AEHF) Satellite Program

OBJECTIVE: Develop application-level approaches to performing high assurance software segregation of processes
and data developed in the context of a real-time embedded system.

DESCRIPTION: The United States Department of Defense (DoD) continually designs, acquires, and deploys best-in-
class, highly complex and capable embedded systems. Due to their often high cost, low density, long development
timelines, and the mission criticality of the services they provide, DoD embedded systems are high-value assets
whose defense is critical. As the DoD has embraced enhanced embedded system computing capabilities, it has in
most respects become increasingly vulnerable to multiple types of cyber threats.

This topic will investigate next-generation approaches to, and the test and verification of, methods that ensure the
high-assurance isolation and separation of data and processes from other data and processes within the context of the
software of a resource-constrained, real-time, space platform system. Current space platforms require solutions to
enable defense-in-depth against cyber threats, resulting in a system where a single compromised software module
cannot propagate throughout the entire system. Furthermore, legacy implementation of multi-level security (MLS)
onboard a weapon system requires a dedicated computer and cryptographic system for each level. This topic will
investigate new and emerging approaches to address these problems entirely within software, enhancing cyber
resiliency while enabling MLS within a single computer at the application-layer, and dramatically reducing the size,
weight, power, and cost of hosting multiple software payloads at varying security levels. Current state-of-the-art
solutions to these problems, as discussed in reference 1, are not principally developed to provide high-assurance
segregation within the system, are limited in the degree of segregation being provided, and impose significant
computational overhead that would be unacceptable to a real-time system.

The intent of this topic is to develop a prototype approach to data and process segregation that can be fielded as part
of a representative ground space platform test bed.

PHASE I: Final report with approaches and methods for data and process segregation within real-time embedded
systems, techniques for verification of this data/process segregation, and an executable proof of concept
demonstrating the capabilities.

PHASE II: Working implementation of methods (architecture, algorithm, etc.) of data/process segregation, working
implementation of verification and validation techniques, a repeatable demonstration of methods using an agreed-
upon embedded development environment using an agreed-upon real-time operating system and software, and final
technical report.

PHASE III DUAL USE APPLICATIONS: Implementation of developed technologies within an existing ground
testbed representative of a selected Air Force architecture. The ground test bed computing architecture will be based
on a multicore ARMv8-A instruction set architecture.

REFERENCES:
1. Samuel Laurén, Sampsa Rauti, and Ville Leppänen. 2017. A Survey on Application Sandboxing Techniques. In
Proceedings of the 18th International Conference on Computer Systems and Technologies (CompSysTech'17), Boris
Rachev and Angel Smrikarov (Eds.). ACM.

2. Hajime Inoue and Stephanie Forrest. 2002. Anomaly intrusion detection in dynamic execution environments. In
Proceedings of the 2002 workshop on New security paradigms (NSPW '02). ACM, New York, NY, USA, 52-60.
DOI=http://dx.doi.org/10.1145/844102.84411

3. Cosimo Anglano. 2006. Interceptor: middleware-level application segregation and scheduling for P2P systems. In
Proceedings of the 20th international conference on Parallel and distributed processing (IPDPS'06). IEEE Computer
Society, Washington, DC, USA, 374-374.

KEYWORDS: application-level segregation, application sandboxing, cybersecurity

TPOC-1: Robert W. Vick


Phone: 505-846-5107
Email: robert.vick.2@us.af.mil

AF19A-T014 TITLE: Next Generation Energy Storage Devices Capable of 400 Wh/kg and Long Life

TECHNOLOGY AREA(S): Space Platforms

ACQUISITION PROGRAM: --

OBJECTIVE: Develop next generation energy storage technology that will meet cycle life requirements for DoD
Spacecraft and demonstrate >400 Wh/kg full cell performance.

DESCRIPTION: Mass of spacecraft components, especially of the power system, can be a significant portion of the
overall spacecraft mass. The need for efficient energy storage in space platforms is paramount as expected lifetime
and power needs continue to rise. Increased satellite power needs and the drive to reduce the size and mass of power
system components necessitate the development of advanced energy storage devices with improved energy density
>400 Wh/kg and long life. Batteries used in space must be compatible with 5-year ground storage followed by 15-
year operational lifetimes for geosynchronous orbits or up to 60,000 charge and discharge cycles required for low
earth orbit. Given unique Air Force energy storage needs, a next generation energy storage technology achieving
>400 Wh/kg with the cycle life required in space would be instrumental to achieving higher power spacecraft for
enhanced missions. Advanced materials, cell designs, new chemistries, or a combination of these are needed to
achieve higher energy density cells. Potential pathways include, but are not limited to, Li-S, graphene batteries,
alternative anodes, and CNT additives. These and other methods should achieve >400 Wh/kg on a cell level,
discharge rates of 1C, and cycle life of 5000 cycles at 70% depth of discharge. Technologies must maintain the same
safety and reliability standards as current lithium ion (Li-ion). Advances in Li-ion are not excluded from
consideration, but proposals will need to demonstrate technical pathways to achieve stated specific energy goal at
the cell level.
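
As an illustrative sizing check of the cell-level target (notional numbers only, not a design), specific energy is approximately the nominal cell voltage times capacity divided by cell mass,

E_{sp} \approx \frac{V_{nom}\, Q}{m}, \qquad \mathrm{e.g.}\ \ \frac{3.6\ \mathrm{V}\times 3.4\ \mathrm{Ah}}{0.030\ \mathrm{kg}} \approx 408\ \mathrm{Wh/kg},

so a 30 g cell must store roughly 12 Wh, and the 1C rate requirement corresponds to discharging that full capacity (about 3.4 A in this example) in one hour.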

PHASE I: Demonstrate proof of concept for improvements to >400 Wh/kg energy density, cycle life, and rate
requirements of the proposed energy storage technology. Present experimental data to show feasibility of the
innovative solution. Describe an initial design for a prototype along with performance estimates. Outline an
improvement plan to accomplish stated metric(s).

PHASE II: Demonstrate significant improvements in the cited metric(s) from Phase I. Demonstrate proof of concept
with the fabrication and testing of advanced cell materials and design meeting the requirements (specific energy,
cycle life, rate requirements, safety, etc.) outlined in this solicitation. Provide cost projection data to substantiate the
design, performance and operation costs. Fabricate and deliver prototype units for potential use on flight
experiments and testing. Include a detailed design and performance analysis of prototype cells.

PHASE III DUAL USE APPLICATIONS: Work with a system integrator to refine requirements and perform flight
validation testing of the developed energy storage technology. Build and fly a Class D hardware demonstration for
the space environment.

REFERENCES:
1. Xu, Kang, Electrolytes and interphases in Li-ion batteries and beyond, Chemical Reviews 114, 11503-11618
(2014).

2. Hassoun, J. & Scrosati, B., Review - Advances in anode and electrolyte materials for the progress of lithium-ion
and beyond lithium-ion batteries, Journal of the Electrochemical Society 162, A2582-2588 (2015).

KEYWORDS: high specific energy, space power systems, next generation chemistry, battery, cathode, anode

TPOC-1: Jessica Buckner


Phone: 505-846-3962
Email: jessica.buckner.2@us.af.mil

AF19A-T015 TITLE: Space-Based Computational Imaging Systems

TECHNOLOGY AREA(S): Sensors

ACQUISITION PROGRAM: --

OBJECTIVE: This STTR will develop computational imaging systems with application to space missions. New
advances in onboard processing hardware enable the Air Force to consider the revolutionary benefits of
computational imaging systems for space. These systems may significantly outperform all-optical systems.

DESCRIPTION: The last two decades in imaging science have witnessed a revolution in the way that optical
systems may be designed. Driven by the ubiquity of robust computational resources, imaging systems may now be
co-designed with both optical and computational elements in ways that can reduce optical design complexity,
outperform traditional all-optical systems and even enable entirely new measurement modalities. All three of these
benefits of computational imaging may be exploited by space based systems to enable both new functional
capabilities and new sensor/constellation CONOPS.

This STTR seeks development of computational imaging systems with application to space missions. New advances
in onboard processing hardware enable the Air Force to consider the revolutionary benefits of computational
imaging systems for space. Of particular interest here are computational imaging systems that combine both optical
and computational elements in ways that have the potential to outperform traditional (all-optical) imaging systems.
This topic covers computational imaging systems that offer benefits in one or more area: 1) Enhanced imaging
performance (spatial/temporal/spectral resolution); 2) significantly shorter development schedule (trivial optical
alignment); 3) expedited calibration (cross-calibration of ubiquitous systems); 4) drastically reduced size/weight/cost
(alternative scaling laws to all-optical systems); 5) decreased system acquisition time (COTS optics & detectors); 6)
new sensor/constellation CONOPS (e.g. on-orbit processing, distributed communications, etc.). It is expected that
systems will be designed/evaluated in terms of: survivability of onboard computational hardware, optical alignment
tolerances, susceptibility to spacecraft jitter, practical scaling of optics sizes, and suitability to visible and IR
optics/detectors.

PHASE I: - Report of measured/estimated system performance limitations
- Baseline system design details – optics, algorithms, etc.
- End-to-end single system CONOPS recommendations.

PHASE II: - Prototype computational imaging space based system.
- Report of measured system performance limitations related to space flight conditions.
- Refined system design details – optics, algorithms, alignment procedures, etc.
- Laboratory-based system CONOPS demonstration.

PHASE III DUAL USE APPLICATIONS: Phase III work is expected to culminate in a space rated demo
computational imaging system.

REFERENCES:
1. Cossairt, Oliver S., Daniel Miau, and Shree K. Nayar. "Gigapixel computational imaging." Computational
Photography (ICCP), 2011 IEEE International Conference on. IEEE, 2011.

2. Sun, Baoqing, et al. "3D computational imaging with single-pixel detectors." Science 340.6134 (2013).

3. Velten, Andreas, et al. "Recovering three-dimensional shape around a corner using ultrafast time-of-flight
imaging." Nature Communications 3 (2012): 745.

KEYWORDS: Computational Imaging, EO/IR systems, space systems

TPOC-1: Reed Weber


Phone: 505-853-4130
Email: reed.weber@us.af.mil

AF19A-T016 TITLE: Multifunctional Integrated Sensing Cargo Pocket UAS

TECHNOLOGY AREA(S): Weapons

ACQUISITION PROGRAM: --

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: The goal of this STTR topic is to develop an integrated multifunctional structural platform that houses
nano-sensing elements, to enable a cargo pocket sized (nano) UAS to extract environmental information while
surviving and navigating through extremely difficult environments.

DESCRIPTION: Palm-sized UASs, weighing on the order of tens of grams apiece and capable of fitting within a
combat fatigue cargo pocket, are proving to be capable personal platforms in urban environments including inside
structures due to their ease of transport, inherent stealthiness, and ability to penetrate small spaces. However, such
environments are plagued with obstacles of various types, ranging from walls and structures to cables and vegetation
that must be avoided or otherwise mitigated to prevent a “crash”.

Emerging techniques involve the use of vision sensors and algorithms that allow autonomous negotiation of hazards,
including in the dark and in both indoor and outdoor spaces. However, the small mass of such platforms makes them
extremely susceptible to fluctuating wind conditions, in particular turbulence found in close-ground environments,
which can disrupt computed flight trajectories. Work by other researchers is addressing wind gust sensing with
appropriate control strategies to mitigate the effects of turbulence. However, there will still be cases in which wind
gusts will temporarily override the abilities of any controller or even the dynamically available thrust from platform
actuators. Collisions with hard environmental objects are thus unavoidable, and it is desirable for a platform to be
designed such that the “cost” of a collision is a temporary delay of flight for several seconds rather than a mission-
ending crash.

One approach is to add a “cage” or other structural forms of protection around the platform to prevent contact
between rotors or other actuators and the environment. However, when fabricated with conventional materials, such
cages are either too heavy or provide protection in only limited directions. Furthermore, even when they survive rough
contact events, their structures can tolerate only a few impacts before breaking. New research in materials science is
leading to the development of materials that can handle repeated stress and impact events. Some examples include
composites whose structure is inspired by organisms, for example, the mantis shrimp [ref 2].

In addition to structural issues with crash events, the ability to conduct efficient and effective flights is limited by the
mass of the platform and its limited energy storage capacity. As such, adding sensing elements to these platforms
reduces the efficacy of flight missions. For example, the use of conventional sensors that implement thin films adds
significant weight and limits the sensitivity or response times needed. In addition, the number of gas analytes that can
be interrogated is constrained.

The use of nanowire gas sensors and sensor arrays has the potential to overcome these limitations. Nanowire-based gas
sensors often provide a fast response time, enable ultrasensitivity, and provide a means to analyze multiple gaseous
species. This is primarily due to their high surface-to-volume ratio, a consequence of their extremely fine diameters.
Thus, they provide a way to detect even low concentrations of gas molecules via minute changes in the electrical
properties of the sensing elements. In addition, because of their minimal size, these sensors can be very lightweight
and consume very little power.
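
As a purely illustrative sketch of how such a resistance change might be converted into a concentration estimate on the flight computer, assuming a notional power-law chemiresistive calibration (the Python function and constants below are placeholders, not measured sensor behavior):

def estimate_concentration(r_measured, r_baseline, a=0.05, b=0.6):
    # Invert a notional power-law response S = R/R0 - 1 = a * C**b to estimate concentration C.
    s = max(r_measured / r_baseline - 1.0, 0.0)
    return (s / a) ** (1.0 / b)

# Example: a 12% resistance rise over baseline maps to an estimated concentration (arbitrary units).
c_est = estimate_concentration(1.12, 1.00)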

However, there are many challenges to realizing ultrasensitive and durable nanowires. For example, metal oxide-
based systems can fail in a brittle manner, rendering them ineffective in rugged environments, or they consume too much power.
Conversely, polymer-based systems have limited performance based on ambient temperatures or sensitivity to
humidity. In order to overcome this, a platform is needed that can integrate sensing elements into physically and
chemically robust platforms that can be mounted on nano UASs. In addition, these platforms should perform with
minimal mass addition, low power usage and still demonstrate sensitivity.

The goal of this topic is to prototype a holistic nano UAS with integrated sensing, structural elements, and hardware
having the following characteristics:
• Capable of autonomously detecting and homing in on chemicals of interest for both civilian and military
applications using on-board sensors;
• Capable of navigating through close ground and indoor environments with only top-level human input, while
surviving wind-induced collisions with environmental features at up to 10 meters/sec;
• Can fit within a cargo pocket.

PHASE I: Through a combination of experimentation, analysis, and simulation, implement and test components of a
mechanically and chemically robust sensory platform that is light and utilizes low power. Demonstrate a basic
platform that can detect various gases. This work may include breadboard level testing of select components,
structural or sensing, including limited flight testing, if feasible, to identify initial uncertainties and areas to address
for Phase II.

PHASE II: Prototype a nano UAS that meets the general characteristics listed above. Perform flight testing of the
prototype system in both laboratory and representative environments. Research efforts and test environments should
address both civilian and military applications.

PHASE III DUAL USE APPLICATIONS: Transition the technology for both civilian and military applications.
Dual use applications include facilities monitoring, environmental testing, remote sensing of chemical or biological
weapons, search and rescue, and ISR.

REFERENCES:
1. ProxDynamics Black Hornet Nano UAS, https://www.flir.com/products/black-hornet-prs/

2. Weaver, J., Milliron, G., Miserez, A., Evans-Lutterodt, K., Herrera, S., Gallana, I., Mershon, W., Swanson, B.,
Zavattieri, P., DiMasi, E., Kisailus, D. “The Stomatopod Dactyl Club: A Formidable Damage-Tolerant Biological
Hammer,” Science, 336 (2012) 1

3. Barrows, G. et al “Vision Based Hover in Place”, 50th AIAA Aerospace Sciences Meeting, Nashville TN,
January 2012.

4. Alternate reference showing Centeye nano drone: https://www.youtube.com/watch?v=YTi8bjbZJ4s

KEYWORDS: Resilient structure. Nanowire Gas Sensor. Nanowire Gas Sensor Array. Olfactory Ultrasensitivity
or Olfactory Ultra Sensitivity. Ultrasensitive Nanowire or Ultra Sensitive Nanowire. Multifunctional Structure and
(Micro Air Vehicles or MAV or Nano Air

TPOC-1: Martin F. Wehling


Phone: 850-883-1880
Email: martin.wehling@us.af.mil

AF19A-T017 TITLE: Tunable bioinspired spatially-varying random photonic crystals

TECHNOLOGY AREA(S): Weapons

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: Develop controlled (tunable) spatially-varying materials for investigating reflectance and
transmission of visible light, and a high-resolution, high-apparent-temperature, broad-band solution for infrared
hardware-in-the-loop scene projection.

DESCRIPTION: New methods have recently shown that photonic crystals can be spatially varied in a smooth
manner without causing any discontinuities or defects in the lattice. This results in new phenomenology such as
self-collimation of light around a sharp bend without loss of intensity. This can also lead to the development of a
single lattice with disparate functions like beam bending, focusing, and polarization conversion. Moreover, the
lattice can be shaped in such a way that the collimated beams are not limited by Snell’s law and the lattice can
collect and focus light without the limiting effects of refraction.
The purpose of this topic is to use the concepts of spatially-variant photonic crystals with added biologically inspired
randomness and materials. Many biological systems of interest have a small index of refraction and exhibit diamond
and gyroid structures. In electromagnetic and photonic systems, randomness has been shown to provide wide band
gaps and to suppress side lobes of array antennas and frequency selective surfaces. This combination of SVPCs and
randomness should provide more omnidirectional behavior, greater immunity to damage and deformations, and
identification of conspecifics and friend/foe. Moreover, it may be possible to broaden the bandwidth, self-collimate
over a wider range of angles, achieve stronger properties from low refractive index materials, and control light even
more abruptly.
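
As a simplified, purely illustrative starting point (a one-dimensional stand-in, not the three-dimensional diamond/gyroid solvers called for in Phase I below), the following Python sketch uses the standard characteristic-matrix method to show how even a low-index-contrast periodic stack opens a high-reflectance stop band; all indices and thicknesses are notional.

import numpy as np

def stack_reflectance(wavelengths, n_layers, d_layers, n_in=1.0, n_sub=1.0):
    # Normal-incidence reflectance of a 1-D layer stack via the characteristic (transfer) matrix method.
    R = np.empty(len(wavelengths))
    for k, lam in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / lam  # phase thickness of one layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        R[k] = abs((n_in * B - C) / (n_in * B + C)) ** 2
    return R

# 20 notional bilayers (indices 1.35 / 1.55, roughly chitin-like), quarter-wave at 550 nm.
n_pair, d_pair = [1.35, 1.55], [550e-9 / (4 * 1.35), 550e-9 / (4 * 1.55)]
R = stack_reflectance(np.linspace(400e-9, 800e-9, 201), n_pair * 20, d_pair * 20)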

PHASE I: Investigate the transmission and reflection intensities of visible and near-IR light in random SVPCs with
low index of refraction material.
• Investigation of two types of lattices is desired: diamond and gyroid lattices with low index of refraction.
• Perform material study to determine the range of indices of refraction closely resembling lattices in the insect
world while achieving the expected functionality.
• Investigate tuning mechanism(s) that allow the lattice to work in the visible and near-IR bands.
• Full simulation of wave propagation and band-gaps using COMSOL, MEEP/MPB, or in-house developed
software using open source languages. All codes and models are to be shared with AFRL.
• Prototype fabrication of the lattice to test basic functionality in the lab in a given band: propagation around sharp
bends and light focusing.

PHASE II: Fabricate and demonstrate several tunable lattices for each type (diamond and gyroid) and test in the
visible and near-IR spectra
• Tunability
• Multiplexing
• Light focusing
• Propagation around sharp bends
• Behavior of different polarizations of light
• Controlled reflection and transmission
• Controlled band gap structure

PHASE III DUAL USE APPLICATIONS: Fabricate multiplexing optical devices for fast and robust high
bandwidth communication; Fabricate light funnels for investigating biological sensing. Other devices are feasible
but are restricted to government projects and use not mentioned in this write up.

REFERENCES:
1. Raymond C. Rumpf and Javier Pazos, "Synthesis of spatially variant lattices," Opt. Express 20, 15263-15274
(2012)

2. Michielsen K, Stavenga D. Gyroid cuticular structures in butterfly wing scales: biological photonic crystals.
Journal of the Royal Society Interface. 2008;5(18):85-94. doi:10.1098/rsif.2007.1065.

3. Ingram A., Parker A. A review of the diversity and evolution of photonic structures in butterflies, incorporating
the work of John Huxley (The Natural History Museum, London from 1961 to 1990). Philosophical Transactions of
the Royal Society B: Biological Sciences. 2008;363(1502):2465-2480. doi:10.1098/rstb.2007.2258.

4. Kuilong Yu, Tongxiang Fan, Shuai Lou, Di Zhang, Biomimetic optical materials: Integration of nature’s design
for manipulation of light, Progress in Materials Science, Volume 58, Issue 6, July 2013, Pages 825-873

KEYWORDS: Photonic crystals, color from structure, self-collimating, multiplexing lattice.

TPOC-1: Dr Jimmy Touma


Phone: 850-882-0340
Email: jimmy.touma.1@us.af.mil

AF19A-T018 TITLE: Hardware-in-the-loop test bed for magnetic field navigation

TECHNOLOGY AREA(S): Weapons

ACQUISITION PROGRAM: --

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: Develop a hardware-in-the-loop simulator for future navigation concepts that use magnetic field
anomaly sensors. Develop a magnetic field global database and physical field generator to include sensors in
laboratory flight simulations of Air Force vehicles.

DESCRIPTION: Hardware-in-the-loop (HWIL) simulation is used to evaluate guidance, navigation, and control
implementations in Air Force flight vehicles during their development. Navigation is critical in performing a
mission that requires achieving a predetermined position, following a specific inertial flight path, or observing an
object at a known location. While GPS allows a vehicle to navigate accurately, alternative means of navigation must be found when GPS cannot be relied upon. Supplements to navigation might use terrain maps, star and planet
locations, optic flow, sky polarization, electro-magnetic mapping, and numerous other phenomena to bound the drift
in pure inertial navigation solutions. Use of known maps of geographically varying magnetic field vectors, similar to
using known maps of terrain features, has been proposed to augment conventional navigation approaches. It has
been hypothesized that some animals accurately travel large distances augmented by sensors that detect variations in
magnetic field strength. To investigate the potential benefit of this approach to the Air Force, replicating the Earth’s
magnetic field variations in a hardware-in-the-loop facility is desired. Such a hardware-in-the-loop simulator
currently does not exist. Availability would allow the Air Force to fly missions with integrated guidance, navigation,
and control concepts in a ground test facility and assess the limitations of magnetic field mapping approaches with
and without GPS.

Magnetic field maps with varying levels of resolution exist and are publicly available. The highest definition maps, maintained by the National Centers for Environmental Information, provide global information on geomagnetic anomalies. These maps include the geomagnetic main and crustal fields, providing magnetic field values (total field, dip, and declination) at any point above or below the Earth's surface with a 28 km resolution. A constant challenge
for HWIL testing is to ensure that the facility does not influence the results of the test through simulator errors or
time delays introduced into closed-loop control systems. It is desired that the simulator exceed the resolution of the sensor being tested and that its accuracy be sufficient to allow for repeatable test results. Geomagnetic sensor resolution is typically on the order of 100 uGauss. A study will need to be accomplished to determine simulator vector component accuracy requirements and dynamic range given expected Earth field variations and sensor accuracy.
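
As a rough, purely illustrative version of that trade (the 0.65 Gauss upper bound on the total field is an assumption; the 100 uGauss resolution is quoted above), the implied dynamic range and command-word width might be estimated as follows.

# Rough dynamic-range estimate (illustrative assumptions, not requirements).
import math

field_max_gauss = 0.65      # assumed upper bound on Earth's total field
sensor_res_gauss = 100e-6   # 100 uGauss sensor resolution (from topic text)

dynamic_range = field_max_gauss / sensor_res_gauss   # ~6,500:1
bits = math.ceil(math.log2(dynamic_range))           # ~13 bits per component

print(f"dynamic range ~ {dynamic_range:.0f}:1, ~{bits} bits per vector component")
# Margin for bipolar components, anomaly variation, and calibration headroom
# would push a practical field-generator command word toward 16+ bits.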

PHASE I: Perform an initial requirements study for the simulator. Engineer a conceptual design and establish a
computational model of field resolution, uniformity, dynamic range, and accuracy. Acquire publicly available
databases, design an architecture for database lookups, and document an interface protocol for facility simulation
computers. Experimentally establish feasibility using a surrogate sensor.
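
One possible, purely illustrative shape for the database-lookup piece of that architecture is bilinear interpolation over a regularly gridded anomaly map; the class name, synthetic one-degree grid, and file layout below are assumptions, not a required format.

# Minimal sketch of a gridded magnetic-anomaly lookup with bilinear
# interpolation (assumes a regular lat/lon grid; data here is synthetic).
import numpy as np

class AnomalyGrid:
    def __init__(self, lats, lons, values_nT):
        # lats, lons: 1-D ascending arrays; values_nT: 2-D array [lat, lon]
        self.lats, self.lons, self.values = lats, lons, values_nT

    def lookup(self, lat, lon):
        i = np.clip(np.searchsorted(self.lats, lat) - 1, 0, len(self.lats) - 2)
        j = np.clip(np.searchsorted(self.lons, lon) - 1, 0, len(self.lons) - 2)
        t = (lat - self.lats[i]) / (self.lats[i + 1] - self.lats[i])
        u = (lon - self.lons[j]) / (self.lons[j + 1] - self.lons[j])
        v = self.values
        return ((1 - t) * (1 - u) * v[i, j] + t * (1 - u) * v[i + 1, j]
                + (1 - t) * u * v[i, j + 1] + t * u * v[i + 1, j + 1])

# Example with a synthetic 1-degree grid standing in for a published anomaly map.
lats = np.arange(-90.0, 91.0, 1.0)
lons = np.arange(-180.0, 181.0, 1.0)
values = np.random.default_rng(0).normal(0.0, 200.0, (lats.size, lons.size))
grid = AnomalyGrid(lats, lons, values)
print(grid.lookup(30.47, -86.57))   # interpolated anomaly (nT) at a query point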

PHASE II: Finalize the simulator design. Procure components sufficient to build a prototype of both the scene generation system and the magnetic field emulator. Establish and document the simulator calibration approach assuming known sensor characteristics. Work with government researchers to establish a set of verification test cases to compare digital truth vs. sensed time histories along simulated flight paths. Install the prototype simulator in the AFRL KHILS facility to enable Air Force research.

PHASE III DUAL USE APPLICATIONS: Work with prime DoD contractors to enhance the simulator to meet their
requirements. Build final production simulators and transition to contractor facilities for support of internal research
of alternate navigation methods. Integrate the simulator with other navigation simulators, e.g., celestial.

REFERENCES:
1. M.J. Caruso, C. H. Smith, T. Bratland, R. Schneider, A New Perspective on Magnetic Field Sensing, Honeywell,
1998.

2. Nair, M., A. Chulliat, A. Woods, P. Alken, B. Meyer and R. Saltus, New approach to quantify uncertainty for
high-resolution magnetic reference models, Industry Steering Committee on Wellbore Survey Accuracy (ISCWSA)
45th meeting, March 17, 2017, The Ha

3. Maus, S., M. C. Nair, B. Poedjono, S. Okewunmi, J. D. Fairhead, U. Barckhausen, P. R. Milligan, J. Matzka, High Definition Geomagnetic Models: A New Perspective for Improved Wellbore Positioning, IADC/SPE Drilling Conference and Exhibition, 6-8 March 2012, San Diego, California, USA, ISBN: 978-1-61399-186-2, doi: 10.2118/151436-MS, March 2012.

4. Alken, P., A. Chulliat, M. Nair, B. Meyer, R. Saltus, A. Woods, N. Boneh, New advances in geomagnetic field
modeling, Industry Steering Committee on Wellbore Survey Accuracy (ISCWSA) 44th meeting, September 22nd,
2016, Glasgow, Scotland.

KEYWORDS: Magnetic anomaly grid, high-definition geomagnetic field, hardware-in-the-loop, alternate navigation, scenario generation, flight control

TPOC-1: Darryl Huddleston
Phone: 850-883-7060
Email: darryl.huddleston@us.af.mil

AF19A-T019 TITLE: Efficient numerical methods for mesoscale modeling of energetic materials

TECHNOLOGY AREA(S): Weapons

ACQUISITION PROGRAM: --

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: Enable efficient development of mesoscale-informed high explosive reactive flow models using
highly efficient numerical methods for use in system level simulations of Air Force weapons.

DESCRIPTION: Reliable employment of a weapon requires a firm understanding of the behavior of the explosive and its response to the surrounding environment. The two primary concerns in the design of explosives for specific munition applications are 1) the reliability of the explosive to detonate only when desired and 2) the controlled nature of the energy release during initiation and detonation. Sensitivity and energy release depend on a range of factors including, but not limited to, the meso-structural characteristics (void fraction, particle size and shape, binder fraction) of the explosive, the age of the explosive, and the loading conditions (mechanical and thermal) of the explosive during handling, transport, and storage. The effect of micro-structural features on initiation is currently a central question for mesoscale modeling.

An accurate reactive flow model is required to simulate explosive detonation. Reactive flow models may be
developed by calibration to experimental data or by numerical simulation. Developing reactive flow models using
numerical simulations requires large ensembles of 3D mesoscale simulations, and current mesoscale simulation codes require supercomputing-level resources to build a model for a single explosive formulation. To increase the practicality of numerically developing accurate reactive flow models, significant improvements in computational efficiency are needed. This topic seeks strategies and implementations to achieve these efficiency gains, such as 1) adaptive mesh refinement to focus computational effort on the regions of interest and 2) the use of high-order numerical methods.
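
As a purely illustrative sketch of the first strategy (the gradient-based flagging criterion, threshold, and field choice are assumptions, not a prescribed approach), cells near steep gradients such as shock or reaction fronts can be flagged for refinement:

# Minimal sketch of a gradient-based refinement criterion for block-structured
# AMR (illustrative only; threshold and field choice are assumptions).
import numpy as np

def flag_cells_for_refinement(rho, dx, threshold=0.1):
    """Flag 1-D cells where the normalized density jump across a cell exceeds
    the threshold, e.g. near a shock or reaction front."""
    grad = np.abs(np.gradient(rho, dx))
    indicator = grad * dx / np.maximum(np.abs(rho), 1e-30)
    return indicator > threshold

# Example: a smoothed step in density (a stand-in for a shock front).
x = np.linspace(0.0, 1.0, 200)
rho = 1.0 + 0.5 * np.tanh((x - 0.5) / 0.01)
flags = flag_cells_for_refinement(rho, x[1] - x[0])
print(f"{flags.sum()} of {flags.size} cells flagged for refinement")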

PHASE I: Develop a numerical strategy that addresses the tasks identified in the topic description, a plan for
implementation in a simulation code, and a plan for comparison of the simulation code’s relevant metrics against
other numerical methods. Demonstrate proof of concept of the modeling strategy on simplified test problems.
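
A simplified test problem can be as small as 1-D linear advection with a high-order discretization; the sketch below (fourth-order central differences in space with classical RK4 time stepping, periodic boundaries, and illustrative parameters) is one possible proof-of-concept vehicle, not a required benchmark.

# 1-D linear advection u_t + a u_x = 0 with a 4th-order central scheme and RK4
# (periodic boundaries; all parameters are illustrative assumptions).
import numpy as np

n, a, cfl = 200, 1.0, 0.4
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-200.0 * (x - 0.5) ** 2)   # smooth initial pulse
u = u0.copy()

def rhs(u):
    # 4th-order central approximation of -a * du/dx on a periodic grid
    ux = (-np.roll(u, -2) + 8.0 * np.roll(u, -1)
          - 8.0 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)
    return -a * ux

nsteps = int(round(1.0 / (cfl * dx / abs(a))))
dt = 1.0 / nsteps   # land exactly on t = 1, one full period of the domain
for _ in range(nsteps):
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

print("max error after one period:", np.abs(u - u0).max())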

PHASE II: Implement the methodology from Phase I in a suitable simulation code. Deliver source code and
documentation for the simulation tool. Demonstrate and document improvements achieved using these advanced
numerical methods in the mesoscale simulation capability with problems of Air Force interest.

PHASE III DUAL USE APPLICATIONS: Further development and refinement of highly efficient numerical
algorithms implemented in previous phases. Integrate into production-ready simulation tools and make available for
commercial use. Develop and document efficient numerical algorithms to enable interoperability with widely-used
commercial and research codes, if no such algorithms are available.

REFERENCES:
1. Rai, Nirmal K., and H. S. Udaykumar. “Mesoscale Simulation of Reactive Pressed Energetic Materials under
Shock Loading.” Journal of Applied Physics, vol. 118, no. 24, 2015

2. A. Dubey, A. Almgren, J. Bell, M. Berzins, S. Brandt, G. Bryan, P. Colella, D. Graves, M. Lijewski, F. Löffler, B. O'Shea, E. Schnetter, B. V. Straalen, and K. Weide, A survey of high level frameworks in block-structured adaptive mesh refinement packages.

3. Zhang, X., and Shu, C-W. “Maximum-Principle-Satisfying and Positivity-Preserving High-Order Schemes for
Conservation Laws: Survey and New Developments.” Proceedings of the Royal Society A: Mathematical, Physical
and Engineering Sciences, vol. 467, no. 2134, 2011, pp. 2752–2776

KEYWORDS: Numerical methods, high order, adaptive mesh refinement, mesoscale modeling, simulation.

TPOC-1: Sushilkumar Koundinyan
Phone: 850-875-2686
Email: sushilkumar.koundinyan.1@us.af.mil

AF19A-T020 TITLE: Guided Automation of Molecular Beam Epitaxy for Swift Training to Optimize
Performance (GAMESTOP) of New Materials

TECHNOLOGY AREA(S): Materials/Processes

ACQUISITION PROGRAM: --

OBJECTIVE: The objective of this topic is to reduce the time to optimize new thin film materials by coupling state
of the art control schemes with in-situ characterization techniques. This topic is aimed at molecular beam epitaxy
(MBE) growth, although similar approaches can be employed in other epitaxial growth processes.

DESCRIPTION: This topic seeks to combine recent developments in machine learning and optimization routines with novel in-situ characterization techniques to guide epitaxial growth for the rapid development of new materials. Molecular beam epitaxy is one of the most controllable deposition techniques, producing the highest quality thin film materials. However, the parameter space to optimize the material, and thus the performance of an electronic or electro-optic device, is extremely wide. Typically, the fluxes from each individual effusion cell, the substrate material, and the growth temperature are the key process variables affecting the growth rate and key material characteristics such as surface roughness, alloy composition, doping level, and radiative lifetime. Therefore, developing a new thin film material grown via MBE is a complex process involving many growths in which extensive characterization is used to determine the process/property correlations needed to produce an optimized material response.

This topic is aimed at reducing the time to optimize new materials by coupling state-of-the-art control schemes, such as machine optimization, simulated annealing algorithms, and neural nets, to manage the MBE growth via in-situ characterization techniques that allow for the determination of doping, alloy composition, carrier mobility, etc. Obviously, the choice of characterization techniques will depend on the material/device characteristic to be optimized. At a minimum, composition, doping, substrate temperature, and layer thickness should be measured/controlled. The machine learning may be performed using areal information or from sequential growths. Approaches will be evaluated on the rate at which they converge to the solution. Areal approaches will require imaging sensors and/or accurate models for the process variables and imaging sensors for the dependent material characteristics, but potentially give vastly more information per growth than sequential growth approaches.

Several key functionalities must be demonstrated in these proposals. First, proposals should demonstrate a clear understanding of which sensors, and what types, can be used in the MBE process and why they are important to the optimization. Optimization routines must be able to handle multiple sensor inputs as well as provide feedback for closed-loop control between growth parameters and relevant characteristics. This includes trade-offs in properties such as mobility and carrier concentration. For instance, a device with the best mobility and the highest carrier concentration would be ideal; however, as carrier concentration goes up, mobility goes down, so there is an optimum solution. Also, optimization routines must be able to search for the global minimum in a multi-dimensional growth parameter space that may contain local minima. Offerors should estimate convergence rates and discuss how the situations used to obtain these estimates are similar to the MBE process in terms of complexity.
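
As a purely illustrative sketch of such a global search (simulated annealing over two notional growth parameters against a synthetic, multi-modal figure of merit; the objective, parameter ranges, and cooling schedule are assumptions and not a model of a real MBE chamber):

# Minimal simulated-annealing sketch over two notional MBE growth parameters
# (substrate temperature, group-III flux). The objective below is a synthetic
# multi-modal stand-in, NOT a physical model; lower is assumed better.
import math
import random

def figure_of_merit(temp_c, flux):
    return ((temp_c - 520.0) / 50.0) ** 2 + (flux - 1.2) ** 2 \
           + 0.3 * math.sin(temp_c / 10.0) * math.cos(5.0 * flux)

def simulated_annealing(n_steps=2000, t0=1.0, cooling=0.995, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(400.0, 650.0), rng.uniform(0.5, 2.0)]   # initial guess
    cur = best = figure_of_merit(*x)
    best_x, temp = list(x), t0
    for _ in range(n_steps):
        cand = [x[0] + rng.gauss(0.0, 5.0), x[1] + rng.gauss(0.0, 0.05)]
        c = figure_of_merit(*cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability,
        # which lets the search escape local minima.
        if c < cur or rng.random() < math.exp(-(c - cur) / temp):
            x, cur = cand, c
            if c < best:
                best, best_x = c, list(cand)
        temp *= cooling
    return best_x, best

print(simulated_annealing())  # each "evaluation" would be a growth plus sensor read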

PHASE I: Given the complexity of the MBE process, there exist different levels of properties/parameters: some basic (Level 1), such as material thickness, composition, and surface roughness, that are easier to characterize in-situ and have a direct correlation to end device performance, and more specific properties/parameters (Level 2), such as material interface properties, mobility, carrier or sheet concentration, and carrier lifetimes, that are more difficult to determine in-situ but greatly influence overall device performance. In Phase I it is desired that control and optimization of at least all Level 1 properties be performed. Offerors must specify the material systems to which they have access; at a minimum this should include an InAlGaAs-based system. Optimization of initial parameters must be demonstrated in a region of process space to be defined. The exact problem to be optimized will
be specified after Phase I is awarded. This approach is being used to allow a fair evaluation of progress and
development potential of the different learning and measurement suites being used at the conclusion of Phase I. Ex-
situ measurements will be allowed in Phase I but a clear path must be demonstrated for implementation of the
sensors and measurements in the system to allow in-situ measurements with similar accuracy during Phase II. The
success of the learning algorithm will be determined by the number of growths needed to reach the optimized
conditions.

PHASE II: The Phase II effort will integrate the in-situ sensors into the system and perform/demonstrate further optimization. Final device characteristics will be optimized during Phase II and an overall system with sensors will be developed. A marketing plan will be developed.

PHASE III DUAL USE APPLICATIONS: In Phase III the system will be commercialized and marketed.
Additional capabilities will be added based upon the needs/desires of potential end users.

REFERENCES:
1. “Molecular Beam Epitaxy: Fundamentals and Current Status”, M. A. Herman and H. Sitter, Springer Series in
Material Science 7, Springer Verlag

2. “Materials Fundamentals of Molecular Beam Epitaxy”, Jeffery Tsao, Academic Press

3. Pavel Nikolaev, Daylond Hooper, Frederick Webber, Rahul Rao, Kevin Decker, Michael Krein, Jason Poleski, Rick Barto, and Benji Maruyama, “Autonomy in materials research: a case study in carbon nanotube growth”, npj Computational Materials, Volume 2, Article 16031 (2016).

4. Yue Liu, Tianlu Zhao, Wangwei Ju, and Siqi Shi, “Materials discovery and design using machine learning”, Journal of Materiomics, Volume 3, Issue 3, September 2017, Pages 159-177.

KEYWORDS: molecular beam epitaxy (MBE) growth, thin film materials

TPOC-1: Kurt G. Eyink
Phone: 937-656-5710
Email: kurt.eyink@us.af.mil

AF19A-T021 TITLE: Carbon-Carbon Manufacturing Process Modeling-Aeroshells

TECHNOLOGY AREA(S): Materials/Processes

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR
Parts 120-130, which controls the export and import of defense-related material and services, including export of
sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual
use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type
of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions.
Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data
under US Export Control Laws. Please direct ITAR specific questions to the AF SBIR/STTR Contracting Officer,
Ms. Michele Tritt, michele.tritt@us.af.mil.

OBJECTIVE: Develop, demonstrate and validate a processing model to predict the physical and mechanical
properties of structural carbon-carbon materials for aeroshells. Compatibility with industry standard design and
modeling software is anticipated.

DESCRIPTION: The US Air Force has a need for improved understanding and modeling of processes used to
manufacture structural carbon-carbon (C-C) materials used in hypersonic aeroshells. At least for short durations, C-
C has the capability to withstand very high temperatures while maintaining structural integrity. Even though these
materials have been used for decades, their manufacturing (mfg) processes are still a craft. In addition, the effects of
variations in the starting materials and processing conditions are largely unknown. Over the years, some starting
materials have become obsolete and the effects of changing these materials even slightly are unpredictable.
Therefore, every time a new process or starting material is contemplated for use on a DoD weapon system, a new
property database must be generated. In addition, physics-based prediction of the capability of new processes to
create high quality materials and structures does not exist. Improved understanding and modeling of the
manufacturing processes used to make C-C composites is anticipated to lead to improved properties, increased
process repeatability, as well as reduced manufacturing and qualification costs and decreased manufacturing times.
The C-C processing model may be empirical (physics- and chemistry-based), analytical, numerical, or a
combination, but should, to the degree possible, reflect the underlying physics and chemistry of the system. The
starting materials and their physical configuration (e.g., prepreg, 3-D woven preform), component geometry,
processing methods and parameters (temperature, pressure, etc.), and other relevant factors should be taken into
consideration. Two of the most widely-used commercial processing methods to make structural C-C should be
included in the model under this effort.

The resulting processing model should accurately predict final component geometry, density, mechanical and
physical properties, and variations within the component. The model/modeling architecture should be flexible
enough to incorporate new processes and/or customization of current processes in the future. Geometry and other
relevant information should be easily importable into the model, and results should be exportable to existing design and analysis software (e.g., FEM software) commonly used in the aerospace industry. The model
should run in a reasonable period of time to allow multiple analyses so that the optimum processes can be selected
for a given part geometry and starting material. In Phase I, the model should focus on a simple flat panel geometry
with a single fiber geometry. In Phase II, the model shall demonstrate that it can accurately predict the properties of
an additional, more complex geometry component with a variation in cross section. In addition, the model’s
flexibility shall be demonstrated by modeling an additional, more complex fiber geometry. The contractor shall
perform validation and verification (V&V) of the model.
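
One hypothetical shape for the model's input/output interface is sketched below; the field names, units, and placeholder correlations are illustrative assumptions only, and the actual architecture, physics content, and FEM export format are left to the offeror.

# Hypothetical input/output interface for a C-C processing model (sketch only;
# field names and the placeholder correlations are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class ProcessInputs:
    preform_type: str            # e.g., "prepreg" or "3D-woven"
    fiber_volume_fraction: float
    peak_temperature_c: float
    pressure_mpa: float
    densification_cycles: int

@dataclass
class PredictedProperties:
    final_density_g_cc: float
    open_porosity_pct: float
    tensile_modulus_gpa: float   # intended for export to downstream FEM tools

def predict(inputs: ProcessInputs) -> PredictedProperties:
    # Placeholder correlations standing in for the physics/chemistry-based
    # process model; real predictions would come from the validated model.
    density = min(1.9, 1.3 + 0.12 * inputs.densification_cycles)
    porosity = max(2.0, 18.0 - 3.0 * inputs.densification_cycles)
    modulus = 60.0 + 25.0 * inputs.fiber_volume_fraction
    return PredictedProperties(density, porosity, modulus)

print(predict(ProcessInputs("3D-woven", 0.55, 1000.0, 6.9, 4)))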

To aid in transitioning the model for use by industry, it is anticipated that the model may be offered in the future as a
module or add-on to currently available commercial-off-the-shelf (COTS) modeling software. The contractor should
keep technology transition in mind as the model is created to help ensure successful transition.

PHASE I: Perform a requirements analysis and create a document specifying all of the functionality that should be
included in the C-C processing model. Obtain data and background information needed to create the model.
Demonstrate the feasibility of a C-C process modeling concept for at least one C-C material system and process.
Perform V&V of the Phase I model.

PHASE II: Refine the modeling approach defined in Phase I. Create a C-C processing model for an additional C-C mfg process, a more complex geometry, and a more complex fiber architecture. Incorporate the effects of defects &
variability. Demonstrate ability to predict material properties based on process nominal values. Demonstrate
sensitivity of predictions to manufacturing parameter input variability. Perform model V&V. Determine
interoperability with industry standard design and analysis software.

PHASE III DUAL USE APPLICATIONS: Finalize model refinement & validation. Develop appropriate technology
transition strategies that focus on commercialization of the developed modeling tools. Develop a business strategy
that ensures the software can continue to be upgraded as new information and modeling techniques become
available.

REFERENCES:
1. Vignoles, Gerard L., et al., “Analytical stability study of the densification front in carbon- or ceramic-matrix composites processing by TG-CVI,” Chemical Engineering Science, vol. 62, no. 22, pp. 6081-6089, Nov 2007.

2. Ravikumar, N.L., et al., “Numerical simulation of the degradation behavior of the phenolic resin matrix during
the production of carbon/carbon composites,” Fullerenes, Nanotubes and Carbon Nanostructures, vol. 19, No. 5, pp.
353-372, Jul 2011.

3. Dietrich, S., et al., “Microstructure characterization of CVI-densified carbon/carbon composites with various
fiber distributions,” Composites Science and Technology, vol. 72, no. 15, pp. 1892-1900, 2012.

KEYWORDS: Carbon-Carbon Composites; process modeling

TPOC-1: Karla L. Strong
Phone: 937-904-4598
Email: karla.strong.1@us.af.mil
