SBIR/STTR Specific Topic Open Period

AF CYBERWORX

SBIR/STTR

Through the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, America’s Seed Fund provides non-dilutive funding to help small businesses develop innovative technologies and bring them closer to commercialization. AF Cyberworx is hosting three topics in this funding round, offering a unique opportunity for businesses to align their solutions with the program’s objectives. Explore the details below to learn more about our focus areas and how to participate.

AF CYBERWORX

OUR THREE SPECIFIC TOPICS

The SBIR 25.4 / STTR 25.D Open Period runs from May 28 – June 25. During this period, small businesses can review the solicitations; technical questions should be asked directly through the DoD SBIR/STTR page. We encourage interested businesses to take advantage of this window to gather critical insights and finalize their applications. If you need guidance on the application process or want to determine whether your project aligns with our SBIR/STTR focus areas, review our topic FAQs or visit AFWerx.com.

AI/ML-ENHANCED RISK MANAGEMENT FRAMEWORK

FAQs
Which parts of the RMF process are we expected to automate? Must it be “all of RMF”?

Phase II must demonstrate a TRL 6 prototype that automates the RMF elements you already proved feasible in your Phase I-type work. Example areas include control tailoring, evidence ingestion/normalization, POA&M generation, and continuous monitoring dashboards. Full end-to-end automation is not required in Phase II; prioritize the pieces that shorten the path to an ATO and feed directly into Phase III’s cATO + ConMon objective.
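
For illustration only, here is a minimal Python sketch of one such element, turning a failed-control finding into a draft POA&M record; the field names and the 90-day default are assumptions made for the example, not a mandated schema.

```python
import json
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Finding:
    control_id: str   # e.g., "AC-2(3)" from an assessment result
    weakness: str     # short description of the deficiency
    severity: str     # "low" | "moderate" | "high"

def draft_poam_entry(finding: Finding, days_to_remediate: int = 90) -> dict:
    """Map one failed-control finding to a draft POA&M record.
    Every key here is illustrative, not an official POA&M schema."""
    return {
        "control_id": finding.control_id,
        "weakness_description": finding.weakness,
        "risk_level": finding.severity,
        "scheduled_completion": (date.today() + timedelta(days=days_to_remediate)).isoformat(),
        "status": "Open",
        "milestones": [f"Remediate {finding.control_id} and re-assess"],
    }

if __name__ == "__main__":
    f = Finding("AC-2(3)", "Inactive accounts not disabled within the required period", "moderate")
    print(json.dumps(draft_poam_entry(f), indent=2))
```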

Traditional ATO, cATO, or both?

Design should not preclude future cATO workflows; however, Phase II evaluation will use a standard ATO package.

Are any RMF tasks “off limits” to automation?

Tasks that require inherently governmental judgment—risk acceptance by the Authorizing Official and final SCA concurrence—must keep a human in the loop. Everything else is fair game if you can show outcome quality at least equal to the current manual baseline.

Which DoD systems/APIs are priority for integration?

The government is coordinating to provide an eMASS sandbox (IL4/5) to offerors who wish to test their product. High-value feeds identified by stakeholders are ACAS/Tenable, STIG automation output, HBSS/ESS, and Nessus Agent results. Use open, machine-readable formats (JSON, OSCAL, YAML) wherever possible.
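
As a hedged sketch of what an open, machine-readable hand-off can look like, the snippet below flattens a generic vulnerability-scan CSV export into JSON records; the column names and file name are illustrative guesses and would need to match the actual feed you receive.

```python
import csv
import json
from pathlib import Path

def normalize_scan_csv(path: Path) -> list[dict]:
    """Flatten a vulnerability-scan CSV export into machine-readable records.
    Column names ("Plugin ID", "Host", "Severity", "Name") are illustrative
    and must be adjusted to the actual export format."""
    with path.open(newline="") as fh:
        return [
            {
                "source": "scan-export",
                "finding_id": row.get("Plugin ID"),
                "asset": row.get("Host"),
                "severity": row.get("Severity"),
                "title": row.get("Name"),
            }
            for row in csv.DictReader(fh)
        ]

if __name__ == "__main__":
    export = Path("scan_export.csv")   # hypothetical file name
    if export.exists():
        print(json.dumps(normalize_scan_csv(export), indent=2))
```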

Deployment environment—Cloud One, Platform One, on prem? IL levels?

Recommended practice: target IL4/5 first and design so the solution can be re-hosted at IL6 or higher on demand. A FedRAMP High (IL4/5) cloud such as Cloud One Azure is acceptable for Phase II as long as you can export the data required for eventual on-prem installs at classified levels.

Are specific AI/ML stacks required or prohibited?

No mandated stack. Your technical volume must justify model choice, training data provenance, and how you will meet DoD Responsible AI guardrails. Black-box SaaS models that cannot be audited are discouraged.

Primary user persona—RMF engineers only?

Design recommendation: Phase II UI/UX should serve RMF practitioners first (ISSM, SCA, AO staff) but expose read-only dashboards that non-cyber decision-makers can consume without training.

Will sample artifacts or realistic traffic be provided?

Offerors should plan to obtain representative artifacts; the Government may provide sample data subject to classification and CUI constraints. Synthetic network traffic is acceptable for Phase II development; live feeds come later.

What volume of data should we expect?

Estimate based on stakeholder discussions: roughly 8-10 GB per system per assessment cycle (documents, screenshots, scan CSVs).

What metrics will justify future funding or adoption?

Follow-on funding or Phase III opportunities have not been announced. Suggested metrics include the following; a minimal calculation sketch appears after the list:

  • Percent reduction in calendar days to ATO decision
  • Labor hours saved per control family
  • Evidence re-use rate across systems
  • False-positive versus true-positive control-failure detection
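
If a concrete form helps, here is a short sketch of how two of the suggested metrics might be computed; the figures in the example are placeholders, not program baselines.

```python
def ato_days_reduction(baseline_days: float, automated_days: float) -> float:
    """Percent reduction in calendar days from assessment start to ATO decision."""
    return 100.0 * (baseline_days - automated_days) / baseline_days

def evidence_reuse_rate(reused_artifacts: int, total_artifacts: int) -> float:
    """Share of evidence artifacts reused across systems instead of regenerated."""
    return reused_artifacts / total_artifacts

if __name__ == "__main__":
    # Placeholder figures for illustration only.
    print(f"{ato_days_reduction(270, 120):.1f}% fewer days to ATO")
    print(f"{evidence_reuse_rate(42, 100):.0%} evidence re-use")
```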

Single monolithic solution or multiple micro-apps?

Either is acceptable; evaluation criteria weight a coherent end-to-end workflow and ease of integration higher than packaging. Make your architectural choice explicit in the proposal.

Do we need to name the candidate system that already has an ATO?

No. The Government will furnish a representative test bed and sanitized data set; offerors do not need to name or secure access to an existing ATO'd system for Phase II.

How many Phase II awards will be made?

One (1).

What travel should we budget for Phase II?

Required travel is not anticipated. Reviews and demonstrations are expected to be conducted virtually; in-person visits will be arranged only if mission essential and approved by the Government.

How do we show Phase I feasibility in a Direct to Phase II (D2P2) proposal?

Show Phase I feasibility by placing, in Volume 5 of the D2P2 package, a short synopsis plus the detailed “feasibility documentation” (no page limit) that proves you already achieved the topic’s Phase I goals: lab or pilot results for a TRL 4-6 prototype, test reports, a demo video or screenshots, and a table that cross-walks each Phase I requirement to the exact evidence page. The Air Force will not evaluate a D2P2 proposal if this file set fails to establish scientific and technical merit, or if the feasibility work was performed under any prior SBIR/STTR award. The work must have been “substantially performed” by your firm and principal investigator, and you must own or license any associated IP.

Is CUI processing required during Phase II?

Not during the prototype, but your architecture must be upgradable to handle Moderate-level CUI. Use FIPS-validated cryptography and document how you will inherit CMMC Level 2 controls.

Are memory safe languages or supply chain frameworks desired?

Neither is a mandatory requirement. Show how your technology choices mitigate common weakness categories.

What timeline milestones are expected?

Offerors should propose a milestone schedule that credibly achieves TRL 6 within the 24-month PoP.

Is there an integration test API or export format we must use for RMF data?

The government is coordinating to provide an eMASS sandbox for testing. For initial proposals, no specific format is required.

Who owns the Phase III transition?

Initial stakeholder discussions are with SAF/CN (USAF CIO) and 16 AF’s Enterprise Cyberspace & Information Dominance directorate. No Phase III owner has been identified.

DETECTION OF UNCREWED AIRCRAFT SYSTEMS IN CLUTTERED ENVIRONMENTS

FAQs

Coming Soon!

AI/ML-GENERATED DECOY NETWORKS

FAQs
What is the end goal of this effort—operational deployment or experimental use?

The ultimate objective is operational deployment. While Phase I centers on feasibility, the desired outcome is a deployable system that functions within active defensive cyber operations. The decoy network should lure adversaries away from operational systems into high-fidelity, dynamic environments. These environments must also support persistent monitoring to gather actionable intelligence on adversary behavior and tactics. Solutions should be designed with real-world scalability, integration potential, and mission relevance in mind.

What level of realism and adaptability is expected in the decoy environment?

Realism is critical. The decoy must continuously evolve to remain believable under scrutiny from state-sponsored adversaries. This includes realistic user behavior, data flows, services, and infrastructure. The environment should appear valuable and exploitable—enticing enough to capture attention—but not so vulnerable or static that it is easily identified as a trap. AI/ML should be leveraged to monitor real or simulated networks and adapt the decoy in real time or through retraining. A well-calibrated balance of authenticity and stealth is essential for long-term deception.

What role should AI/ML play in the proposed solution?

AI/ML is expected to be central to the system’s design. It should be used to learn from live, synthetic, or simulated network data—capturing behaviors, services, and traffic patterns—and generating decoy environments that mirror those observations. Additionally, AI/ML should drive adaptation over time, model user and adversary interactions, and detect intrusions or behavioral shifts. The solution should support autonomous updates and behavior generation while providing defenders with real-time insights into threat activity.
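
As one possible, greatly simplified illustration of that learn-then-generate loop, the sketch below profiles observed flows and emits a decoy-host plan that mirrors the service mix; the flow-record fields and the planning heuristic are assumptions for the example, not a required design.

```python
import random
from collections import Counter

def profile_services(flow_log: list[dict]) -> Counter:
    """Count observed (dst_port, protocol) pairs; the flow-record keys are
    illustrative assumptions about whatever telemetry is available."""
    return Counter((f["dst_port"], f["protocol"]) for f in flow_log)

def generate_decoy_plan(profile: Counter, n_hosts: int = 5) -> list[dict]:
    """Emit a decoy-host plan whose service mix mirrors the observed profile,
    with some jitter so hosts are not identical (a simple realism heuristic)."""
    services, weights = zip(*profile.items())
    plan = []
    for i in range(n_hosts):
        chosen = random.choices(services, weights=weights, k=random.randint(2, 4))
        plan.append({"host": f"decoy-{i:02d}", "services": sorted(set(chosen))})
    return plan

if __name__ == "__main__":
    flows = ([{"dst_port": 443, "protocol": "tcp"}] * 40    # HTTPS-heavy enterprise mix
             + [{"dst_port": 1883, "protocol": "tcp"}] * 10  # MQTT
             + [{"dst_port": 502, "protocol": "tcp"}] * 5)   # MODBUS/TCP
    print(generate_decoy_plan(profile_services(flows)))
```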

What operational modes and response capabilities should the system support?

The system must support fully autonomous, semi-automated, and manual control modes. Autonomous operation ensures persistent deception with minimal operator overhead. Semi-automated and manual controls enable tailored intervention during targeted threat tracking. The system should be capable of real-time response to adversary behavior—modifying topology, adjusting services, or escalating alerts as needed. This responsiveness increases the credibility of the decoy while enhancing operator situational awareness.

Are specific architectures, technologies, or protocols required?

No specific architecture is mandated. Offerors may use AI/ML, expert systems, virtualization, containerization, or hybrid approaches. The key requirement is a flexible, scalable system that can emulate real operational environments across a range of protocols (e.g., HTTP, MQTT, MODBUS, DNP3) and behaviors. The architecture should allow for realistic traffic and service emulation, seamless integration of learning pipelines, and continuous refinement based on observed inputs.
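
To make service emulation concrete at the simplest level, here is a minimal standard-library sketch of a single HTTP decoy endpoint that answers plausibly and logs every request; a real offering would need far higher fidelity across the protocols listed above, and the banner and page content here are purely illustrative.

```python
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class DecoyHTTPHandler(BaseHTTPRequestHandler):
    """Answer like an ordinary internal web service while recording every
    request for later analysis of adversary behavior."""
    server_version = "Apache/2.4.57"   # plausible-looking banner (illustrative)
    sys_version = ""                   # suppress the default Python suffix in the Server header

    def do_GET(self):
        with open("decoy_http.log", "a") as log:   # capture who asked for what
            log.write(f"{datetime.now(timezone.utc).isoformat()} "
                      f"{self.client_address[0]} GET {self.path}\n")
        body = b"<html><body><h1>Maintenance Portal</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DecoyHTTPHandler).serve_forever()
```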

What deployment environments and integration considerations should be anticipated?

The system should be designed for deployment in a range of environments, including secure cloud, on-premises, or hybrid configurations. A FedRAMP-approved cloud is not required, but cybersecurity best practices and modular design are essential. Solutions should be aligned with long-term integration pathways, including Risk Management Framework (RMF) compliance and Authority to Operate (ATO) readiness. Flexibility, portability, and ease of deployment will be critical for successful transition to operational environments.

How is realism currently evaluated in decoy networks, and what heuristics matter most?

The key indicator is whether the adversary realizes they’re in a decoy. If a hacker disengages early or alters their tactics, that’s a sign the deception failed. Past efforts often lacked credible user behavior, protocol fidelity, or exhibited static responses—making them too easy to spot.

Should adaptability be aligned with known adversary TTPs (e.g., MITRE ATT&CK), or should it operate autonomously?

Either approach can be effective, but proposals should clearly define their strategy. Aligning with adversary TTPs offers transparency and control, while autonomous adaptation can offer broader flexibility. Systems that can learn, evolve, and reconfigure to remain convincing are the goal.

Will Phase II testing require specific sandbox environments?

No specific testbed is currently designated. Phase I proposers should suggest an appropriate sandbox or simulation plan for future evaluation, showing how their architecture supports realism, adversary engagement, and dynamic behavior without needing a fixed environment.

What matters more—user behavior simulation or infrastructure fidelity?

Both are important, but adversaries often detect fakes through inconsistencies in user behavior and data flow patterns. A convincing rhythm of interaction—user logins, file access, network chatter—can outweigh perfect system specs.
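
As a rough illustration of that behavioral rhythm, the sketch below generates a synthetic workday of user events with bursty timing; the event types and the timing model are assumptions chosen only to show the idea, not a validated behavior model.

```python
import random
from datetime import datetime, timedelta

EVENT_TYPES = ["logon", "file_read", "file_write", "http_request", "logoff"]

def synth_user_day(user: str, start: datetime, n_events: int = 60) -> list[dict]:
    """Generate a plausible-looking workday of user events: activity separated
    by variable gaps, bounded by a logon and a logoff. Purely synthetic data
    for development against the behavioral-realism requirement."""
    t = start
    events = [{"ts": t.isoformat(), "user": user, "event": "logon"}]
    for _ in range(n_events):
        t += timedelta(seconds=random.expovariate(1 / 300))   # ~5-minute mean gap
        events.append({"ts": t.isoformat(), "user": user,
                       "event": random.choice(EVENT_TYPES[1:-1])})
    events.append({"ts": (t + timedelta(minutes=5)).isoformat(),
                   "user": user, "event": "logoff"})
    return events

if __name__ == "__main__":
    for e in synth_user_day("jsmith", datetime(2025, 6, 2, 7, 58))[:5]:
        print(e)
```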

Should decoy systems aim for adversary characterization, or is tracking enough?

Detection and tracking are baseline. If done securely, extracting insights about adversary behavior, tools, or intent adds significant value. However, attribution is secondary to effective deception and must not risk exposure of the decoy.

Will these decoy systems integrate into broader SOC or DCO toolchains?

Yes, that’s the anticipated direction. Interoperability with other cybersecurity platforms will enhance value, especially if systems support standards like STIX/TAXII or OpenC2 for alert sharing and orchestration.
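
For example, an alert from the decoy could be wrapped in an object shaped like a STIX 2.1 Indicator for sharing with SOC tooling; the sketch below follows the publicly documented STIX 2.1 field layout as best understood here, so validate the output against your own STIX/TAXII stack before depending on it.

```python
import json
import uuid
from datetime import datetime, timezone

def decoy_alert_to_indicator(src_ip: str, description: str) -> dict:
    """Wrap an adversary source IP observed in the decoy in an object shaped
    like a STIX 2.1 Indicator. Field names follow the public STIX 2.1 spec;
    confirm conformance with your own tooling."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": "Decoy-network interaction",
        "description": description,
        "pattern": f"[ipv4-addr:value = '{src_ip}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

if __name__ == "__main__":
    print(json.dumps(decoy_alert_to_indicator(
        "203.0.113.7", "Source interacted with high-fidelity decoy host decoy-03"), indent=2))
```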

What types of networks might decoys be expected to emulate?

Proposers should be prepared to replicate a range of environments—enterprise IT, industrial control systems (ICS/SCADA), telecom infrastructure, or operational DoD networks. Modularity and mission-specific tailoring will be advantageous.

Are there any SWaP or software restrictions, and should decoys support edge deployment?

Centralized deployment is generally preferred, but edge-capable systems are viable, especially in contested environments. If edge-launch is envisioned, the Phase I feasibility study should clearly articulate how bandwidth, SWaP, and security concerns will be addressed.

What are the gaps in existing commercial decoy solutions?

Many commercial products lack the depth, adaptability, or automation needed to deceive advanced actors. They’re often static, manually configured, or easily fingerprinted. This topic seeks solutions that feel alive to an adversary—adaptive, dynamic, and difficult to dismiss.

What is the cost structure for a Phase I award?

Phase I proposals may request up to $140,000 for a 6-month effort. Emphasis should be placed on feasibility, automation potential, and operational relevance—especially in contested environments.

What documentation is required for proposal submission?

Submissions include seven volumes: Cover Sheet, Technical Volume, Cost Volume, Company Commercialization Report, Supporting Documents, Fraud, Waste & Abuse acknowledgment, and Foreign Disclosures. See the official DAF STTR Phase I instructions (Release 8) and the DSIP portal for full details.

Will data be provided to support training or validation?

No government datasets will be furnished in Phase I. Proposers should rely on public or synthetic data. If access to sensitive or representative data is essential for advancement, that need should be described in the feasibility study with a plan for how it would be addressed.


More information can be found at

AFWerx.com
