# Bid Rules for ESA Tender 182213
## Purpose
This note converts the lessons from prior SESAR tender feedback into a practical checklist for bidding ESA tender `182213`, whose objective is to characterise and assess risks from subsonic space debris re-entry fragments crossing airspace.
The core rule is simple:
**Do not submit a broad "interesting concept" proposal. Submit a measurable, validation-led, operationally grounded proposal with explicit impact logic, clear task ownership, and a direct line from physics to airspace decision support.**
---
## Why 182213 Fits SpaceCom
Tender `182213` is unusually well aligned with the strongest part of the SpaceCom plan:
- uncontrolled re-entry risk
- airspace hazard from debris
- aircraft vulnerability
- operational disruption and conservative closures
- the need to improve decision-making under uncertainty
This matches the core SpaceCom positioning in `docs/MASTER_PLAN.md`:
- space-domain prediction plus aviation-domain decision support
- FIR intersection analysis
- hazard corridors
- NOTAM drafting support
- multi-ANSP coordination
The proposal should therefore be framed as:
**an operational risk-assessment and decision-support capability for uncontrolled re-entry debris in airspace, not merely an orbital analysis tool**
---
## Lessons from Past ESRs (Evaluation Summary Reports)
### 1. Objectives must be measurable
Past proposals were repeatedly marked down for objectives that were broad, brief, or hard to verify.
Do:
- define 3 to 5 concrete objectives
- attach each objective to an observable output
- state how success will be measured at project end
Do not:
- use vague wording like "improve awareness", "support resilience", or "enable better decisions" without measurement logic
For 182213, every objective should have:
- a technical metric
- an operational metric
- a validation method
Example pattern:
- improve prediction of subsonic fragment airspace exposure
- quantify uncertainty bounds for aircraft encounter risk
- reduce unnecessary airspace restriction area and/or duration versus conservative baseline methods
---
### 2. KPIs and performance logic must appear early
The strongest prior bid, SCAN, was praised for quantitative KPI-based validation. The weaker bids were criticised for qualitative impact claims without traceable performance logic.
For 182213, include a KPI table in the first substantive section, not as an afterthought.
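A minimal illustrative layout for such a table; every metric, target, and method below is a placeholder to be replaced with agreed values, not a proposed commitment:

| KPI family | Example metric | Illustrative target | Validation method |
| --- | --- | --- | --- |
| Hazard prediction accuracy | Footprint overlap with reconstructed event | >= 0.7 Jaccard overlap | Historical back-testing |
| Uncertainty calibration | Observed frequency within stated risk band | All bands within tolerance | Monte Carlo back-test |
| Operational timeliness | Updated assessment to updated recommendation | < 15 min | Operational replay |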
Minimum KPI families:
- hazard prediction accuracy
- uncertainty calibration
- aircraft exposure estimation accuracy
- operational usefulness for ANSP decision-making
- reduction in unnecessary conservatism
- timeliness of updates during active events
Suggested KPI examples (two of these are sketched in code after this list):
- error in predicted affected airspace footprint versus reconstructed event outcome
- calibration of risk bands against back-tested scenarios
- false positive and false negative rates for affected airspace warnings
- percentage reduction in precautionary closure area relative to current conservative practice
- time from updated re-entry assessment to updated operational recommendation
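As a sketch of how the footprint and warning-rate KPIs could be computed, assuming affected airspace is rasterised into grid-cell IDs; all identifiers and values below are invented for illustration:

```python
# Sketch of two candidate KPI computations, assuming affected airspace is
# rasterised into grid-cell IDs. All identifiers and values are illustrative.

def footprint_error(predicted: set[str], actual: set[str]) -> dict[str, float]:
    """Compare a predicted affected-airspace footprint against the
    reconstructed event outcome, both given as sets of grid-cell IDs."""
    fp = len(predicted - actual)      # over-warned cells (conservatism)
    fn = len(actual - predicted)      # missed cells (safety-critical)
    union = len(predicted | actual)
    return {
        "false_positive_rate": fp / len(predicted) if predicted else 0.0,
        "false_negative_rate": fn / len(actual) if actual else 0.0,
        "jaccard_overlap": len(predicted & actual) / union if union else 1.0,
    }

# Invented back-test: predicted vs reconstructed footprint cells.
print(footprint_error(
    predicted={"LECM_12", "LECM_13", "LECB_07", "LECB_08"},
    actual={"LECM_13", "LECB_07"},
))
```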
---
### 3. Methodology must be technically specific
Previous weaker bids were penalised when methods, candidate models, datasets, and technical assumptions were under-specified.
For 182213, specify:
- breakup and fragment descent modelling approach
- treatment of subsonic fragment regimes
- atmospheric assumptions and data sources
- aircraft vulnerability / encounter model
- air traffic density or exposure model
- uncertainty propagation method (a minimal sketch closes this section)
- validation dataset strategy
- comparison baseline
The evaluators should never have to guess:
- which models you will test
- which data you will use
- how uncertainty is handled
- what the benchmark is
If multiple methods are possible, state the decision logic up front:
- baseline method
- advanced method
- selection criteria
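To make "uncertainty propagation method" concrete, here is a minimal Monte Carlo sketch under deliberately crude assumptions (flat Earth, constant air density, terminal-velocity descent, constant eastward wind); every parameter value is a hypothetical placeholder, not a project choice:

```python
# Minimal Monte Carlo sketch of subsonic fragment drift under uncertainty.
# Assumes flat Earth, constant air density, terminal-velocity descent, and a
# constant eastward wind; all parameter values are hypothetical placeholders.
import math
import random

RHO = 0.4   # air density in kg/m^3, rough value around 10 km (assumption)
G = 9.81    # gravitational acceleration in m/s^2

def descent_drift(beta: float, wind_e: float, alt0: float = 10_000.0) -> float:
    """Eastward drift in metres from alt0 to ground for a fragment falling
    at the terminal velocity implied by its ballistic coefficient beta
    (m / (Cd * A), in kg/m^2), fully advected by a constant wind."""
    v_term = math.sqrt(2.0 * beta * G / RHO)   # terminal fall speed, m/s
    return wind_e * (alt0 / v_term)            # wind speed * fall time

random.seed(0)
drifts = sorted(
    descent_drift(
        beta=random.lognormvariate(math.log(150.0), 0.5),  # uncertain kg/m^2
        wind_e=random.gauss(30.0, 10.0),                   # uncertain m/s
    )
    for _ in range(10_000)
)
print(f"median drift: {drifts[5_000] / 1e3:.1f} km")
print(f"central 95%:  {drifts[250] / 1e3:.1f} - {drifts[9_750] / 1e3:.1f} km")
```

Even at this toy level, the pattern is the one the bid should name explicitly: sample the uncertain inputs, propagate them through the descent model, and report calibrated intervals rather than point estimates.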
---
### 4. Validation must be concrete, not deferred
One repeated weakness in earlier bids was saying that validation detail would be defined later.
For 182213, validation must already be visible in the bid.
State:
- the scenarios to be tested
- the data sources for those scenarios
- the validation environment
- the comparison cases
- the acceptance criteria
Recommended validation structure:
1. Historical case back-testing
2. Monte Carlo scenario testing
3. Operational replay with airspace and flight-density overlays
4. Expert review with aviation stakeholders
If possible, include a headline validation case:
- a Long March 5B re-entry scenario of the kind that forced the 2022 Spanish airspace closures
That immediately shows operational relevance and mirrors the tender text.
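A minimal sketch of step 1 combined with a calibration check, assuming each historical scenario records a stated probability band and an observed outcome; all records below are invented:

```python
# Minimal calibration back-test for stated risk bands. Each historical
# scenario records a stated probability band and an observed outcome
# (1 = airspace was actually affected). All records below are invented.
from collections import defaultdict

scenarios = [
    {"band": (0.0, 0.1), "hit": 0}, {"band": (0.0, 0.1), "hit": 0},
    {"band": (0.1, 0.3), "hit": 1}, {"band": (0.1, 0.3), "hit": 1},
    {"band": (0.3, 0.6), "hit": 1}, {"band": (0.3, 0.6), "hit": 0},
]

by_band: dict[tuple[float, float], list[int]] = defaultdict(list)
for s in scenarios:
    by_band[s["band"]].append(s["hit"])

for (lo, hi), hits in sorted(by_band.items()):
    observed = sum(hits) / len(hits)   # empirical frequency in this band
    ok = lo <= observed <= hi          # does it fall inside the stated band?
    print(f"band {lo:.1f}-{hi:.1f}: observed {observed:.2f}, calibrated={ok}")
```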
---
### 5. Trace every claim to operational impact
Past bids lost marks when impact was credible but too qualitative.
For 182213, every technical output should map to an operational outcome:
- fragment model improvement -> better hazard corridor definition
- aircraft encounter modelling -> clearer risk thresholds for operators
- uncertainty modelling -> more defensible airspace restrictions
- event updates -> better timing of restrictions and release decisions
The impact section should explicitly answer:
- what will change for ANSPs
- what will change for airlines
- what will change for regulators
- what will change for re-entry risk assessment practice
Do not leave impact at the level of "better safety awareness".
Use impact language such as:
- fewer unnecessary closures
- more targeted restrictions
- faster updates under uncertainty
- more defensible criteria for action
- stronger basis for future regulatory guidance
---
### 6. Show the path to standards, regulation, and adoption
Winning bids are not just technically strong; they show where the outputs go next.
For 182213, spell out the route from project outputs to:
- ESA / space safety use
- aviation authority acceptance
- ANSP procedures
- future standardisation and guidance material
Potential channels to mention, if justified in the consortium and work plan:
- ICAO-related airspace risk handling
- EASA / national authority guidance
- EUROCONTROL / ANSP operational procedures
- future ATM / STM coordination frameworks
The key is not to overclaim, but to show a credible uptake path.
---
### 7. Be explicit about automation, human decision support, and cybersecurity
Past proposals were criticised when automation levels or cybersecurity handling were unclear.
For 182213:
- state clearly that the output is advisory decision support, not automated airspace command
- define the human decision-maker in the loop
- state how integrity, traceability, and update provenance are maintained
- describe cybersecurity proportionately if software or data exchange is in scope
Useful framing:
- the system supports, but does not replace, authorised aviation decision-makers
- all operational recommendations are traceable to model version, inputs, and timestamped assumptions (a minimal record sketch follows this list)
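One way to make that traceability framing concrete is a provenance record attached to every recommendation; the sketch below assumes a simple per-update schema, and all field names and values are hypothetical, not a schema agreed anywhere in the bid:

```python
# Minimal sketch of a traceable recommendation record with per-update
# provenance. Field names and all values below are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Recommendation:
    event_id: str        # re-entry event being assessed
    model_version: str   # exact model build that produced the advice
    input_snapshot: str  # hash or URI of the frozen input data bundle
    issued_at: datetime  # when the recommendation was issued
    advice: str          # advisory text for the human decision-maker
    assumptions: tuple[str, ...] = field(default_factory=tuple)

rec = Recommendation(
    event_id="REENTRY-2025-014",
    model_version="riskengine-0.3.2",
    input_snapshot="sha256:3fa1b2c4",
    issued_at=datetime.now(timezone.utc),
    advice="Advise restricting sectors LECM-12/13 for 40 minutes.",
    assumptions=("GFS 0.25 deg winds", "breakup altitude 78 km +/- 5 km"),
)
print(rec.model_version, rec.issued_at.isoformat())
```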
---
### 8. Work packages must be concrete at task level
Several past bids were penalised for vague WPs, missing sub-tasks, unclear milestones, and unclear partner roles.
For 182213, each WP should show:
- purpose
- tasks
- lead
- contributors
- outputs
- milestone(s)
- acceptance criteria
No partner should be named in the consortium without a visible task-level role.
Minimum good structure:
- WP1 Project management and quality assurance
- WP2 Requirements, scenarios, and operational criteria
- WP3 Physics and fragment risk modelling
- WP4 Aircraft vulnerability and airspace exposure modelling
- WP5 Integrated platform / demonstrator and validation
- WP6 Exploitation, standards, and regulatory uptake
If the tender scope is smaller, compress the WPs but keep the same traceability.
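A minimal WP entry template showing the expected level of task detail; all names, dates, and roles below are hypothetical:

```markdown
### WP3 Physics and fragment risk modelling
- Purpose: quantify subsonic fragment survival, dispersion, and airspace risk
- Tasks: T3.1 breakup model selection; T3.2 descent and drift modelling;
  T3.3 uncertainty propagation and calibration
- Lead: Partner A; Contributors: Partner B (T3.2), Partner C (T3.3)
- Outputs: D3.1 fragment risk model and technical note
- Milestone: M3 fragment model validated on historical cases (month 10)
- Acceptance: KPI thresholds from the KPI table met on the back-test set
```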
---
### 9. Milestones must align with the handbook and the internal logic
Earlier feedback repeatedly picked up avoidable issues:
- inconsistent durations
- missing maturity gate
- deliverables landing after the logical close of technical work
- unclear PM tables
Before submission, run a consistency check on:
- project duration
- WP dates
- milestone dates
- deliverable dates
- PM totals
- partner PM subtotals
- budget tables versus narrative tables
This is not cosmetic. Evaluators treat these errors as evidence that delivery control may be weak.
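Part of this check is mechanical and worth scripting; a minimal sketch, assuming person-month tables are kept as plain dictionaries (all numbers below are placeholders):

```python
# Minimal pre-submission consistency check, assuming person-month tables are
# kept as plain dictionaries. All numbers below are placeholders.
wp_pm = {"WP1": 6, "WP2": 10, "WP3": 18, "WP4": 14, "WP5": 20, "WP6": 8}
partner_pm = {"PartnerA": 40, "PartnerB": 24, "PartnerC": 12}
declared_total = 76   # the single PM figure quoted in the narrative

wp_total = sum(wp_pm.values())
partner_total = sum(partner_pm.values())
assert wp_total == declared_total, f"WP PMs sum to {wp_total}, not {declared_total}"
assert partner_total == declared_total, f"partner PMs sum to {partner_total}"
print(f"PM tables consistent at {declared_total} PM")
```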
---
### 10. Exploitation must be concrete, not generic
Weak bids often described dissemination reasonably well but left exploitation too vague.
For 182213, separate these clearly:
- communication: who hears about the project
- dissemination: who receives results
- exploitation: who uses what, how, and why
Exploitation should identify:
- primary users
- adoption pathway
- productisation or service pathway
- next-phase funding or procurement path
- what SkyNav will own or commercialise after the project
For SpaceCom, exploitation should be tied to:
- aviation safety decision support for re-entry events
- ANSP-facing operational tooling
- integration into broader SpaceCom platform modules
- future institutional and commercial deployments
---
### 11. The bid should feel smaller, sharper, and more provable than the broad SESAR bids
The prior feedback suggests a recurring risk: trying to sound system-level and strategic without enough measurable technical substance.
For 182213, the better move is:
- narrower scope
- sharper technical method
- stronger validation
- clearer operational end user
- simpler consortium story
This tender appears to reward a focused, technically credible, high-clarity proposal more than a sprawling programme narrative.
That should favour an SME-led or SME-prominent bid if the scope is disciplined.
---
## Recommended Proposal Spine for 182213
### One-sentence positioning
SpaceCom will deliver a validated risk-assessment and decision-support capability for subsonic re-entry debris in airspace, combining fragment modelling, aircraft exposure assessment, and operational airspace impact logic to support safer and less conservative response decisions.
### Core objectives
- characterise subsonic fragment behaviour after destructive re-entry
- quantify aircraft exposure and vulnerability to subsonic debris in airspace
- integrate airspace and traffic-density context into re-entry debris risk assessment
- validate operationally useful decision criteria for airspace restrictions and release
- produce a pathway toward institutional adoption and future standardisation
### Core outputs
- fragment risk model
- aircraft encounter / vulnerability model
- integrated risk engine
- operational decision criteria
- validation report using historical and synthetic scenarios
- regulator / operator uptake package
---
## Red Flags to Avoid
- objectives that cannot be measured
- qualitative impact claims without KPIs
- saying validation will be defined later
- vague references to AI or analytics without specific methods
- unclear partner roles
- weak or generic exploitation language
- no explicit operational decision-maker or user
- no cyber / integrity treatment if software exchange is involved
- no benchmark against current conservative practice
---
## Submission Gate Checklist
- [ ] Objectives are specific, measurable, and testable
- [ ] Each objective has at least one KPI and one validation method
- [ ] Methodology names the models, datasets, assumptions, and baselines
- [ ] Validation cases are already defined in the proposal
- [ ] Impact is quantified wherever possible
- [ ] SESAR-style performance logic is mirrored even if this is not a SESAR call
- [ ] Operational user and decision context are explicit
- [ ] Standards / regulatory uptake path is credible and specific
- [ ] Automation and human-in-the-loop position are explicit
- [ ] Cybersecurity / integrity handling is addressed if relevant
- [ ] WPs include task-level detail and named partner roles
- [ ] Milestones, deliverables, PMs, and budget are internally consistent
- [ ] Exploitation is concrete and linked to SpaceCom's roadmap
---
## Bottom Line
The lesson from the past tenders is not that the concepts were weak.
The lesson is that evaluators rewarded proposals that were:
- more measurable
- more technically explicit
- more validation-led
- more traceable from task to output to KPI to impact
For tender `182213`, SpaceCom has a strong thematic fit. The decisive factor will be whether the bid reads like a precise operational science-and-software programme with defensible metrics, rather than a broad strategic vision.