

Advanced Simulation-based Assessments – STAGE

STAGE Simulation Assessment(s)

As an integrated component of ISDD, we can reuse our existing integrated-system (fielded product) eXpress models in the companion Operational Support Simulation tool, “STAGE”.

STAGE offers an unprecedented capability to examine the impact of the design’s diagnostic capability, in terms of its strengths and weaknesses, over any sustainment lifecycle. This provides a palette for discovering and “trading” previously undeterminable areas of cost and value that are strictly a reflection of the constraints of the design’s diagnostic integrity.

The STAGE Simulation does NOT need to establish a separate database, and it can be objectively performed by YOU!

Used during Design Development

The STAGE Simulation enhances Design Influence and provides a vast new trade space for decision-making, targeting assessments of new or alternative design configurations. STAGE provides an opportunity to evaluate and balance the sustainment philosophy against cost-effective alternatives, trading for optimal operational success, safety, availability and affordability. In this context, these analyses occur PRIOR TO any design being fielded; the decisions are NOT based upon a fielded design driving an ensuing maintenance decision.

Diagnostic Effectiveness as Related to Test Coverage Constraints

The objective is to evaluate the design of the FULL INTEGRATED SYSTEM – NOT a single component or any single subsystem. Design decisions are to be based upon fully describing “Test Coverage” at every level of the design and for any and all types of “tests” (e.g., SBIT, CBIT, PBIT, inspection, sensors, etc.), and any mix thereof in aggregate for the integrated system design. All test coverage will be validated with the appropriate SMEs to ensure the quality and accuracy of the test data captured (diagnostic conclusions from “pass” or “fail” test results) for each piece of the design, down to one level below the replaceable unit.
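As a rough illustration, the sketch below shows one minimal way such coverage data could be structured during capture. The class names, test categories, and failure-mode identifiers are hypothetical and do not represent the eXpress schema.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical test categories drawn from the paragraph above; the actual
# eXpress capture is far richer and is not reproduced here.
class TestKind(Enum):
    SBIT = "startup BIT"
    CBIT = "continuous BIT"
    PBIT = "periodic BIT"
    INSPECTION = "inspection"
    SENSOR = "sensor"

@dataclass
class TestCoverage:
    test_id: str
    kind: TestKind
    # Failure modes this test can observe, recorded one level below the
    # replaceable unit, as the SME-validated capture requires.
    detected_failure_modes: set[str] = field(default_factory=set)
    sme_validated: bool = False

# Example: a continuous BIT covering two failure modes of a power supply.
t1 = TestCoverage("PSU_CBIT_01", TestKind.CBIT,
                  {"PSU.regulator.short", "PSU.fan.stall"}, sme_validated=True)
print(t1.kind.value, "covers", len(t1.detected_failure_modes), "failure modes")
```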

Diagnostic Effectiveness as Related to Maintenance Constraints

Our context will include the capture of the LORA and any corrective actions or time-related SWAGs (as current as possible, iteratively updated and fine-tuned as the design matures) from Maintenance Engineering. We will also include the capture of any available reliability data (FE, FR, etc.) as it is used or produced by any other preferred RE tool, source or method, as it becomes available. We will (re)use any independently owned disciplinary data investment (on an iterative basis) for inclusion into our design data capture environment, eXpress, as sketched below.
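A minimal sketch of that iterative folding-in of externally owned data follows; the dictionary fields, components, and repair levels are illustrative assumptions, not eXpress’s import format.

```python
# Externally produced reliability data and LORA assignments (assumed values),
# merged into one capture record per component, updated as the design matures.
reliability = {"PSU": {"failure_rate": 3.2e-5}, "PUMP": {"failure_rate": 1.1e-5}}
lora = {"PSU": "depot", "PUMP": "organizational"}   # repair level per LORA

capture = {}
for comp in set(reliability) | set(lora):
    capture[comp] = {
        "failure_rate": reliability.get(comp, {}).get("failure_rate"),
        "repair_level": lora.get(comp, "TBD"),   # a SWAG until the LORA firms up
    }
print(capture["PSU"])
```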

Preparing to Organize and Structure the Captured Data into a Knowledgebase

The data capture tool, eXpress, will organize and structure all design data elements and assessment products as they become available, for the purpose of establishing exhaustive design functional and failure interdependencies. When all design interdependencies are described and integrated with and among any companion designs, we form a “knowledgebase”.

Additionally, the knowledgebase will include a myriad of design-relevant attributes and the newly organized structure of interdependencies within the design. We can then determine the effectiveness of our diagnostic (including any POF sensing) interrogation capability for our integrated system. In this process, we are able to generate a myriad of assessment metrics for any “sliced and diced” subset(s) of the design(s) contained within our system(s), enabling a method for discovering test coverage effectiveness at the system level and for any specific operational state(s).
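To make the idea of interrogating interdependencies concrete, here is a toy computation of fault-detection coverage over a failure-propagation graph. The graph, node names, and tests are invented for illustration; they are not the eXpress knowledgebase format.

```python
from collections import deque

# Edges: failure effects propagate from failure modes toward test points.
propagates_to = {
    "pump.seal.leak":  ["pressure_node"],
    "pump.motor.open": ["current_node"],
    "valve.stuck":     ["pressure_node"],
    "sensor.drift":    [],              # no test point observes this mode
    "pressure_node":   ["press_test"],
    "current_node":    ["curr_test"],
}
tests = {"press_test", "curr_test"}
failure_modes = ["pump.seal.leak", "pump.motor.open", "valve.stuck", "sensor.drift"]

def detectable(mode: str) -> bool:
    """A mode is detectable if its effects reach at least one test."""
    seen, queue = set(), deque([mode])
    while queue:
        node = queue.popleft()
        if node in tests:
            return True
        for nxt in propagates_to.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

covered = [m for m in failure_modes if detectable(m)]
print(f"fault detection coverage: {len(covered)}/{len(failure_modes)}")
```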

Generating and Reusing the Designs’ Knowledgebase

Once the design has been processed in eXpress, we will use the “cooked” knowledgebase from eXpress, which now contains the combined and integrated design data from each independent design discipline. We will then reuse this identical knowledgebase through an export facility in eXpress that produces a free, public XML format, “DiagML”, which can be imported into many other third-party test tools, DSI Workbench, or STAGE, DSI’s Operational Support & Health Management Simulation Tool.
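The hand-off itself is just XML out, XML in. The fragment below sketches pulling component and test identifiers from an XML knowledgebase; the element and attribute names are invented placeholders and do NOT reflect the actual DiagML schema.

```python
import xml.etree.ElementTree as ET

# A stand-in knowledgebase export. Real DiagML is defined by its own schema;
# these tags are purely illustrative.
doc = """<design name="demo">
  <component id="PSU"/>
  <component id="PUMP"/>
  <test id="PSU_CBIT_01" covers="PSU"/>
</design>"""

root = ET.fromstring(doc)
components = [c.get("id") for c in root.findall("component")]
tests = {t.get("id"): t.get("covers") for t in root.findall("test")}
print(components, tests)
```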

We reuse the exact same data from which we have already produced the static, single-point assessments that satisfy CDRLs as specified by DoD customer requirements, to produce a whole new set of “simulated” metrics that describe the effects of any of our design decisions over any time interval (“lifetime”) – but now with the consideration of maintenance. For the purposes of our simulation, maintenance will be performed in accordance with the LORA as specified by Maintenance Engineering and with the predictive or corrective maintenance philosophy, but constrained by the effectiveness of our sensors (test coverage) per operational state of our system.
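The sketch below conveys the flavor of such a simulation: failures arrive stochastically, but a failure is only repaired when the test coverage of the current operational state actually detects it. Every rate, state, and detection figure here is invented for illustration; STAGE’s models are far richer.

```python
import random

random.seed(1)
HOURS = 10_000
FAILURE_RATE = 1 / 800          # per-hour failure probability (assumed)
# Probability a failure is detected, per operational state (assumed).
DETECTION = {"flight": 0.70, "ground": 0.95}

def simulate() -> tuple[int, int]:
    latent = 0                  # undetected (latent) failures carried forward
    repairs = 0
    for hour in range(HOURS):
        state = "ground" if hour % 24 < 8 else "flight"
        if random.random() < FAILURE_RATE:
            latent += 1
        # Each latent failure may be caught by this state's test coverage.
        caught = sum(random.random() < DETECTION[state] for _ in range(latent))
        repairs += caught
        latent -= caught
    return repairs, latent

repairs, latent = simulate()
print(f"repairs: {repairs}, latent failures at end of life: {latent}")
```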

STAGE Simulations with Traditional Diagnostic Data Analytics

In any maintenance activity we may remove multiple components (some failed, some non-failed), as determined by our maintenance schedule; our prognostic effectiveness (and any related scope, horizon, accuracy and confidence based upon POF studies); our opportunistic replacement strategy (replacing for convenience, hoping to increase availability or defer the repeated cost of additional components with a lower RUL); or any corrective maintenance based upon the limitations of our Fault Detection and Fault Isolation capability. These maintenance actions will all have an impact upon replacement time(s), cost(s), the number of components replaced, and the accuracy or effectiveness of diagnosis/prognosis.
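One of those decisions, opportunistic replacement, can be sketched as a simple selection rule: when a corrective action opens the system, also pull non-failed neighbors whose RUL falls below a threshold, deferring a second teardown. The component names, RUL figures, and threshold are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    failed: bool
    rul_hours: float            # prognostic RUL estimate

RUL_THRESHOLD = 200.0           # replace opportunistically below this (assumed)

def select_removals(bay: list[Component]) -> list[Component]:
    corrective = [c for c in bay if c.failed]
    if not corrective:
        return []               # no teardown, so no opportunity
    opportunistic = [c for c in bay
                     if not c.failed and c.rul_hours < RUL_THRESHOLD]
    return corrective + opportunistic

bay = [Component("pump", True, 0.0),
       Component("valve", False, 150.0),    # low RUL: take it now
       Component("filter", False, 900.0)]   # healthy: leave it alone
print([c.name for c in select_removals(bay)])  # ['pump', 'valve']
```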

STAGE Simulations Balancing Prognostics with Traditional Diagnostic Data Analytics

“STAGE” can show the impact of any maintenance philosophy, or “mix” of maintenance approaches, including “Condition-Based Maintenance” (CBM), which is frequently considered with the incorporation of “Prognostics” in the operational and maintenance environment. Variables are first related to prognostic requirements, such as horizon (Remaining Useful Life) and confidence/accuracy level. The resulting diagnostics, together with these prognostic tests, can then be run in a STAGE simulation to show the impact on maintenance actions and downtime over time.
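A minimal sketch of turning those prognostic variables into a schedulable pass/fail test follows; the confidence-interval formulation is an assumption for illustration, not STAGE’s internal algorithm.

```python
def prognostic_test(rul_estimate: float, rul_sigma: float,
                    horizon: float, confidence_z: float = 1.645) -> bool:
    """Pass only if, at the chosen confidence level, the component's RUL
    lower bound still clears the prognostic horizon."""
    lower_bound = rul_estimate - confidence_z * rul_sigma
    return lower_bound >= horizon   # False = flag for maintenance

# Example: 500 h estimated RUL, +/-180 h uncertainty, 300 h horizon.
# At ~95% one-sided confidence the lower bound is ~204 h < 300 h: fail.
print(prognostic_test(500.0, 180.0, 300.0))  # False -> schedule maintenance
```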

Vary Maintenance Paradigms

With the Operational Simulation driven from the same design knowledgebase from which we initially produced the static metrics, we will now be able to determine which time-based metrics provide useful information for making not just design decisions, but also “Maintenance” approach decisions. We will likely be very surprised by how differently our design behaves in a simulated assessment than in a traditional metric-producing paradigm. Where we may have produced ideal static, independent, discipline-based metric results, we may now produce very surprising or even disturbing discipline-interdependent, time-based results through a stochastic simulation process – or vice versa, depending on maintenance type, frequency, philosophy and the diagnostic integrity of our system.
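The sketch below shows, in miniature, why the stochastic results can surprise: the same static 90% detection figure yields a whole distribution of lifetime outcomes, not one number. All parameters are illustrative.

```python
import random
import statistics

random.seed(7)

def one_lifetime(n_failures: int = 40, p_detect: float = 0.90) -> int:
    # Count failures that slip past detection over one simulated lifetime.
    return sum(random.random() > p_detect for _ in range(n_failures))

missed = [one_lifetime() for _ in range(2000)]
print("static expectation of missed failures:", 40 * 0.10)
print("simulated mean/min/max:",
      statistics.mean(missed), min(missed), max(missed))
```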

Getting Creative with STAGE

Let’s be aware that STAGE has an abundance of metrics, and some may have more utility as sustainment objectives vary. With STAGE, you’ll discover many new metrics and examine the impact of assessing a design and maintenance strategy (one that can also evolve with design updates). We can select any of STAGE’s “stock” calculations – algorithms that produce graphs ready for presentations, or values that can be placed in a grid to compare results across multiple lifetimes – enabling the opportunity to find the optimum ‘BALANCE’ of maintenance approaches that optimizes the impact on the four (4) main goals of Systems Testability: Operational Success, Safety, Operational Availability and Cost of Ownership.
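For instance, the “values in a grid” comparison might look like the toy tabulation below, where the row labels and metric names echo the four goals and the numbers are placeholders standing in for values a STAGE run would supply.

```python
# Placeholder results for two maintenance mixes over one simulated lifetime.
lifetimes = {
    "corrective-only": {"Op Success": 0.91, "Safety Incidents": 3,
                        "Availability": 0.88, "Cost ($M)": 12.4},
    "CBM-heavy":       {"Op Success": 0.96, "Safety Incidents": 1,
                        "Availability": 0.93, "Cost ($M)": 10.1},
}
cols = list(next(iter(lifetimes.values())))
print(f"{'mix':<16}" + "".join(f"{c:>18}" for c in cols))
for mix, metrics in lifetimes.items():
    print(f"{mix:<16}" + "".join(f"{metrics[c]:>18}" for c in cols))
```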

STAGE offers an extensive list of these “stock calculations” to choose from.

There are many other considerations that can be assessed as well. The goal is to allow the assessment to accommodate any objective on any complex program before significant investment is made, rather than dismissing the opportunity to discover whether any of the metrics (conventional and/or progressive) have merit in exploiting or representing the diagnostic integrity of the design. Ultimately, the only answer that matters about the value of any metric (be it a discrete value or one produced by a stochastic process) is the answer that supports the context of the operational and sustainment requirements.
