“Time flies over us but leaves its shadow behind.”  — Nathaniel Hawthorne

Layers of protection analysis (LOPA) considers enabling conditions as part of the assessment. Enabling conditions account for an aspect of likelihood that, if unaccounted for, would overstate the risk of a hazard.

Some organizations, however, deem many of these enabling conditions inappropriate and do not allow them in a risk assessment.

One of these enabling conditions is “time at risk.”

Time at Risk

LOPA is a form of likelihood assessment. The assessment begins with identifying the initiating cause and the rate at which the initiating cause occurs—its failure rate. We typically express failure rates in terms of failures per year, and in LOPA we express this failure rate to the nearest order of magnitude: once per year, once per decade, once per century, etc. The failure rates for common initiating causes come from widely available databases and are generally accepted. They assume continuous operation—8,760 hours per year.

Some hazards are not present continuously. Time at risk takes this into account.

As described in the CCPS book, Layer of Protection Analysis – “the purple book” – “…it is necessary to adjust the data to reflect that the component or operation is not subject to failure during the entire year, but only that fraction of the year when it is operating or ‘at risk.’ This is normally done by multiplying the base failure rate by the fraction of the year the component is operating.”

Consider a process that is vulnerable to control loop failure during a cooling step. The step takes 8 hours and is performed 40 times each year. The typical failure rate for a control loop is once per decade (1 x 10⁻¹/year). This, then, is the failure rate to use for this scenario:

F = 1 x 10⁻¹/year x 8 hours/step x 40 steps/year x year/8,760 hours = 3.7 x 10⁻³/year
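The adjustment above can be sketched in a few lines of code. The variable names are illustrative, not from the article; the numbers are the article's example:

```python
# Time-at-risk adjustment: a control loop with a base failure rate of
# once per decade is only at risk during a cooling step that runs
# 8 hours per step, 40 steps per year.

HOURS_PER_YEAR = 8760

base_failure_rate = 1e-1   # failures per year (once per decade)
hours_per_step = 8
steps_per_year = 40

# Fraction of the year the component is actually operating, or "at risk"
time_at_risk_factor = hours_per_step * steps_per_year / HOURS_PER_YEAR

# Base failure rate scaled by the time-at-risk factor
adjusted_frequency = base_failure_rate * time_at_risk_factor

print(f"time-at-risk factor: {time_at_risk_factor:.3g}")   # 0.0365
print(f"adjusted frequency:  {adjusted_frequency:.2g}/yr") # 0.0037/yr
```

The adjusted frequency, 3.7 x 10⁻³/year, is more than an order of magnitude lower than the base failure rate.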

Because LOPA is a simplified tool of process risk assessment, some would argue that instead of using the value as calculated, the resulting time‑at‑risk factor should be rounded to the nearest order of magnitude, 1 x 10⁻¹. They wouldn’t, however, argue against using a time‑at‑risk factor.

Nonetheless, there are those who argue against the use of time‑at‑risk factors at all.

Arguments Against Using Time-At-Risk Factors

Time-at-risk factors average the risk of a particular hazard over some extended period. They weight the finite likelihood of failure when receptors (personnel, the environment, the community) are at risk against the non-existent likelihood of failure when receptors are not at risk. There is nothing wrong with this approach when estimating the overall risk of a hazard. However, a time-at-risk factor dilutes the calculated risk when the question is “what is the risk of this hazard while it is in service?” For that question, time at risk is not relevant.

There is another occasion when we want to know, and base our risk-reduction strategies on, the risk of a hazard while it is in service: when estimating the risk of equipment that is used for many different processes. Consider a process equipment train used for 25 different, but similar, processes. Assume that each campaign lasts about 2 weeks. Let’s imagine that the risk from that equipment train is X for a particular process and that the tolerable risk is 10X. The risk from any one process is low enough that there would be no requirement to reduce the risk for that process.

However, if the risk from each process is about the same, the cumulative risk in the equipment train is 25X. While no individual process exceeds the tolerable risk, the overall risk is 2.5 times greater than the tolerable risk, which demands that something be done to reduce the risk.
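The cumulative-risk argument can be made concrete with a short sketch. The 25 processes, the per-process risk X, and the 10X tolerable risk are the article's figures; normalizing X to 1 is an arbitrary choice for illustration:

```python
# Cumulative risk in an equipment train shared by many campaigns.
X = 1.0                              # risk of one process (normalized)
per_process_risk = [X] * 25          # 25 similar processes, risk ~X each
tolerable_risk = 10 * X

cumulative_risk = sum(per_process_risk)   # 25X

# No individual process exceeds the tolerable risk...
assert all(r < tolerable_risk for r in per_process_risk)

# ...but the equipment train as a whole exceeds it by a factor of 2.5.
print(cumulative_risk / tolerable_risk)   # 2.5
```

Slicing the assessment finely enough makes every slice look tolerable; only the sum reveals the problem.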

When a process is divided into increasingly smaller segments, eventually the risk from any one segment is so small as to be negligible. The argument against using a time-at-risk factor is to avoid dividing the hazards into such small slices of time as to render the risk falsely negligible.

Confusion With Mission Time

Some confuse time at risk with another aspect of likelihood calculations: mission time. A time-at-risk factor affects the likelihood of initiating a hazardous scenario. It is a factor that adjusts the initiating failure rate. Mission time or, in many cases, the proof test interval, on the other hand, is an essential element in calculating the Probability of Failure on Demand (PFD) of a safeguard. We use mission time when our safeguard is a non-repairable system, so that we can calculate PFD; otherwise, we use the proof test interval when our safeguard is a repairable system, so that we can calculate Average Probability of Failure on Demand (PFDavg).

Both PFD and PFDavg are functions of λt. The first term, λ, is the failure rate of the safeguard. It is similar to the initiating cause frequency, F, in that λ and F are expressed as failures per year. The second term, t, is a time. For PFD, t is the time since putting the safeguard into service, culminating ultimately in T, the safeguard’s mission time, which is when the safeguard comes out of service. For PFDavg, t is the proof test interval.

We calculate the PFD or PFDavg using the product of failure rate (failures per year) and the relevant time (years). For either, the shorter the time, the smaller the probability of failure. The calculation of PFD or PFDavg based on mission time or proof-test interval is separate and distinct from the calculation of time at risk.
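The λt relationships can be sketched as follows. The linear form PFD ≈ λt holds while λt is small (the exact form is 1 − e^(−λt)); the λτ/2 expression for PFDavg is the standard low-demand approximation and is an assumption added here, since the article says only that PFDavg is a function of λt:

```python
lam = 0.1  # safeguard failure rate, failures/year (once per decade)

def pfd(t, rate=lam):
    """PFD of a non-repairable safeguard after t years in service.
    Linear approximation rate*t, valid while rate*t << 1."""
    return rate * t

def pfd_avg(tau, rate=lam):
    """Average PFD of a repairable safeguard proof-tested every tau
    years. rate*tau/2 is the standard low-demand approximation (an
    assumption here, not a formula given in the article)."""
    return rate * tau / 2.0

# At t = 1 year, the failure rate (0.1/year) and the PFD (0.1) share
# the same numerical value -- the coincidence noted below.
print(pfd(1.0))      # 0.1
print(pfd_avg(1.0))  # 0.05
```

In both cases, shortening the relevant time shrinks the probability of failure, which is exactly why proof-test intervals matter.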

Interestingly, when the relevant time is one year, as is often the case, the failure rate and the PFD have the same numerical value, even though one is in units of “per year” and the other is a dimensionless probability. This helps to explain why the two are often confused.

For any safeguard, the probability of failure increases as time goes on. For a safeguard that is applied intermittently for brief periods, it may be that the probability of failure increases whether or not the safeguard is in service. However, validation or proof tests before putting a safeguard into service have the effect of resetting the initial probability of failure to zero. It climbs from there once in service. When a safeguard is subjected to validation or proof tests before each time it goes into service, the PFD will always start at essentially zero, and then increase with time.
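The reset behavior described above traces out a sawtooth: within each interval the PFD climbs from essentially zero, and each proof test resets it. The linear λt growth model and the quarterly interval below are illustrative assumptions:

```python
lam = 0.1  # safeguard failure rate, failures/year

def pfd_with_proof_tests(t, tau, rate=lam):
    """PFD at time t (years) for a safeguard proof-tested every tau
    years. A passed proof test resets the PFD to essentially zero;
    between tests it grows roughly as rate * (time since last test)."""
    time_since_test = t % tau
    return rate * time_since_test

# Quarterly proof tests (tau = 0.25 year): the PFD never climbs as
# high as it would over a full year without testing.
print(pfd_with_proof_tests(0.24, 0.25))  # ~0.024, just before a test
print(pfd_with_proof_tests(0.25, 0.25))  # 0.0, reset by the test
```

The more frequent the proof test, the lower the peak PFD ever reached, which is the basis for crediting proof-tested safeguards with a lower PFDavg.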

Using Time-At-Risk Factors and Mission Time

Time at risk is a valid enabling condition that can be applied legitimately to many hazardous scenarios, avoiding an overstatement of the risk. However, it is possible to misapply time-at-risk factors, so some organizations simply prohibit their use. This is a conservative approach. Perhaps it is overly conservative. However, when an organization allows the use of time-at-risk factors, it is important to use them appropriately.

Mission time or proof-test intervals, on the other hand, are different from time at risk. These are times used to calculate how the PFD or PFDavg of a safeguard increases with the passage of time. Mission time and proof-test intervals are appropriate whether or not time-at-risk factors are allowed.


  • Mike Schmidt

    With a career in the CPI that began in 1977 with Union Carbide, Mike was profoundly impacted by the 1984 tragedy in Bhopal and has been working on process safety ever since.