Common Functional Safety Terms

Functional Safety Glossary

These terms are used in functional safety and are described in the IEC 61511 standard.

  1. Architecture Configuration

Specific configuration of hardware and software components in a system. The architecture configuration refers to the arrangement and interconnection of components (e.g., sensors, logic solvers, and actuators) within the safety instrumented function (SIF) to achieve the desired Safety Integrity Level (SIL).

  2. Bypass, Override, Defeat, Disable, Force, Inhibit or Muting

Action or facility to prevent all or part of the SIS functionality from being executed. Bypass refers to temporarily disabling or overriding a safety function that is designed to prevent or mitigate hazardous events in an industrial process. Bypassing a SIF is sometimes necessary for maintenance, testing, or operational reasons, but it introduces significant risks and must be managed carefully to avoid compromising safety. Typical examples of bypassing include maintenance override switches, forces applied to logic solver inputs or outputs, and manual bypass valves around final elements.

  3. Basic Process Control System (BPCS)

A BPCS includes all of the devices necessary to ensure that the process operates in the desired manner. The BPCS typically includes sensors (for data acquisition), controllers (for decision-making and signal processing), and actuators (for adjusting process parameters). It may also include Human-Machine Interfaces (HMI) for operator interaction.

  4. Channel

Device or group of devices that independently perform(s) a specified function. A channel refers to an independent path within a safety instrumented system (SIS) that includes a complete set of components necessary to perform a specific safety function.

The devices within a channel could include input/output (I/O) devices, logic solvers, sensors, and final elements or any other components involved in detecting a process condition, making a decision based on that condition, and taking appropriate action.

A dual channel (i.e., a two-channel) configuration is one with two channels that independently perform the same function. Channels may be identical or diverse. Each channel typically includes its own sensors, logic solvers, and actuators, providing redundancy to enhance the reliability and safety of the system.

  5. Tolerable Risk

Risk which is accepted in a given context based on the current values of society. It is a risk that, while not completely eliminated, is reduced to a level that society or an organization is willing to accept, given the costs, benefits, and practical limitations of further risk reduction.

  6. Residual Risk

Residual risk is the level of risk that remains after all mitigation measures and controls have been implemented. It represents the remaining exposure to potential harm or loss even after steps have been taken to reduce the risk to an acceptable level.

  7. Equipment Under Control (EUC) Risk

Risk arising from the EUC or its interaction with the EUC control system. The risk in this context is that associated with the specific harmful event in which E/E/PE safety-related systems and other risk reduction measures are to be used to provide the necessary risk reduction (i.e., the risk associated with functional safety).

The main purpose of determining the EUC risk is to establish a reference point for the risk without taking into account E/E/PE safety-related systems and other risk reduction measures.

  8. Reasonably Foreseeable Misuse

Use of a product, process or service in a way not intended by the supplier, but which may result from readily predictable human behaviour. It refers to the potential for a user to use a system, equipment, or product in a way that was not intended by the designer or manufacturer but could be anticipated based on common human behavior, lack of knowledge, or practical circumstances.

  9. Electrical/Electronic/Programmable Electronic (E/E/PE)

Based on electrical (E) and/or electronic (E) and/or programmable electronic (PE) technology. The term is intended to cover any and all devices or systems operating on electrical principles. Examples of E/E/PE devices include:

  • Electro-mechanical devices (electrical);
  • Solid-state non-programmable electronic devices (electronic);
  • Electronic devices based on computer technology (programmable electronic);

  10. Safety Integrity Level (SIL)

Discrete level (one out of a possible four), corresponding to a range of safety integrity values, where safety integrity level 4 has the highest level of safety integrity and safety integrity level 1 has the lowest.  Safety integrity levels are used for specifying the safety integrity requirements of the safety functions to be allocated to the E/E/PE safety-related systems.

The higher the SIL, the lower the expected PFDavg (average probability of failure on demand) for demand mode, or the lower the average frequency of a dangerous failure causing a hazardous event for continuous mode.
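
The IEC 61508/61511 low demand SIL bands (PFDavg from 10⁻⁵ up to 10⁻¹) can be expressed as a simple lookup. The Python sketch below is a hypothetical illustration, not part of any standard tool:

```python
# Map a low demand mode PFDavg onto the IEC 61508/61511 SIL bands.
# sil_from_pfd_avg is a hypothetical helper, for illustration only.

def sil_from_pfd_avg(pfd_avg):
    """Return the SIL (1-4) for a low demand PFDavg, or None if out of range."""
    bands = [
        (4, 1e-5, 1e-4),  # SIL 4: 1e-5 <= PFDavg < 1e-4
        (3, 1e-4, 1e-3),  # SIL 3: 1e-4 <= PFDavg < 1e-3
        (2, 1e-3, 1e-2),  # SIL 2: 1e-3 <= PFDavg < 1e-2
        (1, 1e-2, 1e-1),  # SIL 1: 1e-2 <= PFDavg < 1e-1
    ]
    for sil, low, high in bands:
        if low <= pfd_avg < high:
            return sil
    return None  # outside the defined SIL bands

print(sil_from_pfd_avg(5e-3))  # -> 2
```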

  11. Random Hardware Failure

Failure, occurring at a random time, which results from one or more of the possible degradation mechanisms in the hardware. It is the unexpected and unpredictable failure of hardware components in a system due to inherent defects or degradation over time. These failures occur without any specific pattern and can happen at any time during the operation of the equipment.

  12. Systematic Failures

Failure, related in a deterministic way to a certain cause, which can only be eliminated by a modification of the design or of the manufacturing process, operational procedures, documentation or other relevant factors. Corrective maintenance without modification will usually not eliminate the failure cause. A systematic failure can be induced by simulating the failure cause.

Systematic failure refers to a type of failure in a system that is consistent and repeatable, caused by errors or flaws in the design, implementation, operation, or maintenance of the system. Unlike random hardware failures, which occur unpredictably due to physical wear or manufacturing defects, systematic failures are deterministic and often result from human error, inadequate processes, or software bugs.

Examples of causes of systematic failures include human error in

  • the safety requirements specification.
  • the design, manufacture, installation, operation of the hardware.
  • the design, implementation, etc. of the software.

  13. Dangerous Failure

A dangerous failure refers to a failure that can result in a hazardous situation, where the safety-related system (SRS) or Safety Instrumented Function (SIF) fails to perform its intended safety function. This type of failure poses a significant risk because it compromises the system’s ability to protect people, the environment, or assets from harm. A dangerous failure directly affects the safety of the system by allowing a hazardous event to occur or by failing to prevent or mitigate such an event.

Types of Dangerous Failures

Detected Dangerous Failure: A failure that is identified by the system’s diagnostics, allowing for corrective action to be taken before a hazardous situation arises.

Undetected Dangerous Failure: A failure that occurs without being identified by the system’s diagnostics, leading to an undetected loss of the safety function, which can result in a hazardous event.

Examples of Dangerous Failures:

  • Hardware Failure: A sensor in a safety system fails to detect an overpressure condition in a chemical reactor, leading to an uncontrolled release of hazardous chemicals.
  • Software Failure: A bug in the control software of an industrial robot causes it to move unexpectedly, creating a risk of collision with human workers.
  • Actuator Failure: A safety valve fails to close during an emergency shutdown, allowing hazardous materials to continue flowing.

  14. Safe Failure

A safe failure refers to a type of failure in which the system or component either continues to operate in a safe manner or transitions to a safe state, thereby preventing any hazardous conditions from arising. Unlike a dangerous failure, which can lead to accidents or unsafe conditions, a safe failure does not pose a risk to people, the environment, or assets.

Failure of an element and/or subsystem and/or system that plays a part in implementing the safety function that either:

  a) results in the spurious operation of the safety function to put the EUC (or part thereof) into a safe state or maintain a safe state; or
  b) increases the probability of the spurious operation of the safety function to put the EUC (or part thereof) into a safe state or maintain a safe state.

  15. Dependent Failure

A dependent failure refers to a situation where multiple components or systems fail due to a shared cause or condition, rather than independent random events. These failures are correlated because they arise from a common source or influence, such as a single point of failure, common environmental conditions, or a shared design flaw.

Failure whose probability cannot be expressed as the simple product of the unconditional probabilities of the individual events that caused it. Two events A and B are dependent only if P(A and B) > P(A) × P(B).
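
A minimal Python check of this dependence condition, with hypothetical probabilities chosen purely for illustration:

```python
# Two events are dependent when their joint probability exceeds the
# product of their individual probabilities.
# Hypothetical values, for illustration only.
p_a, p_b = 0.01, 0.01    # individual channel failure probabilities
p_a_and_b = 5e-4         # observed joint failure probability

if p_a_and_b > p_a * p_b:    # 5e-4 > 1e-4
    print("Events A and B are dependent")
```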

  16. Common Cause Failure

Concurrent or simultaneous failures of different devices, resulting from a single event, where these failures are not consequences of each other. All the failures due to a common cause do not necessarily occur at exactly the same time, and this may allow time to detect the occurrence of the common cause before a SIF actually fails. Some features of CCF are:

  • Common cause failures can also lead to common mode failures.
  • The potential for common cause failures reduces the effect of system redundancy or fault tolerance (e.g., increases the probability of failure of two or more channels in a multiple channel system).
  • Common cause failures are dependent failures. They may be due to external events (e.g., temperature, humidity, overvoltage, fire, and corrosion), systematic fault (e.g., design, assembly or installation errors, bugs), human error (e.g., misuse), etc.

  17. Common Mode Failures

Concurrent or simultaneous failures of different devices characterized by the same failure mode (i.e., identical faults). A common mode failure occurs when multiple components or systems fail in the same way (or mode) due to a shared cause. Unlike CCFs, CMFs typically involve components that are part of a redundant or duplicated system, and the failure happens in the same way across these components.

  18. Compensating Measure

Temporary implementation of planned and documented methods for managing risks during any period of maintenance or process operation when it is known that the performance of the SIS is degraded. It refers to an additional action or safeguard implemented to manage risk when a safety instrumented function (SIF) or other safety-related control cannot fully achieve the required risk reduction on its own.

  19. Conservative Approach

A cautious way of performing analyses and calculations. In the safety field, whenever an analysis, assumption, or calculation has to be made (about models, input data, computations, etc.), it can be chosen so as to be sure of producing pessimistic results.

A conservative approach often involves assuming worst-case scenarios when analysing risks, determining safety integrity levels (SIL), or designing safety systems. This means considering the most adverse conditions or events that could occur, and ensuring that the SIS can handle these scenarios.

  20. Detected, Revealed and Overt

Detected is used for failures or faults which do not announce themselves when they occur and which remain hidden until detected by some means (e.g., diagnostic tests, proof tests, operator intervention like physical inspection and manual tests). The repair of such failures can begin only after they have been revealed.

Overt is used for failures or faults which announce themselves when they occur (e.g., due to the change of state). The repair of such failures can begin as soon as they have occurred. An overt failure is one that is immediately obvious and apparent without the need for diagnostic checks or testing. These failures manifest in a way that makes them easy to recognize, such as through visible, audible, or otherwise noticeable symptoms.

Revealed is used for failures or faults that become evident due to being overt or as a result of being detected. A revealed failure is one that becomes apparent during routine testing, inspection, or operation, rather than through continuous diagnostics. Revealed failures may not be immediately obvious and might only be discovered when the system is tested or put into use.

  21. Diagnostic Coverage (DC)

Fraction of the dangerous failure rate detected by diagnostics. Diagnostic coverage does not include any faults detected by proof tests. It measures the proportion of potential dangerous failures that can be detected by the system’s diagnostic mechanisms. High diagnostic coverage means that most dangerous failures are likely to be detected by the system’s diagnostics, allowing for timely intervention to prevent hazardous situations. It directly impacts the reliability and safety integrity level (SIL) of the SIS.

Diagnostic coverage is typically applied to SIS devices or SIS subsystems; e.g., it is typically determined for a sensor, final element, or logic solver.

Diagnostic Coverage (DC) = (Dangerous Failures Detected by Diagnostics / Total Dangerous Failures) × 100%

Categories of Diagnostic Coverage:

Low Diagnostic Coverage (DC < 60%): Indicates that a significant number of dangerous failures may go undetected, increasing the risk of hazardous events.

Medium Diagnostic Coverage (60% ≤ DC < 90%): A moderate level of coverage where the system can detect many, but not all, dangerous failures.

High Diagnostic Coverage (DC ≥ 90%): The system detects the vast majority of dangerous failures, making it highly reliable in terms of safety.
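
The calculation and banding above can be sketched in a few lines of Python, expressing DC from the detected (λDD) and undetected (λDU) dangerous failure rates; the rates are hypothetical values for illustration only:

```python
# Diagnostic coverage from dangerous failure rates:
# DC = lambda_DD / (lambda_DD + lambda_DU)
# Hypothetical failure rates (per hour), for illustration only.
lambda_dd = 4.5e-7   # dangerous failures detected by diagnostics
lambda_du = 0.5e-7   # dangerous failures left undetected

dc = lambda_dd / (lambda_dd + lambda_du)

if dc < 0.60:
    band = "low"
elif dc < 0.90:
    band = "medium"
else:
    band = "high"

print(f"DC = {dc:.0%} ({band} diagnostic coverage)")  # DC = 90% (high ...)
```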

  22. Fault

Abnormal condition that may cause a reduction in, or loss of, the capability of a functional unit to perform a required function. Specifically, it is any failure or malfunction of a system, component, or function that prevents the safety instrumented system (SIS) from performing its intended safety function. A fault of an item results from a failure, either of the item itself, or from a deficiency in an earlier stage of the life-cycle, such as specification, design, manufacture or maintenance. A fault of a device results in a failure when a particular set of circumstances is encountered.

  23. Fault Exclusion

Elimination from further consideration of faults due to improbable failure modes. It refers to the practice of excluding certain types of faults from consideration during the safety assessment and design of a safety instrumented system (SIS). This means that the probability of these faults occurring is considered so low or the consequences so minimal that they are not taken into account in the safety integrity level (SIL) assessment or other risk reduction measures.

  24. Fault Tolerance

Ability of a functional item to continue to perform a required function in the presence of faults or errors. Fault tolerance refers to the ability of a safety instrumented system (SIS) or its components to continue operating correctly even in the presence of a fault or failure. Fault tolerance is achieved through various methods, such as redundancy, diversity, and robust design.

  25. Hardware Safety Integrity

Part of the safety integrity of the SIS relating to random hardware failures in a dangerous mode of failure. It refers to the reliability and robustness of the hardware components of a SIS to perform their intended safety functions under all specified conditions, including potential faults and failures.

  26. Input Function

Function which monitors the process and its associated equipment in order to provide input information for the logic solver. It refers to the activities and processes involved in receiving and processing signals or data from sensors or other input devices within a Safety Instrumented System (SIS). An input function could also be a manual function.

  27. Instrumented System

System composed of sensors (e.g., pressure, flow, temperature transmitters), logic solvers (e.g., programmable controllers, distributed control systems, discrete controllers), and final elements (e.g., control valves, motor control circuits). Instrumented systems perform instrumented functions including control, monitoring, alarm and protective functions.

  28. Logic Function

Function which performs the transformations between input information (provided by one or more input functions) and output information (used by one or more output functions). It refers to the decision-making process within a Safety Instrumented System (SIS) that determines the appropriate response based on input signals from sensors or other monitoring devices. Logic functions provide the transformation from one or more input functions to one or more output functions.

  29. Logic Solver

Part of either a BPCS or SIS that performs one or more logic function(s). The logic solver acts as the brain of the SIS, receiving input signals from sensors, processing them according to predefined logic, and then sending commands to final control elements to maintain or achieve a safe state in the process. In IEC 61511 the following terms for logic solvers are used:

  • electrical logic systems for electro-mechanical technology;
  • electronic logic systems for electronic technology;
  • PE logic system for programmable electronic systems.

Examples are: electrical systems, electronic systems, programmable electronic systems, pneumatic systems, and hydraulic systems. Sensors and final elements are not part of the logic solver.

  30. Safety Configured PE Logic Solver

It refers to a Programmable Electronic (PE) logic solver that has been specifically configured and validated to perform safety functions within a Safety Instrumented System (SIS). This type of logic solver is designed to meet stringent safety requirements, ensuring it can reliably execute Safety Instrumented Functions (SIFs) and achieve the required Safety Integrity Level (SIL).

  31. Mean Repair Time (MRT)

MRT is the expected overall repair time: the average time taken to diagnose and restore a failed component or system to its fully operational state once the failure has been detected. It covers the time from when the failure is first identified until the system or component is back in service.

  32. Mean Time to Restoration (MTTR)

MTTR is the expected time to achieve restoration. While MRT and MTTR are sometimes used interchangeably, “Mean Time to Restoration” emphasizes the complete process of bringing a system back online, not just the repair itself. It includes all steps needed to ensure the system is fully operational and safe. MTTR includes:

  • the time to detect the failure (a);
  • the time spent before starting the repair (b);
  • the effective time to repair (c);
  • the time before the component is put back into operation (d).

The start time for (b) is the end of (a); the start time for (c) is the end of (b); the start time for (d) is the end of (c).
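
A minimal sketch of how the four intervals combine, assuming (per the common IEC 61508 convention) that MRT excludes the detection time (a) while MTTR includes it; all durations are hypothetical:

```python
# MTTR as the sum of the four sequential intervals listed above.
# Hypothetical durations in hours, for illustration only.
t_detect   = 2.0   # (a) time to detect the failure
t_logistic = 4.0   # (b) time spent before starting the repair
t_repair   = 6.0   # (c) effective time to repair
t_restart  = 1.0   # (d) time before the component is back in operation

mttr = t_detect + t_logistic + t_repair + t_restart   # includes detection
mrt  = mttr - t_detect                                # repair process only

print(f"MTTR = {mttr} h, MRT = {mrt} h")   # MTTR = 13.0 h, MRT = 11.0 h
```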

  33. Maximum Permitted Repair Time (MPRT)

Maximum duration allowed to repair a fault after it has been detected. The MRT may be used as the MPRT, but the MPRT may also be defined without regard to the MRT:

  • An MPRT smaller than the MRT can be chosen to decrease the probability of a hazardous event.
  • An MPRT greater than the MRT can be chosen if the required probability of a hazardous event can be relaxed.
  • When an MPRT has been defined, it can be used in place of the MRT for calculating the probability of random hardware failures.

For higher SILs, the permitted repair time is generally shorter because the system must remain highly reliable and available to provide the necessary risk reduction.

  34. Modes of Operation (of a SIF)

The way in which a SIF operates, which may be either low demand mode, high demand mode or continuous mode. The mode of operation refers to how frequently the SIF is expected to be called upon to perform its safety function. A SIF operating in low demand mode or high demand mode is a demand mode SIF.

  a) Low demand mode: mode of operation where the SIF is only performed on demand, in order to transfer the process into a specified safe state, and where the frequency of demands is no greater than one per year. In low demand mode, the SIF is expected to operate infrequently.
  b) High demand mode: mode of operation where the SIF is only performed on demand, in order to transfer the process into a specified safe state, and where the frequency of demands is greater than once per year. In high demand mode, the SIF is expected to operate frequently compared to low demand mode.
  c) Continuous mode: mode of operation where the SIF retains the process in a safe state as part of normal operation. The SIF is required to operate continuously to maintain a safe state within the process. Unlike in low demand mode, where the SIF is typically dormant and only activates in response to an infrequent event, in continuous mode the SIF is always functioning to prevent dangerous conditions.

  35. Dangerous Failure of the Demand Mode SIF

In the event of a dangerous failure of the SIF, a hazardous event can only occur:

  • If the failure is undetected and a demand occurs before the next proof test;
  • If the failure is detected by the diagnostic tests but the related process and its associated equipment have not been moved to a safe state before a demand occurs.

  36. Dangerous Failure of the Continuous Mode SIF

In the event of a dangerous failure of the SIF, a hazardous event will occur without further failure unless action is taken to prevent it within the process safety time. Continuous mode covers those SIFs which implement continuous control to maintain functional safety.

  37. Module

A module is a self-contained part of a SIS application program (it can be internal to a program or a set of programs) that performs a specified function (e.g., a final element start/stop/test sequence, or an application-specific sequence within a SIF).

A module can be a hardware or software component, or a combination of both, that fulfills a specific role within the Safety Instrumented System (SIS). For example, it could be a sensor, actuator, or logic solver component.

A modular approach is used in designing safety systems: complex systems are broken down into smaller, manageable modules. This facilitates easier design, testing, maintenance, and modification.

  38. MooN (M out of N)

SIS, or part thereof, made up of “N” independent channels, which are so connected that “M” channels are sufficient to perform the SIF.

M: The number of units that must operate or vote in favor of a certain action to trigger the safety function. N: The total number of units involved in the decision-making process.
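
A minimal sketch of MooN trip voting in Python; the 2oo3 arrangement and the channel states are hypothetical examples:

```python
# Evaluate a MooN voting arrangement: the SIF acts when at least
# M of the N independent channels demand a trip.

def moon_vote(channel_trips, m):
    """Return True when at least m channels demand a trip."""
    return sum(channel_trips) >= m

# 2oo3 example: two of three pressure transmitters see high pressure,
# so the trip is voted through.
channels = [True, True, False]   # hypothetical channel states
print(moon_vote(channels, m=2))  # -> True
```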

  39. Non-programmable (NP) System

System based on non-computer technologies (i.e., a system not based on programmable electronics [PE] or software). Examples would include hard-wired electrical or electronic systems, and mechanical, hydraulic, or pneumatic systems.

  40. Operating Environment

Conditions inherent to the installation of a device that potentially affect its functionality and safety integrity, such as:

  • external environment, e.g., winterization needs, hazardous area classification;
  • process operating conditions, e.g., extremes in temperature, pressure, vibration;
  • process composition, e.g., solids, salts, or corrosives;
  • process interfaces;
  • integration within the overall plant maintenance and operating management systems;
  • communication through-put, e.g., electro-magnetic interference;
  • utility quality, e.g., electrical power, air, hydraulics.

  41. Process Operating Mode

Any planned state of process operation, including modes such as start-up after emergency shutdown, normal start-up, operation, and shutdown, temporary operations, and emergency operation and shutdown.

  42. Operator Interface or Human Machine Interface (HMI)

Means by which information is communicated between a human operator and the SIS (e.g., display interfaces, indicating lights, push-buttons, horns, alarms). The operator interface is sometimes referred to as the human-machine interface (HMI).

  43. Output Function

The function that controls the process and its associated equipment according to output information from the logic function. The output function is the part of the safety instrumented system that directly interacts with the process to ensure that it reaches or maintains a safe state. This typically involves the actuation of valves, shutdown systems, alarms, or other final control elements.

The purpose of the output function is to mitigate identified risks by executing the necessary actions to prevent hazardous events when unsafe conditions are detected by the system.

  44. Performance

Accomplishment of a given action or task measured against the specification. The performance of an SIS is measured against several criteria, particularly focusing on its ability to meet the required Safety Integrity Level (SIL) and other specified safety requirements.

The performance is quantified in terms of the average probability of failure on demand (PFDavg) or the average frequency of a dangerous failure per hour (PFH).

  45. Prior Use or Proven in Use

Documented assessment by a user that a device is suitable for use in a SIS and can meet the required functional and safety integrity requirements, based on previous operating experience in similar operating environments.

To qualify an SIS device on the basis of prior use, the user can document that the device has achieved satisfactory performance in a similar operating environment. Understanding how the equipment behaves in the operating environment is necessary to achieve a high degree of certainty that the planned design, inspection, testing, maintenance, and operational practices are sufficient.

  46. Process Safety Time

Time period between a failure occurring in the process or the basic process control system (with the potential to give rise to a hazardous event) and the occurrence of the hazardous event if the SIF is not performed.

It refers to the maximum time available between the occurrence of a hazardous event or condition and the point at which action must be taken to prevent the situation from escalating into a more serious hazard or incident.

  47. Programmable Electronics (PE)

Item based on computer technology which may comprise hardware, software, and input and/or output units. This term covers micro-electronic devices based on one or more central processing units (CPUs) together with associated memories. Examples of process sector programmable electronics include:

  • smart sensors and final elements
  • programmable logic controllers
  • loop controllers

Programmable Electronics are electronic devices or systems that can be programmed with specific instructions to carry out safety functions within an SIS.

  48. Programmable Electronic System (PES)

System for control, protection or monitoring based on one or more programmable electronic devices, including all devices of the system such as power supplies, sensors and other input devices, data highways and other communication paths, actuators and other output devices.

A Programmable Electronic System (PES) is an assembly of programmable electronic devices, including processors, input/output (I/O) modules, software, and communication interfaces, designed to perform control, protection, and safety functions.

  49. Programming or Coding

Process of designing, writing and testing a set of instructions for solving a problem or processing data. In the IEC 61511 series, programming is typically associated with PE.

  50. Proof Test

Periodic test performed to detect dangerous hidden faults in a SIS so that, if necessary, a repair can restore the system to an ‘as new’ condition or as close as practical to this condition.
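
The proof test interval (TI) drives the widely used simplified approximation PFDavg ≈ λDU × TI / 2 for a single channel in low demand mode. The sketch below uses hypothetical numbers and deliberately ignores diagnostics, common cause and repair times:

```python
# Simplified single-channel, low demand approximation:
# PFDavg ~= lambda_DU * TI / 2, where TI is the proof test interval.
# Hypothetical values, for illustration only.
lambda_du = 2e-6   # undetected dangerous failure rate (per hour)
ti_hours = 8760    # proof test interval: one year

pfd_avg = lambda_du * ti_hours / 2
print(f"PFDavg ~= {pfd_avg:.2e}")  # ~8.76e-03, i.e. within the SIL 2 band
```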

  51. Safety Instrumented Function (SIF)

A Safety Instrumented Function (SIF) is a function implemented by a Safety Instrumented System (SIS) to achieve or maintain a safe state of a process when certain hazardous conditions are detected. A SIF is designed to achieve a required SIL, which is determined in relation to the other protection layers contributing to the reduction of the same risk.

Key elements of a SIF:

  • Process Sensor(s): Detect abnormal conditions, such as pressure, temperature, or flow exceeding predefined limits.
  • Logic Solver: Interprets the signals from the sensors and decides whether a safety response is needed.
  • Final Element(s): Take action, such as shutting down equipment, closing valves, or activating alarms.

  52. Safety Instrumented System (SIS)

A SIS is an instrumented system used to implement one or more SIFs. A SIS is composed of any combination of sensor(s), logic solver(s), and final element(s). It also includes communication and ancillary equipment (e.g., cables, tubing, power supply, impulse lines, heat tracing). A SIS may include software and may also include human action as part of a SIF.

SIS is the complete system that manages process safety by performing various safety actions. SIF is a specific safety action or function that the SIS performs to prevent or mitigate a specific hazardous event. A SIS can perform multiple SIFs, each designed to respond to different potential hazards or failures. For example, one SIF might shut down a pump when high pressure is detected, while another SIF might activate an emergency shutdown system in case of a fire.

  53. Safety Integrity

It is the ability of the SIS to perform the required SIF as and when required. Ability of the SIS includes both the functional response (e.g., closing a specified valve within a specified time) and the likelihood that the SIS will act as required.

Safety Integrity refers to the likelihood or probability that a Safety Instrumented Function (SIF) will successfully perform its intended safety function when required. In other words, it defines the reliability and effectiveness of the SIF in reducing risks to an acceptable level within the process industry.

In determining safety integrity, all causes of random hardware and systematic failures which lead to an unsafe state can be included (e.g., hardware failures, software-induced failures and failures due to electrical interferences). Some of these types of failure, in particular random hardware failures, may be quantified using such measures as the average dangerous failure frequency or the probability of failure on demand.

However, safety integrity also depends on many systematic factors, which cannot be accurately quantified and are often considered qualitatively throughout the life cycle. The likelihood that systematic failures result in dangerous failure of the SIS is reduced through hardware fault tolerance or other methods and techniques.

Safety integrity comprises hardware safety integrity and systematic safety integrity, but complex failures caused by the conjunction of both hardware and systematic interaction can also be considered.

  54. Safety Manual

A Safety Manual is a document provided by the manufacturer or designer of safety-related equipment that contains critical information for the user regarding the proper use, configuration, maintenance, and testing of that equipment within a Safety Instrumented System (SIS). It ensures that the equipment meets the necessary safety performance standards and is correctly integrated into the SIS to achieve the required Safety Integrity Level (SIL).

Functional safety manual information that defines how a SIS device, subsystem or system can be safely applied. The safety manual may include inputs from the manufacturer as well as from the user. This could be a generic stand-alone document or a collection of documents.

  55. Safety Requirements Specification (SRS)

The specification containing the functional requirements for the SIFs and their associated safety integrity levels. Safety Requirements Specifications (SRSs) describe every required safety function that must be performed by a safety instrumented system (SIS), specifying both what safety functions must be performed and how well they must be performed. The SRS is often a contractual document between companies and is one of the most important documents in the safety lifecycle process.

  56. Fixed Program Language (FPL)

Language in which the user is limited to adjustment of a few pre-defined and fixed parameters. Typical examples of device applications with FPL are a smart sensor (e.g., pressure transmitter without control algorithms), a smart final element (e.g., valve without control algorithms), a sequence-of-events recorder, and set points for a dedicated smart alarm box. The use of FPL is often referred to as “configuration of the device”.

  57. Limited Variability Language (LVL)

Programming language for commercial and industrial programmable electronic controllers with a range of capabilities limited to their application as defined by the associated safety manual. This type of language is designed to be easily understood by process sector users and provides the capability to combine predefined, application-specific, library functions to implement the SRS. LVL provides a close functional correspondence with the functions required to achieve the application. The notation of this language may be textual or graphical or have characteristics of both.

Limited Variability Language (LVL) refers to specialized programming languages with restricted functionality and predefined structures, which are designed to reduce complexity and enhance reliability. In the context of IEC 61511, LVLs are favored for developing safety functions due to their simplicity, predictability, and ease of validation.

  58. Full Variability Language (FVL)

Language designed to be comprehensible to computer programmers and that provides the capability to implement a wide variety of functions and applications. Typical examples of systems using FVL are general-purpose computers.

Full Variability Language (FVL) refers to general-purpose programming languages that provide complete flexibility in terms of logic development, data handling, and control. These languages offer the programmer extensive freedom but, due to their complexity, can introduce greater risks of errors, which is why they require additional safety measures in the context of functional safety.

According to IEC 61511, which governs the safety of process industry applications, FVLs are usually discouraged for direct implementation of safety functions due to their complexity and potential for undetected errors. However, if FVLs are used, stringent validation and verification methods are required.

In the process sector, FVL is found in embedded software and rarely in application programming. FVL examples include Ada, C, Pascal, Instruction List, assembler languages, C++, Java, SQL.

  59. Application Program

Program specific to the user application containing, in general, logic sequences, permissives, limits and expressions that control the input, output, calculations, and decisions necessary to meet the SIS functional requirements.

  60. Embedded Software

Software that is part of the system supplied by the manufacturer and is not accessible for modification by the end-user. Embedded software is also referred to as firmware or system software.

  61. SIS Subsystem

Independent part of a SIS whose dangerous failure results in a dangerous failure of the SIS. The SIFs implemented within a SIS are entirely dependent on the SIS subsystems of that SIS (i.e., when a SIS subsystem fails, the related SIFs also fail).

  62. System

A set of devices, which interact according to a specification. A system refers to a set of interacting or interrelated elements designed to achieve one or more specific objectives.

  63. Systematic Capability

A measure of the confidence that the systematic safety integrity of a device meets the requirements of the specified SIL, in respect of the specified safety function, when the device is applied in accordance with the instructions specified in the device safety manual.

  64. Systematic Safety Integrity

Part of the safety integrity of the SIS relating to systematic failures in a dangerous mode of failure. Systematic safety integrity cannot usually be quantified (as distinct from hardware safety integrity).

  65. Target Failure Measure

Performance required from the SIF and specified in terms of either the average probability of failure to perform the SIF on demand for demand mode of operation or the average frequency of a dangerous failure for continuous mode of operation.

  66. Spurious Trip

A spurious trip, or safe failure, occurs when the process is in normal operation and the system acts as if there were a problem, taking the process to the safe state when it is not necessary. A spurious trip is the activation of a SIF when there is no demand.
