Interference freedom

In computer science, interference freedom is a technique for proving partial correctness of concurrent programs with shared variables. Hoare logic had been introduced earlier to prove correctness of sequential programs. In her PhD thesis[1] (and papers arising from it[2][3]) under advisor David Gries, Susan Owicki extended this work to apply to concurrent programs.

Concurrent programming had been in use since the mid-1960s for coding operating systems as sets of concurrent processes (see, in particular, Dijkstra[4]), but there was no formal mechanism for proving correctness. Reasoning about the interleaved execution sequences of the individual processes was difficult, error-prone, and did not scale. Interference freedom applies to proofs instead of execution sequences: one shows that execution of one process cannot interfere with the correctness proof of another process.

A range of intricate concurrent programs have been proved correct using interference freedom, and interference freedom provides the basis for much of the ensuing work on developing concurrent programs with shared variables and proving them correct. The Owicki-Gries paper An axiomatic proof technique for parallel programs I[2] received the 1977 ACM Award for best paper in programming languages and systems.[5][6]

Note. Lamport[7] presents a similar idea. He writes, "After writing the initial version of this paper, we learned of the recent work of Owicki.[1][2]" His paper has not received as much attention as Owicki-Gries, perhaps because it used flow charts instead of the text of programming constructs like the if statement and while loop. Lamport was generalizing Floyd's method,[8] while Owicki-Gries was generalizing Hoare's method.[9] Essentially all later work in this area uses text and not flow charts. Another difference is mentioned below in the section on Auxiliary variables.

Dijkstra's Principle of non-interference

Edsger W. Dijkstra introduced the principle of non-interference in EWD 117, "Programming Considered as a Human Activity", written about 1965.[10] This principle states: the correctness of the whole can be established by taking into account only the exterior specifications (abbreviated specs throughout) of the parts, and not their interior construction. Dijkstra outlined the general steps in using this principle:

  1. Give a complete spec of each individual part.
  2. Check that the total problem is solved when program parts meeting their specs are available.
  3. Construct the individual parts to satisfy their specs, but independently of one another and of the context in which they will be used.

He gave several examples of this principle outside of programming. But its use in programming is a main concern. For example, a programmer using a method (subroutine, function, etc.) should rely only on its spec to determine what it does and how to call it, and never on its implementation.

Program specs are written in Hoare logic, introduced by Sir Tony Hoare,[9] as exemplified in the specs of processes S1 and S2:

    {P1}        {P2}
     S1          S2
    {Q1}        {Q2}

Meaning: If execution of Si in a state in which precondition Pi is true terminates, then upon termination, postcondition Qi is true.

Now consider concurrent programming with shared variables. The specs of two (or more) processes S1 and S2 are given in terms of their pre- and postconditions, and we assume that implementations of S1 and S2 are given that satisfy their specs. But when their implementations are executed in parallel, since they share variables, a race condition can occur: one process changes a shared variable to a value that is not anticipated in the proof of the other process, so the other process does not work as intended.

Thus, Dijkstra's Principle of non-interference is violated.

In her 1975 PhD thesis[1] in Computer Science, Cornell University, written under advisor David Gries, Susan Owicki developed the notion of interference freedom. If processes S1 and S2 satisfy interference freedom, then their parallel execution works as planned. Dijkstra called this work the first significant step toward applying Hoare logic to concurrent processes.[11] To simplify discussions, we restrict attention to only two concurrent processes, although Owicki-Gries[2][3] allows more.

Interference freedom in terms of proof outlines

Owicki-Gries[2][3] introduced the proof outline for a Hoare triple {P} S {Q}. It contains all details needed for a proof of correctness of {P} S {Q} using the axioms and inference rules of Hoare logic. (This work uses the assignment statement x := e together with the usual conditional constructs and the while loop.) Hoare alluded to proof outlines in his early work; for interference freedom, the notion had to be formalized.

A proof outline for {P} S {Q} begins with precondition P and ends with postcondition Q. Two assertions within braces { and } appearing next to each other indicate that the first must imply the second.

Example: A proof outline for {P} S {Q} where S is the sequence S1; S2:

    {P}
    {P1}
     S1
    {Q1}
    {P2}
     S2
    {Q2}
    {Q}

Each implication between adjacent assertions, such as P ⇒ P1 and Q2 ⇒ Q, must hold. When a statement is an assignment x := e with postcondition R, its precondition in the outline can be taken to be R[x := e], which stands for R with every occurrence of x replaced by e. (In this example, S1 and S2 are basic statements, like an assignment statement, skip, or an await statement.)

Each statement T in the proof outline is preceded by a precondition pre(T) and followed by a postcondition post(T), and {pre(T)} T {post(T)} must be provable using some axiom or inference rule of Hoare logic. Thus, the proof outline contains all the information necessary to prove that {P} S {Q} is correct.
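For reference, the pieces of Hoare logic that the triples and implications in a proof outline appeal to are the assignment axiom and the rule of consequence. The instance below is an illustration chosen here, not the article's example:

    Assignment axiom:   {Q[x := e]}  x := e  {Q}
    Consequence:        if P ⇒ P', {P'} S {Q'}, and Q' ⇒ Q, then {P} S {Q}

    Instance:           {x + 1 > 0}  x := x + 1  {x > 0};  since x ≥ 0 ⇒ x + 1 > 0,
                        also {x ≥ 0}  x := x + 1  {x > 0}.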

Now consider two processes S1 and S2 executing in parallel, and their specs:

    {P1}        {P2}
     S1          S2
    {Q1}        {Q2}

Proving that they work suitably in parallel requires restricting them as follows. Each expression E in S1 or S2 may refer to at most one variable y that can be changed by the other process while E is being evaluated, and E may refer to y at most once. A similar restriction holds for assignment statements x := E.

With this convention, the only indivisible action need be the memory reference. For example, suppose process S1 references variable y while S2 changes y. The value S1 receives for y must be the value before or after S2 changes y, and not some spurious in-between value.

Definition of Interference-free

The important innovation of Owicki-Gries was to define what it means for a statement T not to interfere with the proof of {P} S {Q}. If execution of T cannot falsify any assertion given in the proof outline of {P} S {Q}, then that proof still holds even in the face of concurrent execution of S and T.

Definition. Statement T with precondition pre(T) does not interfere with the proof of {P} S {Q} if two conditions hold:

(1) {Q ∧ pre(T)} T {Q}
(2) Let S' be any statement within S but not within an await statement (see later section). Then {pre(S') ∧ pre(T)} T {pre(S')}.

Read the last Hoare triple like this: If the state is such that both T and S' can be executed, then execution of T is not going to falsify pre(S').

Definition. Proof outlines for {P1} S1 {Q1} and {P2} S2 {Q2} are interference-free if the following holds. Let T be an await or assignment statement (that does not appear in an await) of process S1. Then T does not interfere with the proof of {P2} S2 {Q2}. Similarly for T of process S2 and {P1} S1 {Q1}.
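As a small illustration of condition (2), with statement and assertions chosen here for the purpose (they are not taken from the article): suppose T is x := x + 2, executed as one indivisible action, with pre(T): x = 0, and suppose pre(S') is x = 0 ∨ x = 2. The required triple

    {(x = 0 ∨ x = 2) ∧ x = 0}   x := x + 2   {x = 0 ∨ x = 2}

holds, since its precondition implies x = 0 and the assignment then establishes x = 2. Had pre(S') been the stronger assertion x = 0, the required triple would be {x = 0} x := x + 2 {x = 0}, which does not hold, so T would interfere with that proof.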

Statements cobegin and await

Two statements were introduced to deal with concurrency. Execution of the statement cobegin S1 // S2 coend executes S1 and S2 in parallel. It terminates when both S1 and S2 have terminated.

Execution of the await statement await B then S is delayed until condition B is true. Then statement S is executed as an indivisible action; evaluation of B is part of that indivisible action. If two processes are waiting for the same condition B, when it becomes true, one of them continues waiting while the other proceeds.

The await statement cannot be implemented efficiently and is not proposed to be inserted into the programming language. Rather, it provides a means of representing several standard primitives such as semaphores: first express the semaphore operations as await statements, then apply the techniques described here.
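As a rough sketch only (this emulation is not part of the Owicki-Gries formulation, and the helper names are invented here), the effect of await and cobegin can be approximated in Python with a condition variable and threads:

    import threading

    cond = threading.Condition()          # one lock/condition guarding all shared variables

    def await_then(predicate, body):
        # Emulates "await B then S": wait until predicate() holds, then run body()
        # while still holding the lock, so that the test and the body form one step.
        with cond:
            cond.wait_for(predicate)
            body()
            cond.notify_all()             # other awaited conditions may have become true

    def cobegin(*bodies):
        # Emulates "cobegin S1 // S2 coend": run the bodies in parallel, wait for both.
        threads = [threading.Thread(target=body) for body in bodies]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

For example, a semaphore operation P(s) written as await s > 0 then s := s - 1 could be expressed as await_then(lambda: s > 0, body), where body performs the decrement of s.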

Inference rules for await and cobegin are:

    {P ∧ B} S {Q}
    --------------------------------
    {P} await B then S {Q}

    {P1} S1 {Q1} and {P2} S2 {Q2} hold, and their proof outlines are interference-free
    ----------------------------------------------------------------------------------
    {P1 ∧ P2} cobegin S1 // S2 coend {Q1 ∧ Q2}

Auxiliary variables

An auxiliary variable does not occur in the program but is introduced in the proof of correctness to make reasoning simpler, or even possible. Auxiliary variables are used only in assignments to auxiliary variables, so their introduction neither alters the program for any input nor affects the values of program variables. Typically, they are used either as program counters or to record histories of a computation.

Definition. Let AV be a set of variables that appear in S only in assignments x := e, where x is in AV. Then AV is an auxiliary variable set for S.

Since a set AV of auxiliary variables is used only in assignments to variables in AV, deleting all assignments to them does not change the program's correctness, and we have the inference rule AV elimination:

    {P} S {Q}
    ------------
    {P} S' {Q}

AV is an auxiliary variable set for S. The variables in AV do not occur in P or Q. S' is obtained from S by deleting all assignments to the variables in AV.

Instead of using auxiliary variables, one can introduce a program counter into the proof system, but that adds complexity.
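A small instance of the rule (the variable d and the assertions are chosen here for illustration, with AV = {d}): d is assigned but occurs in no other statement and not in the pre- or postcondition, so the assignment to it may be deleted.

    {x = 0}   x := x + 1; d := true   {x = 1}
    ---------------------------------------------   (AV elimination, AV = {d})
    {x = 0}   x := x + 1              {x = 1}

In a concurrent proof, d would typically appear in intermediate assertions of the other process's proof outline, for example to record that this increment has already taken place.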

Note: Apt[12] discusses the Owicki-Gries logic in the context of recursive assertions, that is, effectively computable assertions. He proves that all the assertions in proof outlines can be recursive, but that this is no longer the case if auxiliary variables are used only as program counters and not to record histories of computation. Lamport, in his similar work,[7] uses assertions about token positions instead of auxiliary variables, where a token on an edge of a flow chart is akin to a program counter; there is no notion of a history variable. This indicates that the Owicki-Gries and Lamport approaches are not equivalent when restricted to recursive assertions.

Deadlock and termination

Owicki-Gries[2][3] deals mainly with partial correctness: {P} S {Q} means that if S, executed in a state in which P is true, terminates, then Q is true of the state upon termination. However, Owicki-Gries also gives some practical techniques that use information obtained from a partial correctness proof to derive other correctness properties, including freedom from deadlock, program termination, and mutual exclusion.

A program is in deadlock if all processes that have not terminated are executing await statements and none can proceed because their await conditions are false. Owicki-Gries provides conditions under which deadlock cannot occur.

Owicki-Gries presents an inference rule for total correctness of the while loop. It uses a bound function that decreases with each iteration and is positive as long as the loop condition is true. Apt et al.[13] show that this inference rule does not satisfy interference freedom: the fact that the bound function is positive as long as the loop condition is true was not included in an interference test. They show two ways to rectify this mistake.
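For orientation, the usual shape of a total-correctness rule for the while loop, using a loop invariant P and an integer-valued bound function bd (with Z a fresh variable), is sketched below; this is the general pattern, not the exact Owicki-Gries rule discussed above.

    {P ∧ B} S {P}
    {P ∧ B ∧ bd = Z} S {bd < Z}
    P ∧ B  ⇒  bd > 0
    ---------------------------------------------
    {P} while B do S od {P ∧ ¬B},  and the loop terminates

The third premise is the positivity requirement that, in the concurrent setting, also has to be protected against interference.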

A simple example

Consider the statement S:

    S: cobegin
           S1: await true then x := x + 1
       //
           S2: await true then x := x + 2
       coend

The proof outline for it:

    {x = 0}
    {(x = 0 ∨ x = 2) ∧ (x = 0 ∨ x = 1)}
    cobegin
        {x = 0 ∨ x = 2}
        S1: await true then x := x + 1
        {x = 1 ∨ x = 3}
    //
        {x = 0 ∨ x = 1}
        S2: await true then x := x + 2
        {x = 2 ∨ x = 3}
    coend
    {(x = 1 ∨ x = 3) ∧ (x = 2 ∨ x = 3)}
    {x = 3}

Proving that S2 does not interfere with the proof of S1 requires proving two Hoare triples:

(1) {(x = 1 ∨ x = 3) ∧ (x = 0 ∨ x = 1)}  await true then x := x + 2  {x = 1 ∨ x = 3}
(2) {(x = 0 ∨ x = 2) ∧ (x = 0 ∨ x = 1)}  await true then x := x + 2  {x = 0 ∨ x = 2}

The precondition of (1) reduces to x = 1 and the precondition of (2) reduces to x = 0. From this, it is easy to see that these Hoare triples hold. Two similar Hoare triples are required to show that S1 does not interfere with the proof of S2.

Suppose S2 is changed from the await statement to the assignment x := x + 2. Then the proof outline does not satisfy the requirements, because the assignment contains two occurrences of shared variable x. Indeed, the value of x after execution of the cobegin statement could be 2 or 3.

Suppose instead that S1 is changed to the await statement await true then x := x + 2, so it is the same as S2. After execution of S, x should be 4. To prove this, because the two assignments are the same, two auxiliary variables are needed: one to indicate whether S1 has been executed, the other whether S2 has been executed. We leave the change in the proof outline to the reader.
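The behaviour described above can be observed informally with Python threads. In this sketch, S1 is made indivisible with a lock, S2 is the changed, non-atomic assignment, and the sleep merely widens the window so that the bad interleaving is easy to hit; the program text is an illustration, not the article's.

    import threading
    import time

    x = 0
    lock = threading.Lock()      # used to make S1 one indivisible action

    def s1():                    # plays the role of the await statement adding 1
        global x
        with lock:
            x = x + 1

    def s2():                    # the changed S2: a non-atomic assignment x := x + 2
        global x
        tmp = x                  # read shared x ...
        time.sleep(0.01)         # ... let S1 run in between ...
        x = tmp + 2              # ... then write, overwriting S1's increment

    t1 = threading.Thread(target=s1)
    t2 = threading.Thread(target=s2)
    t2.start(); t1.start()
    t1.join(); t2.join()
    print(x)                     # 3 if S2 is not interrupted between read and write; 2 here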

Examples of formally proved concurrent programs

A. Findpos. Write a program that finds the first positive element of an array (if there is one). One process checks the array elements at even positions and terminates when it finds a positive value or when none is found. Similarly, the other process checks the elements at odd positions. Thus, this example deals with while loops. It also has no await statements.

This example comes from Barry K. Rosen.[14] The solution in Owicki-Gries,[2] complete with program, proof outline, and discussion of interference freedom, takes less than two pages. Interference freedom is quite easy to check, since there is only one shared variable. In contrast, Rosen's article[14] uses Findpos as the single running example of his 24-page paper.

Owicki-Gries outlines both processes, with their assertions, in a general environment.

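A sketch of the underlying idea in Python threads (the program text, variable names, and stopping test here are mine, not the article's):

    import threading

    def findpos(x):
        # Least index of a positive element of x (len(x) if there is none).
        # One thread scans even positions, the other odd positions; each stops
        # early once the other has already found a smaller positive position.
        n = len(x)
        top = {"even": n + 1, "odd": n + 1}      # smallest positive position found so far

        def scan(start, mine, other):
            i = start
            while i < min(top[mine], top[other]):
                if x[i] > 0:
                    top[mine] = i                # found one; this also ends the loop
                else:
                    i += 2

        t1 = threading.Thread(target=scan, args=(0, "even", "odd"))
        t2 = threading.Thread(target=scan, args=(1, "odd", "even"))
        t1.start(); t2.start(); t1.join(); t2.join()
        return min(top["even"], top["odd"], n)

    print(findpos([-3, 0, -1, 7, 5]))            # prints 3, the position of the 7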

B. Bounded buffer consumer/producer problem. A producer process generates values and puts them into a bounded buffer of size N; a consumer process removes them. They proceed at variable rates. The producer must wait if the buffer is full; the consumer must wait if the buffer is empty. In Owicki-Gries,[2] a solution in a general environment is shown; it is then embedded in a program that copies one array into another.

This example exhibits a principle that reduces interference checks to a minimum: place as much as possible in an assertion that is invariantly true everywhere in both processes. In this case the assertion is the definition of the bounded buffer together with bounds on the variables that indicate how many values have been added to and removed from the buffer. Besides the buffer itself, two shared variables record the number of values added to the buffer and the number removed from it.
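A sketch of the scheme in Python (the counters and names are mine; the await statements are emulated with a condition variable as in the earlier sketch):

    import threading

    N = 3                            # buffer size
    buf = [None] * N                 # the bounded buffer
    added = 0                        # number of values added to the buffer
    removed = 0                      # number of values removed from the buffer
    cond = threading.Condition()

    def produce(values):
        global added
        for v in values:
            with cond:
                cond.wait_for(lambda: added - removed < N)   # await: buffer not full
                buf[added % N] = v
                added += 1
                cond.notify_all()

    def consume(count, out):
        global removed
        for _ in range(count):
            with cond:
                cond.wait_for(lambda: added - removed > 0)   # await: buffer not empty
                out.append(buf[removed % N])
                removed += 1
                cond.notify_all()

    data, result = list(range(10)), []
    p = threading.Thread(target=produce, args=(data,))
    c = threading.Thread(target=consume, args=(len(data), result))
    p.start(); c.start(); p.join(); c.join()
    print(result == data)            # True: the values pass through the buffer in order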

C. Implementing semaphores. In his article on the THE multiprogramming system,[4] Dijkstra introduces the semaphore s as a synchronization primitive: s is an integer variable that can be referenced in only two ways, shown below; each is an indivisible operation:

1. P(s): Decrease s by 1. If now s < 0, suspend the process and put it on a list of suspended processes associated with s.

2. V(s): Increase s by 1. If now s ≤ 0, remove one of the processes from the list of suspended processes associated with s, so that its dynamic progress is again permissible.

Owicki-Gries gives an implementation of P(s) and V(s) in terms of await statements.


In that implementation, an array records, for every process, whether it is waiting because it has been suspended; initially no process is recorded as suspended. One could change the implementation to always waken the longest-suspended process.
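As a hedged sketch (not the article's implementation, which also models the list of suspended processes), P and V can be written in the await style, P(s) = await s > 0 then s := s - 1 and V(s) = s := s + 1 as one indivisible action, and emulated in Python with a condition variable. In this style s never becomes negative; the suspended processes are implicit in the waiting threads.

    import threading

    class Sem:
        def __init__(self, initial):
            self.s = initial
            self.cond = threading.Condition()

        def P(self):
            with self.cond:
                self.cond.wait_for(lambda: self.s > 0)   # await s > 0 ...
                self.s -= 1                              # ... then s := s - 1

        def V(self):
            with self.cond:
                self.s += 1                              # s := s + 1, indivisibly
                self.cond.notify()                       # let one waiting process proceed

    # Typical use for mutual exclusion:
    #     mutex = Sem(1)
    #     mutex.P(); ...critical section...; mutex.V()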

D. On-the-fly garbage collection. At the 1975 Summer School Marktoberdorf, Dijkstra discussed an on-the-fly garbage collector as an exercise in understanding parallelism. The data structure used in a conventional implementation of LISP is a directed graph in which each node has at most two outgoing edges, either of which may be missing: an outgoing left edge and an outgoing right edge. All nodes of the graph must be reachable from a known root. Changing a node may result in unreachable nodes, which can no longer be used and are called garbage. An on-the-fly garbage collector has two processes: the program itself and a garbage collector, whose task is to identify garbage nodes and put them on a free list so that they can be used again.

Gries felt that interference freedom could be used to prove the on-the-fly garbage collector correct. With help from Dijkstra and Hoare, he was able to give a presentation at the end of the Summer School, which resulted in an article in CACM.[15]

E. Verification of readers/writers solution with semaphores. Courtois et al.[16] use semaphores to give solutions to two versions of the readers/writers problem, without proof. Write operations block both reads and writes, but read operations can occur in parallel. Owicki[17] provides a proof.

F. Peterson's algorithm, a solution to the 2-process mutual exclusion problem, was published by Peterson in a 2-page article.[18] Schneider and Andrews provide a correctness proof.[19]
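A Python sketch of Peterson's algorithm (the standard textbook form, not taken from the cited proof; it assumes sequentially consistent memory, which holds for CPython threads but would require fences on real hardware):

    import threading

    flag = [False, False]    # flag[i]: process i wants to enter its critical section
    turn = 0                 # tie-breaker: which process must wait if both want in
    counter = 0              # shared variable updated only inside the critical section

    def process(i):
        global turn, counter
        other = 1 - i
        for _ in range(10000):
            flag[i] = True
            turn = other                          # politely let the other go first
            while flag[other] and turn == other:
                pass                              # busy-wait
            counter += 1                          # critical section
            flag[i] = False                       # exit protocol

    threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)           # 20000: no update is lost, thanks to mutual exclusion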

Dependencies on interference freedom

The image below, by Ilya Sergey, depicts the flow of ideas that have been implemented in logics that deal with concurrency; at the root is interference freedom. Below, we summarize the major advances.

Historical graph of program logics for interference freedom
  • Rely-Guarantee. 1981. Interference freedom is not compositional. Cliff Jones[20][21] recovers compositionality by abstracting interference into two new predicates in a spec: a rely condition records what interference a thread must be able to tolerate, and a guarantee condition sets an upper bound on the interference that the thread can inflict on its sibling threads. Xu et al.[22] observe that Rely-Guarantee is a reformulation of interference freedom; revealing the connection between these two methods, they say, offers a deep understanding of the verification of shared-variable programs.
  • CSL. 2004. Separation logic supports local reasoning, whereby specifications and proofs of a program component mention only the portion of memory used by the component. Concurrent separation logic (CSL) was originally proposed by Peter O'Hearn.[23][24] We quote from O'Hearn:[23] "the Owicki-Gries method[2] involves explicit checking of non-interference between program components, while our system rules out interference in an implicit way, by the nature of the way that proofs are constructed."
  • Deriving concurrent programs. 2005-2007. Feijen and van Gasteren[25] show how to use Owicki-Gries to design concurrent programs, but the lack of a theory of progress means that designs are driven only by safety requirements. Dongol, Goldson, Mooij, and Hayes have extended this work to include a "logic of progress" based on Chandy and Misra's language Unity, molded to fit a sequential programming model. Dongol and Goldson[26] describe their logic of progress. Goldson and Dongol[27] show how this logic is used to improve the process of designing programs, using Dekker's algorithm for two processes as an example. Dongol and Mooij[28] present more techniques for deriving programs, using Peterson's mutual exclusion algorithm as one example. Dongol and Mooij[29] show how to reduce the calculational overhead in formal proofs and derivations and derive Dekker's algorithm again, leading to some new and simpler variants of the algorithm. Mooij[30] studies calculational rules for Unity's leads-to relation. Finally, Dongol and Hayes[31] provide a theoretical basis for, and prove soundness of, the logic of progress.
  • OGRA. 2015. Lahav and Vafeiadis strengthen the interference freedom check to produce (we quote from the abstract) "OGRA, a program logic that is sound for reasoning about programs in the release-acquire fragment of the C11 memory model." They provide several examples of its use, including an implementation of the RCU synchronization primitives.[32]
  • Quantum programming. 2018. Ying et al.[33] extend interference freedom to quantum programming. Difficulties they face include intertwined nondeterminism: nondeterminism arising from quantum measurements and nondeterminism introduced by parallelism, occurring at the same time. The authors formally verify Bravyi, Gosset, and König's parallel quantum algorithm for a linear-algebra problem, giving, they say, for the first time an unconditional proof of a computational quantum advantage.
  • POG. 2020. Raad et al. present POG (Persistent Owicki-Gries), the first program logic for reasoning about non-volatile memory technologies, specifically the Intel-x86 architecture.[34]

Texts that discuss interference freedom

  • On A Method of Multiprogramming, 1999.[25] Van Gasteren and Feijen base the formal development of concurrent programs entirely on the idea of interference freedom.
  • On Concurrent Programming, 1997.[35] Schneider uses interference freedom as the main tool in developing and proving concurrent programs. A connection to temporal logic is given, so arbitrary safety and liveness properties can be proven. Control predicates obviate the need for auxiliary variables for reasoning about program counters.
  • Verification of Sequential and Concurrent Programs, 1991,[36] 2009.[37] This text by Apt et al., the first to cover verification of structured concurrent programs, has gone through several editions over several decades.
  • Concurrency Verification: Introduction to Compositional and Non-Compositional Methods, 2001.[38] De Roever et al. provide a systematic and comprehensive introduction to compositional and non-compositional proof methods for the state-based verification of concurrent programs.

Implementations of interference freedom

  • 1999: Nipkow and Nieto present the first formalization of interference freedom and its compositional version, the rely-guarantee method, in a theorem prover: Isabelle/HOL.[39][40]
  • 2005: Ábrahám's PhD thesis provides a way to prove multithreaded Java programs correct in three steps: (1) Annotate the program to produce a proof outline, (2) Use their tool Verger to automatically create verification conditions, and (3) Use the theorem prover PVS to prove the verification conditions interactively.[41][42]
  • 2017: Denissen[43] reports on an implementation of Owicki-Gries in the verification-ready programming language Dafny.[44] Denissen remarks on the ease of use of Dafny and of his extension to it, which makes it well suited for teaching students about interference freedom. Its simplicity and intuitiveness outweigh the drawback of being non-compositional. He lists some twenty institutions that teach interference freedom.
  • 2017: Amani et al. combine the approaches of Hoare-Parallel, a formalisation of Owicki-Gries in Isabelle/HOL for a simple while-language, and SIMPL, a generic language embedded in Isabelle/HOL, to allow formal reasoning about C programs.[45]
  • 2022: Dalvandi et al. introduce the first deductive verification environment in Isabelle/HOL for C11-like weak memory programs, building on Nipkow and Nieto's encoding of Owicki–Gries in the Isabelle theorem prover.[46]
  • 2022: A webpage[47] describes the Civl verifier for concurrent programs and gives instructions for installing it. Civl is built on top of Boogie, a verifier for sequential programs. Kragl et al.[48] describe how interference freedom is achieved in Civl using their new specification idiom, yield invariants. One can also write specs in the rely-guarantee style. Civl offers a combination of linear typing and logic that allows economical and local reasoning about disjointness (like separation logic). Civl is the first system that offers refinement reasoning on structured concurrent programs.
  • 2022: Esen and Rümmer developed TRICERA,[49] an automated open-source verification tool for C programs. It is based on the concept of constrained Horn clauses, and it handles programs operating on the heap using a theory of heaps. A web interface to try it online is available. To handle concurrency, TRICERA uses a variant of the Owicki-Gries proof rules, with explicit variables added to represent time and clocks.

References


  1. Template:Cite thesis
  2. Template:Cite journal
  3. Template:Cite journal
  4. Template:Citation
  5. Template:Cite web
  6. Template:Cite web
  7. Template:Cite journal
  8. Template:Cite book
  9. Template:Cite journal
  10. Template:Cite web
  11. Template:Cite book
  12. Template:Cite journal
  13. Template:Cite book
  14. Template:Cite journal
  15. Template:Cite journal
  16. Template:Cite journal
  17. Template:Cite tech report
  18. Template:Cite journal
  19. Template:Cite conference
  20. Template:Cite thesis
  21. Template:Cite conference
  22. Template:Cite journal
  23. Template:Cite conference
  24. Template:Cite journal
  25. Template:Cite book
  26. Template:Cite journal
  27. Template:Cite conference
  28. Template:Cite conference
  29. Template:Cite journal
  30. Template:Cite conference
  31. Template:Cite tech report
  32. Template:Cite conference
  33. Template:Cite arXiv
  34. Template:Cite conference
  35. Template:Cite book
  36. Template:Cite book
  37. Template:Cite book
  38. Template:Cite book
  39. Template:Cite thesis
  40. Template:Cite conference
  41. Template:Cite thesis
  42. Template:Cite journal
  43. Template:Cite thesis
  44. Template:Cite web
  45. Template:Cite conference
  46. Template:Cite journal
  47. Template:Cite web
  48. Template:Cite conference
  49. Template:Cite conference