always react in bounded time and arrive at the same final finite memory state. Conceiving a formal semantics for Céu leads to a number of capabilities and outcomes as follows:

1. Understanding, finding corner cases, and stabilizing the language. After the semantics was complete and discussed at length in our group, we found a critical bug in the order of execution of statements.
2. Explaining core aspects of the language in a reduced, precise, and unambiguous way. This is particularly important when comparing languages that are similar on the surface (e.g., Céu and Esterel).
3. Implementing the language. A small-step operational semantics describes an abstract machine that is close to a concrete implementation. The current implementation of the Céu scheduler is based on the formal semantics presented in this paper.
4. Proving properties for particular or all programs in the language. For instance, in this work, we prove that all programs in Céu are memory bounded, deterministic, and react in finite time.

The last item is particularly important in the context of constrained embedded systems:

Memory Boundedness: At compile time, we can ensure that the program fits in the device's restricted memory and that it will not grow unexpectedly during runtime.

Deterministic Execution: We can simulate an entire program execution providing an input history with the guarantee that it will always have the same behavior. This can be done in negligible time in a controlled development environment before deploying the application to the actual device (e.g., by multiple developers on standard desktops).

Terminating Reactions: Real-time applications must guarantee responses within specified deadlines. A terminating semantics enforces upper bounds for all reactions and guarantees that programs always progress with time.

2 Céu

Céu [17, 19] is a synchronous reactive language in which programs evolve in a sequence of discrete reactions to external events. It is designed for control-intensive applications, supporting concurrent lines of execution, known as trails, and instantaneous broadcast communication through events. Computations within a reaction (such as expressions, assignments, and system calls) are also instantaneous considering the synchronous hypothesis [9]. Céu provides an await statement that blocks the current running trail, allowing the program to execute its other trails; when all trails are blocked, the reaction terminates and control returns to the environment.

In Céu, every execution path within loops must contain at least one await statement to an external input event [6, 19]. This restriction, which is statically checked by the compiler, ensures that every reaction runs in bounded time, eventually terminating with all trails blocked in await statements. Céu has an additional restriction, which it shares with Esterel and synchronous languages in general [4]: computations that take a non-negligible time to run (e.g., cryptography or image processing algorithms) violate the zero-delay hypothesis, and thus cannot be directly implemented.

Listing 1 shows a compact reference of Céu.

    // Declarations
    input ⟨type⟩ ⟨id⟩;    // declares an external input event
    event ⟨type⟩ ⟨id⟩;    // declares an internal event
    var ⟨type⟩ ⟨id⟩;      // declares a variable

    // Event handling
    ⟨id⟩ = await ⟨id⟩;    // awaits event and assigns the received value
    emit ⟨id⟩(⟨expr⟩);    // emits event passing a value

    // Control flow
    ⟨stmt⟩ ; ⟨stmt⟩                        // sequence
    if ⟨expr⟩ then ⟨stmt⟩ else ⟨stmt⟩ end  // conditional
    loop do ⟨stmt⟩ end                     // repetition
    every ⟨id⟩ in ⟨id⟩ do ⟨stmt⟩ end       // event iteration
    finalize [⟨stmt⟩] with ⟨stmt⟩ end      // finalization

    // Logical parallelism
    par/or do ⟨stmt⟩ with ⟨stmt⟩ end       // aborts if any side ends
    par/and do ⟨stmt⟩ with ⟨stmt⟩ end      // terminates if all sides end

    // Assignment & Integration with C
    ⟨id⟩ = ⟨expr⟩;        // assigns a value to a variable
    _⟨id⟩(⟨exprs⟩)        // calls a C function (id starts with ‘_’)

Listing 1. The concrete syntax of Céu.

Listing 2 shows a complete example in Céu that toggles a LED whenever a radio packet is received, terminating on a button press always with the LED off. The program first declares BUTTON and RADIO_RECV as input events (ln. 1–2). Then, it uses a par/or composition to run two activities in parallel: a single-statement trail that waits for a button press before terminating (ln. 4), and an endless loop that toggles the LED on and off on radio receives (ln. 9–14). The finalize clause (ln. 6–8) ensures that, no matter how its enclosing trail terminates, the LED will be unconditionally turned off (ln. 7).

The par/or composition, which stands for parallel-or, provides an orthogonal abortion mechanism [4] in which its composed trails do not know when and how they are aborted (i.e., abortion is external to them). The finalization mechanism extends orthogonal abortion to also work with activities that use stateful resources from the environment (such as files and network handlers), as we discuss in Section 2.3.

In Céu, any identifier prefixed with an underscore (e.g., _led) is passed unchanged to the underlying C compiler.
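To make the trail model concrete, the sketch below mimics the structure described for Listing 2 in plain Python (a hypothetical illustration, not the Céu implementation or its generated code): generators play the role of trails and yield the name of the event they await, while a finalizer function stands in for the finalize clause.

```python
led = {"on": False}   # stands in for the device LED

def button_trail():
    yield "BUTTON"                      # await BUTTON, then terminate

def radio_trail():
    while True:                         # toggle the LED on radio packets
        yield "RADIO_RECV"
        led["on"] = not led["on"]

def par_or(trails, events, finalizer):
    """Deliver each external event to the awaiting trails, in lexical
    order; when any trail terminates, finalize and abort the whole set."""
    state = [(t, next(t)) for t in trails]          # boot reaction
    for ev in events:
        for i, (t, awaited) in enumerate(state):
            if awaited == ev:
                try:
                    state[i] = (t, t.send(None))    # run until next await
                except StopIteration:
                    finalizer()                     # orthogonal abortion
                    return

par_or([button_trail(), radio_trail()],
       ["RADIO_RECV", "RADIO_RECV", "RADIO_RECV", "BUTTON"],
       finalizer=lambda: led.update(on=False))
assert not led["on"]                    # LED is always off on termination
```

A real par/or also aborts the other composed trails at the rejoin point; here, returning from par_or simply drops the remaining generators.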
A Memory-Bounded, Deterministic and Terminating Semantics for Céu LCTES’18, June 2018, Philadelphia, USA
2.1 External and Internal Events

Céu defines time as a discrete sequence of reactions to unique external input events received from the environment. Each input event delimits a new logical unit of time that triggers an associated reaction. The life-cycle of a program in Céu can be summarized as follows [19]:

(i) The program initiates a "boot reaction" in a single trail (parallel constructs may create new trails).
(ii) Active trails execute until they await or terminate, one after another. This step is called a reaction chain, and always runs in bounded time.
(iii) When all trails are blocked, the program goes idle and the environment takes control.
(iv) On the occurrence of a new external input event, the environment awakes all trails awaiting that event, and the program goes back to step (ii).

A program must react to an event completely before handling the next one. By the synchronous hypothesis, the time the program spends in step (ii) is conceptually zero (in practice, negligible). Hence, from the point of view of the environment, the program is always idle on step (iii). In practice, if a new external input event occurs while a reaction executes, the event is saved on a queue, which effectively schedules it to be processed in a subsequent reaction.

Figure 1. The discrete notion of time in Céu.

Unique input events imply mutually exclusive reactions, which execute atomically and never overlap. Automatic mutual exclusion is a prerequisite for deterministic reactions, as we discuss in Section 3.

In practice, the synchronous hypothesis for Céu holds if reactions execute faster than the rate of incoming input events. Otherwise, the program would continuously accumulate delays between physical occurrences and actual reactions for the input events. Considering the context of soft real-time systems, postponed reactions might be tolerated as long as they are infrequent and the application does not take too long to catch up with real time. Note that the synchronous semantics is also the norm in typical event-driven systems, such as event dispatching in UI toolkits, game loops in game engines, and clock ticks in embedded systems.

Internal events as subroutines

In Céu, queue-based processing of events applies only to external input events, i.e., events submitted to the program by the environment. Internal events, which are events generated internally by the program via emit statements, are processed in a stack-based manner. Internal events provide fine-grained execution control and, because of their stack-based processing, can be used to implement a limited form of subroutines, as illustrated in Listing 3.

1 The actual implementation of Céu supports a command-line option that accepts a white list of library calls. If a program tries to use a function not in the list, the compiler raises an error.
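The life-cycle (i)–(iv) and the input queue can be pictured with a small scheduler sketch (hypothetical Python; the Scheduler class, the trail helper, and the event names are ours, not part of Céu):

```python
from collections import deque

class Scheduler:
    """Trails are generators that yield the event they are awaiting."""
    def __init__(self, trails):
        self.queue = deque()                          # pending external events
        self.trails = [(t, next(t)) for t in trails]  # (i) boot reaction

    def signal(self, ev):
        self.queue.append(ev)        # events occurring mid-reaction are queued

    def react(self):
        ev = self.queue.popleft()    # (iv) next external input event
        for i, (t, awaited) in enumerate(self.trails):
            if awaited == ev:        # awake the trails awaiting ev, in order
                self.trails[i] = (t, t.send(None))    # (ii) reaction chain
        # (iii) all trails are blocked again: the environment takes over

log = []
def trail(name, evs):
    for e in evs:
        yield e                      # await e
        log.append((name, e))
    while True:
        yield None                   # block forever afterwards

s = Scheduler([trail("t1", ["A", "B"]), trail("t2", ["A"])])
for ev in ["A", "B"]:
    s.signal(ev)
while s.queue:
    s.react()
assert log == [("t1", "A"), ("t2", "A"), ("t1", "B")]
```

Within one reaction the awakened trails run one after another (the reaction chain of step (ii)); an event signaled while react is running would simply sit in the queue until the next step, mirroring the queue-based treatment of external inputs.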
LCTES’18, June 2018, Philadelphia, USA R. C. M. Santos, G. F. Lima, F. Sant’Anna, R. Ierusalimschy, and E. H. Haeusler
    1  event int* inc;        // declares subroutine "inc"
    2  par/or do
    3     var int* p;
    4     every p in inc do   // implements "inc" through event iterator
    5        *p = *p + 1;
    6     end
    7  with
    8     var int v = 1;
    9     emit inc(&v);       // calls "inc"
    10    emit inc(&v);       // calls "inc"
    11    _assert(v==3);      // asserts result after the two returns
    12 end

Listing 3. A "subroutine" that increments its argument.

In the example, the "subroutine" inc is defined as an event iterator (ln. 4–6) that continuously awaits its identifying event (ln. 4), and increments the value passed by reference (ln. 5). A trail in parallel (ln. 8–11) invokes the subroutine through two consecutive emit statements (ln. 9–10). Given the stack-based execution for internal events, as the first emit executes, the calling trail pauses (ln. 9), the subroutine awakes (ln. 4), runs its body (yielding v=2), iterates, and awaits the next "call" (ln. 4, again). Only after this sequence does the calling trail resume (ln. 9), make a new invocation (ln. 10), and pass the assertion test (ln. 11).

Céu also supports nested emit invocations, e.g., the body of the subroutine inc (ln. 5) could emit an event targeting another subroutine, creating a new level in the stack. We can think of the stack as a record of the nested, fine-grained internal reactions that happen inside the same outer reaction to a single external event.

This form of subroutines has a significant limitation: it cannot express recursion, since an emit to itself is always ignored, as a running trail cannot be waiting on itself. That being said, it is this very limitation that brings important safety properties to subroutines. First, they are guaranteed to react in bounded time. Second, memory for locals is also bounded, not requiring data stacks.

At first sight, the constructions "every e do ⟨· · ·⟩ end" and "loop do await e; ⟨· · ·⟩ end" seem to be equivalent. However, the loop variation would not compile, since it does not contain an external input await (e is an internal event). The every variation compiles because event iterators have an additional syntactic restriction: they cannot contain break or await statements. This restriction guarantees that an iterator never terminates from itself and, thus, always awaits its identifying event, essentially behaving as a safe blocking point in the program. For this reason, the restriction that execution paths within loops must contain at least one external await is extended to alternatively contain an every statement.

2.2 Shared-Memory Concurrency

Embedded applications make extensive use of global memory and shared resources, such as through memory-mapped registers and system calls to device drivers. Hence, an important goal of Céu is to ensure a reliable behavior for programs with concurrent lines of execution sharing memory and interacting with the environment.

    1  input void A;              1  input void A;
    2  input void B;              2
    3  var int x = 1;             3  var int y = 1;
    4  par/and do                 4  par/and do
    5     await A;                5     await A;
    6     x = x + 1;              6     y = y + 1;
    7  with                       7  with
    8     await B;                8     await A;
    9     x = x * 2;              9     y = y * 2;
    10 end                        10 end

    [a] Accesses to x are never   [b] Accesses to y are concurrent
    concurrent.                   but still deterministic.

Figure 2. Shared-memory concurrency in Céu: [a] is safe because the trails access x atomically in different reactions; [b] is unsafe because both trails access y in the same reaction.

In Céu, when multiple trails are active in the same reaction, they are scheduled in lexical order, i.e., in the order they appear in the program source code. For instance, consider the examples in Figure 2, both defining shared variables (ln. 3), and assigning to them in parallel trails (ln. 6, 9).

In [a], the two assignments to x can only execute in reactions to different events A and B (ln. 5, 8), which cannot occur simultaneously by definition. Hence, for the sequence of events A->B, x becomes 4 ((1+1)*2), while for B->A, x becomes 3 ((1*2)+1).

In [b], the two assignments to y are simultaneous because they execute in reaction to the same event A. Since Céu employs lexical order for intra-reaction statements, the execution is still deterministic, and y always becomes 4 ((1+1)*2).

However, note that an apparently innocuous change in the order of trails modifies the behavior of the program. To mitigate this threat, Céu performs concurrency checks at compile time to detect conflicting accesses to shared variables: if a variable is written in a trail segment, then a concurrent trail segment cannot access that variable [19]. Nonetheless, the static checks are optional and are not a prerequisite for the deterministic semantics of the language.

2.3 Abortion and Finalization

The par/or of Céu is an orthogonal abortion mechanism because the two sides of the composition need not be tweaked with synchronization primitives or state variables to affect each other. In addition, abortion is immediate in the sense that it executes atomically inside the current micro reaction. Immediate orthogonal abortion is a distinctive feature of synchronous languages and cannot be expressed effectively in traditional (asynchronous) multi-threaded languages [4, 14].

However, aborting lines of execution that deal with external resources may lead to inconsistencies.
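Because internal events are processed on a stack, an emit behaves much like an ordinary function call. The sketch below (hypothetical Python; every and emit here are stand-ins for the Céu primitives, not their implementation) replays the inc example of Listing 3, with the Python call stack playing the role of the event stack:

```python
subroutines = {}                 # event id -> body of its event iterator

def every(event):
    """Register the decorated function as the handler awaiting `event`."""
    def register(body):
        subroutines[event] = body
        return body
    return register

def emit(event, arg):
    # Stack-based processing: the emitter is suspended (this call frame),
    # the awaiting iterator runs to its next await, then the emitter resumes.
    subroutines[event](arg)

@every("inc")
def inc(p):                      # implements "inc": *p = *p + 1
    p[0] += 1

v = [1]                          # a one-slot list plays the role of &v
emit("inc", v)                   # first "call": v becomes 2
emit("inc", v)                   # second "call": v becomes 3
assert v[0] == 3                 # mirrors the _assert(v==3) in Listing 3
```

Nested emits become nested calls, one stack level per internal reaction; recursion is impossible in Céu itself because a running iterator cannot be awaiting its own identifying event.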
For this reason, Céu provides a finalize construct to unconditionally execute a series of statements even if the enclosing block is externally aborted.

    1  par/or do                  1  par/or do
    2     var _msg_t msg;         2     var _FILE* f;
    3     ⟨· · ·⟩ // prepare msg  3     finalize
    4     finalize                4        f = _fopen(. . .);
    5        _send(&msg);         5     with
    6     with                    6        _fclose(f);
    7        _cancel(&msg);       7     end
    8     end                     8     _fwrite(. . ., f);
    9     await SEND_ACK;         9     await A;
    10 with                       10    _fwrite(. . ., f);
    11    ⟨· · ·⟩                 11 with
    12 end                        12    ⟨· · ·⟩
                                  13 end

    [a] Local resource            [b] External resource
    finalization.                 finalization.

Figure 3. Céu enforces the use of finalization to prevent dangling pointers and memory leaks.

Céu also enforces, at compile time, the use of finalize for system calls that deal with pointers representing resources, as illustrated in the two examples of Figure 3. If Céu passes a pointer to a system call (ln. [a]:5), the pointer represents a local resource (ln. [a]:2) that requires finalization (ln. [a]:7). If Céu receives a pointer from a system call return (ln. [b]:4), the pointer represents an external resource (ln. [b]:2) that requires finalization (ln. [b]:6).

Céu tracks the interaction of system calls with pointers and requires finalization clauses to accompany them. In Figure 3.a, the local variable msg (ln. 2) is an internal resource passed as a pointer to _send (ln. 5), which is an asynchronous call that transmits the buffer in the background. If the block aborts (ln. 11) before receiving an acknowledge from the environment (ln. 9), the local msg goes out of scope and the external transmission now holds a dangling pointer. The finalization ensures that the transmission also aborts (ln. 7). In Figure 3.b, the call to _fopen (ln. 4) returns an external file resource as a pointer. If the block aborts (ln. 12) during the await A (ln. 9), the file remains open as a memory leak. The finalization ensures that the file closes properly (ln. 6). In both cases, the code does not compile without the finalize construct.²

The finalization mechanism of Céu is fundamental to preserve the orthogonality of the par/or construct, since the clean-up code is encapsulated in the aborted trail itself.

3 Formal Semantics

In this section, we introduce a reduced, abstract syntax for Céu and present an operational semantics that formalizes the behavior of its programs. The semantics deals only with the control aspects of Céu. Side effects and system calls are encapsulated in a memory access primitive and are assumed to behave like in conventional imperative languages.

3.1 Abstract Syntax

The grammar below defines the syntax of a subset of Céu that is sufficient to describe all peculiarities of the language.

    P ::= mem(id)                      any memory access to "id"
        | awaitext(id)                 await external event "id"
        | awaitint(id)                 await internal event "id"
        | emitint(id)                  emit internal event "id"
        | break                        loop escape
        | if mem(id) then P1 else P2   conditional
        | P1 ; P2                      sequence
        | loop P1                      repetition
        | every id P1                  event iteration
        | P1 and P2                    par/and
        | P1 or P2                     par/or
        | fin P                        finalization
        | P1 @loop P2                  unwinded loop
        | P1 @and P2                   unwinded par/and
        | P1 @or P2                    unwinded par/or
        | @canrun(n)                   can run on stack level n
        | @nop                         terminated program

The mem(id) primitive represents an assignment or system call that affects the memory location identified by id.³ Following the synchronous hypothesis, mem statements and expressions are considered to be atomic and instantaneous.

Most statements in the abstract language are mapped to their counterparts in the concrete language. The exceptions are the finalization block fin p and the @-statements, which do not exist in the concrete language. (These result from the expansion of the transition rules to be presented.)

A further difference between the concrete and abstract languages regards the emit-await pair. In the concrete language, the await can be used as an expression which evaluates to the value stored in the emitted event, e.g., an "emit a(10)" awakes a "v=await a", setting variable v to 10. Although the abstract awaitint and emitint do not support such communication of values, it can be easily simulated: one can use a shared variable to hold the value of an emitint and access it after the corresponding awaitint awakes.

Finally, a "finalize A with B end; C" in the concrete language is equivalent to "A; ((fin B) or C)" in the abstract language. In the concrete language, A and C execute in sequence, and the finalization code B is implicitly suspended waiting for C to terminate. In the abstract language, "fin B" suspends forever when reached (it is an awaiting statement that never awakes). Hence, we need an explicit or to execute C in parallel, whose termination aborts "fin B", which finally causes B to execute (by the semantic rules below).

2 The compiler only forces the programmer to write the finalization clause, but cannot check if it actually handles the resource properly.
3 Although the same symbol id is used in their definition, mem, awaitext, awaitint and emitint do not share identifiers: any identifier is either a variable, an external event, or an internal event.
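The grammar above transcribes directly into algebraic data types. The following sketch encodes it as Python dataclasses (an illustration of the abstract syntax only; the class names are ours, and the @-statements, which only arise while the semantics unwinds a program, are omitted):

```python
from dataclasses import dataclass

@dataclass
class Mem:                # mem(id): any memory access to "id"
    id: str

@dataclass
class AwaitExt:           # await_ext(id): await external event "id"
    id: str

@dataclass
class AwaitInt:           # await_int(id): await internal event "id"
    id: str

@dataclass
class EmitInt:            # emit_int(id): emit internal event "id"
    id: str

@dataclass
class Break:              # break: loop escape
    pass

@dataclass
class If:                 # if mem(id) then p1 else p2
    cond: Mem
    p1: object
    p2: object

@dataclass
class Seq:                # p1 ; p2
    p1: object
    p2: object

@dataclass
class Loop:               # loop p1
    p1: object

@dataclass
class Every:              # every id p1
    id: str
    p1: object

@dataclass
class And:                # p1 and p2 (par/and)
    p1: object
    p2: object

@dataclass
class Or:                 # p1 or p2 (par/or)
    p1: object
    p2: object

@dataclass
class Fin:                # fin p (finalization)
    p: object

@dataclass
class Nop:                # @nop: terminated program
    pass

# "finalize A with B end; C" in the concrete language desugars to
# "A; ((fin B) or C)" in the abstract one:
desugared = Seq(Mem("A"), Or(Fin(Mem("B")), Mem("C")))
```

The desugared value mirrors the equivalence stated in the text: B is guarded by fin, and the explicit or lets C's termination abort it.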
The main result of this section, Theorem 3.3, establishes that any given number i ≥ 0 of applications of arbitrary transition rules, starting from the same input description, will always lead to the same output description. In other words, any finite sequence of transitions behaves deterministically.

Theorem 3.3 (Determinism). δ −→ⁱ δ1 and δ −→ⁱ δ2 implies δ1 = δ2.

If ⟨p2, n, e⟩ −→ⁱnst ⟨p2′, n, e′⟩ then, for any p1 such that isblocked(p1, n):

    (a) ⟨p1 @and p2, n, e⟩ −→ⁱnst ⟨p1 @and p2′, n, e′⟩.
    (b) ⟨p1 @or p2, n, e⟩ −→ⁱnst ⟨p1 @or p2′, n, e′⟩.

4 We often abbreviate "p = @nop or p = break" as "p = @nop, break".
The syntactic restriction discussed in Section 2.1, regarding the body of loops, and the restriction mentioned earlier in this section, regarding the body of fin statements, are formalized in Assumption 3.7 below. These restrictions are essential to prove the next theorem, Theorem 3.8.

Assumption 3.7 (Syntactic restrictions).
(a) If p = fin p1 then p1 contains no occurrences of awaitext, awaitint, every, or fin expressions. Consequently, for any n, ⟨clear(p1), n, ε⟩ −→*nst ⟨@nop, n, ε⟩.
(b) If p = loop p1 then all execution paths of p1 contain a matching break or an awaitext. Consequently, for all n, there are p1′ and e such that ⟨loop p1, n, ε⟩ −→*nst ⟨p1′, n, e⟩.

More formally, pot(p, e) = pot′(bcast(p, e)), where pot′ is an auxiliary function that counts the maximum number of reachable emitint expressions in the program resulting from the broadcast of event e to p. The auxiliary function pot′ is defined by clauses (a)–(i).

Definition 3.10. The rank of a description δ = ⟨p, n, e⟩, denoted rank(δ), is a pair of nonnegative integers ⟨i, j⟩ such that i = pot(p, e), and j = n if e = ε, or j = n + 1 otherwise.

Definition 3.11. A description δ is irreducible (in symbols, δ#) iff it is nested-irreducible and its rank(δ) is ⟨i, 0⟩, for some i ≥ 0.

Proof. By induction on the structure of −→nst derivations. Let δ = ⟨p, n, e⟩ and δ′ = ⟨p′, n′, e′⟩ be descriptions with rank(δ) = ⟨i, j⟩ and rank(δ′) = ⟨i′, j′⟩ such that δ −→nst δ′, and let π be a derivation whose conclusion is δ −→nst δ′.

Case 3.1. pot′(p) = pot′(p1). Then, by Definition ??, every execution path of p1 contains a (matching) break or awaitext expression. A single −→nst cannot terminate the loop, since p1 ≠ break, nor can it consume an awaitext, which means that all execution paths in p1′ still contain a break or awaitext. Hence pot′(p′) = pot′(p1′). There are two subcases.

Case 3.1.1. e′ = ε. Then rank(δ) = ⟨pot′(p1), n⟩ and rank(δ′) = ⟨pot′(p1′), n⟩. From (1), by the induction hypothesis, pot′(p1) ≥ pot′(p1′). Thus rank(δ) ≥ rank(δ′).

Case 3.1.2. e′ ≠ ε. Then π′ contains one application of emit-int, which consumes one emitint(e′) expression from p1 and implies pot′(p1) > pot′(p1′). Thus

    rank(δ) = ⟨pot′(p1), n⟩ > ⟨pot′(p1′), n + 1⟩ = rank(δ′).

Case 3.1.3. pot′(p) = pot′(p1) + pot′(p2). Then some execution path in p1 does not contain a (matching) break or awaitext expression. Since p1 ≠ @nop, a single −→nst cannot restart the loop, which means that p1′ still contains some execution path in which a break or awaitext does not occur. Hence pot′(p′) = pot′(p1′) + pot′(p2). The rest of this proof is similar to that of Case 3.1. □

The next theorem is a generalization of Lemma 3.13 for −→*nst transitions. Its proof is by induction on i.

Theorem 3.14. If δ −→*nst δ′ then rank(δ) ≥ rank(δ′).

We now state and prove the main result of this section, Theorem 3.15, viz., the termination theorem for −→*. The idea of the proof is that a sufficiently long sequence of −→nst and −→out transitions eventually decreases the rank of the current description until an irreducible description is reached. This irreducible description is the final result of the reaction.

Theorem 3.15 (Termination). For any δ, there is a δ#′ such that δ −→* δ#′.

Proof. By lexicographic induction on rank(δ). Let δ = ⟨p, n, e⟩ and rank(δ) = ⟨i, j⟩.

Basis. If ⟨i, j⟩ = ⟨0, 0⟩ then δ cannot be advanced by −→out, as j = 0 implies e = ε and n = 0 (neither push nor pop can be applied). There are two possibilities: either δ is nested-irreducible or it is not. In the first case, the theorem is trivially true, as δ −→⁰nst δ#nst. Suppose δ is not nested-irreducible. Then, by Theorem 3.8, δ −→*nst δ′#nst for some δ′#nst. By Theorem 3.14, ⟨i, j⟩ = ⟨0, 0⟩ ≥ rank(δ′), which implies rank(δ′) = ⟨0, 0⟩.

Induction. Let ⟨i, j⟩ ≠ ⟨0, 0⟩. There are two subcases, depending on whether or not δ is nested-irreducible.

Case 1. δ is nested-irreducible. If j = 0 then, by Definition 3.11, δ#, and thus δ −→⁰ δ#. Suppose that is not the case, i.e., that j > 0. Then there are two subcases.

Case 1.1. e ≠ ε. Then, by out and by Theorem 3.8, there are δ1′ and δ′#nst = ⟨p′, n + 1, e′⟩ such that δ −→out δ1′ (an application of push) and δ1′ −→*nst δ′#nst. Thus, by Lemma 3.12 and by Theorem 3.14,

    rank(δ) = rank(δ1′) = ⟨i, j⟩ ≥ rank(δ′) = ⟨i′, j′⟩.

If e′ = ε, then i = i′ and j = j′, and the rest of this proof is similar to that of Case 1.2 below. Otherwise, if e′ ≠ ε then i > i′, since an emitint(e′) was consumed by the nested transitions. Thus, rank(δ) > rank(δ′). By the induction hypothesis, δ′ −→* δ#′′, for some δ#′′. Therefore, δ −→* δ#′′.

Case 1.2. e = ε. Then, since j > 0, δ −→out δ′ (an application of pop), for some δ′. By item (b) of Lemma A.1, rank(δ) > rank(δ′). Hence, by the induction hypothesis, there is a δ#′′ such that δ′ −→* δ#′′. Therefore, δ −→* δ#′′.

Case 2. δ is not nested-irreducible. Then e = ε and, by Theorems 3.8 and 3.14, there is a δ′#nst such that δ −→*nst δ′#nst with rank(δ) ≥ rank(δ′#nst). The rest of this proof is similar to that of Case 1 above. □

3.3.3 Memory Bound

As Céu has no mechanism for heap allocation, unbounded iteration, or general recursion, the maximum memory usage of a given Céu program is determined solely by the length of its code, the number of variables it uses, and the size of the event stack that it requires to run. The code length and the number of variables used are easily determined by code inspection. The maximum size of the event stack during a reaction of program p to external event e corresponds to pot(p, e), i.e., to the maximum number of internal events that p may emit in reaction to e. If p may react to external events e1, …, en then, in the worst case, its event stack will need to store max{pot(p, e1), …, pot(p, en)} events.

4 Related Work

Céu follows the lineage of imperative synchronous languages initiated by Esterel [8]. These languages typically define time as a discrete sequence of logical "ticks" in which multiple simultaneous input events can be active [17]. The presence of multiple inputs requires careful static analysis to detect and reject programs with causality cycles and schizophrenia problems [5]. In contrast, Céu defines time as a discrete sequence of reactions to unique input events, which is a prerequisite for the concurrency checks that enable safe shared-memory concurrency, as discussed in Section 2.2.

In most synchronous languages, the behavior of external and internal events is equivalent. However, in Céu, internal events introduce stack-based micro reactions within external reactions, providing more fine-grained control for intra-reaction execution. This allows for memory-bounded subroutines that can execute multiple times during the same external reaction. The synchronous languages Statecharts [21] and Statemate [11] also distinguish internal from external events.
In the former, "reactions to external and internal events (...) can be sensed only after completion of the step". In the latter, "the receiving state (of the internal event) acts here as a function". Although the descriptions suggest a stack-based semantics, we are not aware of formalizations of these ideas that would allow a deeper comparison with Céu.

Like Céu, many other synchronous languages [2, 7, 10, 12, 22] also rely on lexical scheduling to preserve determinism. In contrast, in Esterel, the execution order for operations within a reaction is non-deterministic: "if there is no control dependency, as in (call f1() || call f2()), the order is unspecified and it would be an error to rely on it" [6]. For this reason, Esterel does not support shared-memory concurrency: "if a variable is written by some thread, then it can neither be read nor be written by concurrent threads" [6]. Considering the constant and low-level interactions with the underlying architecture in embedded systems (e.g., direct port manipulation), we believe that it is advantageous to embrace lexical scheduling as part of the language specification, as a pragmatic design decision to enforce determinism. However, since Céu statically detects trails not sharing memory, an optimized scheduler could exploit real parallelism in such cases.

Regarding the integration with C language-based environments, Céu supports a finalization mechanism for external resources. In addition, Céu also tracks pointers representing resources that cross C boundaries and forces the programmer to provide associated finalizers. As far as we know, this extra safety level is unique to Céu among synchronous languages.

5 Conclusion

The programming language Céu aims to offer a concurrent, safe, and realistic alternative to C for embedded soft real-time systems, such as sensor networks and multimedia systems. Céu inherits the synchronous and imperative mindset of Esterel but adopts a simpler semantics with fine-grained execution control, which makes the language fully deterministic. In addition, its stack-based execution for internal events provides a limited but memory-bounded form of subroutines. Céu also provides a finalization mechanism for resources when interacting with the external environment.

In this paper, we proposed a small-step structural operational semantics for Céu and proved that, under it, reactions are deterministic, terminate in finite time, and use bounded memory, i.e., that for a given arbitrary timeline of input events, multiple executions of the same program always react in bounded time and arrive at the same final finite memory state.

References

[1] A. Adya et al. 2002. Cooperative Task Management Without Manual Stack Management. In Proceedings of ATEC'02. USENIX Association, 289–302.
[2] Sidharta Andalam, Partha Roop, and Alain Girault. 2010. Predictable multithreading of embedded applications using PRET-C. In Proceedings of MEMOCODE'10. IEEE, 159–168.
[3] Albert Benveniste, Paul Caspi, Stephen A. Edwards, Nicolas Halbwachs, Paul Le Guernic, and Robert De Simone. 2003. The synchronous languages twelve years later. In Proceedings of the IEEE, Vol. 91. 64–83.
[4] Gérard Berry. 1993. Preemption in Concurrent Systems. In FSTTCS (LNCS), Vol. 761. Springer, 72–93.
[5] Gérard Berry. 1999. The Constructive Semantics of Pure Esterel (draft version 3). Ecole des Mines de Paris and INRIA.
[6] Gérard Berry. 2000. The Esterel-V5 Language Primer. CMA and Inria, Sophia-Antipolis, France. Version 5.10, Release 2.0.
[7] Frédéric Boussinot. 1991. Reactive C: An extension of C to program reactive systems. Software: Practice and Experience 21, 4 (1991), 401–428.
[8] Frédéric Boussinot and Robert De Simone. 1991. The Esterel language. Proc. IEEE 79, 9 (Sep 1991), 1293–1304.
[9] Robert de Simone, Jean-Pierre Talpin, and Dumitru Potop-Butucaru. 2005. The Synchronous Hypothesis and Synchronous Languages. In Embedded Systems Handbook, R. Zurawski (Ed.).
[10] Adam Dunkels, Oliver Schmidt, Thiemo Voigt, and Muneeb Ali. 2006. Protothreads: simplifying event-driven programming of memory-constrained embedded systems. In Proceedings of SenSys'06. ACM, 29–42.
[11] David Harel and Amnon Naamad. 1996. The STATEMATE semantics of statecharts. ACM Transactions on Software Engineering and Methodology 5, 4 (1996), 293–333.
[12] Marcin Karpinski and Vinny Cahill. 2007. High-Level Application Development is Realistic for Wireless Sensor Networks. In Proceedings of SECON'07. 610–619.
[13] Ingo Maier, Tiark Rompf, and Martin Odersky. 2010. Deprecating the observer pattern. Technical Report.
[14] ORACLE. 2011. Java Thread Primitive Deprecation. http://docs.oracle.com/javase/6/docs/technotes/guides/concurrency/threadPrimitiveDeprecation.html (accessed in Aug-2014).
[15] Gordon D. Plotkin. 1981. A Structural Approach to Operational Semantics. Technical Report 19. Computer Science Department, Aarhus University, Aarhus, Denmark.
[16] Guido Salvaneschi et al. 2014. REScala: Bridging between object-oriented and functional style in reactive applications. In Proceedings of Modularity'13. ACM, 25–36.
[17] Francisco Sant'Anna, Roberto Ierusalimschy, Noemi Rodriguez, Silvana Rossetto, and Adriano Branco. 2017. The Design and Implementation of the Synchronous Language Céu. ACM Trans. Embed. Comput. Syst. 16, 4, Article 98 (July 2017), 26 pages. https://doi.org/10.1145/3035544
[18] Francisco Sant'Anna, Noemi Rodriguez, and Roberto Ierusalimschy. 2015. Structured Synchronous Reactive Programming with Céu. In Proceedings of Modularity'15.
[19] Francisco Sant'Anna, Noemi Rodriguez, Roberto Ierusalimschy, Olaf Landsiedel, and Philippas Tsigas. 2013. Safe System-level Concurrency on Resource-Constrained Nodes. In Proceedings of SenSys'13. ACM.
[20] Rodrigo Santos, Guilherme Lima, Francisco Sant'Anna, and Noemi Rodriguez. 2016. Céu-Media: Local Inter-Media Synchronization Using Céu. In Proceedings of WebMedia'16. ACM, New York, NY, USA, 143–150. https://doi.org/10.1145/2976796.2976856
[21] Michael von der Beeck. 1994. A comparison of statecharts variants. In Proceedings of FTRTFT'94. Springer, 128–148.
[22] Reinhard von Hanxleden. 2009. SyncCharts in C: a proposal for lightweight, deterministic concurrency. In Proceedings of EMSOFT'09. ACM, 225–234.
LCTES’18, June 2018, Philadelphia, USA R. C. M. Santos, G. F. Lima, F. Sant’Anna, R. Ierusalimschy, and E. H. Haeusler
A Detailed Proofs

Lemma 3.1. If δ −→out δ1 and δ −→out δ2 then δ1 = δ2.

Proof. The lemma is vacuously true if δ cannot be advanced by −→out transitions. Suppose that is not the case and let δ = ⟨p, n, e⟩, δ1 = ⟨p1, n1, e1⟩, and δ2 = ⟨p2, n2, e2⟩. Then, there are two possibilities.

Case 1. e ≠ ε. Both transitions are applications of push. Hence p1 = p2 = bcast(p, e), n1 = n2 = n + 1, and e1 = e2 = ε.

Case 2. e = ε. Both transitions are applications of pop. Hence p1 = p2 = p, n1 = n2 = n − 1, and e1 = e2 = ε. □

Lemma 3.2. If δ −→nst δ1 and δ −→nst δ2 then δ1 = δ2.

Proof. By induction on the structure of −→nst derivations. The lemma is vacuously true if δ cannot be advanced by −→nst transitions. Suppose that is not the case and let δ = ⟨p, n, e⟩, δ1 = ⟨p1, n1, e1⟩, and δ2 = ⟨p2, n2, e2⟩. Then, by the hypothesis of the lemma, there are derivations π1 and π2 such that

    π1 ⊩ ⟨p, n, e⟩ −→nst ⟨p1, n1, e1⟩
    π2 ⊩ ⟨p, n, e⟩ −→nst ⟨p2, n2, e2⟩

i.e., the conclusion of π1 is ⟨p, n, e⟩ −→nst ⟨p1, n1, e1⟩ and the conclusion of π2 is ⟨p, n, e⟩ −→nst ⟨p2, n2, e2⟩. By definition of −→nst, we have that e = ε and n1 = n2 = n. It remains to be shown that p1 = p2 and e1 = e2. Depending on the structure of program p, the following 11 cases are possible. (Note that p cannot be an awaitext, awaitint, break, every, fin, or @nop statement, as there are no −→nst rules to transition such programs.)

Case 1. p = mem(id). Then derivations π1 and π2 are instances of rule mem, i.e., their conclusions are obtained by an application of this rule. Hence p1 = p2 = @nop and e1 = e2 = ε.

Case 2. p = emitint(e′). Then π1 and π2 are instances of emit-int. Hence p1 = p2 = @canrun(n) and e1 = e2 = e′.

Case 3. p = @canrun(n). Then π1 and π2 are instances of can-run. Hence p1 = p2 = @nop and e1 = e2 = ε.

Case 4. p = if mem(id) then p′ else p″. There are two subcases.
Case 4.1. eval(mem(id)) is true. Then π1 and π2 are instances of if-true. Hence p1 = p2 = p′ and e1 = e2 = ε.
Case 4.2. eval(mem(id)) is false. Similar to Case 4.1.

Case 5. p = p′; p″. There are three subcases.
Case 5.1. p′ = @nop. Then π1 and π2 are instances of seq-nop. Hence p1 = p2 = p″ and e1 = e2 = ε.
Case 5.2. p′ = break. Then π1 and π2 are instances of seq-brk. Hence p1 = p2 = break and e1 = e2 = ε.
Case 5.3. p′ ≠ @nop, break. Then π1 and π2 are instances of seq-adv. Thus there are derivations π1′ and π2′ such that

    π1′ ⊩ ⟨p′, n, ε⟩ −→nst ⟨p1′, n, e1′⟩
    π2′ ⊩ ⟨p′, n, ε⟩ −→nst ⟨p2′, n, e2′⟩

for some p1′, p2′, e1′, and e2′. By the induction hypothesis, p1′ = p2′ and e1′ = e2′. Hence p1 = p1′; p″ = p2′; p″ = p2 and e1 = e1′ = e2′ = e2.

Case 6. p = loop p′. Then π1 and π2 are instances of loop-expd. Hence p1 = p2 = p′ @loop p′ and e1 = e2 = ε.

Case 7. p = p′ @loop p″. There are three subcases.
Case 7.1. p′ = @nop. Then π1 and π2 are instances of loop-nop. Hence p1 = p2 = loop p″ and e1 = e2 = ε.
Case 7.2. p′ = break. Then π1 and π2 are instances of loop-brk. Hence p1 = p2 = @nop and e1 = e2 = ε.
Case 7.3. p′ ≠ @nop, break. Then π1 and π2 are instances of loop-adv. Thus there are derivations π1′ and π2′ such that

    π1′ ⊩ ⟨p′, n, ε⟩ −→nst ⟨p1′, n, e1′⟩
    π2′ ⊩ ⟨p′, n, ε⟩ −→nst ⟨p2′, n, e2′⟩

for some p1′, p2′, e1′, and e2′. By the induction hypothesis, p1′ = p2′ and e1′ = e2′. Hence p1 = p1′ @loop p″ = p2′ @loop p″ = p2 and e1 = e1′ = e2′ = e2.

Case 8. p = p′ and p″. Then π1 and π2 are instances of and-expd. Hence p1 = p2 = p′ @and (@canrun(n); p″) and e1 = e2 = ε.

Case 9. p = p′ @and p″. There are two subcases.
Case 9.1. ¬isblocked(p′, n). There are three subcases.
Case 9.1.1. p′ = @nop. Then π1 and π2 are instances of and-nop1. Hence p1 = p2 = p″ and e1 = e2 = ε.
Case 9.1.2. p′ = break. Then π1 and π2 are instances of and-brk1. Hence p1 = p2 = clear(p″); break and e1 = e2 = ε.
Case 9.1.3. p′ ≠ @nop, break. Then π1 and π2 are instances of and-adv1. Thus there are derivations π1′ and π2′ such that

    π1′ ⊩ ⟨p′, n, ε⟩ −→nst ⟨p1′, n, e1′⟩
    π2′ ⊩ ⟨p′, n, ε⟩ −→nst ⟨p2′, n, e2′⟩

for some p1′, p2′, e1′, and e2′. By the induction hypothesis, p1′ = p2′ and e1′ = e2′. Hence p1 = p1′ @and p″ = p2′ @and p″ = p2 and e1 = e1′ = e2′ = e2.

Case 9.2. isblocked(p′, n). There are three subcases.
Case 9.2.1. p″ = @nop. Then π1 and π2 are instances of and-nop2. Hence p1 = p2 = p′ and e1 = e2 = ε.
Case 9.2.2. p″ = break. Then π1 and π2 are instances of and-brk2. Hence p1 = p2 = clear(p′); break and e1 = e2 = ε.
Case 9.2.3. p″ ≠ @nop, break. Then π1 and π2 are instances of and-adv2. Thus there are derivations π1″ and π2″ such that

    π1″ ⊩ ⟨p″, n, ε⟩ −→nst ⟨p1″, n, e1″⟩
    π2″ ⊩ ⟨p″, n, ε⟩ −→nst ⟨p2″, n, e2″⟩

for some p1″, p2″, e1″, and e2″. By the induction hypothesis, p1″ = p2″ and e1″ = e2″. Hence p1 = p′ @and p1″ = p′ @and p2″ = p2 and e1 = e1″ = e2″ = e2.

Case 10. p = p′ or p″. Then π1 and π2 are instances of or-expd. Hence p1 = p2 = p′ @or (@canrun(n); p″) and e1 = e2 = ε.

Case 11. p = p′ @or p″. Similar to Case 9. □

Theorem 3.4. If δ −→^i δ1 and δ −→^i δ2 then δ1 = δ2.

Proof. By induction on i. The theorem is trivially true if i = 0 and follows directly from Lemmas 3.1 and 3.2 for i = 1. Suppose

    δ −→^1 δ1′ −→^{i−1} δ1   and   δ −→^1 δ2′ −→^{i−1} δ2,

for some i > 1, δ1′, and δ2′. There are two possibilities.

Case 1. δ −→out^1 δ1′ and δ −→out^1 δ2′. Then, by Lemma 3.1, δ1′ = δ2′, and by the induction hypothesis, δ1 = δ2.

Case 2. δ −→nst^1 δ1′ and δ −→nst^1 δ2′. Then, by Lemma 3.2, δ1′ = δ2′, and by the induction hypothesis, δ1 = δ2. □

Proposition 3.5. If δ −→nst^i δ′#nst then, for all k ≠ i, there is no δ″#nst such that δ −→nst^k δ″#nst.

Proof. By contradiction on the hypothesis that there is such a k. Let δ −→nst^i δ′#nst, for some i ≥ 0. There are two cases.

Case 1. Suppose there are k > i and δ″#nst such that δ −→nst^k δ″. Then, by definition of −→nst^k,

    δ −→nst^i δ′ −→nst^{i+1} δ1 −→nst^{i+2} ··· −→nst^k δ″.   (2)

Since δ′ = ⟨p′, n, e′⟩ is nested-irreducible, e′ ≠ ε, or p′ = @nop or break, or isblocked(p′, n). In any of these cases, by the definition of −→nst, there is no δ1 such that δ′ −→nst^1 δ1, which contradicts (2). Therefore, no such k can exist.

Case 2. Suppose there are k < i and δ″#nst such that δ −→nst^k δ″. Then, since i > k, by Case 1, δ′ cannot exist, which is absurd. Therefore, the assumption that there is such k is false. □

Lemma 3.6. If ⟨p1, n, e⟩ −→nst^i ⟨p1′, n, e′⟩ then, for any p2:
(a) ⟨p1; p2, n, e⟩ −→nst^i ⟨p1′; p2, n, e′⟩;
(b) ⟨p1 @loop p2, n, e⟩ −→nst^i ⟨p1′ @loop p2, n, e′⟩;
(c) ⟨p1 @and p2, n, e⟩ −→nst^i ⟨p1′ @and p2, n, e′⟩;
(d) ⟨p1 @or p2, n, e⟩ −→nst^i ⟨p1′ @or p2, n, e′⟩.
And if ⟨p2, n, e⟩ −→nst^i ⟨p2′, n, e′⟩ then, for any p1 with isblocked(p1, n):
(e) ⟨p1 @and p2, n, e⟩ −→nst^i ⟨p1 @and p2′, n, e′⟩;
(f) ⟨p1 @or p2, n, e⟩ −→nst^i ⟨p1 @or p2′, n, e′⟩.

Proof.
(a) By induction on i. The lemma is trivially true for i = 0, as p1 = p1′, and follows directly from seq-adv for i = 1. The inductive step is analogous to that of item (e) below.
(b) Similar to item (a).
(c) Similar to item (a).
(d) Similar to item (a).
(e) The lemma is trivially true for i = 0, as p2 = p2′, and follows directly from and-adv2 for i = 1. Suppose

    ⟨p2, n, e⟩ −→nst^1 ⟨p2″, n, e″⟩ −→nst^{i−1} ⟨p2′, n, e′⟩,   (6)

for some i > 1. Then ⟨p2″, n, e″⟩ is not nested-irreducible. By (6) and by and-adv2,

    ⟨p1 @and p2, n, e⟩ −→nst^1 ⟨p1 @and p2″, n, e″⟩.   (7)

From (6), by the induction hypothesis,

    ⟨p1 @and p2″, n, e″⟩ −→nst^{i−1} ⟨p1 @and p2′, n, e′⟩.   (8)

From (7) and (8),

    ⟨p1 @and p2, n, e⟩ −→nst^i ⟨p1 @and p2′, n, e′⟩.

(f) Similar to item (e). □

Theorem 3.8. For any δ there is a δ′#nst such that δ −→nst^∗ δ′#nst.

Proof. By induction on the structure of programs. Let δ = ⟨p, n, ε⟩. The theorem is trivially true if δ is nested-irreducible, as by definition δ −→nst^0 δ. Suppose that is not the case. Then, depending on the structure of p, there are 11 possibilities. In each one of them, we show that such a δ′#nst indeed exists.
A Memory-Bounded, Deterministic and Terminating Semantics for Céu — LCTES’18, June 2018, Philadelphia, USA

Case 1. p = mem(id). Then, by mem,

    ⟨mem(id), n, ε⟩ −→nst^1 ⟨@nop, n, ε⟩#nst.

Case 2. p = emitint(e). Then, by emit-int,

    ⟨emitint(e), n, ε⟩ −→nst^1 ⟨@canrun(n), n, e⟩#nst.

Case 3. p = @canrun(n). Then, by can-run,

    ⟨@canrun(n), n, ε⟩ −→nst^1 ⟨@nop, n, ε⟩#nst.

Case 4. p = if mem(id) then p′ else p″. There are two subcases.
Case 4.1. eval(mem(id)). Then, by if-true and by the induction hypothesis, there is a δ′ such that

    ⟨if mem(id) then p′ else p″, n, ε⟩ −→nst^1 ⟨p′, n, ε⟩ −→nst^∗ δ′#nst.

Case 4.2. ¬eval(mem(id)). Similar to Case 4.1.

Case 5. p = p′; p″. There are three subcases.
Case 5.1. p′ = @nop. Then, by seq-nop and by the induction hypothesis, there is a δ′ such that

    ⟨@nop; p″, n, ε⟩ −→nst^1 ⟨p″, n, ε⟩ −→nst^∗ δ′#nst.

Case 5.2. p′ = break. Then, by seq-brk,

    ⟨break; p″, n, ε⟩ −→nst^1 ⟨break, n, ε⟩#nst.

Case 5.3. p′ ≠ @nop, break. Then, by the induction hypothesis, there are p1′ and e such that

    ⟨p′, n, ε⟩ −→nst^∗ ⟨p1′, n, e⟩#nst.   (9)

By item (a) of Lemma 3.6,

    ⟨p′; p″, n, ε⟩ −→nst^∗ ⟨p1′; p″, n, e⟩.

It remains to be shown that ⟨p1′; p″, n, e⟩ leads to a nested-irreducible description. There are four possibilities following from the fact that the simpler ⟨p1′, n, e⟩ is nested-irreducible.
Case 5.3.1. e ≠ ε. Then, by definition, ⟨p1′; p″, n, e⟩#nst.
Case 5.3.2. p1′ = @nop. From (9),

    ⟨p′; p″, n, ε⟩ −→nst^∗ ⟨@nop; p″, n, e⟩.

From this point on, this case is similar to Case 5.1.
Case 5.3.3. p1′ = break. From (9),

    ⟨p′; p″, n, ε⟩ −→nst^∗ ⟨break; p″, n, e⟩.

From this point on, this case is similar to Case 5.2.
Case 5.3.4. isblocked(p1′, n). Then, by definition,

    isblocked(p1′; p″, n) = isblocked(p1′, n) = true.

Hence, from (9) and by the definition of #nst, description ⟨p1′; p″, n, e⟩ is nested-irreducible.

Case 6. p = loop p′. Then, by item (b) of Assumption 3.7,

    ⟨loop p′, n, ε⟩ −→nst^∗ ⟨p1′, n, e⟩,   (10)

for some e and p1′ such that either p1′ = break @loop p′ or isblocked(p1′, n).
Case 6.1. p1′ = break @loop p′. From (10), by loop-brk,

    ⟨loop p′, n, ε⟩ −→nst^∗ ⟨break @loop p′, n, e⟩ −→nst^1 ⟨@nop, n, e⟩#nst.

Case 6.2. isblocked(p1′, n). Hence, from (10) and by the definition of #nst, ⟨p1′, n, e⟩#nst.

Case 7. p = p′ @loop p″. There are three subcases.
Case 7.1. p′ = @nop. Then, by loop-nop,

    ⟨@nop @loop p″, n, ε⟩ −→nst^1 ⟨loop p″, n, ε⟩.

From this point on, this case is similar to Case 6.
Case 7.2. p′ = break. Then, by loop-brk,

    ⟨break @loop p″, n, ε⟩ −→nst^1 ⟨@nop, n, ε⟩#nst.

Case 7.3. p′ ≠ @nop, break. Then, by the induction hypothesis, there are p1′ and e such that

    ⟨p′, n, ε⟩ −→nst^∗ ⟨p1′, n, e⟩#nst.

By item (b) of Lemma 3.6,

    ⟨p′ @loop p″, n, ε⟩ −→nst^∗ ⟨p1′ @loop p″, n, e⟩.

It remains to be shown that ⟨p1′ @loop p″, n, e⟩ is nested-irreducible. The rest of this proof is similar to that of Case 5.3.

Case 8. p = p′ and p″. Then, by and-expd,

    ⟨p′ and p″, n, ε⟩ −→nst^1 ⟨p′ @and (@canrun(n); p″), n, ε⟩.

From this point on, this case is similar to Case 9.

Case 9. p = p′ @and p″. There are two subcases.
Case 9.1. ¬isblocked(p′, n). There are three subcases.
Case 9.1.1. p′ = @nop. Then, by and-nop1 and by the induction hypothesis, there is a δ′ such that

    ⟨@nop @and p″, n, ε⟩ −→nst^1 ⟨p″, n, ε⟩ −→nst^∗ δ′#nst.

Case 9.1.2. p′ = break. Then, by and-brk1,

    ⟨break @and p″, n, ε⟩ −→nst^1 ⟨clear(p″); break, n, ε⟩.   (11)

From (11), by item (a) of Assumption 3.7 and by seq-nop,

    ⟨clear(p″); break, n, ε⟩ −→nst^∗ ⟨@nop; break, n, ε⟩ −→nst^1 ⟨break, n, ε⟩#nst.

Case 9.1.3. p′ ≠ @nop, break. Then, by the induction hypothesis, there are p1′ and e such that

    ⟨p′, n, ε⟩ −→nst^∗ ⟨p1′, n, e⟩#nst.

By item (c) of Lemma 3.6,

    ⟨p′ @and p″, n, ε⟩ −→nst^∗ ⟨p1′ @and p″, n, e⟩.

It remains to be shown that ⟨p1′ @and p″, n, e⟩ leads to a nested-irreducible description. There are four possibilities following from the fact that the simpler ⟨p1′, n, e⟩ is nested-irreducible.
1. If e ≠ ε then, by definition, ⟨p1′ @and p″, n, e⟩#nst.
2. If p1′ = @nop, this case is similar to Case 9.1.1.
3. If p1′ = break, this case is similar to Case 9.1.2.
4. If isblocked(p1′, n), this case is similar to Case 9.2.

Case 9.2. isblocked(p′, n). There are three subcases.
Case 9.2.1. p″ = @nop. Then, by and-nop2,

    ⟨p′ @and @nop, n, ε⟩ −→nst^1 ⟨p′, n, ε⟩#nst.

Case 9.2.2. p″ = break. Then, by and-brk2,

    ⟨p′ @and break, n, ε⟩ −→nst^1 ⟨clear(p′); break, n, ε⟩.

From this point on, this case is similar to Case 9.1.2.
Case 9.2.3. p″ ≠ @nop, break. Then, by the induction hypothesis, there are p1″ and e such that

    ⟨p″, n, ε⟩ −→nst^∗ ⟨p1″, n, e⟩#nst.

By item (e) of Lemma 3.6,

    ⟨p′ @and p″, n, ε⟩ −→nst^∗ ⟨p′ @and p1″, n, e⟩.

It remains to be shown that ⟨p′ @and p1″, n, e⟩ leads to a nested-irreducible description. There are four possibilities following from the fact that the simpler ⟨p1″, n, e⟩ is nested-irreducible.
1. If e ≠ ε then, by definition, ⟨p′ @and p1″, n, e⟩#nst.
2. If p1″ = @nop, this case is similar to Case 9.2.1.
3. If p1″ = break, this case is similar to Case 9.2.2.
4. If isblocked(p1″, n) then, as both sides are blocked, by definition, ⟨p′ @and p1″, n, e⟩#nst.

Case 10. p = p′ or p″. Then, by or-expd,

    ⟨p′ or p″, n, ε⟩ −→nst^1 ⟨p′ @or (@canrun(n); p″), n, ε⟩.

From this point on, this case is similar to Case 11.

Case 11. p = p′ @or p″. There are two subcases.
Case 11.1. ¬isblocked(p′, n). There are three subcases.
Case 11.1.1. p′ = @nop. Then, by or-nop1,

    ⟨@nop @or p″, n, ε⟩ −→nst^1 ⟨clear(p″), n, ε⟩.   (12)

From (12), by item (a) of Assumption 3.7,

    ⟨clear(p″), n, ε⟩ −→nst^∗ ⟨@nop, n, ε⟩#nst.

Case 11.1.2. p′ = break. Similar to Case 9.1.2.
Case 11.1.3. p′ ≠ @nop, break. Similar to Case 9.1.3.
Case 11.2. isblocked(p′, n). There are three subcases.
Case 11.2.1. p″ = @nop. Then, by or-nop2,

    ⟨p′ @or @nop, n, ε⟩ −→nst^1 ⟨clear(p′), n, ε⟩.   (13)

From (13), by item (a) of Assumption 3.7 and by the definition of clear,

    ⟨clear(p′), n, ε⟩ −→nst^∗ ⟨@nop, n, ε⟩#nst.

Case 11.2.2. p″ = break. Similar to Case 9.2.2.
Case 11.2.3. p″ ≠ @nop, break. Similar to Case 9.2.3. □

Lemma A.1.
(a) If δ −→out δ′ by an application of push, then rank(δ) = rank(δ′).
(b) If δ −→out δ′ by an application of pop, then rank(δ) > rank(δ′).

Proof. Let δ = ⟨p, n, e⟩, δ′ = ⟨p′, n′, e′⟩, rank(δ) = ⟨i, j⟩, and rank(δ′) = ⟨i′, j′⟩.

(a) Suppose ⟨p, n, e⟩ −→out ⟨p′, n′, e′⟩ by push. Then, by push, e ≠ ε, p′ = bcast(p, e), n′ = n + 1, and e′ = ε. By the definition of rank, j = n + 1, as e ≠ ε, and j′ = n + 1, as e′ = ε and n′ = n + 1; hence j = j′. It remains to be shown that i = i′:

    i = pot(p, e)              by the definition of rank
      = pot′(bcast(p, e))      by the definition of pot
      = pot′(p′)               since p′ = bcast(p, e)
      = pot′(bcast(p′, ε))     by the definition of bcast
      = pot′(bcast(p′, e′))    since e′ = ε
      = pot(p′, e′)            by the definition of pot
      = i′                     by the definition of rank

Therefore, ⟨i, j⟩ = ⟨i′, j′⟩.

(b) Suppose ⟨p, n, e⟩ −→out ⟨p′, n′, e′⟩ by pop. Then, by pop, p′ = p, n > 0, n′ = n − 1, and e = e′ = ε. By the definition of pot, pot(bcast(p, e)) = pot(bcast(p′, e′)); hence i = i′. And by the definition of rank, j = n, as e = ε, and j′ = n − 1, as e′ = ε and n′ = n − 1; hence j > j′. Therefore, ⟨i, j⟩ > ⟨i′, j′⟩. □

Lemma A.2. If δ −→nst δ′ then rank(δ) ≥ rank(δ′).

Proof. We proceed by induction on the structure of −→nst derivations. Let δ = ⟨p, n, e⟩, δ′ = ⟨p′, n′, e′⟩, rank(δ) = ⟨i, j⟩, and rank(δ′) = ⟨i′, j′⟩. By the hypothesis of the lemma, there is a derivation π such that

    π ⊩ ⟨p, n, e⟩ −→nst ⟨p′, n′, e′⟩.

By definition of −→nst, e = ε and n = n′. Depending on the structure of program p, there are 11 possibilities. In each one of them, we show that rank(δ) ≥ rank(δ′).

Case 1. p = mem(id). Then π is an instance of mem. Hence p′ = @nop and e′ = ε. Thus rank(δ) = rank(δ′) = ⟨0, n⟩.

Case 2. p = emitint(e1). Then π is an instance of emit-int. Hence p′ = @canrun(n) and e′ = e1 ≠ ε. Thus

    rank(δ) = ⟨1, n⟩ > ⟨0, n + 1⟩ = rank(δ′).
Case 3. p = @canrun(n). Then π is an instance of can-run. Hence p′ = @nop and e′ = ε. Thus rank(δ) = rank(δ′) = ⟨0, n⟩.

Case 4. p = if mem(id) then p1 else p2. There are two subcases.
Case 4.1. eval(mem(id)) is true. Then π is an instance of if-true. Hence p′ = p1 and e′ = ε. Thus

    rank(δ) = ⟨max{pot′(p1), pot′(p2)}, n⟩ ≥ ⟨pot′(p1), n⟩ = rank(δ′).

Case 4.2. eval(mem(id)) is false. Similar to Case 4.1.

Case 5. p = p1; p2. There are three subcases.
Case 5.1. p1 = @nop. Then π is an instance of seq-nop. Hence p′ = p2 and e′ = ε. Thus

    rank(δ) = ⟨pot′(p1) + pot′(p2), n⟩ ≥ ⟨pot′(p2), n⟩ = rank(δ′).

Case 5.2. p1 = break. Then π is an instance of seq-brk. Hence p′ = p1 and e′ = ε. Thus rank(δ) = rank(δ′) = ⟨0, n⟩.

Case 5.3. p1 ≠ @nop, break. Then π is an instance of seq-adv. Hence there is a derivation π′ such that

    π′ ⊩ ⟨p1, n, ε⟩ −→nst ⟨p1′, n, e1′⟩,

for some p1′ and e1′. Thus p′ = p1′; p2 and e′ = e1′. By the induction hypothesis,

    rank(⟨p1, n, ε⟩) ≥ rank(⟨p1′, n, e1′⟩).   (14)

There are two subcases.
Case 5.3.1. e′ = ε. Then

    rank(δ) = ⟨pot′(p1) + pot′(p2), n⟩ and rank(δ′) = ⟨pot′(p1′) + pot′(p2), n⟩.

By (14), pot′(p1) ≥ pot′(p1′). Thus rank(δ) ≥ rank(δ′).
Case 5.3.2. e′ ≠ ε. Then π′ contains one application of emit-int, which consumes one emitint(e′) expression from p1 and implies pot′(p1) > pot′(p1′). Thus

    rank(δ) = ⟨pot′(p1) + pot′(p2), n⟩ > ⟨pot′(p1′) + pot′(p2), n + 1⟩ = rank(δ′).

Case 6. p = loop p1. Then π is an instance of loop-expd. Hence p′ = p1 @loop p1 and e′ = ε. By item (b) of Assumption 3.7, all execution paths of p1 contain at least one occurrence of break or awaitext. Thus, by condition (†) in the definition of pot′,

    rank(δ) = rank(δ′) = ⟨pot′(p1), n⟩.

Case 7. p = p1 @loop p2. There are three subcases.
Case 7.1. p1 = @nop. Similar to Case 5.1.
Case 7.2. p1 = break. Similar to Case 5.2.
Case 7.3. p1 ≠ @nop, break. Then π is an instance of loop-adv. Hence there is a derivation π′ such that

    π′ ⊩ ⟨p1, n, ε⟩ −→nst ⟨p1′, n, e1′⟩,

for some p1′ and e1′. Thus p′ = p1′ @loop p2 and e′ = e1′. There are two subcases.
Case 7.3.1. pot′(p) = pot′(p1). Then every execution path of p1 contains a break or awaitext expression. A single −→nst cannot terminate the loop, since p1 ≠ break, nor can it consume an awaitext, which means that all execution paths in p1′ still contain a break or awaitext. Hence pot′(p′) = pot′(p1′). The rest of this proof is similar to that of Case 5.3.
Case 7.3.2. pot′(p) = pot′(p1) + pot′(p2). Then some execution path in p1 does not contain a break or awaitext expression. Since p1 ≠ @nop, a single −→nst cannot restart the loop, which means that p1′ still contains some execution path in which a break or awaitext does not occur. Hence pot′(p′) = pot′(p1′) + pot′(p2). The rest of this proof is similar to that of Case 5.3.

Case 8. p = p1 and p2. Then π is an instance of and-expd. Hence p′ = p1 @and (@canrun(n); p2) and e′ = ε. Thus

    rank(δ) = rank(δ′) = ⟨pot′(p1) + pot′(p2), n⟩.

Case 9. p = p1 @and p2. There are two subcases.
Case 9.1. isblocked(p1, n) is false. There are three subcases.
Case 9.1.1. p1 = @nop. Then π is an instance of and-nop1. Hence p′ = p2 and e′ = ε. Thus

    rank(δ) = rank(δ′) = ⟨0 + pot′(p2), n⟩.

Case 9.1.2. p1 = break. Then π is an instance of and-brk1. Hence p′ = clear(p2); break and e′ = ε. By item (a) of Assumption 3.7 and by the definition of clear, clear(p2) does not contain emitint expressions. Thus rank(δ) = rank(δ′) = ⟨0, n⟩.
Case 9.1.3. p1 ≠ @nop, break. Then π is an instance of and-adv1. As p1 ≠ break and p2 ≠ break (otherwise and-brk2 would have taken precedence), the rest of this proof is similar to that of Case 5.3.
Case 9.2. isblocked(p1, n) is true. Similar to Case 9.1.

Case 10. p = p1 or p2. Then π is an instance of or-expd. Hence p′ = p1 @or (@canrun(n); p2) and e′ = ε. Thus

    rank(δ) = rank(δ′) = ⟨pot′(p1) + pot′(p2), n⟩.

Case 11. p = p1 @or p2. There are two subcases.
Case 11.1. isblocked(p1, n) is false. There are three subcases.
Case 11.1.1. p1 = @nop. Then π is an instance of or-nop1. Hence p′ = clear(p2) and e′ = ε. By item (a) of Assumption 3.7 and by the definition of clear, p′ does not contain emitint expressions. Thus rank(δ) = rank(δ′) = ⟨0, n⟩.
Case 11.1.2. p1 = break. Similar to Case 9.1.2.
Case 11.1.3. p1 ≠ @nop, break. Similar to Case 9.1.3.
Case 11.2. isblocked(p1, n) is true. Similar to Case 11.1. □

Theorem A.3. If δ −→nst^∗ δ′ then rank(δ) ≥ rank(δ′).

Proof. If δ −→nst^∗ δ′ then δ −→nst^i δ′, for some i. We proceed by induction on i. The theorem is trivially true for i = 0 and follows directly from Lemma A.2 for i = 1. Suppose δ −→nst^1 δ1′ −→nst^{i−1} δ′, for some i > 1 and δ1′. Thus, by Lemma A.2 and by the induction hypothesis, rank(δ) ≥ rank(δ1′) ≥ rank(δ′). □

Theorem A.4. For any δ there is a δ#″ such that δ −→^∗ δ#″.

Proof. Let δ = ⟨p, n, e⟩ and rank(δ) = ⟨i, j⟩. We proceed by induction on rank(δ). There are two cases.

Case 1. δ is nested-irreducible.
Case 1.2.2. e = ε. Then, since j > 0, δ −→out δ′ by pop, for some δ′. By item (b) of Lemma A.1,

    rank(δ) > rank(δ′).

Hence, by the induction hypothesis, there is a δ#″ such that δ′ −→^∗ δ#″. Therefore, δ −→^∗ δ#″.

Case 2. δ is not nested-irreducible. Then e = ε and, by Theorems 3.8 and A.3, there is a δ′#nst such that δ −→nst^∗ δ′#nst with rank(δ) ≥ rank(δ′#nst). The rest of this proof is similar to that of Case 1. □
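The outermost level of the semantics, used throughout Lemma 3.1 and Lemma A.1, can be mimicked by a small executable model. The sketch below is illustrative only and is not the authors' implementation: all names (`Desc`, `step_out`, `rank`) are hypothetical, `bcast` is stubbed out, and `pot` is approximated by counting `emit` occurrences. It is just enough to exhibit the two facts proved above: the out-transition is a function of the current description (determinism), and push preserves rank while pop strictly decreases it (so the outer loop terminates).

```python
# Toy model of the "out" transition relation (push/pop) over descriptions
# <p, n, e>. Not the real Céu scheduler; names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Desc:
    p: str              # program text (kept opaque in this sketch)
    n: int              # stack level
    e: Optional[str]    # pending event, or None for the empty event ε

def bcast(p: str, e: Optional[str]) -> str:
    """Stand-in for bcast(p, e): awaking trails blocked on e (opaque here)."""
    return p  # the real bcast rewrites awaits on `e`; its shape is irrelevant to push/pop

def pot(p: str) -> int:
    """Crude stand-in for pot'(p): bound on the emits the program can still do."""
    return p.count("emit")

def rank(d: Desc) -> tuple:
    # rank(<p, n, e>) = <i, j>: i bounds pending emissions, j tracks the stack.
    i = pot(bcast(d.p, d.e))
    j = d.n + 1 if d.e is not None else d.n   # j = n + 1 when e != ε, else n
    return (i, j)

def step_out(d: Desc) -> Optional[Desc]:
    """At most one rule applies, mirroring the two cases of Lemma 3.1."""
    if d.e is not None:                       # Case 1: push (e != ε)
        return Desc(bcast(d.p, d.e), d.n + 1, None)
    if d.n > 0:                               # Case 2: pop (e = ε, n > 0)
        return Desc(d.p, d.n - 1, None)
    return None                               # irreducible: empty event, level 0

# Driving the machine to a fixed point always halts: push keeps rank equal
# (and empties the event), every pop strictly decreases rank (Lemma A.1).
d = Desc("emit a; emit b", 2, "a")
while (nxt := step_out(d)) is not None:
    d = nxt
```

Running `step_out` to a fixed point from any description halts because rank can only stay equal (push, which immediately empties the pending event) or decrease (pop); this mirrors the measure used in the termination argument above.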
B.4 Installation
Install all required software (assuming an Ubuntu-based distribution):
$ sudo apt-get install git lua5.3 lua-lpeg liblua5.3-0 \
liblua5.3-dev
Clone the repository of Céu:
$ git clone https://github.com/fsantanna/ceu
$ cd ceu/
$ git checkout v0.30
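Before proceeding, it can help to confirm that the tools installed above are actually on the PATH. This check is not part of the original instructions; it is a minimal POSIX-shell probe using only `command -v`:

```shell
# Hypothetical sanity check: report whether each required tool is visible.
for tool in git lua5.3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: missing"
  fi
done
```

Each tool prints one status line, so missing dependencies are spotted before the build starts.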
Install Céu: