Data Representation and Efficient Solution: A Decision Diagram Approach
Gianfranco Ciardo, University of California, Riverside

PART: FILE: .
2 Decision diagrams: a static view

PART: FILE:BDD/binary-decision-diagramsMOD.tex
(Reduced ordered) binary decision diagrams (BDDs) 3
"Graph-based algorithms for boolean function manipulation", Randy Bryant (Carnegie Mellon University), IEEE Transactions on Computers, 1986
[figure: an example BDD over x4, x3, x2, x1, with terminals 0 and 1]
BDDs are a canonical representation of boolean functions f : {0, 1}^L → {0, 1}

PART: FILE:BDD/bdd-def.tex
Ordered binary decision diagrams (BDDs) 4
A BDD is an acyclic directed edge-labeled graph where:
• The only terminal nodes can be 0 and 1, and are at level 0: 0.lvl = 1.lvl = 0
• A nonterminal node p is at a level k, with L ≥ k ≥ 1: p.lvl = k
• A nonterminal node p has two outgoing edges labeled 0 and 1, pointing to children p[0] and p[1]
• The level of the children is lower than that of p: p[0].lvl < p.lvl and p[1].lvl < p.lvl
• A node p at level k encodes the function vp : B^L → B defined recursively by
  vp(xL, ..., x1) = p if k = 0, and vp(xL, ..., x1) = v_{p[xk]}(xL, ..., x1) if k > 0
Instead of levels, we can also talk of variables:
• The terminal nodes are associated with the range variable x0
• A nonterminal node is associated with a domain variable xk, with L ≥ k ≥ 1

PART: FILE:BDD/bdd-def.tex
Canonical versions of BDDs 5
For canonical BDDs, we further require that
• There are no duplicates: if p.lvl = q.lvl, p[0] = q[0], and p[1] = q[1], then p = q
Then, if the BDD is quasi-reduced, there is no level skipping:
• The only root nodes with no incoming arcs are at level L
• The children p[0] and p[1] of a node p are at level p.lvl − 1
Or, if the BDD is fully-reduced, there is maximum level skipping:
• There are no redundant nodes p satisfying p[0] = p[1]
Both versions are canonical; if functions f and g are encoded using BDDs:
• Satisfiability, f ≠ 0, or equivalence, f = g: O(1)
• Conjunction, f ∧ g, disjunction, f ∨ g
, relational product:
  ◦ O(|Nf| × |Ng|), if fully-reduced
  ◦ Σ_{L≥k≥1} O(|Nf,k| × |Ng,k|), if quasi-reduced
  where Nf = set of nodes in the BDD encoding f, and Nf,k = set of nodes at level k in the BDD encoding f

PART: FILE:BDD/bdd-def.tex
Quasi-reduced vs. fully reduced BDDs 6
[figure: the quasi-reduced and fully-reduced BDDs encoding x4 x3 x2 x1 + (x4 + x3)(x2 + x1)]

PART: FILE:MDD/mdd-def.tex
Ordered multiway decision diagrams (MDDs) 7
Assume a domain X̂ = XL × · · · × X1, where Xk = {0, 1, ..., nk−1}, for some nk ∈ N
An MDD is an acyclic directed edge-labeled graph where:
• The only terminal nodes can be 0 and 1, and are at level 0: 0.lvl = 1.lvl = 0
• A nonterminal node p is at a level k, with L ≥ k ≥ 1: p.lvl = k
• A nonterminal node p at level k has nk outgoing edges pointing to children p[ik], for ik ∈ Xk
• The level of the children is lower than that of p: p[ik].lvl < p.lvl
• A node p at level k encodes the function vp : X̂ → B defined recursively by
  vp(xL, ..., x1) = p if k = 0, and vp(xL, ..., x1) = v_{p[xk]}(xL, ..., x1) if k > 0
Instead of levels, we can also talk of variables:
• The terminal nodes are associated with the range variable x0
• A nonterminal node is associated with a domain variable xk, with L ≥ k ≥ 1

PART: FILE:MDD/mdd-def.tex
Canonical versions of MDDs 8
For canonical MDDs, we further require that
• There are no duplicates: if p.lvl = q.lvl = k and p[ik] = q[ik] for all ik ∈ Xk, then p = q
Then, if the MDD is quasi-reduced, there is no level skipping:
• The only root nodes with no incoming arcs are at level L
• Each child p[ik] of a node p is at level p.lvl − 1
Or, if the MDD is fully-reduced, there is maximum level skipping:
• There are no redundant nodes p satisfying p[ik] = q for all ik ∈ Xk

PART: FILE:MDD/mdd-def.tex
Quasi-reduced vs. fully-reduced MDDs; full vs.
sparse storage 9
[figure: quasi-reduced vs. fully-reduced MDDs encoding the same function, stored with full and with sparse nodes]

PART: FILE:MTMDD/mtmdd-def.tex
Ordered multiterminal multiway decision diagrams (MTMDDs) 10
Assume a domain X̂ = XL × · · · × X1, where Xk = {0, 1, ..., nk−1}, for some nk ∈ N
Assume a range X0 = {0, 1, ..., n0−1}, for some n0 ∈ N (or an arbitrary X0 ...)
An MTMDD is an acyclic directed edge-labeled graph where:
• The only terminal nodes are values from X0 and are at level 0: ∀i0 ∈ X0, i0.lvl = 0
• A nonterminal node p is at a level k, with L ≥ k ≥ 1: p.lvl = k
• A nonterminal node p at level k has nk outgoing edges pointing to children p[ik], for ik ∈ Xk
• The level of the children is lower than that of p: p[ik].lvl < p.lvl
• A node p at level k encodes the function vp : X̂ → X0 defined recursively by
  vp(xL, ..., x1) = p if k = 0, and vp(xL, ..., x1) = v_{p[xk]}(xL, ..., x1) if k > 0
Instead of levels, we can also talk of variables:
• The terminal nodes are associated with the range variable x0
• A nonterminal node is associated with a domain variable xk, with L ≥ k ≥ 1

PART: FILE:MTMDD/mtmdd-def.tex
Canonical versions of MTMDDs 11
For canonical MTMDDs, we further require that
• There are no duplicates: if p.lvl = q.lvl = k and p[ik] = q[ik] for all ik ∈ Xk, then p = q
Then, if the MTMDD is quasi-reduced, there is no level skipping:
• The only root nodes with no incoming arcs are at level L
• Each child p[ik] of a node p is at level p.lvl − 1
Or, if the MTMDD is fully-reduced, there is maximum level skipping:
• There are no redundant nodes p satisfying p[ik] = q for all ik ∈ Xk

PART: FILE:MTMDD/mtmdd-def.tex
Quasi-reduced vs.
fully reduced MTMDDs 12
[figure: the quasi-reduced and fully-reduced MTMDDs encoding the same function, with terminal values 5, 7, 8]

PART: FILE:FOILS/matrices-with-dd.tex
Representing matrices with decision diagrams 13
A function f : X̂ → X0 can be thought of as an X0-valued one-dimensional vector of size |X̂|
We also need to store functions X̂ × X̂ → X0, or two-dimensional matrices
We can use a decision diagram with 2L levels:
• Unprimed xk for the rows, or from, variables
• Primed x′k for the columns, or to, variables
• Levels can be interleaved, (xL, x′L, ..., x1, x′1), or non-interleaved, (xL, ..., x1, x′L, ..., x′1)
We can use a (terminal-valued) matrix diagram (MxD), analogous to a BDD, MDD, or MTMDD:
• A nonterminal node P at level k, for L ≥ k ≥ 1, has nk × nk edges
• P[ik, i′k] points to the child corresponding to the choices xk = ik and x′k = i′k

PART: FILE:FOILS/matrices-with-dd.tex
Identity patterns and identity-reduced decision diagrams 14
In the matrices that we need to encode, it is often the case that the entry is 0 if xk ≠ x′k
An identity pattern in an interleaved 2L-level MDD is
• a node p at level k
• with p[ik] = p′ik
• such that p′ik[i′k] = 0 for i′k ≠ ik
• and p′ik[i′k] = q ≠ 0 only for i′k = ik
In an identity-reduced primed level k, we skip the nodes p′ik
An identity node in an MxD is
• a node P
• such that P[ik, i′k] = 0 for all ik, i′k ∈ Xk, ik ≠ i′k
• and P[ik, ik] = q for all ik ∈ Xk
In an identity-reduced MxD, we skip these identity nodes

PART: FILE:FOILS/matrices-with-dd.tex
2L-level MDDs vs.
MxDs: encoding a (3 · 2) × (3 · 2) matrix 15
[figure: a 6 × 6 boolean matrix over states 0 ≡ (x2 = 0, x1 = 0) through 5 ≡ (x2 = 2, x1 = 1), encoded as an interleaved 4-level MDD over (x2, x′2, x1, x′1) and as a 2-level MxD]

PART: FILE:FOILS/evmdd-def.tex
Ordered edge-valued multiway decision diagrams (EVMDDs) 16
Assume a domain X̂ = XL × · · · × X1, where Xk = {0, 1, ..., nk−1}, for some nk ∈ N
Assume the range Z (can generalize to an arbitrary set)
An EVMDD is an acyclic directed edge-labeled graph where:
• The only terminal node is Ω and is at level 0: Ω.lvl = 0
• A nonterminal node p is at a level k, with L ≥ k ≥ 1: p.lvl = k
• A nonterminal node p at level k has nk outgoing edges
• For ik ∈ Xk, edge p[ik] points to child p[ik].child, and has value p[ik].val ∈ Z
• The level of the children is lower than that of p: p[ik].child.lvl < p.lvl
• An edge (σ, p), with p.lvl = k, encodes the function v(σ,p) : X̂ → Z defined recursively by
  v(σ,p)(xL, ..., x1) = σ if k = 0, and v(σ,p)(xL, ..., x1) = σ + v_{p[xk]}(xL, ..., x1) if k > 0

PART: FILE:FOILS/evmdd-def.tex
Canonical versions of EVMDDs 17
For canonical EVMDDs, we first normalize each node p at level k ≥ 1 in one of two ways:
• p[0].val = 0 (EVMDDs), or
• p[ik].val ≥ 0 for all ik ∈ Xk, and p[jk].val = 0 for at least one jk ∈ Xk (EV+MDDs)
Then, the usual reduction requirements apply:
• There are no duplicates: if p.lvl = q.lvl = k and p[ik] = q[ik] for all ik ∈ Xk, then p = q
And, if the EVMDD is quasi-reduced, there is no level skipping:
• The only root nodes with no incoming arcs are at level L, and have root edge values in Z
• Each child p[ik].child of a node p is at level p.lvl − 1
Or, if the EVMDD is fully-reduced, there is maximum level skipping:
• There are no redundant nodes p satisfying p[ik].child = q and p[ik
].val = 0 for all ik ∈ Xk

PART: FILE:FOILS/evmdd-def.tex
EVMDDs, quasi-reduced EV+MDDs, fully-reduced EV+MDDs 18
For EVMDDs, the value of the incoming root edge is f(0, ..., 0)
For EV+MDDs, the value of the incoming root edge is min f
The EV+MDD normalization allows us to store partial functions X̂ → Z ∪ {∞}
[figure: an EVMDD with root edge value f(0, ..., 0), and the quasi-reduced and fully-reduced EV+MDDs with root edge value min f, all encoding the same function]

PART: FILE:FOILS/mxd-def.tex
Matrix diagrams (MxDs) 19
Assume a domain X̂ = XL × · · · × X1, where Xk = {0, 1, ..., nk−1}, for some nk ∈ N
Assume the range R≥0 = [0, +∞) (can generalize to an arbitrary set)
An (edge-valued) MxD is an acyclic directed edge-labeled graph where:
• The only terminal node is Ω and is at level 0: Ω.lvl = 0
• A nonterminal node P is at a level k, with L ≥ k ≥ 1: P.lvl = k
• A nonterminal node P at level k has nk × nk outgoing edges
• For ik, i′k ∈ Xk, edge P[ik, i′k] points to child P[ik, i′k].child, and has value P[ik, i′k].val ≥ 0
• The level of the children is lower than that of P: P[ik, i′k].child.lvl < P.lvl
• An edge (σ, P), with P.lvl = k, encodes the function v(σ,P) : X̂ × X̂ → R≥0 defined recursively by
  v(σ,P)(xL, x′L, ..., x1, x′1) = σ if k = 0, and v(σ,P)(xL, x′L, ..., x1, x′1) = σ · v_{P[xk,x′k]}(xL, x′L, ..., x1, x′1) if k > 0

PART: FILE:FOILS/mxd-def.tex
Canonical versions of MxDs 20
For canonical MxDs, we first normalize each node P in one of two ways:
• max{P[ik, i′k].val : ik, i′k ∈ Xk} = 1, or
• min{P[ik, i′k].val : ik, i′k ∈ Xk, P[ik, i′k].val ≠ 0} = 1
Then, the usual reduction requirements apply, there are no duplicates:
• If P.lvl = Q.lvl = k and P[ik, i′k] = Q[ik, i′k] for all ik, i′k ∈ Xk, then P = Q
And, if the MxD is quasi-reduced, there is no level skipping:
• The only root nodes with no incoming arcs are at level L, and have root edge values in R≥0
• Each child P[ik, i′k].child of a node
P is at level P.lvl − 1
Or, if the MxD is fully-reduced, there is no redundant node P satisfying:
• P[ik, i′k].child = Q and P[ik, i′k].val = 1 for all ik, i′k ∈ Xk
Or, if the MxD is identity-reduced, there are no identity nodes P satisfying:
• P[ik, ik].child = Q and P[ik, ik].val = 1 for all ik ∈ Xk
• P[ik, i′k].val = 0 for all ik ≠ i′k

PART: FILE:FOILS/mxd-def.tex
Quasi-reduced vs. identity-reduced MxDs; sparse storage 21
[figure: the quasi-reduced and identity-reduced MxDs encoding the same matrix, with full and sparse node storage]

PART: FILE: .
22 Properties and applications

PART: FILE:BDD/properties.tex
Properties of fully-reduced ordered BDDs 23
• Given a boolean expression, or a function, f : B^L → B, there is a unique BDD encoding it (for a fixed variable order xL, . . . , x1)
• Many functions have a very compact encoding as a BDD
• The constant functions 0 and 1 are represented by the nodes 0 and 1, respectively
• Given the BDD encoding of a boolean expression f: test whether f ≡ 0 or f ≡ 1 in O(1) time
• Given the BDD encodings of boolean expressions f and g: test whether f ≡ g in O(1) time
• The variable ordering affects the size of the BDD; consider xL ⇔ yL ∧ · · · ∧ x1 ⇔ y1
  ◦ with the order (xL, yL, . . . , x1, y1)
  ◦ with the order (xL, . . . , x1, yL, . . .
, y1): the first (interleaved) order yields O(L) nodes, the second yields O(2^L) nodes
• The BDD encoding of some functions is large (exponential) for any order
  ◦ e.g., the expression for bit 32 of the 64-bit result of the multiplication of two 32-bit integers
• Finding the optimal ordering that minimizes the BDD size is an NP-complete problem

PART: FILE:FOILS/set-encoded-by-mdd.tex
MDDs to encode sets 24
An important application of BDDs and MDDs is to encode large sets to be manipulated symbolically
To encode a set S ⊆ X̂, we simply store its indicator function χS in a decision diagram:
  χS(iL, ..., i1) = 1 ⇔ (iL, ..., i1) ∈ S
[figure: the MDD encoding an example set S ⊆ X4 × X3 × X2 × X1, with X4 = {0, 1, 2, 3}, X3 = {0, 1, 2}, X2 = {0, 1}, X1 = {0, 1, 2}, and the explicit list of the states in S]

PART: FILE:MTMDD/mtmdd-example.tex
An example of MTMDD: the transition rate matrix of an SPN 25
X4 : {p1, p0} ≡ {0, 1};  X3 : {q0r0, q1r0, q0r1} ≡ {0, 1, 2};  X2 : {s0, s1} ≡ {0, 1};  X1 : {t0, t1} ≡ {0, 1}
α = rate of a, β = rate of b, γ = rate of c, δ = rate of d, ε = rate of e
[figure: the SPN with places p, q, r, s, t and transitions a, b, c, d, e, and the MTMDD encoding its transition rate matrix R; note the shaded identity patterns!!!]

PART: FILE:EVMDD/evmddstar-example.tex
An example of EV∗MDD: the transition rate matrix of an SPN 26
X4 : {p1, p0} ≡ {0, 1};  X3 : {q0r0, q1r0, q0r1} ≡ {0, 1, 2};  X2 : {s0, s1} ≡ {0, 1};  X1 : {t0, t1} ≡ {0, 1}
α = rate of a, β = rate of b, γ = rate of c, δ = rate of d, ε = rate of e
[figure: the EV∗MDD encoding R, with edge values β, α/β, γ/β, δ/β, ε/β; hidden identity patterns remain!!!]

PART: FILE:EVMDD/evbdd-example.tex
An example of EVBDDs 27
[Lai et al.
1992] defined edge-valued binary decision diagrams
[figure: three EVBDDs over (i3, i2, i1) encoding the value table f = 0 2 3 2 2 4 1 0]
Canonicity: all nodes have a value 0 associated to the 0-arc (only the EVBDD on the left is canonical)
In canonical form, the root edge has value f(0, . . . , 0)

PART: FILE:EVMDD/evmddplus-example.tex
An example of EV+MDDs 28
[CiaSim FMCAD'02] defined edge-valued positive multiway decision diagrams
From BDD to MDD: the usual extension
∞-edge values: can store partial arithmetic functions
Canonization rule different from that of EVBDDs: essential to encode partial arithmetic functions
[figure: EV+MDDs over (i3, i2, i1) encoding f = 0 2 3 2 2 4 1 0 and the partial function f = 0 2 3 ∞ ∞ 4 1 0, where f(1, 0, 0) = ∞ but f(1, 0, 1) = 4]
Canonicity: all edge values are non-negative and, in each node, at least one is zero
In canonical form, the root edge has value min_{i ∈ X̂} f(i)

PART: FILE:EVMDD/evmdd-storing-lexMOD.tex
EV+MDDs to store the lexicographic state indexing function ψ 29
[figure: a set S ⊆ X4 × X3 × X2 × X1, with X4 = {0, 1, 2, 3}, X3 = {0, 1, 2}, X2 = {0, 1}, X1 = {0, 1, 2}, listed in lexicographic order, and the EV+MDD encoding its indexing function ψ]
To compute the index of a state, use edge values:
• Sum the values found on the corresponding path: ψ(2, 1, 1, 0) = 6 + 2 + 1 + 0 = 9
• A state is unreachable if the path is not complete: ψ(0, 2, 0, 0) = 0 + 0 + ∞ = ∞
lexicographic, not discovery, order!!!

PART: FILE: .
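The path-summing evaluation of ψ on the previous slide can be sketched with plain Python tuples standing in for EV+MDD nodes. This is a minimal illustration, not the slides' data structure, and the small diagram below is hand-built for an invented set S, not the one in the figure:

```python
import math

INF = math.inf
OMEGA = None  # the terminal node

# A node at level k >= 1 is a tuple with one (value, child) edge per local state.
def psi(root_edge, state):
    """Lexicographic index of state = (i_L, ..., i_1): sum the edge values
    along the corresponding path; an infinite edge value means the path is
    not complete, i.e., the state is unreachable."""
    total, node = root_edge
    for i in state:
        if total == INF:
            return INF
        val, node = node[i]
        total += val
    return total

# Hand-built EV+MDD for S = {(0,1), (1,0), (1,2)} over X2 = {0,1}, X1 = {0,1,2}:
n_a = ((INF, OMEGA), (0, OMEGA), (INF, OMEGA))  # below i2 = 0: only i1 = 1
n_b = ((0, OMEGA), (INF, OMEGA), (1, OMEGA))    # below i2 = 1: i1 = 0 or i1 = 2
root = (0, ((0, n_a), (1, n_b)))

assert psi(root, (0, 1)) == 0    # first state in lexicographic order
assert psi(root, (1, 0)) == 1    # second
assert psi(root, (1, 2)) == 2    # third
assert psi(root, (0, 0)) == INF  # not in S
```

Each edge value counts how many states of S precede the current prefix, which is why the sums come out as lexicographic ranks.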
30 Decision diagrams: a dynamic view

PART: FILE:FOILS/ut-and-cache.tex
The unique table and the operation cache 31
To ensure canonicity, thus greater efficiency, all operations use a Unique Table (a hash table):
• Search key: level p.lvl and edges p[0], ..., p[nk−1] of a node p; return value: a node id
• Alternative: one UT per level, no need to store p.lvl, but more fragmentation
• All (non-dead) nodes are referenced by the UT
• Collisions must be without loss: multiple nodes with different node ids may have the same hash value
With the UT, we avoid duplicate nodes
To achieve polynomial complexity, all operations use an Operation Cache (a hash table):
• Search key: op code and operands node_id1, node_id2, ...; return value: a node id
• Alternative: one OC per operation type, no need to store the op code, but more fragmentation
• Before computing op_code(node_id1, node_id2, ...), we search the OC
• If the search is successful, we avoid recomputing a result
• Collisions can be managed either without loss or with loss
With the OC, we visit every node combination instead of traveling every path

PART: FILE:BDD/union-intersection.tex
Union (Or) and Intersection (And) for fully-reduced BDDs 32
bdd Union(bdd p, bdd q) is
  local bdd r;
  1 if p = 0 or q = 1 then return q;
  2 if q = 0 or p = 1 then return p;
  3 if p = q then return p;
  4 if UnionCache contains entry ⟨{p, q} : r⟩ then return r;
  5 if p.lvl = q.lvl then
  6   r ← UniqueTableInsert(p.lvl, Union(p[0], q[0]), Union(p[1], q[1]));
  7 else if p.lvl > q.lvl then
  8   r ← UniqueTableInsert(p.lvl, Union(p[0], q), Union(p[1], q));
  9 else since p.lvl < q.lvl then
 10   r ← UniqueTableInsert(q.lvl, Union(p, q[0]), Union(p, q[1]));
 11 enter ⟨{p, q} : r⟩ in UnionCache;
 12 return r;
Intersection(p, q) differs from Union(p, q) only in the terminal cases:
  Union: if p = 0 or q = 1 then return q; if q = 0 or p = 1 then return p;
  Intersection: if p = 1 or q = 0 then return q; if q = 1 or p = 0 then return p;
complexity: O(|Np| × |Nq|)
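The Union pseudocode above can be sketched in executable form. This is a minimal Python stand-in (the class and its attribute names are invented for illustration), with dicts playing the roles of the unique table and the operation cache; it is not the slides' actual implementation:

```python
class BDD:
    """Toy fully-reduced BDD manager: terminals 0 and 1 are at level 0."""
    def __init__(self):
        self.nodes = {0: (0, None, None), 1: (0, None, None)}  # id -> (lvl, lo, hi)
        self.ut = {}           # unique table: (lvl, lo, hi) -> id
        self.union_cache = {}  # operation cache: frozenset({p, q}) -> id
        self.next_id = 2

    def lvl(self, p):
        return self.nodes[p][0]

    def child(self, p, b):
        return self.nodes[p][1 + b]

    def insert(self, lvl, lo, hi):
        if lo == hi:  # fully-reduced rule: never create a redundant node
            return lo
        key = (lvl, lo, hi)
        if key not in self.ut:  # UniqueTableInsert: intern the node
            self.ut[key] = self.next_id
            self.nodes[self.next_id] = (lvl, lo, hi)
            self.next_id += 1
        return self.ut[key]

    def var(self, lvl):
        """BDD encoding the single literal x_lvl."""
        return self.insert(lvl, 0, 1)

    def union(self, p, q):
        # terminal cases, as in lines 1-3 of the pseudocode
        if p == 0 or q == 1: return q
        if q == 0 or p == 1: return p
        if p == q: return p
        key = frozenset((p, q))  # Union is commutative, so cache on {p, q}
        if key in self.union_cache:
            return self.union_cache[key]
        pl, ql = self.lvl(p), self.lvl(q)
        if pl == ql:
            r = self.insert(pl, self.union(self.child(p, 0), self.child(q, 0)),
                                self.union(self.child(p, 1), self.child(q, 1)))
        elif pl > ql:  # q's variable is skipped in p's cofactors
            r = self.insert(pl, self.union(self.child(p, 0), q),
                                self.union(self.child(p, 1), q))
        else:
            r = self.insert(ql, self.union(p, self.child(q, 0)),
                                self.union(p, self.child(q, 1)))
        self.union_cache[key] = r
        return r

    def eval(self, p, x):
        """Evaluate node p on assignment x: level -> {0, 1}."""
        while p not in (0, 1):
            lvl, lo, hi = self.nodes[p]
            p = hi if x[lvl] else lo
        return p

b = BDD()
x1, x2 = b.var(1), b.var(2)
f = b.union(x1, x2)          # encodes x1 OR x2
assert b.union(x2, x1) == f  # canonicity: same interned node id
```

Because `insert` interns every node, equivalence of two functions is an `==` on node ids, matching the O(1) equivalence test claimed earlier.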
PART: FILE:MDD/union-intersection.tex
Union (Or) and Intersection (And) for quasi-reduced MDDs 33
mdd Union(lvl k, mdd p, mdd q) is
  local mdd r, r0, ..., r_{nk−1};
  1 if k = 0 then return p ∨ q;    (p and q are 0 or 1)
  2 if p = q then return p;
  3 if UnionCache contains entry ⟨(k, {p, q}) = r⟩ then return r;
  4 for i = 0 to nk − 1 do
  5   ri ← Union(k−1, p[i], q[i]);
  6 end for
  7 r ← UniqueTableInsert(k, r0, . . . , r_{nk−1});
  8 enter ⟨(k, {p, q}) = r⟩ in UnionCache;
  9 return r;
Intersection(k, p, q) differs from Union(k, p, q) only in the terminal case:
  Union: if k = 0 then return p ∨ q;
  Intersection: if k = 0 then return p ∧ q;
complexity: O(Σ_{L≥k≥1} |Np,k| × |Nq,k|)

PART: FILE:BDD/apply.tex
ITE and the Apply operator for fully-reduced BDDs 34
The if-then-else, or ITE, ternary operator is defined as ITE(f, g, h) = (f ∧ g) ∨ (¬f ∧ h)
Let f[c/xk] be the function obtained from f by substituting variable xk with the constant c ∈ B
Then, f = ITE(xk, f[1/xk], f[0/xk]) is the Shannon expansion of f with respect to variable xk
For any binary boolean operator ⊙: ITE(x, u, v) ⊙ ITE(x, y, z) = ITE(x, u ⊙ y, v ⊙ z)
This is the basis for the recursive BDD operator Apply
bdd Apply(operator ⊙, bdd p, bdd q) is
  local bdd r;
  1 if p ∈ {0, 1} and q ∈ {0, 1} then return p ⊙ q;
  2 if OperationCache contains entry ⟨⊙, p, q : r⟩ then return r;
  3 if p.lvl = q.lvl then
  4   r ← UniqueTableInsert(p.lvl, Apply(⊙, p[0], q[0]), Apply(⊙, p[1], q[1]));
  5 else if p.lvl > q.lvl then
  6   r ← UniqueTableInsert(p.lvl, Apply(⊙, p[0], q), Apply(⊙, p[1], q));
  7 else since p.lvl < q.lvl then
  8   r ← UniqueTableInsert(q.lvl, Apply(⊙, p, q[0]), Apply(⊙, p, q[1]));
  9 enter ⟨⊙, p, q : r⟩ in OperationCache;
 10 return r;

PART: FILE:BDD/relational-product.tex
Computing the relational product symbolically 35
Given a 2L-level BDD rooted at P∗, encoding a relation T over X̂
The call RelationalProduct(p∗, P∗) returns the root r of the BDD encoding the set of states Y, given an L-level BDD
rooted at p∗ encoding a set X:
  Y = {j : ∃i ∈ X, (i, j) ∈ T}
bdd RelationalProduct(bdd p, bdd P) is    (quasi-reduced version)
  local bdd r, r0, r1;
  1 if p = 0 or P = 0 then return 0;
  2 if p = 1 and P = 1 then return 1;
  3 if RelationalProductCache contains entry ⟨p, P : r⟩ then return r;
  4 r0 ← Union(RelationalProduct(p[0], P[0][0]), RelationalProduct(p[1], P[1][0]));
  5 r1 ← Union(RelationalProduct(p[0], P[0][1]), RelationalProduct(p[1], P[1][1]));
  6 r ← UniqueTableInsert(p.lvl, r0, r1);
  7 enter ⟨p, P : r⟩ in RelationalProductCache;
  8 return r;
The above algorithm assumes that:
• the order of the variables for X is (xL, ..., x1)
• the order of the variables for T is (xL, x′L, ..., x1, x′1)

PART: FILE:FOILS/evmdd-min.tex
The Min operator for quasi-reduced EV+MDDs 36
edge Min(level k, edge (α, p), edge (β, q)) is    (an edge is a pair (int, node))
  local node p′, q′, r;
  local int µ, α′, β′;
  local int ik;
  1 if α = ∞ then return (β, q);
  2 if β = ∞ then return (α, p);
  3 µ ← min(α, β);
  4 if k = 0 then return (µ, Ω);    (the only node at level 0 is Ω)
  5 if MinCache contains entry ⟨k, p, q, α − β : γ, r⟩ then return (γ + µ, r);
  6 r ← NewNode(k);    (create new node at level k with edges set to (∞, Ω))
  7 for ik = 0 to nk − 1 do
  8   p′ ← p.child[ik];
  9   α′ ← α − µ + p.val[ik];
 10   q′ ← q.child[ik];
 11   β′ ← β − µ + q.val[ik];
 12   r[ik] ← Min(k−1, (α′, p′), (β′, q′));    (continue downstream)
 13 UniqueTableInsert(k, r);
 14 enter ⟨k, p, q, α − β : µ, r⟩ in MinCache;
 15 return (µ, r);

PART: FILE:FOILS/evmdd-min-example.tex
An example of the Min operator for quasi-reduced EV+MDDs 37
[figure: quasi-reduced EV+MDDs for f, g, and h = min(f, g), with the table of their values over (i3, i2, i1)]

PART: FILE: .
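As a cross-check of the Min recursion, here is a toy Python version that deliberately drops the unique table and the MinCache (so, unlike the slides' algorithm, it does not achieve the shared-node complexity); nodes are plain tuples of (value, child) edges, with `None` standing in for Ω, and the example functions are invented:

```python
import math

INF = math.inf
OMEGA = None  # the terminal node

def ev_min(k, e1, e2):
    """Min of two EV+MDD edges at level k, following lines 1-12 of the
    pseudocode (terminal cases, mu = min(alpha, beta), offset-shifted
    recursion on the children)."""
    (a, p), (b, q) = e1, e2
    if a == INF:
        return (b, q)
    if b == INF:
        return (a, p)
    mu = min(a, b)
    if k == 0:
        return (mu, OMEGA)
    children = []
    for (pv, pc), (qv, qc) in zip(p, q):
        children.append(ev_min(k - 1, (a - mu + pv, pc), (b - mu + qv, qc)))
    return (mu, tuple(children))

def ev_eval(edge, states):
    """Evaluate the function encoded by an edge on (x_k, ..., x_1)."""
    val, node = edge
    for x in states:
        if val == INF:
            return INF
        ev, node = node[x]
        val += ev
    return val

# Two functions over X2 = {0,1}, X1 = {0,1}:
#   f = [0, 2, 3, 1]  and  g = [1, inf, 0, 2]  (g is partial)
f_edge = (0, ((0, ((0, OMEGA), (2, OMEGA))),
              (1, ((2, OMEGA), (0, OMEGA)))))
g_edge = (0, ((1, ((0, OMEGA), (INF, OMEGA))),
              (0, ((0, OMEGA), (2, OMEGA)))))
h_edge = ev_min(2, f_edge, g_edge)  # pointwise minimum of f and g
```

Because both input edges are normalized (each node has a zero-valued edge), the returned edge value is always min(α, β), which is what makes the difference α − β a sufficient cache key in the real algorithm.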
38 Structured system analysis

PART: FILE:LOGICAL/structured-model.tex
Definition of structured discrete-state model 39
A structured discrete-state model is specified by
• a potential state space X̂ = XL × · · · × X1
  ◦ the "type" of the (global) state
  ◦ Xk is the (discrete) local state space for the k-th submodel
  ◦ if Xk is finite, we can map it to {0, 1, . . . , nk−1} (nk might be unknown a priori)
• a set of initial states Xinit ⊆ X̂
  ◦ often there is a single initial state xinit
• a set of events E defining a disjunctively-partitioned next-state function or transition relation
  ◦ Tα : X̂ → 2^X̂, where j ∈ Tα(i) iff state j can be reached by firing event α in state i
  ◦ T : X̂ → 2^X̂, where T(i) = ∪_{α∈E} Tα(i)
  ◦ naturally extended to sets of states: Tα(X) = ∪_{i∈X} Tα(i) and T(X) = ∪_{i∈X} T(i)
  ◦ α is enabled in i iff Tα(i) ≠ ∅, otherwise it is disabled
  ◦ i is absorbing, or dead, if T(i) = ∅

PART: FILE:BDD/bdd-generation.tex
Using BDDs to build the state space Xreach 40
An L-level BDD encodes a set of states S as a subset of the potential state space X̂ = {0, 1}^L
  i ≡ (iL, ..., i1) ∈ S ⇔ the corresponding path from the root leads to terminal 1
A 2L-level BDD encodes the transition relation T ⊆ X̂ × X̂
  (i, j) ≡ (iL, jL, ..., i1, j1) ∈ T ⇔ the system can go from i to j in one step
We can also think of it as the next-state function T : X̂ → 2^X̂
  j ∈ T(i) ⇔ the system can go from i to j in one step
Standard method:
ExploreBdd(Xinit, T) is
  1 S ← Xinit;    (known states)
  2 U ← Xinit;    (unexplored states)
  3 repeat
  4   X ← T(U);   (potentially new states)
  5   U ← X \ S;  (truly new states)
  6   S ← S ∪ U;
  7 until U = ∅;
  8 return S;
Alternative, "all" method:
AllExploreBdd(Xinit, T) is
  1 S ← Xinit;
  2 repeat
  3   O ← S;          (old states)
  4   S ← O ∪ T(O);   (add the new states)
  5 until O = S;
  6 return S;

PART: FILE:SYMBOLIC-GENERATION/comparing.tex
Explicit vs.
symbolic state space generation 41
Explicit generation of the state space Xreach adds one state at a time
• memory O(states), increases linearly, peaks at the end
Symbolic generation of the state space Xreach with decision diagrams adds sets of states instead
• memory O(decision diagram nodes), grows and shrinks, usually peaks well before the end
[figure: explicit enumeration of individual states vs. the decision diagram over (iL[0], iL−1[0], ..., i2[0], i1[0])]

PART: FILE:PETRINETS/finite-rs.tex
Petri nets and their state space Xreach : finite case 42
[figure: a Petri net with places p, q, r, s, t and transitions a, b, c, d, e, and the reachability graph obtained by firing its transitions]
If the initial state is xinit = (N, 0, 0, 0, 0), Xreach contains (N + 1)(N + 2)(2N + 3)/6 states

PART: FILE:PETRINETS/selfmodifyingPNguards.tex
Self-modifying Petri nets with inhibitor arcs, guards, priorities 43
A self-modifying Petri net with inhibitor arcs, guards, and priorities is a tuple (P, E, D−, D+, D◦, G, ≻, xinit)
• P and E: places and events
• D−, D+ : E × P × N^|P| → N: state-dependent input and output arc cardinalities
• D◦ : E × P × N^|P| → N ∪ {∞}: state-dependent inhibitor arc cardinalities
• G : E × N^|P| → {true, false}: state-dependent guards
• ≻ ⊂ E × E: acyclic (preselection) priority relation
• xinit ∈ N^|P|: initial state
Event α is enabled in a state i, written α ∈ E(i), iff
  ∀p ∈ P, D−α,p(i) ≤ ip ∧ D◦α,p(i) > ip ∧ Gα(i) ∧ ∀β ∈ E, β ≻ α ⇒ β ∉ E(i)
If firing α in i leads to j, the new state j satisfies ∀p ∈ P, jp = ip − D−α,p(i) + D+α,p(i) (deterministic effect)

PART: FILE:SYMBOLIC-GENERATION/safe-pn.tex
Symbolic state-space generation of safe Petri nets [Pastor94] 44
We can store
• any set of markings X ⊆ X̂ = {0, 1}^|P| of a safe PN with a |P|-level BDD
• any relation over X̂, or function X̂ → 2^X̂, such as T, with a 2|P|-level BDD
We can encode T using 4|E| boolean functions, each corresponding to a very simple
BDD:
• APMα = Π_{p : F−p,α = 1} (xp = 1)    (all predecessor places of α are marked)
• NPMα = Π_{p : F−p,α = 1} (xp = 0)    (no predecessor place of α is marked)
• ASMα = Π_{p : F+p,α = 1} (xp = 1)    (all successor places of α are marked)
• NSMα = Π_{p : F+p,α = 1} (xp = 0)    (no successor place of α is marked)

PART: FILE:SYMBOLIC-GENERATION/safe-pn.tex
Symbolic state-space generation of safe Petri nets (cont.) 45
The topological image computation for a transition α on a set of states U can be expressed as
  Tα(U) = (((U ÷ APMα) · NPMα) ÷ NSMα) · ASMα
where "÷" indicates the cofactor operator and "·" indicates boolean conjunction
Given
• a boolean function f over (xL, . . . , x1)
• a literal xk = ik, with L ≥ k ≥ 1 and ik ∈ B
the cofactor f ÷ (xk = ik) is defined as
• f(xL, . . . , xk+1, ik, xk−1, . . . , x1)
The extension to multiple literals, f ÷ (xkc = ikc, . . . , xk1 = ik1), is recursively defined as
• f(xL, . . . , xkc+1, ikc, xkc−1, . . . , x1) ÷ (xkc−1 = ikc−1, . . . , xk1 = ik1)
Thus, T is stored in a disjunctively partitioned form as T = ∪_{α∈E} Tα

PART: FILE:STRUCTURAL/chaining.tex
Chaining [Roig95] 46
For a Petri net where T is stored in a disjunctively partitioned form, the effect of
  X ← T(U); U ← X \ S;
is exactly achieved with the statements
  X ← ∅;
  for each α ∈ E do X ← X ∪ Tα(U);
  U ← X \ S;
However, if we do not require strict breadth-first order, we can use chaining:
  for each α ∈ E do U ← U ∪ Tα(U);
  U ← U \ S;

PART: FILE:STRUCTURAL/all-vs-frontier.tex
Symbolic SsGen: breadth-first vs. chaining, new vs.
all states 47
BfSsGen(Xinit, {Tα : α ∈ E}) is
  1 S ← Xinit;
  2 U ← Xinit;
  3 repeat
  4   X ← ∅;
  5   for each α ∈ E do
  6     X ← X ∪ Tα(U);
  7   U ← X \ S;
  8   S ← S ∪ U;
  9 until U = ∅;
 10 return S;
AllBfSsGen(Xinit, {Tα : α ∈ E}) is
  1 S ← Xinit;
  2 repeat
  3   O ← S;
  4   X ← ∅;
  5   for each α ∈ E do
  6     X ← X ∪ Tα(O);
  7   S ← O ∪ X;
  8 until O = S;
  9 return S;
ChSsGen(Xinit, {Tα : α ∈ E}) is
  1 S ← Xinit;
  2 U ← Xinit;
  3 repeat
  4   for each α ∈ E do
  5     U ← U ∪ Tα(U);
  6   U ← U \ S;
  7   S ← S ∪ U;
  8 until U = ∅;
  9 return S;
AllChSsGen(Xinit, {Tα : α ∈ E}) is
  1 S ← Xinit;
  2 repeat
  3   O ← S;
  4   for each α ∈ E do
  5     S ← S ∪ Tα(S);
  6 until O = S;
  7 return S;

PART: FILE:STRUCTURAL/all-vs-frontier.tex
Comparing the four approaches 48
                     Time (sec)                     Memory (MB)
N     |Xreach|       Bf     AllBf   Ch     AllCh    Bf      AllBf   Ch     AllCh  final
Dining Philosophers: L = N/2, |Xk| = 34 for all k
50    2.2×10^31      37.6   36.8    1.3    1.3      146.8   131.6   2.2    2.2    0.0
100   5.0×10^62      644.1  630.4   5.4    5.3      >999.9  >999.9  8.9    8.9    0.0
1000  9.2×10^626     —      —       895.4  915.5    —       —       895.2  895.0  0.3
Slotted Ring Network: L = N, |Xk| = 15 for all k
5     5.3×10^4       0.2    0.3     0.1    0.1      0.8     1.1     0.3    0.2    0.0
10    8.3×10^9       21.5   24.1    2.1    1.2      39.0    45.0    5.7    3.3    0.0
15    1.5×10^15      745.4  771.5   18.5   8.9      344.3   375.4   35.1   20.2   0.0
Round Robin Mutual Exclusion: L = N+1, |Xk| = 10 for all k except |X1| = N+1
10    2.3×10^4       0.2    0.3     0.1    0.1      0.6     1.2     0.1    0.1    0.0
20    4.7×10^7       2.7    4.4     0.3    0.3      5.9     12.8    0.5    0.5    0.0
50    1.3×10^17      263.2  427.6   2.9    2.8      126.7   257.7   4.3    3.8    0.1
FMS: L = 19, |Xk| = N+1 for all k except |X17| = 4, |X12| = 3, |X7| = 2
5     2.9×10^6       0.7    0.7     0.1    0.1      2.6     2.2     0.4    0.2    0.0
10    2.5×10^9       7.0    5.8     0.5    0.3      18.2    14.7    2.3    1.3    0.0
25    8.5×10^13      677.2  437.9   12.9   5.1      319.7   245.3   42.7   21.2   0.1

PART: FILE:TEMPORAL-LOGIC/ctl-def.tex
CTL: computation tree logic 49
Given a Kripke structure (X̂, Xinit, T, A, L)
CTL has state formulas and path formulas
• State formulas:
  ◦ if a ∈ A, a is a state formula (a is an atomic proposition, true or false in each state)
  ◦ if p and p′ are state formulas, ¬p, p ∨ p′, p ∧ p′ are state
formulas
  ◦ if q is a path formula, Eq, Aq are state formulas
• Path formulas:
  ◦ if p and p′ are state formulas, Xp, Fp, Gp, pUp′, pRp′ are path formulas
  ◦ Note: unlike CTL∗, a state formula is not also a path formula
In CTL, operators occur in pairs:
• a path quantifier, E or A, must always immediately precede a temporal operator, X, F, G, U, R
Of course, CTL expressions can be nested: p ∨ E¬pU(¬p ∧ AXp)
A CTL formula p identifies a set of model states (those satisfying p)

PART: FILE:TEMPORAL-LOGIC/ctl-def.tex
CTL semantics 50
[figure: the computation trees illustrating EXp, EFp, EGp, E[pUq], AXp, AFp, AGp, A[pUq]; legend: p holds, q holds, don't care]
EX, EU, and EG form a complete set of CTL operators, since:
  AXp = ¬EX¬p
  AFp = ¬EG¬p
  AGp = ¬EF¬p
  EFp = E[true U p]
  A[pUq] = ¬E[¬q U (¬p ∧ ¬q)] ∧ ¬EG¬q
  A[pRq] = ¬E[¬pU¬q]
  E[pRq] = ¬A[¬pU¬q]

PART: FILE:TEMPORAL-LOGIC/ctl-explicit-algorithms.tex
The EX algorithm for CTL (explicit version) 51
An algorithm to label all states that satisfy EXp
We assume that all states satisfying p have been correctly labeled already
BuildEX(p) is
  1 X ← {i ∈ Xreach : p ∈ labels(i)};    (initialize X with the states satisfying p)
  2 while X ≠ ∅ do
  3   pick and remove a state j from X;
  4   for each i ∈ T⁻¹(j) do    (state i can transition to state j)
  5     labels(i) ← labels(i) ∪ {EXp};

PART: FILE:TEMPORAL-LOGIC/ctl-explicit-algorithms.tex
The EU algorithm for CTL (explicit version) 52
An algorithm to label all states that satisfy E[pUq]
We assume that all states satisfying p and all states satisfying q have been correctly labeled already
BuildEU(p, q) is
  1 X ← {i ∈ Xreach : q ∈ labels(i)};    (initialize X with the states satisfying q)
  2 for each i ∈ X do
  3   labels(i) ← labels(i) ∪ {E[pUq]};
  4 while X ≠ ∅ do
  5   pick and remove a state j from X;
  6   for each i ∈ T⁻¹(j) do    (state i can transition to state j)
  7     if E[pUq] ∉ labels(i) and p ∈ labels(i) then
  8       labels(i) ← labels(i) ∪ {E[pUq]};
  9       X ← X ∪ {i};

PART: FILE:TEMPORAL-LOGIC/ctl-explicit-algorithms.tex
The EG algorithm for CTL (explicit version) 53
An
algorithm to label all states that satisfy EGp
We assume that all states satisfying p have been correctly labeled already
BuildEG(p) is
  1 X ← {i ∈ Xreach : p ∈ labels(i)};    (initialize X with the states satisfying p)
  2 build the set C of SCCs in the subgraph of T induced by X;
  3 Y ← {i : i is in a SCC of C};
  4 for each i ∈ Y do
  5   labels(i) ← labels(i) ∪ {EGp};
  6 while Y ≠ ∅ do
  7   pick and remove a state j from Y;
  8   for each i ∈ T⁻¹(j) do    (state i can transition to state j)
  9     if EGp ∉ labels(i) and p ∈ labels(i) then
 10       labels(i) ← labels(i) ∪ {EGp};
 11       Y ← Y ∪ {i};
This algorithm relies on finding the (nontrivial) strongly connected components (SCCs) of a graph

PART: FILE:TEMPORAL-LOGIC/ctl-symbolic-algorithms.tex
The EX algorithm for CTL (symbolic version) 54
All sets of states and relations over sets of states are encoded using BDDs
An algorithm to build the BDD encoding the set of states that satisfy EXp
Assume that the BDD encoding the set P of states satisfying p has been built already
BuildEXsymbolic(P) is
  1 X ← RelationalProduct(P, T⁻¹);    (perform one backward step in the transition relation)
  2 return X;

PART: FILE:TEMPORAL-LOGIC/ctl-symbolic-algorithms.tex
The EU algorithm for CTL (symbolic version) 55
Two algorithms to build the BDD encoding the set of states that satisfy E[pUq]
Assume that the BDDs encoding the sets P and Q of states satisfying p and q have been built already
BuildEUsymbolic(P, Q) is
  1 X ← ∅;
  2 U ← Q;    (initialize the unexplored set U with the states satisfying q)
  3 repeat
  4   X ← Union
(X , U); 5 Y ← RelationalProduct(U, T −1 ); 6 Z ← Intersection(Y, P); 7 U ← Difference(Z, X ); 8 until U = ∅; 9 return X ; initialize the unexplored set U with the states satisfying q currently known states satisfying E[pUq] perform one backward step in the transition relation discard the states that do not satisfy p discard the states that are not new BuildEUsymbolicAll (P, Q) is 1 X ← Q; initialize the currently known result with the states satisfying q 2 repeat 3 O ← X; save the old set of states 4 Y ← RelationalProduct(X , T −1 ); perform one backward step in the transition relation 5 Z ← Intersection(Y, P); discard the states that do not satisfy p 6 X ← Union(Z, X ); add to the currently known result 7 until O = X ; 8 return X ; PART: FILE:TEMPORAL-LOGIC/ctl-symbolic-algorithms.tex The EG algorithm for CTL (symbolic version) An algorithm to build the BDD encoding the set of states that satisfy EGp Assume that the BDDs encoding the set P of states satisfying p has been built already BuildEGsymbolic(P) is 1 X ← P; 2 repeat O ← X; 3 4 Y ← RelationalProduct(X , T −1 ); 5 X ← Intersection(X , Y); 6 until O = X ; 7 return X ; initialize X with the states satisfying p save the old set of states perform one backward step in the transition relation This algorithm starts with a larger set of states and reduces it This algorithm is not based on finding the strongly connected components of T 56 PART: FILE: . 57 Locality and the Saturation algorithm PART: FILE:STRUCTURAL/structural-decomposition.tex Kronecker-consistent decomposition of a structured model 58 A decomposition of a discrete-state model is Kronecker-consistent if: • T is disjunctively partitioned according to a set of events E T (i) = • Xb = ×L≥k≥1 Xk , a global state i consists of L local states • and, most importantly, we can write Define the (potential) incidence matrix T= P α∈E S α∈E Tα (i) i = (iL , . . . 
, i1)
• and, most importantly, we can write Tα(i) = ×L≥k≥1 Tk,α(ik)

Define the (potential) incidence matrix T[i, j] = 1 ⇔ j ∈ T(i); then T = Σα∈E ⊗L≥k≥1 Tk,α
We encode the next-state function with L · |E| small matrices Tk,α ∈ B^(|Xk|×|Xk|)
For Petri nets, any partition of the places into L subsets will do! (even with inhibitor, reset, or probabilistic arcs)

PART: FILE:MDD/next-state-function-encodingA.tex Using structural information to encode T 59

[Figure: a Petri net with places p, q, r, s, t and events a, b, c, d, e, partitioned into L = 5 levels, and the grid of matrices Tk,α; most entries are the identity I. Top(a) = 5, Bot(a) = 2; Top(b) = 4, Bot(b) = 3; Top(c) = 4, Bot(c) = 3; Top(d) = 2, Bot(d) = 1; Top(e) = 5, Bot(e) = 1. We determine a priori from the model whether Tk,α = I]

PART: FILE:MDD/next-state-function-encodingB.tex Kronecker encoding of T: T = Σα∈{a,b,c,d,e} ⊗5≥k≥1 Tk,α 60

[Figure: the same grid with the actual 2×2 boolean matrices, for the local state spaces X5 = {p1,p0} ≡ {0,1}, X4 = {q0,q1} ≡ {0,1}, X3 = {r0,r1} ≡ {0,1}, X2 = {s0,s1} ≡ {0,1}, X1 = {t0,t1} ≡ {0,1}]

PART: FILE:MDD/next-state-function-encodingC.tex Using structural information to encode T (K = 4) 61

[Figure: a coarser 4-level partition of the same net. Top(a) = 4, Bot(a) = 2; Top(b) = Bot(b) = 3; Top(c) = Bot(c) = 3; Top(d) = 2, Bot(d) = 1; Top(e) = 4, Bot(e) = 1]

Top(b) = Bot(b) = Top(c) = Bot(c) = 3: we can merge b and c into a single local event
We determine automatically from the model whether Tk,α = I

PART: FILE:MDD/next-state-function-encodingDmod.tex Kronecker encoding of T: T = Σα∈{a,b,c,d,e} ⊗4≥k≥1 Tk,α 62

[Figure: the matrices for the 4-level partition, with X4 = {p1,p0} ≡ {0,1}, X3 = {q0r0,q1r0,q0r1} ≡ {0,1,2}, X2 = {s0,s1} ≡ {0,1}, X1 = {t0,t1} ≡ {0,1}]

PART: FILE:MDD/next-state-functionD.tex The matrix T encoded by the Kronecker descriptor (L = 4) 63

[Figure: the full transition matrix T over the 24 potential states, with each nonzero entry marked by the event (a, b, c, d, e) causing that transition; reachable states are marked with •]

PART: FILE:MDD/locality-def.tex Locality 64

The Kronecker encoding of T evidences locality:
• If Tk,α = I, we say that event α and submodel k are independent
• If Tk,α[ik, jk] = 0 for all jk ∈ Xk (some local state ik disables α), the state of submodel k affects the enabling of event α
• If Tk,α[ik, jk] = 1 for some jk ≠ ik, the firing of event α can change the state of submodel k
• In the last two cases, we say that event α depends on submodel k and vice versa

Most events in a globally-asynchronous locally-synchronous model are highly localized:
• Let Top(α) and Bot(α) be the highest and lowest levels on which α depends
• {Top(α), ..., Bot(α)} is the range (of levels) for event α, often much smaller than {L, ..., 1}

The standard 2L-level MDD encoding of T does not exploit locality; we need a Kronecker or identity-reduced 2L-level MDD encoding

PART: FILE:MDD/locality.tex Exploiting locality 65

Locality takes into account the range of levels to which Tα must be applied:
If i ∈ Xreach, j ∈ Tα(i), Top(α) = k ∧ Bot(α) = h: j = (iL, ..., ik+1, jk, ..., jh, ih−1, ..., i1)

In addition, it enables in-place updates of a node p at level k:
If i′ = (i′L, ..., i′k+1, ik, ..., i1) ∈ Xreach: j′ ∈ Tα(i′) ∧ j′ = (i′L, ..., i′k+1, jk, ..., jh, ih−1, ..., i1)

[Figure: for a local event α (Top(α) = Bot(α) = k), taking ik to jk, node p with p[ik] = q and p[jk] = r is updated in place to p[jk] = Union(q, r); for a synchronizing event α (Top(α) > Bot(α)), taking (ik, ..., ih) to (jk, ..., jh), it is updated to p[jk] = Union(Fire(α, q), r)]

Locality and in-place updates save huge amounts of computation

PART: FILE:STRUCTURAL/saturation-algorithm.tex Saturation: an iteration strategy based on the model structure 66

An MDD node p at level k is saturated if it encodes a fixed point w.r.t. any α s.t.
Top(α) ≤ k (this implies that all nodes reachable from p are also saturated)

• build the L-level MDD encoding of Xinit (if |Xinit| = 1, there is one node per level)
• saturate each node at level 1: fire in them all events α s.t. Top(α) = 1
• saturate each node at level 2: fire in them all events α s.t. Top(α) = 2 (if this creates nodes at level 1, saturate them immediately upon creation)
• saturate each node at level 3: fire in them all events α s.t. Top(α) = 3 (if this creates nodes at levels 2 or 1, saturate them immediately upon creation)
• ...
• saturate the root node at level L: fire in it all events α s.t. Top(α) = L (if this creates nodes at levels L−1, L−2, . . . , 1, saturate them immediately upon creation)

States are not discovered in breadth-first order
This can lead to enormous time and memory savings for asynchronous systems

PART: FILE:STRUCTURAL/saturation-properties.tex Saturation behavior and properties 67

Traditional approaches apply the global next-state function T once to each node at each iteration and make extensive use of the unique table and operation caches. With Saturation, instead:
• We exhaustively fire each event α in each node p at level k = Top(α), from k = 1 up to L
• We must consider redundant nodes as well, thus we prefer quasi-reduced MDDs
• Once node p at level k is saturated, we never fire any event α with k = Top(α) on p again
• The recursive Fire calls stop at level Bot(α), although the Union calls can go deeper
• Only saturated nodes are placed in the unique table and in the union and firing caches
• Many (most?) nodes we insert in the MDD will still be present in the final MDD
• Firing α in p benefits from having saturated the nodes below p

Usually enormous memory and time savings, but Saturation is not optimal for all models

PART: FILE:FOILS/saturation-pseudo.tex Saturation pseudocode: Saturate(L, Xinit) 68

mdd Saturate(level k, mdd p) is
  local mdd r, r0, ..., r(nk−1);
1 if p = 0 then return 0;
2 if p = 1 then return 1;
3 if Cache contains entry ⟨SaturateCODE, p : r⟩ then return r;
4 for i = 0 to nk − 1 do ri ← Saturate(k − 1, p[i]);     • first, be sure that the children are saturated
5 repeat                                                 • Ek = {α : Top(α) = k}
6   choose e ∈ Ek, i, j ∈ Xk, s.t. ri ≠ 0 and Te[i][j] ≠ 0;
7   rj ← Or(rj, RelProdSat(k − 1, ri, Te[i][j]));
8 until r0, ..., r(nk−1) do not change;
9 r ← UniqueTableInsert(k, r0, ..., r(nk−1));
10 enter ⟨SaturateCODE, p : r⟩ in Cache;
11 return r;

mdd RelProdSat(level k, mdd q, mdd2 F) is
  local mdd r, r0, ..., r(nk−1);
1 if q = 0 or F = 0 then return 0;
2 if Cache contains entry ⟨RelProdSatCODE, q, F : r⟩ then return r;
3 for each i, j ∈ Xk s.t. q[i] ≠ 0 and F[i][j] ≠ 0 do
    rj ← Or(rj, RelProdSat(k − 1, q[i], F[i][j]));
4 r ← Saturate(k, UniqueTableInsert(k, r0, ..., r(nk−1)));
5 enter ⟨RelProdSatCODE, q, F : r⟩ in Cache;
6 return r;

PART: FILE:STRUCTURAL/smart-vs-nusmv.tex Results: SMART vs. NuSMV 69

Time and memory to generate Xreach using Saturation in SMART vs.
breadth-first iterations in NuSMV

[Table: peak memory (kB) and time (sec) for SMART vs. NuSMV, on Dining Philosophers (N = 50, 200, 10,000; |Xreach| = 2.23×10^31, 2.47×10^125, 4.26×10^6269; L = N), Round Robin Mutual Exclusion (N = 20, 100, 300; |Xreach| = 4.72×10^7, 2.85×10^32, 1.37×10^93; L = N+1), Slotted Ring Network (N = 10, 15, 200; |Xreach| = 8.29×10^9, 1.46×10^15, 8.38×10^211; L = N), and Flexible Manufacturing System (N = 10, 20, 250; |Xreach| = 4.28×10^6, 3.84×10^9, 3.47×10^26; L = 19). SMART uses orders of magnitude less time and memory than NuSMV, and NuSMV cannot complete the largest instance of any model (entries "—")]

PART: FILE: . 70 EV+MDDs and the distance function

PART: FILE:EVMDD/add-vs-forest.tex The distance function 71

Given a model (X̂, Xinit, T), we can define the distance function δ : X̂ → N ∪ {∞}:
δ(i) = min{d : i ∈ T^d(Xinit)}, thus δ(i) = ∞ ⇔ i ∉ Xreach

Build X[d] = {i : δ(i) = d}, for d = 0, 1, ..., dmax:
DistanceMddForestEQ(Xinit, T) is
1 d ← 0; Xreach ← Xinit;
2 X[0] ← Xinit;
3 repeat
4   X[d+1] ← T(X[d]) \ Xreach;
5   d ← d + 1; Xreach ← Xreach ∪ X[d];
6 until X[d] = ∅;

Build Y[d] = {i : δ(i) ≤ d}, for d = 0, 1, ..., dmax:
DistanceMddForestLE(Xinit, T) is
1 d ← 0;
2 Y[0] ← Xinit;
3 repeat
4   Y[d+1] ← T(Y[d]) ∪ Y[d];
5   d ← d + 1;
6 until Y[d] = Y[d−1];

This is breadth-first symbolic state-space generation:
Xreach = {i ∈ X̂ : δ(i) < ∞} = ∪(d=0..dmax) X[d] = Y[dmax] is a by-product of this process!

PART: FILE:EVMDD/add-vs-forest.tex Encoding the distance function: MDD vs. ADD (a.k.a. MTMDD) 72

[Figure: the function f over (i3, i2, i1) with values f = 0, 2, 3, ∞, ∞, 4, 1, 0, encoded both as a forest of MDDs, one per distance value (dist = 0, 1, 2, 3, 4), and as a single ADD with terminals 0, 1, 2, 3, 4]

With an MDD forest: node merging can be poor at the top
With an ADD: node merging can be poor at the bottom
Both approaches are explicit in the number of distinct distance values

PART: FILE:EVMDD/evmdd.tex Encoding the distance function: EVMDD [Lai et al.]
[Figure: three EVMDDs encoding the function f over (i3, i2, i1) with values f = 0, 2, 3, 2, 2, 4, 1, 0, with values attached to the edges and terminal Ω]

The first EVMDD is canonical (all nodes have a value 0 associated to the 0-arc)
The second and third EVMDDs are not normalized
This encoding is truly "implicit" or "symbolic" 73

PART: FILE:EVMDD/summary.tex Encoding the distance function: EV+MDDs 74

[Figure: an EV+MDD (ρ, r) encoding f over (i3, i2, i1) with values f = 0, 2, 3, ∞, ∞, 4, 1, 0]

If (ρ, r) encodes f, then:
• ρ is the value of the root edge (the minimum value of the function): ρ = min{f(i) : i ∈ X̂}
• r is the root node, at level L = 3
For example: f(1, 0, 1) = ρ + (0+4+0) = 4

EV+MDDs can canonically represent all functions X̂ → Z ∪ {∞}
EVBDDs [Lai et al. 1992] cannot represent certain partial functions

PART: FILE:EVMDD/summary.tex Using Saturation to compute the distance function 75

It is easy to build the distance function δ : X̂ → N ∪ {∞} using a breadth-first iteration:
δ(i) = min{d : i ∈ T^d(Xinit)}, thus δ(i) = ∞ ⇔ i ∉ Xreach

To use Saturation instead, think of δ as the fixed point of the iteration δ[m+1] = Φ(δ[m]), where
δ[m+1](i) = min{ δ[m](i), min{1 + δ[m](j) : ∃α ∈ E, i ∈ Tα(j)} }
initialized with δ[0](i) = 0 if i ∈ Xinit and δ[0](i) = ∞ otherwise

PART: FILE:EVMDD/results.tex Results: time and memory to generate and store δ 76

[Table: time (seconds), final nodes, and peak nodes for five methods (Es: EV+MDD & Saturation; Eb: EV+MDD & breadth-first; Mb: multiple MDDs & breadth-first; As: ADD & Saturation; Ab: ADD & breadth-first) on four models: dining philosophers (dmax = 2N, L = N/2, |Xk| = 34 for all k), Kanban system (dmax = 14N, L = 4, |Xk| = (N+3)(N+2)(N+1)/6 for all k), flexible manufacturing system (dmax = 14N, L = 19, |Xk| = N+1 for all k except |X17| = 4, |X12| = 3, |X2| = 2), and round-robin mutex protocol (dmax = 8N−6, L = N+1, |Xk| = 10 for all k except |X1| = N+1). Es is consistently the fastest and smallest, and is the only method that completes the largest instance of each model]

PART: FILE:EVMDD/trace-generation.tex Generating an EF trace using EV+MDDs 77

INPUT: the MDD x encoding a set of states X, the EV+MDD (ρ, r) encoding δ
OUTPUT: a (minimum) µ-length trace j[0], . . . , j[µ] from a state in Xinit to a state in X

1. Build the EV+MDD (0, x) encoding δx(i) = 0 if i ∈ X and δx(i) = ∞ if i ∈ X̂ \ X
2. Compute the EV+MDD (µ, m) encoding Max((ρ, r), (0, x)); µ is the length of one of the shortest paths we are seeking
3. If µ = ∞, exit: X does not contain reachable states
4. Otherwise, extract from (µ, m) a state j[µ] = (j[µ]L, . . . , j[µ]1) on a 0-labelled path from m to Ω; j[µ] is a reachable state in X at the desired minimum distance µ from Xinit
5. Initialize ν to µ and iterate until ν = 0:
(a) For each state i ∈ X̂ such that j[ν] ∈ T(i) (use the backward function T⁻¹), compute δ(i) using (ρ, r) and stop on the first i∗ such that δ(i∗) = ν − 1; there exists at least one such state i∗
(b) Decrement ν
(c) Let j[ν] be i∗

PART: FILE: .
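Before moving on to Markovian models, the breadth-first distance construction of slide 71 can be illustrated on a small explicit (non-symbolic) model. This is a minimal Python sketch, not SMART code; the names distance_function and succ are illustrative, and the model is given as an explicit successor function rather than an MDD:

```python
def distance_function(X_init, succ):
    """Compute delta(i) = min{d : i in T^d(X_init)} by breadth-first
    iteration; states never reached implicitly have delta = infinity.
    `succ` maps a state to the set of its successors (the function T)."""
    delta = {i: 0 for i in X_init}      # delta^[0]
    frontier = set(X_init)              # X[d]: states at distance exactly d
    d = 0
    while frontier:
        # X[d+1] = T(X[d]) \ X_reach
        next_frontier = {j for i in frontier for j in succ(i)} - delta.keys()
        d += 1
        for j in next_frontier:
            delta[j] = d
        frontier = next_frontier
    return delta                        # X_reach = delta.keys() is a by-product


# A tiny 4-state example: cycle 0 -> 1 -> 2 -> 0, with state 3 unreachable
T = {0: {1}, 1: {2}, 2: {0}, 3: {0}}
delta = distance_function({0}, lambda i: T[i])
# delta == {0: 0, 1: 1, 2: 2}; state 3 never enters delta (distance infinity)
```

As on the slide, the reachable state space falls out of the iteration for free: the keys of the returned map are exactly Xreach, and the symbolic algorithms replace the explicit sets X[d] with MDDs.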
78 Markovian system models

PART: FILE:MARKOV-CHAINS/markov-chains.tex Markov chains 79

A stochastic process {X(t) : t ≥ 0} is a collection of r.v.'s indexed by a time parameter t
We say that X(t) is the state of the process at time t
The possible values X(t) can ever assume for any t form (a subset of) the state space Xreach

{X(t) : t ≥ 0} over a discrete Xreach is a continuous-time Markov chain (CTMC) if, for any 0 ≤ t[0] ≤ . . . ≤ t[n−1] ≤ t[n] ≤ t[n+1] and i[0], . . . , i[n−1], i[n], i[n+1] ∈ Xreach:

Pr{X(t[n+1]) = i[n+1] | X(t[n]) = i[n] ∧ X(t[n−1]) = i[n−1] ∧ . . . ∧ X(t[0]) = i[0]}
  = Pr{X(t[n+1]) = i[n+1] | X(t[n]) = i[n]}

Markov property: "given the present state, the future is independent of the past"
"the most recent knowledge about the state is all we need"

PART: FILE:MARKOV-CHAINS/ctmc-def.tex Markov chain description and analysis 80

A continuous-time Markov chain (CTMC) {X(t) : t ≥ 0} with state space Xreach is described by:
• its infinitesimal generator Q = R − diag(R · 1) = R − diag(h)⁻¹ ∈ R^(|Xreach|×|Xreach|)
• its initial probability vector π(0) ∈ R^|Xreach|
where
• R is the transition rate matrix: R[i, j] is the rate of going to state j when in state i
• h is the expected holding time vector: h[i] = 1 / Σ(j∈Xreach) R[i, j]
• π(0)[i] = Pr{chain is in state i at time 0, i.e., initially}

Transient probability vector π(t) ∈ R^|Xreach|: π(t)[i] = Pr{X(t) = i}
• π(t) is the solution of dπ(t)/dt = π(t) · Q with initial condition π(0)

Steady-state probability vector π ∈ R^|Xreach|: π[i] = lim(t→∞) Pr{X(t) = i}
• π is the solution of π · Q = 0 subject to Σ(i∈Xreach) π[i] = 1 (Q must be ergodic)

PART: FILE:SOLUTION/solution-stationary-markovSHORT.tex Stationary solution of Markov models 81

For ergodic CTMCs: solve π · Q = 0 subject to Σ(i∈Xreach) π[i] = 1, where rank(Q) = |Xreach| − 1
Direct methods are rarely applicable in practice; iterative methods are preferable as they avoid fill-in

Jacobi(in: π(old), h, R; out: π(new)) is
1 repeat
2   for j = 0 to |Xreach| − 1 do
3     π(new)[j] ← h[j] · Σ(0≤i<|Xreach|, i≠j) π(old)[i] · R[i, j];
4   "renormalize π(new)";
5   π(old) ← π(new);
6 until "converged";
7 return π(new);

GaussSeidel(in: h, R; inout: π) is
1 repeat
2   for j = 0 to |Xreach| − 1 do
3     π[j] ← h[j] · Σ(0≤i<|Xreach|, i≠j) π[i] · R[i, j];
4   "renormalize π";
5 until "converged";
6 return π;

Gauss-Seidel converges faster than Jacobi but requires strict by-column access to the entries of R

PART: FILE:LOGICAL/state-indexing-options.tex State indexing options: potential ψ̂ vs. actual ψ 82

For Markov analysis, we can generate Xreach first, using Xinit and T : X̂ → 2^X̂
Once we know Xreach:
• We can restrict T to T : Xreach → 2^Xreach (if needed for further logical analysis)
• We can store R̂ : X̂ × X̂ → R or R : Xreach × Xreach → R
• We can choose algorithms that use π̂ : X̂ → R or π : Xreach → R

Strictly explicit methods: using actual R and π works best
Strictly implicit methods: decision diagrams usually don't work well to store π̂ or π
Implicit methods have tradeoffs:
• Storing π instead of π̂ is often unavoidable if we employ a full vector and |X̂| ≫ |Xreach|
• Symbolic storage of R̂ is usually cheaper than that of R in terms of memory requirements
• However, using R̂ in conjunction with π complicates indexing...
• ...forcing us to store ψ : X̂ → {0, 1, . . .
, |Xreach| − 1} ∪ {null}, hence Xreach

PART: FILE:FOILS/vector-matrix-mult-pseudo.tex MxD-based vector-matrix multiplication algorithm 83

real[n] VectorMatrixMult(real[n] x, mxd node A, evmdd node ψ) is        • n = |Xreach|
  local natural s;                        • state index in x
  local real[n] y;
  local sparse real c;
1 s ← 0;                                  • s = ψ(j)
2 for each j = (jL, ..., j1) ∈ Xreach in lexicographic order do
3   c ← GetCol(L, A, ψ, jL, ..., j1);     • build column j of A using sparse storage
4   y[s] ← ElementWiseMult(x, c);         • x uses full storage, c uses sparse storage
5   s ← s + 1;
6 return y;

sparse real GetCol(level k, mxd node M, evmdd node φ, natural jk, ..., j1) is
  local sparse real c, d;
1 if k = 0 then return [1];               • a vector of size one, with its entry set to 1
2 if Cache contains entry ⟨GetColCODE, M, φ, jk, ..., j1 : c⟩ then return c;
3 c ← 0;                                  • initialize the result to all zero entries
4 for each ik ∈ Xk such that M[ik, jk].val ≠ 0 and φ[ik].val ≠ ∞ do
5   d ← GetCol(k − 1, M[ik, jk].child, φ[ik].child, jk−1, ..., j1);
6   for each i such that d[i] ≠ 0 do
7     c[i + φ[ik].val] ← c[i + φ[ik].val] + M[ik, jk].val · d[i];
8 enter ⟨GetColCODE, M, φ, jk, ..., j1 : c⟩ in Cache;
9 return c;

PART: FILE: . 84 SMART

PART: FILE:SMART/smart-short.tex What is SMART?
• A package integrating logical and stochastic modeling formalisms into a single environment
• Multiple parametric models expressed in different formalisms can be combined in a study
• Easy addition of new formalisms and solution algorithms
• For the study of logical behavior:
◦ explicit (BFS exploration) state-space generation [Tools97]
◦ implicit (symbolic MDD Saturation) state-space generation [TACAS01,03]
◦ next-state function with Kronecker products or matrix diagrams [PNPM99, IPDS01]
◦ Saturation-based CTL symbolic model checking [CAV03]
• For the study of stochastic and timing behavior:
◦ explicit (sparse storage) numerical solution of CTMCs and DTMCs
◦ implicit (Kronecker) numerical solution of CTMCs [INFORMSJC00]
◦ structural-based approximations of large CTMC models [SIGMETRICS00]
◦ explicit numerical solution of semi-regenerative models [PNPM01]
◦ simulation of arbitrary models
◦ regenerative simulation with automatic detection of regeneration points [PMCCS03]
◦ structural caching to speed up simulation [PMCCS03]

PART: FILE:SMART/smart-logical.tex SMART Language 86

A strongly-typed, computation-on-demand language. Five types of basic statements . . .
• Declaration statements declare functions over some arguments (including constants)
• Definition statements declare functions and specify how to compute their value
• Model statements define parametric models (declarations, specifications, measures)
• Expression statements print values
• Option statements modify the behavior of SMART, and may have side-effects (precision, verbosity level)

Two compound statements that can be arbitrarily nested:
• for statements define arrays or repeatedly evaluate parametric expressions; useful for parametric studies
• converge statements specify numerical fixed-point iterations; useful for approximate performance or reliability studies; cannot appear within the declaration of a model

PART: FILE:SMART/smart-logical.tex SMART Types 87

Basic predefined types are available in SMART:
bool: the values true or false, e.g. bool c := 3 - 2 > 0;
int: integers (machine-dependent), e.g. int i := -12;
bigint: arbitrary-size integers, e.g. bigint i := 12345678901234567890*2;
real: floating-point values (machine-dependent), e.g. real x := sqrt(2.3);
string: character-array values, e.g. string s := "Monday";

Composite types can be defined:
aggregate: analogous to the Pascal "record" or C "struct", e.g. p:t:3
set: collection of homogeneous objects, e.g. {1..8,10,25,50}
array: collection of homogeneous objects indexed by set elements

A type can be further modified by a nature describing stochastic characteristics:
const: (the default) a non-stochastic quantity
ph: a random variable with discrete or continuous phase-type distribution
rand: a random variable with arbitrary distribution
proc: a random variable that depends on the state of a model at a given time

Predefined formalism types can be used to define discrete-state models (logical or stochastic)

PART: FILE:SMART/smart-logical.tex Function declarations 88

Objects in SMART are functions, possibly recursive, that can be overloaded:

real pi := 3.14;  // a constant, i.e., a 0-argument function
bool close(real a, real b) := abs(a-b) < 0.00001;
int
pow(int b, int e) := cond(e==1, b, b*pow(b,e-1));
real pow(real b, int e) := cond(e==1, b, b*pow(b,e-1));
pow(5,3);    // expression, not declaration, prints 125, int
pow(0.5,3);  // expression, not declaration, prints 0.125, real

PART: FILE:SMART/smart-logical.tex Arrays 89

Arrays are declared using a for statement:
• An array's dimensionality is determined by the enclosing iterators
• Indices along each dimension belong to a finite set
• We can define arrays with real indices

for (int i in {1..5}, real r in {1..i..0.5}) {
  real res[i][r] := MyModel(i,r).out1;
}

res is not a "rectangular" array of values:
• res[1][1.0]
• res[2][1.0], res[2][1.5], res[2][2.0]
• ...
• res[5][1.0], res[5][1.5], . . . , res[5][5.0]

PART: FILE:SMART/smart-logical.tex State-space generation and storage: model partition 90

SMART uses the #StateStorage option to choose between:
• explicit techniques that store each state individually (AVL, SPLAY, HASHING)
◦ no restrictions on the model
◦ require time and memory at least linear in the number of reachable states
• implicit techniques that employ MDDs to symbolically store sets of states (MDD_LOCAL_PREGEN, MDD_SATURATION_PREGEN, MDD_SATURATION)
◦ normally much more efficient
◦ require a Kronecker-consistent partition of the model, automatically checked by SMART (global model behavior is the logical product of each submodel's local behavior)

A PN partition is specified by giving class indices (contiguous, positive integers) to places:
partition(2:p); partition(1:r); partition(1:t, 2:q, 1:s);
or by simply enumerating (without index information) the places in each class:
partition(p:q, r:s:t);

PART: FILE:SMART/smart-logical.tex State-space generation and storage: dining philosophers 91

spn phils(int N) := {
  for (int i in {0..N-1}) {
    place idle[i], waitL[i], waitR[i], hasL[i], hasR[i], fork[i];
    partition(1+div(i,2):idle[i]:waitL[i]:waitR[i]:hasL[i]:hasR[i]:fork[i]);
    init(idle[i]:1, fork[i]:1);
    trans Go[i], GetL[i], GetR[i], Stop[i];
  }
  for (int i in {0..N-1}) {
    arcs(idle[i]:Go[i], Go[i]:waitL[i], Go[i]:waitR[i],
         waitL[i]:GetL[i], waitR[i]:GetR[i],
         fork[i]:GetL[i], fork[mod(i+1,N)]:GetR[i],
         GetL[i]:hasL[i], GetR[i]:hasR[i],
         hasL[i]:Stop[i], hasR[i]:Stop[i],
         Stop[i]:idle[i], Stop[i]:fork[i], Stop[i]:fork[mod(i+1, N)]);
  }
  bigint n_s := num_states(false);
};
# StateStorage MDD_SATURATION
print("The model has ", phils(read_int("N")).n_s, " states.\n");

Number of                      MDD Nodes        Memory (bytes)           CPU
philosophers  |S|              Final / Peak     Final / Peak             (secs)
100           4.97×10^62       197 / 246        30,732 / 38,376          0.04
1,000         9.18×10^626      1,997 / 2,496    311,532 / 389,376        0.45
10,000        4.26×10^6269     19,997 / 24,496  3,119,532 / 3,821,376    314.13

PART: FILE:SMART/smart-logical.tex CTL model checking: operations 92

An object of type stateset, a set of global model states, is stored as an MDD
All MDDs for a model instance are stored in one MDD forest with shared nodes for efficiency

• Atom builders:
◦ nostates, returns the empty set
◦ initialstate, returns the initial state or states of the model
◦ reachable, returns the set of reachable states in the model
◦ potential(e), returns the states of X̂ satisfying condition e
• Set operators:
◦ union(P, Q), returns P ∪ Q
◦ intersection(P, Q), returns P ∩ Q
◦ complement(P), returns X̂ \ P
◦ difference(P, Q), returns P \ Q
◦ includes(P, Q), returns true iff P ⊇ Q
◦ eq(P, Q), returns true iff P = Q

PART: FILE:SMART/smart-logical.tex CTL model checking: operations (cont.)
• Temporal logic operators (future and past CTL operators):
◦ EX(P) and EXbar(P), AX(P) and AXbar(P)
◦ EF(P) and EFbar(P), AF(P) and AFbar(P)
◦ EG(P) and EGbar(P), AG(P) and AGbar(P)
◦ EU(P, Q) and EUbar(P, Q), AU(P, Q) and AUbar(P, Q)
• Execution trace output:
◦ EFtrace(R, P), prints a witness for EF(P) starting from a state in R
◦ EGtrace(R, P), prints a witness for EG(P) starting from a state in R
◦ EUtrace(R, P, Q), prints a witness for EU(P, Q) starting from a state in R
◦ dist(P, Q), returns the length of a shortest path from P to Q
• Utility functions:
◦ card(P), returns the number of states in P (as a bigint)
◦ printset(P), prints the states in P (up to a given maximum)

SMART uses EV+MDDs for efficient witness generation. . .
. . . and Saturation for efficient CTL model checking 93

PART: FILE:TEMPORAL-LOGIC/dining-phils.tex Example: the dining philosophers (Petri net) 94

[Figure: N subnets connected in a circular fashion; subnet i has places Idle, WaitLeft, WaitRight, HasLeft, HasRight, Fork and transitions GoEat, GetLeft, GetRight, Release, with each GetRight sharing the next subnet's Fork]

PART: FILE:TEMPORAL-LOGIC/dining-phils.tex Example: the dining philosophers (SMART code) 95

spn phils(int N) := {
  for (int i in {0..N-1}) {
    place Idle[i], WaitL[i], WaitR[i], HasL[i], HasR[i], Fork[i];
    partition(i+1:Idle[i]:WaitL[i]:WaitR[i]:HasL[i]:HasR[i]:Fork[i]);
    trans GoEat[i], GetL[i], GetR[i], Release[i];
    init(Idle[i]:1, Fork[i]:1);
  }
  for (int i in {0..N-1}) {
    arcs(Idle[i]:GoEat[i], GoEat[i]:WaitL[i], GoEat[i]:WaitR[i],
         WaitL[i]:GetL[i], Fork[i]:GetL[i], GetL[i]:HasL[i],
         WaitR[i]:GetR[i], Fork[mod(i+1, N)]:GetR[i], GetR[i]:HasR[i],
         HasL[i]:Release[i], HasR[i]:Release[i],
         Release[i]:Idle[i], Release[i]:Fork[i], Release[i]:Fork[mod(i+1, N)]);
  }
  bigint num := card(reachable);
  stateset g := EF(initialstate);
  bigint numg := card(g);
  stateset b := difference(reachable, g);
  bool
out := printset(b);
};
# StateStorage MDD_SATURATION
int N := read_int("number of philosophers");
print("N=",N,"\n");
print("Reachable states: ",phils(N).num,"\n");
print("Good states: ",phils(N).numg,"\n");
print("The bad states are:");
phils(N).out;

PART: FILE:TEMPORAL-LOGIC/dining-phils.tex Example: the dining philosophers (results) 96

Reading input. N=50
Reachable states: 22,291,846,172,619,859,445,381,409,012,498
Good states: 22,291,846,172,619,859,445,381,409,012,496
The bad states are:
State 0 : { WaitR[0]:1 HasL[0]:1 WaitR[1]:1 HasL[1]:1 WaitR[2]:1 HasL[2]:1 WaitR
State 1 : { WaitL[0]:1 HasR[0]:1 WaitL[1]:1 HasR[1]:1 WaitL[2]:1 HasR[2]:1 WaitL
true
Done.
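The EF(initialstate) query above is answered symbolically inside SMART, but the underlying labeling idea is the explicit BuildEU algorithm of slide 52 (EF is the special case E[true U q]). This is a minimal Python sketch under the assumption that the model is small enough to enumerate explicitly; the names build_eu and pred are illustrative, not part of SMART:

```python
def build_eu(states, pred, P, Q):
    """Explicit BuildEU: return the set of states satisfying E[p U q].
    `pred` maps a state j to T^-1(j), the states that can transition to j.
    P and Q are the sets of states already labeled with p and q."""
    labeled = set(Q)                 # states satisfying q satisfy E[p U q]
    worklist = list(labeled)
    while worklist:                  # backward exploration, as on slide 52
        j = worklist.pop()
        for i in pred(j):
            if i not in labeled and i in P:
                labeled.add(i)       # i satisfies p and reaches q through p-states
                worklist.append(i)
    return labeled


# Tiny example: chain 0 -> 1 -> 2, with p holding in {0, 1} and q in {2}
succ = {0: {1}, 1: {2}, 2: set()}
pred = {j: {i for i in succ if j in succ[i]} for j in succ}
states = {0, 1, 2}
result = build_eu(states, lambda j: pred[j], P={0, 1}, Q={2})
# result == {0, 1, 2}: every state has a path through p-states to a q-state
```

Each state is added to the worklist at most once, so the sketch runs in time linear in the size of the transition relation; the symbolic BuildEUsymbolic of slide 55 computes the same fixed point with BDD operations instead of per-state visits.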