Breaking Up is Hard to Do: An Investigation of Decomposition for Assume-Guarantee Reasoning

Jamieson M. Cobleigh, Dept. of Computer Science, University of Massachusetts, Amherst, MA 01003, USA
George S. Avrunin, Dept. of Computer Science, University of Massachusetts, Amherst, MA 01003, USA
Lori A. Clarke, Dept. of Computer Science, University of Massachusetts, Amherst, MA 01003, USA

ABSTRACT

Finite-state verification techniques are often hampered by the state-explosion problem. One proposed approach for addressing this problem is assume-guarantee reasoning. We were interested in determining whether assume-guarantee reasoning could provide an advantage over monolithic verification. Using recent advances in assume-guarantee reasoning that automatically generate assumptions, we undertook a study that considered all two-way decompositions for a set of systems and properties, using two different verifiers. By increasing the number of repeated tasks in a system, we evaluated the decompositions as the systems were scaled. In very few cases were we able to show that assume-guarantee reasoning could verify properties on larger systems than monolithic verification could. Additionally, in those cases assume-guarantee reasoning could verify these properties only on systems a few sizes larger than monolithic verification could handle. This negative result, although preliminary, raises doubts about the usefulness of assume-guarantee reasoning as an effective technique.

Categories and Subject Descriptors: D.2.4 [Software Engineering]: Software/Program Verification -- model checking

General Terms: Verification, Experimentation

This research was partially supported by the National Science Foundation under grants CCR and CCF, by the U.S. Army Research Laboratory and the U.S. Army Research Office under agreement DAAD, and by the U.S. Department of Defense/Army Research Office under agreement DAAD. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation, the U.S. Army Research Office or the U.S. Department of Defense/Army Research Office.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Copyright 200X ACM X-XXXXX-XX-X/XX/XX...$

1. INTRODUCTION

Finite-state verification techniques are being developed to verify behavioral properties of distributed and concurrent systems. One of the major problems these techniques must address is the state-explosion problem, where the number of reachable states to be explored is exponential in the number of concurrent tasks in a system. Compositional reasoning techniques have been proposed as one way to address the state-explosion problem. One of the most frequently advocated compositional reasoning techniques is assume-guarantee reasoning [19, 26], in which a verification problem is represented as a triple, ⟨A⟩ S ⟨P⟩, where S is the subsystem being analyzed, P is the property to be verified, and A is an assumption about the environment in which S is used. The formula ⟨A⟩ S ⟨P⟩ is true if, whenever S is used in an environment satisfying A, the property P must hold.
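For safety properties, this triple has a concrete operational reading that both verifiers described below exploit; a standard formulation (the tool-specific semantics are given in [8] and Appendix B) is:

    ⟨A⟩ S ⟨P⟩   holds   ⟺   A ∥ S ⊨ P

that is, the subsystem S, constrained to the behaviors that the assumption A allows, cannot violate P.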
Consider a system S decomposed into two subsystems, S1 and S2 (which may then be further decomposed). To verify that a property P holds on the system made up of S1 and S2, denoted S1 ∥ S2, the simplest assume-guarantee rule that can be used is:

    (Premise 1)  ⟨A⟩ S1 ⟨P⟩
    (Premise 2)  ⟨true⟩ S2 ⟨A⟩
    -------------------------
                 ⟨true⟩ S1 ∥ S2 ⟨P⟩

By checking the two premises, a property can be verified on S1 ∥ S2 without ever having to examine the state space of S1 ∥ S2 itself.
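For concreteness, the following sketch shows how the rule is applied once an assumption A is in hand. The verify and compose operations stand in for a finite-state verifier and for constraining a model by an automaton; they are illustrative placeholders, not the actual interfaces of FLAVERS or LTSA.

    # Sketch of applying the simplest assume-guarantee rule, given an
    # assumption.  verify(model, prop) is assumed to return a pair
    # (holds, counterexample); compose(a, b) builds a constrained model.
    def assume_guarantee(s1, s2, prop, assumption, compose, verify):
        # Premise 1: <A> S1 <P> -- S1, constrained by the assumption,
        # must satisfy the property.
        holds, _ = verify(compose(assumption, s1), prop)
        if not holds:
            return False          # A is too weak: the rule does not apply
        # Premise 2: <true> S2 <A> -- S2, in any environment, must stay
        # within the behaviors the assumption allows.
        holds, _ = verify(s2, assumption)
        return holds              # both premises hold => <true> S1 || S2 <P>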

There are several issues that make using the above assume-guarantee rule difficult. First, if the system under analysis is made up of more than two subsystems, which is often the case, then S1 and S2 may each need to be made up of several subsystems. Deciding how to partition the subsystems into S1 and S2 is not easy and can have a significant impact on the time and memory needed for verification. We have found, for example, that memory usage between two different decompositions can vary by over an order of magnitude. Second, once a decomposition is selected, it can be difficult to manually find an assumption A that completes the assume-guarantee proof. Because of the difficulty of doing compositional reasoning, it has not been practical to undertake an empirical evaluation of this approach, although several case studies have been reported [5, 8, 11, 14, 17]. There has been recent work on automatically computing assumptions for compositional analysis [4, 8, 14, 17] that has eliminated one of the obstacles to empirical evaluation. By automatically creating the assumptions, it is now possible to examine a large number of decompositions. Using the algorithm in [8], we undertook such a study to evaluate the effectiveness of assume-guarantee reasoning.

We initially undertook this study to gain insight into how best to decompose systems and to learn what kind of savings could be expected from assume-guarantee reasoning. For this study, we implemented this automated assume-guarantee reasoning algorithm for two verifiers, FLAVERS [10] and LTSA [21]. We began by looking at several examples using FLAVERS, but the results of these experiments were not as promising as the results seen in [8], which used LTSA. Although the two tools use different models and verification methods, this discrepancy surprised us. As a result, we translated our examples into LTSA to see whether our choice of tool affected our results.

In the study reported here, we applied both tools to a small set of scalable systems and properties. For each property of each system, we examined all of the ways to partition the subsystems of that system into S1 and S2 at the smallest system size. We considered all of these two-way decompositions to find the best decomposition, that is, the decomposition that explored the fewest states. To determine whether assume-guarantee reasoning could be used to verify these properties on larger system sizes than could be verified monolithically, we generalized the best decompositions to make them applicable to larger system sizes. We needed to use generalized decompositions because checking all two-way decompositions to find the best decomposition is infeasible for larger system sizes. To evaluate the generalized decompositions, we looked at two-way decompositions of some larger system sizes. We were not able to find the best decomposition in all cases because of the time cost involved. Still, we examined over 43,000 two-way decompositions and used over 1.43 years of CPU time.

The results of our experiments are not very encouraging and raise concerns about the effectiveness of assume-guarantee reasoning. For the vast majority of decompositions, more states are explored using assume-guarantee reasoning than are explored using monolithic verification. If we restrict our attention to the best decomposition for each example, then in about half of the examples our automated assume-guarantee reasoning technique explores fewer states than monolithic verification at the smallest system size. When we used generalized decompositions to scale the systems, compositional analysis often explores fewer states than monolithic verification. This memory savings, however, was rarely enough to increase the size of the systems that could be verified beyond what could be done with monolithic verification.

Section 2 provides some background information about the finite-state verifiers and the automated assume-guarantee algorithm we used. In Section 3 we describe our experimental methodology and results. We end by discussing related work and conclusions.

2. BACKGROUND

This section gives a brief description of the two verifiers used in our experiments, FLAVERS and LTSA. It also briefly describes the L* algorithm and how it can be used for automated assume-guarantee reasoning.

2.1 FLAVERS

FLAVERS (FLow Analysis for VERification of Systems) is a finite-state verifier that can prove user-specified properties of sequential and concurrent systems [10]. These properties need to be expressed as sequences of events that should (or should not) happen on any execution of the system. A property can be expressed in a number of different notations, but is translated into a Finite-State Automaton (FSA). The model FLAVERS uses to represent a system is based on annotated Control Flow Graphs (CFGs).
Annotations are placed on nodes of the CFGs to represent events that occur during execution of the actions associated with a node. Since a CFG corresponds to the control flow of a sequential system, this representation is not sufficient for modeling a concurrent system. FLAVERS uses a Trace Flow Graph (TFG) to represent concurrent systems. The TFG consists of a collection of CFGs with additional nodes and edges to represent intertask control flow. A CFG, and thus a TFG, over-approximates the sequences of events that can occur when executing a system.

FLAVERS uses an efficient state-propagation algorithm to determine whether all potential behaviors of the system being analyzed are consistent with the property being verified. FLAVERS analyses are conservative, meaning FLAVERS will only report that the property holds when the property holds for all TFG paths. If FLAVERS reports that the property does not hold, this can either be because there is an execution that actually violates the property or because the property is violated on infeasible paths through the TFG. Infeasible paths do not correspond to any possible execution of the system but are an artifact of the imprecision of the model. The analyst can introduce feasibility constraints, also represented as FSAs, to improve the precision of the model and thereby eliminate some infeasible paths from consideration. An analyst might need to iteratively add feasibility constraints and observe the analysis results several times before determining whether or not a property holds. Feasibility constraints give analysts some control over the analysis process by letting them determine exactly what parts of a system need to be modeled in order to prove a property. The FLAVERS state-propagation algorithm has worst-case complexity O(N² · Q), where N is the number of nodes in the TFG and Q is the product of the numbers of states in the property and all constraints. Experimental evidence shows that the performance of FLAVERS is often sub-cubic in the size of the system [10] and that the performance of FLAVERS is good when compared to other finite-state verifiers [2, 3].

2.2 LTSA

LTSA (Labeled Transition Systems Analyzer) is a finite-state verifier that can prove user-specified properties of sequential and concurrent systems [21]. LTSA can check both safety and liveness properties, but the assume-guarantee algorithm we use can only handle safety properties. Safety properties in LTSA are specified as FSAs in which every state except one is an accepting state. This single non-accepting state must be a trap state, meaning all transitions that leave it must be self-loop transitions. This type of FSA corresponds to the set of prefix-closed regular languages, those languages in which every prefix of every string in the language is also in the language.

LTSA uses Labeled Transition Systems (LTSs), which resemble FSAs, to model the components of a system. LTSs are written in FSP, a process-algebra style notation. Unlike FLAVERS, in which the nodes of the model are labeled with the events of interest, in LTSA the edges (or transitions) of the LTSs are labeled with the events of interest. To build a model of the entire system, individual LTSs are combined using the parallel composition operator (∥). The parallel composition operator is a commutative and associative operator that combines the behavior of two LTSs by synchronizing the events common to both and interleaving the remaining events.
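The following sketch illustrates the semantics of ∥ on a toy encoding of deterministic LTSs (our encoding for illustration, not LTSA's FSP representation): shared events synchronize, all other events interleave, and only reachable composite states are built. The reachable composite states constructed here are precisely what the state-explosion problem counts.

    # A minimal sketch of LTS parallel composition (the || operator).
    # An LTS is modeled as (initial state, transitions, alphabet) where
    # transitions maps (state, event) -> next state (deterministic).
    def compose(lts1, lts2):
        init1, delta1, sigma1 = lts1
        init2, delta2, sigma2 = lts2
        shared = sigma1 & sigma2
        delta = {}
        frontier, seen = [(init1, init2)], {(init1, init2)}
        while frontier:
            s1, s2 = frontier.pop()
            for e in sigma1 | sigma2:
                if e in shared:                    # both sides must move together
                    t1, t2 = delta1.get((s1, e)), delta2.get((s2, e))
                    nxt = (t1, t2) if t1 is not None and t2 is not None else None
                elif e in sigma1:                  # interleave: only lts1 moves
                    t1 = delta1.get((s1, e))
                    nxt = (t1, s2) if t1 is not None else None
                else:                              # interleave: only lts2 moves
                    t2 = delta2.get((s2, e))
                    nxt = (s1, t2) if t2 is not None else None
                if nxt is not None:
                    delta[((s1, s2), e)] = nxt
                    if nxt not in seen:            # explore reachable states only
                        seen.add(nxt)
                        frontier.append(nxt)
        return (init1, init2), delta, sigma1 | sigma2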
LTSA supports Compositional Reachability Analysis (CRA) of a software system based on its hierarchical structure. CRA incrementally computes and abstracts the behavior of composite components, using the architecture of the system as a guide to the order in which to perform the composition [12]. While CRA can reduce the cost of verification, state explosion can still occur and result in the cost of verification being exponential in the number of concurrent tasks in the system.

2.3 Using the L* Algorithm for Automated Assume-guarantee Reasoning

The L* algorithm was originally developed by Angluin [1] and later improved by Rivest and Schapire [27]. In this work, we use Rivest and Schapire's version of the L* algorithm. The L* algorithm learns an FSA for an unknown regular language over an alphabet Σ by building an observation table through its interactions with a minimally adequate teacher, henceforth referred to as a teacher. The teacher needs to answer two types of questions, queries and conjectures. A query consists of a string in Σ*, and the teacher returns true if the string is in the regular language being learned and false otherwise. A conjecture consists of an FSA that the L* algorithm believes will recognize the language being learned. The teacher returns true if the FSA is correct. Otherwise, the teacher returns false and a counterexample, a string in Σ* that is in the symmetric difference of the language of the conjectured automaton and the language being learned.

We use the L* algorithm to learn an assumption to verify a property P with assume-guarantee reasoning, as described in [8]. In this approach, the system under analysis needs to be divided into two subsystems, S1 and S2. Then, a teacher capable of answering the queries and conjectures made by the L* algorithm must be provided. At a high level, this teacher is the same for both FLAVERS and LTSA; however, differences in the models used by FLAVERS and LTSA necessitated differences in the implementations of their teachers. The implementations of the teachers are described in Appendix B for FLAVERS and in [8] for LTSA.

A query posed by the L* algorithm consists of a sequence of events from Σ. The teacher must answer true if this sequence is in the language being learned and false otherwise. To answer a query, the model of S1 is examined to determine whether the given sequence results in a violation of the property P. If it does, then the assumption needed to make ⟨A⟩ S1 ⟨P⟩ true should not allow the event sequence in the query, and false is returned to the L* algorithm. Otherwise, the event sequence is permissible and true is returned to the L* algorithm.

A conjecture posed by the L* algorithm consists of an FSA that the L* algorithm believes accepts the language being learned. To answer a conjecture, the teacher needs to find an event sequence in the symmetric difference of the language of the conjectured FSA and the language being learned, if such an event sequence exists. Since the conjectured FSA is a candidate assumption to be used to complete an assume-guarantee proof, conjectures are answered by determining whether the conjectured assumption makes the two premises of the assume-guarantee proof rule true. First, the conjectured automaton, A, is checked in Premise 1, ⟨A⟩ S1 ⟨P⟩. To check this, the model of S1, as constrained by the assumption A, is verified. If this verification reports that P does not hold, then the counterexample returned represents an event sequence permitted by A but violating P. Thus, the conjecture is incorrect and the counterexample is returned to the L* algorithm. If the verification reports that the property does hold, then A is good enough to satisfy Premise 1 and Premise 2 can be checked. Premise 2 states that ⟨true⟩ S2 ⟨A⟩ should be true. To check this, the model for S2 is verified to see if it satisfies A. If this verification reports that A holds, then both Premise 1 and Premise 2 are true, so it can be concluded that P holds on S1 ∥ S2.

If this verification reports that A does not hold, then the resulting counterexample is examined to determine what should be done next. First, a query is made to see if the event sequence of the counterexample leads to a violation of the property P on S1. If a property violation results, then the counterexample is a behavior of S2 that will result in a property violation when S2 interacts with S1, so it can be concluded that P does not hold on S1 ∥ S2. If a property violation does not occur, then the counterexample is a behavior of S2 that will not result in a property violation when S2 interacts with S1; thus, A is restricting the behavior of S2 unnecessarily. The counterexample is then returned to the L* algorithm in response to the conjecture.
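Putting the pieces together, the teacher just described can be summarized by the following sketch. The verify, compose, and restrict operations are hypothetical stand-ins for the verifier-specific machinery (Appendix B and [8]); the control flow, however, follows the query and conjecture handling described above.

    class PropertyViolation(Exception):
        """Raised when a real violation of P on S1 || S2 is detected."""

    def make_teacher(s1, s2, prop, compose, verify, restrict):
        def query(trace):
            # Membership question: does S1, driven along exactly this
            # event sequence, avoid violating P?
            holds, _ = verify(restrict(s1, trace), prop)
            return holds

        def conjecture(candidate):
            # Premise 1: <A> S1 <P>
            holds, cex = verify(compose(candidate, s1), prop)
            if not holds:
                return False, cex      # A permits a P-violating trace: refine A
            # Premise 2: <true> S2 <A>
            holds, cex = verify(s2, candidate)
            if holds:
                return True, None      # both premises hold: P holds on S1 || S2
            if not query(cex):
                # The counterexample also violates P on S1: real violation.
                raise PropertyViolation(cex)
            return False, cex          # A over-restricts S2: refine A

        return query, conjecture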
It was shown in [8] that this approach to assume-guarantee reasoning terminates. Using Rivest and Schapire's L* algorithm, at most l − 1 conjectures and O(kl² + l log m) queries are needed, where k is the size of the alphabet of the FSA being learned, l is the number of states in the minimal deterministic FSA that recognizes the language being learned, and m is the length of the longest counterexample returned when a conjecture is made.

3. METHODOLOGY AND RESULTS

Our goal was to gain a sense of whether the automated assume-guarantee reasoning algorithm presented in [8] provides an advantage over monolithic verification. There are several ways this technique could provide such an advantage:

1. Does this technique use less memory than monolithic verification?
2. Does this technique use less time than monolithic verification?
3. Can this technique verify properties on larger systems than monolithic verification?

Since finite-state verification techniques are more often limited by memory than by time, we focused our study on points 1 and 3 rather than point 2. To evaluate the usefulness of this automated assume-guarantee reasoning technique, we tried to verify properties that were known to hold on a small set of scalable systems: the Chiron user interface system [20] (both the single and the multiple dispatcher versions, as described in [3]), the Gas Station problem [16], Peterson's mutual exclusion problem [25], the Relay problem [28], and the Smokers problem [24]. Both FLAVERS and LTSA prove that a property holds by exploring all of the reachable states in an abstracted model of a system. On properties that do not hold, these two tools stop as soon as a property violation is found, so their performance on such properties is more variable. While using only properties that hold restricts the scope of our study, including properties that do not hold would have made it more difficult to meaningfully compare the performance of monolithic verification to that of assume-guarantee reasoning.

We implemented this technique in two verifiers, FLAVERS and LTSA. Since FLAVERS uses constraints to control the amount of precision in a verification, we used a minimal set of constraints when verifying properties monolithically and compositionally.¹ We did not use the most recent version of LTSA, which is based on plugins [6], because the plugin interface does not provide direct access to the LTSs.
An implementation of this automated assume-guarantee reasoning technique exists for the plugin version of LTSA [13], but verification takes significantly longer because all LTSs must be created by writing appropriate FSP, necessitating reparsing of the entire model during each query and conjecture, even for the parts of the model that do not change between different queries and conjectures.

¹ A minimal set of constraints is one such that removal of any constraint causes FLAVERS to report that the property may not hold. While these sets are minimal for each property, they may not be the smallest possible sets of constraints with which FLAVERS can prove the property holds, nor the best sets with respect to the memory or time cost for FLAVERS. Some of these sets were found for [10], and the remainder were found using the process described in that paper. While the worst-case complexity of FLAVERS increases with each constraint that is added, sometimes adding more constraints can improve the performance of FLAVERS.
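For illustration, one greedy procedure in the spirit of footnote 1 is sketched below; proves stands in for a run of FLAVERS with a given constraint set and is hypothetical. The sets used in our experiments were found with the process described in [10], which need not coincide with this sketch.

    # Sketch: greedily drop constraints while the property still proves,
    # yielding a minimal (not necessarily minimum) constraint set.
    def minimize(constraints, proves):
        current = list(constraints)
        assert proves(current), "start from a set that proves the property"
        i = 0
        while i < len(current):
            trial = current[:i] + current[i + 1:]
            if proves(trial):      # constraint i was unnecessary
                current = trial    # drop it; retry the same index
            else:
                i += 1             # constraint i is needed; keep it
        # Removing any single remaining constraint now breaks the proof.
        return current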

[Table 1: Number of two-way decompositions examined for systems of size 2, listing, for each system (Chiron single, Chiron multiple, Gas Station, Peterson, Relay, Smokers) and each tool (FLAVERS, LTSA), the number of properties, the decompositions examined per property, and the total decompositions examined. Most numeric entries were not recoverable; in total, 32 properties were examined with FLAVERS and 30 with LTSA.]

[Figure 1: Memory Used by the Best Decomposition of Size 2 for FLAVERS. Y-axis: percentage of monolithic; x-axis: properties, in increasing order of percentage.]

[Figure 2: Memory Used by the Best Decomposition of Size 2 for LTSA. Y-axis: percentage of monolithic; x-axis: properties, in increasing order of percentage.]

3.1 Does Assume-guarantee Reasoning Save Memory at Small System Sizes?

Since the scalable systems we looked at had more than two subsystems, we needed to partition those subsystems into S1 and S2 to use the assume-guarantee rule presented earlier. For both FLAVERS and LTSA we considered a task to be an indivisible subsystem. We wanted to find a two-way decomposition, that is, a partitioning of the tasks of the system into S1 and S2 where neither S1 nor S2 is empty, on which assume-guarantee reasoning used the least amount of memory. Each of the systems we used was scaled by creating more instances of one particular task, and the size of the system is measured by counting the number of occurrences of that task in the system. For the Chiron systems we counted the number of artists, for the Gas Station system the number of customers, for the Peterson system the number of tasks trying to gain access to the critical section, for the Relay system the number of tasks accessing the shared variable, and for the Smokers system the number of smokers.

For each property, we examined all two-way decompositions of the system at size 2 to find the best decomposition with respect to memory. To determine the amount of memory used by assume-guarantee reasoning, we looked at the maximum number of states explored by the teacher when answering a query or a conjecture of the L* algorithm. While the data structures used by the L* algorithm and the artifacts created by the verifiers (e.g., TFGs and FSAs in FLAVERS, LTSs in LTSA) do use memory, we did not count them when determining memory usage, since the memory needed to store them is small compared to the memory needed to store the states explored when queries and conjectures are answered. We say one decomposition is better than another if the maximum number of states explored when the teacher answers a query or conjecture using the first decomposition is smaller than the corresponding maximum for the second decomposition. On some properties, assume-guarantee reasoning using the L* algorithm explored more states than would have been explored had an assumption been supplied and just ⟨A⟩ S1 ⟨P⟩ and ⟨true⟩ S2 ⟨A⟩ checked. We count this additional overhead when determining the amount of memory used by assume-guarantee reasoning and discuss this issue in Section 3.5. Similarly, we determine the amount of memory used by monolithic verification by looking at the number of states explored during verification, and again do not consider the memory needed to hold the artifacts created by the verifiers.

Table 1 lists, for FLAVERS and LTSA, the number of properties for each system², the number of two-way decompositions³ examined for each property when each system is size 2, and the total number of decompositions examined on each system when that system is size 2.
Figures 1 and 2 show, for FLAVERS and LTSA respectively, the amount of memory used by the best decomposition at size 2 as a percentage of the amount of memory used by monolithic verification. For reference, a line at 100% has been drawn. Points below this line represent properties on which the best decomposition is better than monolithic verification.

² There is one fewer Chiron property for LTSA than for FLAVERS. The events needed to express that property are removed from the model when LTSA constructs the LTSs. Since this property states that those events cannot occur, LTSA proves this property holds during model construction, making verification unnecessary.

³ Note that the number of two-way decompositions examined for each property is always two less than a power of two, because the two-way decompositions that put all of the subsystems into either S1 or S2 are not checked.
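For a system with n indivisible tasks, this search can be phrased as the following sketch (2ⁿ − 2 choices of S1, per footnote 3); peak_states is a hypothetical hook that would run the learner on one decomposition and report the maximum number of states explored while answering any query or conjecture.

    from itertools import combinations

    # Sketch: enumerate every two-way decomposition of the tasks into
    # non-empty S1 and S2 and keep the one with the smallest peak
    # state count.  Task labels are assumed distinct.
    def best_decomposition(tasks, peak_states):
        best, best_cost = None, float("inf")
        for r in range(1, len(tasks)):          # |S1| ranges over 1..n-1
            for s1 in combinations(tasks, r):
                s2 = tuple(t for t in tasks if t not in s1)
                cost = peak_states(s1, s2)      # memory metric from Sec. 3.1
                if cost < best_cost:
                    best, best_cost = (s1, s2), cost
        return best, best_cost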

Table 2: Performance of the best decompositions for size 2 and the generalized decompositions compared to monolithic verification (counts reconstructed from the percentages reported in Section 3.2)

             Total number    Best decomp. at size 2              Generalized decomp.
    Tool     of properties   is better than mono.   Percentage   is better than mono.   Percentage
    FLAVERS       32                 18               56.3%              18               56.3%
    LTSA          30                 17               56.7%               5               16.7%

The best decomposition is better than monolithic verification on 17 of 32 properties with FLAVERS. For these 17 properties, on average the best decomposition uses 48.4% of the memory used by monolithic verification. For the 15 properties where the best decomposition is worse than monolithic verification, on average the best decomposition uses 637.1% of the memory used by monolithic verification. The best decomposition is better than monolithic verification on 17 of 30 properties with LTSA. There is some overlap between the 17 properties on which the best decomposition is better than monolithic verification with FLAVERS and with LTSA, but these sets of 17 properties are not the same for the two verifiers. For the 17 properties where the best decomposition is better than monolithic verification with LTSA, on average the best decomposition uses 33.6% of the memory used by monolithic verification. For the 13 properties where the best decomposition is worse than monolithic verification, on average the best decomposition uses 222.9% of the memory used by monolithic verification.

It is important to note that the vast majority of decompositions are not better than monolithic verification. Even for the properties where the best decomposition is better than monolithic verification, most of the decompositions we examined for each property are not better than monolithic verification. Thus, randomly selecting decompositions would likely not yield a decomposition better than monolithic verification. Before we looked at all two-way decompositions for the systems of size 2, we also tried selecting decompositions manually for each property based on our understanding of the systems and properties. In many cases, we were able to find decompositions that used less memory than monolithic verification, when such decompositions existed. In very few cases, however, did we select the best decomposition.

Although assume-guarantee reasoning using learned assumptions could save memory in only about half of our examples when the best decomposition is used, and finding the decompositions that make assume-guarantee reasoning most effective was expensive, the overall approach was not too onerous. On average, it required about two minutes to examine one decomposition with FLAVERS and about half a minute to examine one decomposition with LTSA. It is infeasible to evaluate all two-way decompositions for larger system sizes, however, because the number of decompositions to be evaluated increases exponentially and the cost of evaluating each decomposition increases as well. In the next section we examine whether the best decomposition for size 2 can be used to help find a decomposition that can save memory compared to monolithic verification at larger system sizes.

3.2 Does Assume-guarantee Reasoning Save Memory at Larger System Sizes?

While the cost to find the best decomposition for each property at size 2 was not too great, it is infeasible to evaluate all two-way decompositions for larger system sizes. We have several instances where evaluating a single decomposition on a system of size 4 took over 1 month.
Thus, if memory were a concern in verifying a specific system and it were important to verify it at a larger size, a reasonable approach might be to examine all decompositions for a small system size and then generalize the best decomposition for that small size to the larger size. We used this approach to evaluate the memory usage of assume-guarantee reasoning for larger system sizes; our algorithm for generalizing decompositions from the best decomposition for size 2 is described in Appendix A.

Table 2 gives the performance of the best decomposition at size 2 compared to monolithic verification. It also gives the performance of the generalized decomposition compared to monolithic verification at the largest system size at which such a comparison could be made. This size varies from property to property, depending on when compositional analysis and monolithic verification run out of memory. With FLAVERS, if the best decomposition at size 2 is better than monolithic verification, the associated generalized decomposition is usually better than monolithic verification. While there were 18 properties on which the best decomposition at size 2 is better than monolithic verification and 18 properties on which the generalized decomposition is better than monolithic verification, these are not exactly the same 18 properties. There was 1 property on which the best decomposition at size 2 is better than monolithic verification but the generalized decomposition is not, and 1 property on which the best decomposition at size 2 is worse than monolithic verification but the generalized decomposition is better. With LTSA, for most of the properties on which the best decomposition at size 2 is better than monolithic verification, the generalized decomposition is not better than monolithic verification.

In summary, with FLAVERS the best decomposition at size 2 is better than monolithic verification 56.3% of the time and the generalized decomposition is better than monolithic verification 56.3% of the time. With LTSA, however, the best decomposition at size 2 is better than monolithic verification 56.7% of the time, while the generalized decomposition is better than monolithic verification only 16.7% of the time. Thus, while the automated assume-guarantee reasoning technique we used was frequently not useful for saving memory on larger-sized systems, there were still some systems on which it could save memory. This memory savings, however, is probably of little use unless it is significant enough to allow properties to be verified that could not be verified monolithically. In the next section, we examine whether the generalized decompositions allowed us to verify properties on systems larger than could be verified monolithically.

3.3 Can Assume-guarantee Reasoning Verify Properties of Larger Systems than Monolithic Verification Can?

To see whether the automated assume-guarantee reasoning technique could verify properties of larger systems than monolithic verification, we used generalized decompositions based on the best decomposition for size 2, as described in Appendix A.

Table 3: Generalized Decompositions Compared to Monolithic Verification with respect to Scaling (two FLAVERS entries, marked "—", were not recoverable)

                                                                  FLAVERS              LTSA
                                                             Properties   Pct.   Properties   Pct.
    Potential   1. Generalized can scale farther
    Success        than monolithic                                6      18.8%        0       0.0%
                2. Don't know, but generalized appears
                   better than monolithic                         7      21.9%        5      16.7%
                Subtotal                                         13      40.6%        5      16.7%
    Likely      3. Generalized cannot scale farther
    Failure        than monolithic                                —          %        0       0.0%
                4. Don't know, but generalized appears
                   worse than monolithic                          2       6.3%       25      83.3%
                5. Monolithic scales well, but generalized
                   unlikely to be of much use                     —          %        0       0.0%
                Subtotal                                         19      59.4%       25      83.3%
    Total                                                        32     100.0%       30     100.0%

We tried to determine, for each property, whether compositional analysis using the generalized decompositions would allow us to verify that property on larger system sizes than monolithic verification. Unfortunately, we were not able to answer this question for every property in our study. There were some systems for which we were unable to build models at larger system sizes. As a result, we were unable to definitively determine whether compositional analysis using the generalized decompositions would allow us to verify properties on larger system sizes than monolithic verification would. For example, the language processing toolkit used by FLAVERS⁴ [29] was unable to generate models for the Chiron systems larger than size 6. After running these experiments, we assigned each property to one of five categories:

1. Compositional analysis can verify a larger system than monolithic verification.

2. It is unknown whether compositional analysis can verify a larger system than monolithic verification. We consider it likely, however, because compositional analysis is substantially better than monolithic verification at the largest system size at which such a comparison could be made.

3. Compositional analysis cannot verify a larger system than monolithic verification.

4. It is unknown whether compositional analysis can verify a larger system than monolithic verification. We consider it unlikely, however, because compositional analysis is worse than monolithic verification at the largest system size at which such a comparison could be made.

5. Monolithic verification can verify the property on systems of size 45 or more. While compositional analysis might be able to verify a larger system, we think verifying these properties on larger systems will not be of much use.⁵

Table 3 shows the number of properties in each category for FLAVERS and LTSA. We consider using generalized decompositions to be a potential success in verifying a larger system than monolithic verification if the property is in category 1 or 2. We consider using generalized decompositions to be a likely failure in verifying a larger system than monolithic verification if the property is in category 3, 4, or 5. This yields a potential success rate of 40.6% for FLAVERS and 16.7% for LTSA. Note that this rate is what we believe to be the upper bound of the success rate.

⁴ This toolkit is very old and not easily modifiable. We are working on building models for FLAVERS from Java programs using Bandera [9] and thus expect to remove some of the limitations of our old language processing toolkit.

⁵ The somewhat arbitrary cutoff of 45 represents a substantial size for the systems under consideration. All of the properties we examined that could be verified on systems larger than size 10 could be verified on systems of size 45.
By looking at just the properties where we could demonstrate that compositional analysis could scale farther, meaning those in category 1, we obtain the lower bound of the success rate: 18.8% for FLAVERS and 0.0% for LTSA. While we could demonstrate that assume-guarantee reasoning could scale farther than monolithic verification on six properties, it is also important to look at how much farther assume-guarantee reasoning could scale. On five of these six properties, compositional analysis could verify the property on a system one size larger, but not two sizes larger, than monolithic verification could. On the sixth property, compositional analysis could verify the property on a system two sizes larger, but not three sizes larger, than monolithic verification could. In addition, there was one property, counted in category 5, where compositional analysis could verify the system at least three sizes larger than monolithic verification could. This allowed us to increase the size of the system that could be verified from 47 to 50. Since monolithic verification could scale to size 47, a fairly large system size, we do not believe verifying the system at size 50 is particularly useful and do not count this example as a success. While there were six properties where we could demonstrate that assume-guarantee reasoning could scale farther than monolithic verification, compositional analysis could verify these properties on systems only slightly larger than those on which the properties could be verified monolithically.

In summary, we could verify a larger system using our automated assume-guarantee technique on only 18.8% of the properties for FLAVERS and on no properties for LTSA. If we had not encountered model building issues, we believe that compositional analysis could verify a larger system size than monolithic verification on at most 40.6% of the properties for FLAVERS and 16.7% of the properties for LTSA. While a 40.6% success rate may look encouraging, compositional analysis using generalized decompositions did not significantly increase the size of the systems that could be verified. Considering the amount of time that was spent to find the best decomposition at size 2, it is questionable whether the benefit of verifying a property on a slightly larger system size is worth the necessary investment of time.

[Figure 3: States Explored on the Mutual Exclusion Property of Gas Station with FLAVERS. Y-axis: states (10^3 to 10^9, log scale); x-axis: size; curves: Monolithic, Generalized from 2, Generalized from 4.]

[Figure 4: States Explored on the Always 1 Between 0 Property of Relay with FLAVERS. Y-axis: states (10^2 to 10^7, log scale); x-axis: size; curves: Monolithic, Generalized from 2, Odd/Even Pattern.]

3.4 Are the Generalized Decompositions the Best Decompositions?

These discouraging results were obtained using decompositions that were generalized from the best decomposition on problems of size 2. It is possible that the generalized decompositions we selected are not the best decompositions to use at larger system sizes. To investigate this issue, we tried to find the best decomposition for some larger system sizes. After encountering a number of two-way decompositions where it took more than a month to learn an assumption⁶, we imposed an upper bound on the amount of time we would spend examining a single two-way decomposition. This bound was set to the maximum of 1 hour and 10 times the amount of time needed to verify the property monolithically⁷. On the largest-sized systems that we completed and for which such a comparison can be made, the generalized decompositions from size 2 are the best decompositions on 37.5% of the properties with FLAVERS and 36.7% of the properties with LTSA.

When we found a decomposition at size n (n > 2) that was better than the generalized decomposition from size 2, we generalized the decomposition for size n to systems larger than size n. In all cases, the generalized decomposition for size n is better than the generalized decomposition from size 2. We also tried taking the decompositions for size n and simplifying them so they could be used on systems of size 2, similar to the process described in Appendix A but in reverse. The decompositions for size n, when simplified to size 2, are worse than monolithic verification in all but one case.

In addition, we have 3 examples where there are decompositions that can be used to verify a property on a larger system size than both monolithic verification and the generalized decompositions from size 2 allow. One of these examples is the mutual exclusion property of the Gas Station system with FLAVERS. Figure 3 compares the number of states explored using two different generalizations (from size 2 and from size 4) to the number of states explored using monolithic verification. On this example, the generalized decomposition from size 2 is the best decomposition for size 3. When the system size is 4, however, the generalized decomposition from size 2 is not the best decomposition. We do not know what the best decomposition for size 4 is because it requires too much time to find. We do know that if we generalize the best known decomposition for size 4, we can verify this property on systems of size 6, one size greater than monolithic verification and the generalized decomposition from size 2.

⁶ The time needed to evaluate the decompositions that took more than a month is counted in the 1.43 years of CPU time needed for our experiments. Still, more than 99% of the decompositions required less than 1 day to evaluate. The time used evaluating just the decompositions that took less than 1 day each added up to over 10 months of CPU time.

⁷ On every property where the upper bound on time was reached, we were able to find at least one decomposition that was better than the generalized decomposition.
On one of the other examples, the always 1 between 0 property of the Relay system with FLAVERS, the generalized decompositions from size 2 are worse than monolithic verification. We were able to find a decomposition that allowed us to verify this property on the system of size 7, one size larger than monolithic verification. This decomposition, however, was not easy to find. When looking at the best decompositions for sizes 2, 3, 4, and 5, we noticed a pattern to the best decompositions that depended on whether the size of the system was odd or even. Figure 4 compares the number of states explored using the generalized decomposition from size 2, the decompositions based on the odd/even pattern, and monolithic verification for this property. On these two examples, we were able to find decompositions that could scale farther than the generalized decomposition from size 2, but at significant expense, since doing so involved trying all two-way decompositions for larger system sizes; this approach is too costly to be useful in practice.

3.5 What is the Cost of Using the L* Algorithm?

In addition to evaluating our use of generalized decompositions, we were also interested in investigating whether using the L* algorithm to learn assumptions increased the cost of compositional analysis. To do this, we first determined, for each property, the cost of compositional analysis when the L* algorithm is used to learn an assumption. At the end of each compositional analysis, we saved the assumption that was used to complete the assume-guarantee proof. These assumptions were used to evaluate the cost of compositional analysis when assumptions are not learned. For each property P, this cost was determined by checking ⟨A⟩ S1 ⟨P⟩ and ⟨true⟩ S2 ⟨A⟩, letting A be the assumption that was previously learned when P was verified using compositional analysis with the L* algorithm. For each property, we compared the time and memory costs for the largest-sized system on which that property could be verified using automated assume-guarantee reasoning with the decompositions generalized from size 2.

For a property P, we consider the amount of memory used during compositional analysis with the L* algorithm to be the maximum number of states explored when the teacher answers a query or conjecture of the L* algorithm.

[Figure 5: Percentage of Time Spent Learning versus Assumption Size for FLAVERS. 18 of 32 properties have assumption size 1.]

[Figure 6: Percentage of Time Spent Learning versus Assumption Size for LTSA. 19 of 30 properties have assumption size 1.]

We consider the amount of memory used during compositional analysis without the L* algorithm to be the maximum number of states explored when verifying ⟨A⟩ S1 ⟨P⟩ and ⟨true⟩ S2 ⟨A⟩, where A is the assumption that was previously learned by the L* algorithm.

For FLAVERS, the amount of memory used by the two approaches is the same on 29 of the 32 properties. On the remaining three properties, on average, compositional analysis without the L* algorithm uses 88.6% of the memory of compositional analysis with the L* algorithm. On one of these properties, compositional analysis was able to scale at least three sizes farther than monolithic verification. On the other two properties, the memory used by compositional analysis without the L* algorithm was about 10 times more than the memory used by monolithic verification.

For LTSA, the amount of memory used by the two approaches is the same on 26 of the 30 properties. On the remaining four properties, on average, compositional analysis without the L* algorithm uses 65.9% of the memory of compositional analysis with the L* algorithm. On three of these properties, the memory used by compositional analysis without the L* algorithm was about 2 times more than the memory used by monolithic verification. On the fourth property, the amount of memory used by compositional analysis without the L* algorithm was 78.1% of the memory used by monolithic verification.

To summarize, on one of the properties where compositional analysis with the L* algorithm used more memory than compositional analysis without the L* algorithm, compositional analysis with the L* algorithm could scale at least three sizes farther than monolithic verification. This allowed us to increase the size of the system that could be verified from 47 to 50. It is possible that compositional analysis without the L* algorithm could scale farther than compositional analysis with the L* algorithm. Since monolithic verification could scale to size 47, however, a fairly large system size, we do not believe that verifying the system at size 50 is particularly useful. On one of the other properties, compositional analysis without the L* algorithm used 78.1% of the memory used by monolithic verification, so it is unlikely that compositional analysis without the L* algorithm would be able to scale significantly farther than monolithic verification. On the remaining properties, compositional analysis without the L* algorithm used more memory than monolithic verification. Unless a better assumption and decomposition could be found for these properties, it is unlikely that compositional analysis would be able to scale farther than monolithic verification. While such assumptions and decompositions may exist, we do not know of a good way to find them and do not believe it will be easy for analysts to find them without some form of automated support.

To determine the time cost of using the L* algorithm to learn an assumption, we looked at the amount of time needed to learn an assumption when verifying each property.
Figures 5 and 6 show the percentage of time spent learning an assumption compared to the size of the assumption that was learned. When the size of the learned assumption is small, less than 10 states, less than 50% of the total verification time is spent learning the assumption. When the size of the learned assumption is larger than 10 states, however, in most cases over 90% of the verification time is spent learning the assumption, a substantial overhead.

Because of this time and memory overhead, we looked at two ways to reduce the cost of automatically learning an assumption. First, since we were able to use generalized decompositions to apply assume-guarantee reasoning to larger-sized systems, we tried generalizing the assumptions in a similar fashion. When the learned assumption was large, however, it was difficult to understand what behavior the assumption was trying to capture. Without such an understanding, it is impossible to determine what the assumption should be for a larger-sized system. As a result, we were unable to use generalized assumptions to reduce the cost of automated assume-guarantee reasoning.

Second, Groce et al. developed a technique for initializing some of the L* algorithm's data structures when given an automaton that recognizes a language close to the one being learned [15]. By doing this, they reduced the amount of time needed to run Angluin's version of the L* algorithm. To determine whether this technique would reduce the cost of using learning, for each property we used the L* algorithm to learn an assumption capable of completing an assume-guarantee proof. This assumption was then used to initialize some of the data structures of Angluin's version of the L* algorithm. Performing this initialization reduced the number of queries made by the L* algorithm (and consequently the running time) when compared to not initializing these data structures. Initializing these data structures in Angluin's version of the L* algorithm, however, did not offer any performance benefits over using Rivest and Schapire's version of the L* algorithm, which has better worst-case bounds. We have been unable to find a similar technique to initialize the data structures of Rivest and Schapire's version of the L* algorithm because of the constraints this version places on its data structures.

Using the L* algorithm to learn an assumption can increase both the time and memory cost needed to complete an assume-guarantee proof, compared to the cost of completing a proof using a supplied assumption. Some of the learned assumptions are very large: in fact, one has over 250 states. For such systems, analysts cannot be

expected to develop these assumptions manually, and we do not believe that small assumptions exist that can be used to complete the assume-guarantee proofs. Thus, some automated support is needed to make assume-guarantee reasoning practical on these systems.

4. RELATED WORK

Our work uses the automated assume-guarantee approach of Cobleigh et al. [8], which uses the L* algorithm to learn assumptions to complete assume-guarantee proofs. Several other assume-guarantee reasoning approaches based on the L* algorithm have been proposed. The work of Barringer et al. [4] extends this approach to use symmetric assume-guarantee rules. Chaki et al. [5] developed an algorithm based on the L* algorithm for learning tree automata to check simulation conformance. The work of Giannakopoulou et al. [14] also computes assumptions, but requires exploring the entire state space of S1. Since this may not be necessary when the L* algorithm is used, we believe the approach using the L* algorithm is more scalable.

Henzinger et al. [17] have presented a framework for thread-modular abstraction refinement, in which assumptions and guarantees are both refined in an iterative fashion. This framework applies to programs that communicate through shared variables and, unlike our approach, is not guaranteed to terminate. The work of Flanagan and Qadeer also focuses on a shared-memory communication model [11], but does not address notions of abstraction as is done in [17]. Jeffords and Heitmeyer use an invariant generation tool to generate invariants for components that can be used to complete an assume-guarantee proof [18]. While their proof rules are sound and complete, their invariant generation algorithm is not guaranteed to produce invariants that will complete an assume-guarantee proof even if such invariants exist.

Another compositional analysis approach that has been advocated is Compositional Reachability Analysis (CRA) (e.g., [12, 30]). CRA incrementally computes and abstracts the behavior of composite components using the architecture of the system as a guide to the order in which to perform the composition. CRA can be automated, and in some case studies (e.g., [7]) it has been shown to reduce the cost of verification. Still, CRA is hampered by the state-explosion problem. Although constraints, both manually supplied and automatically derived, can help reduce the cost of CRA [7], determining how to apply CRA to effectively reduce the cost of verification remains a difficult problem.

5. CONCLUSIONS

In this work, we explored the question of whether assume-guarantee reasoning provides an advantage over monolithic verification. Our experiments were limited in scope: we used one assume-guarantee reasoning technique, two finite-state verifiers, and a small number of systems. Even in this limited context, our experiments were expensive to perform; we examined over 43,000 two-way decompositions and used over 1.43 years of CPU time. The results of our experiments are not very encouraging. The vast majority of decompositions explored more states than monolithic verification. If we restrict our attention to just the best decomposition at the smallest size for each example, then in only about half of the properties we examined did the assume-guarantee reasoning technique explore fewer states than monolithic verification. On the properties where there is a memory savings, assume-guarantee reasoning with FLAVERS used, on average, 48.4% of the memory used by monolithic verification.
With LTSA, this percentage was better: 33.6%. We were most interested in determining whether this memory savings would be substantial enough to allow us to verify properties on larger systems than could be verified monolithically. Since it is impractical to examine all two-way decompositions for larger system sizes, we used a generalization approach. For each property, we found the best decomposition for a small system size and then generalized that best decomposition so it could be used on larger system sizes. Using this approach, we found that even when assume-guarantee reasoning could save memory over monolithic verification, there were very few properties on which we could demonstrate that assume-guarantee reasoning could verify a larger system than monolithic verification. And for the properties where assume-guarantee reasoning could verify a larger system than monolithic verification, it could not significantly increase the size of the system that could be verified.

Of course, there are decompositions other than the generalized ones that we could have tried on larger systems. In fact, we know that for some properties there are decompositions that can be used to verify larger systems than the generalized decompositions can. We were unable to find such decompositions using our intuition, and examining all two-way decompositions for larger system sizes to find decompositions better than the generalized ones is too costly to be useful in practice. While automated assume-guarantee reasoning techniques can make compositional analysis easier to use, determining how to apply these techniques most effectively is still difficult, can be expensive, and may not significantly increase the sizes of the systems that can be verified. These negative results, although preliminary, raise doubts about the usefulness of assume-guarantee reasoning as an effective compositional analysis technique. Clearly, additional experiments should be done using other automated learning techniques, other verification systems, and other decomposition approaches. Proposed advances in assume-guarantee reasoning, however, should now be accompanied by reasonable experimental evidence of effectiveness.

6. REFERENCES

[1] D. Angluin. Learning regular sets from queries and counterexamples. Information and Computation, 75(2):87-106, Nov. 1987.
[2] G. S. Avrunin, J. C. Corbett, and M. B. Dwyer. Benchmarking finite-state verifiers. Int. J. on Soft. Tools for Tech. Transfer, 2(4), 2000.
[3] G. S. Avrunin, J. C. Corbett, M. B. Dwyer, C. S. Păsăreanu, and S. F. Siegel. Comparing finite-state verification techniques for concurrent software. TR 99-69, U. of Massachusetts, Dept. of Comp. Sci., Nov. 1999.
[4] H. Barringer, D. Giannakopoulou, and C. S. Păsăreanu. Proof rules for automated compositional verification through learning. In Proc. of the Second Workshop on Spec. and Verification of Component-Based Systems, pages 14-21, Sept. 2003.
[5] S. Chaki, E. M. Clarke, N. Sinha, and P. Thati. Automated assume-guarantee reasoning for simulation conformance. In K. Etessami and S. K. Rajamani, editors, Proc. of the Seventeenth Int. Conf. on Computer-Aided Verification, volume 3576 of LNCS, July 2005.
[6] R. Chatley, S. Eisenbach, and J. Magee. MagicBeans: a platform for deploying plugin components. In W. Emmerich and A. L. Wolf, editors, Proc. of the Second Int. Working Conf. on Component Deployment, volume 3083 of LNCS, May 2004.
[7] S.-C. Cheung and J. Kramer. Context constraints for compositional reachability analysis. ACM Trans. on Soft. Eng. and Methodology, 5(4), Oct. 1996.


More information

Queue analysis for the toll station of the Öresund fixed link. Pontus Matstoms *

Queue analysis for the toll station of the Öresund fixed link. Pontus Matstoms * Queue analysis for the toll station of the Öresund fixed link Pontus Matstoms * Abstract A new simulation model for queue and capacity analysis of a toll station is presented. The model and its software

More information

Performing Hazard Analysis on Complex, Software- and Human-Intensive Systems

Performing Hazard Analysis on Complex, Software- and Human-Intensive Systems Performing Hazard Analysis on Complex, Software- and Human-Intensive Systems J. Thomas, S.M.; Massachusetts Institute of Technology; Cambridge, Massachusetts, USA N. G. Leveson Ph.D.; Massachusetts Institute

More information

The system design must obey these constraints. The system is to have the minimum cost (capital plus operating) while meeting the constraints.

The system design must obey these constraints. The system is to have the minimum cost (capital plus operating) while meeting the constraints. Computer Algorithms in Systems Engineering Spring 2010 Problem Set 6: Building ventilation design (dynamic programming) Due: 12 noon, Wednesday, April 21, 2010 Problem statement Buildings require exhaust

More information

Lecture 10. Support Vector Machines (cont.)

Lecture 10. Support Vector Machines (cont.) Lecture 10. Support Vector Machines (cont.) COMP90051 Statistical Machine Learning Semester 2, 2017 Lecturer: Andrey Kan Copyright: University of Melbourne This lecture Soft margin SVM Intuition and problem

More information

Improving WCET Analysis for Synthesized Code from Matlab/Simulink/Stateflow. Lili Tan Compiler Design Lab Saarland University

Improving WCET Analysis for Synthesized Code from Matlab/Simulink/Stateflow. Lili Tan Compiler Design Lab Saarland University Improving WCET Analysis for Synthesized Code from Matlab/Simulink/Stateflow Lili Tan Compiler Design Lab Saarland University Safety-Criticial Embedded Software WCET bound required for Scheduling >=0 Execution

More information

FIRE PROTECTION. In fact, hydraulic modeling allows for infinite what if scenarios including:

FIRE PROTECTION. In fact, hydraulic modeling allows for infinite what if scenarios including: By Phil Smith, Project Manager and Chen-Hsiang Su, PE, Senior Consultant, Lincolnshire, IL, JENSEN HUGHES A hydraulic model is a computer program configured to simulate flows for a hydraulic system. The

More information

Robust Task Execution: Procedural and Model-based. Outline. Desiderata: Robust Task-level Execution

Robust Task Execution: Procedural and Model-based. Outline. Desiderata: Robust Task-level Execution Robust Task Execution: Procedural and Model-based Mission Goals and Environment Constraints Temporal Planner Temporal Network Solver Projective Task Expansion Initial Conditions Temporal Plan Dynamic Scheduling

More information

Safety Assessment of Installing Traffic Signals at High-Speed Expressway Intersections

Safety Assessment of Installing Traffic Signals at High-Speed Expressway Intersections Safety Assessment of Installing Traffic Signals at High-Speed Expressway Intersections Todd Knox Center for Transportation Research and Education Iowa State University 2901 South Loop Drive, Suite 3100

More information

AN AUTONOMOUS DRIVER MODEL FOR THE OVERTAKING MANEUVER FOR USE IN MICROSCOPIC TRAFFIC SIMULATION

AN AUTONOMOUS DRIVER MODEL FOR THE OVERTAKING MANEUVER FOR USE IN MICROSCOPIC TRAFFIC SIMULATION AN AUTONOMOUS DRIVER MODEL FOR THE OVERTAKING MANEUVER FOR USE IN MICROSCOPIC TRAFFIC SIMULATION OMAR AHMAD oahmad@nads-sc.uiowa.edu YIANNIS E. PAPELIS yiannis@nads-sc.uiowa.edu National Advanced Driving

More information

Iteration: while, for, do while, Reading Input with Sentinels and User-defined Functions

Iteration: while, for, do while, Reading Input with Sentinels and User-defined Functions Iteration: while, for, do while, Reading Input with Sentinels and User-defined Functions This programming assignment uses many of the ideas presented in sections 6 and 7 of the course notes. You are advised

More information

Fast Floating Point Compression on the Cell BE Processor

Fast Floating Point Compression on the Cell BE Processor Fast Floating Point Compression on the Cell BE Processor Ajith Padyana, T.V. Siva Kumar, P.K.Baruah Sri Satya Sai University Prasanthi Nilayam - 515134 Andhra Pradhesh, India ajith.padyana@gmail.com, tvsivakumar@gmail.com,

More information

A SEMI-PRESSURE-DRIVEN APPROACH TO RELIABILITY ASSESSMENT OF WATER DISTRIBUTION NETWORKS

A SEMI-PRESSURE-DRIVEN APPROACH TO RELIABILITY ASSESSMENT OF WATER DISTRIBUTION NETWORKS A SEMI-PRESSURE-DRIVEN APPROACH TO RELIABILITY ASSESSMENT OF WATER DISTRIBUTION NETWORKS S. S. OZGER PhD Student, Dept. of Civil and Envir. Engrg., Arizona State Univ., 85287, Tempe, AZ, US Phone: +1-480-965-3589

More information

#19 MONITORING AND PREDICTING PEDESTRIAN BEHAVIOR USING TRAFFIC CAMERAS

#19 MONITORING AND PREDICTING PEDESTRIAN BEHAVIOR USING TRAFFIC CAMERAS #19 MONITORING AND PREDICTING PEDESTRIAN BEHAVIOR USING TRAFFIC CAMERAS Final Research Report Luis E. Navarro-Serment, Ph.D. The Robotics Institute Carnegie Mellon University November 25, 2018. Disclaimer

More information

Replay using Recomposition: Alignment-Based Conformance Checking in the Large

Replay using Recomposition: Alignment-Based Conformance Checking in the Large Replay using Recomposition: Alignment-Based Conformance Checking in the Large Wai Lam Jonathan Lee 2, H.M.W. Verbeek 1, Jorge Munoz-Gama 2, Wil M.P. van der Aalst 1, and Marcos Sepúlveda 2 1 Architecture

More information

Keywords: Rights-Based Management, Fisheries, Voluntary, Experiment, Chignik

Keywords: Rights-Based Management, Fisheries, Voluntary, Experiment, Chignik VOLUNTARY APPROACHES TO TRANSITIONING FROM COMPETIVE FISHERIES TO RIGHTS-BASED MANAGEMENT Gunnar Knapp, University of Alaska Anchorage, Gunnar.Knapp@uaa.alaska.edu James J. Murphy, University of Alaska

More information

Legendre et al Appendices and Supplements, p. 1

Legendre et al Appendices and Supplements, p. 1 Legendre et al. 2010 Appendices and Supplements, p. 1 Appendices and Supplement to: Legendre, P., M. De Cáceres, and D. Borcard. 2010. Community surveys through space and time: testing the space-time interaction

More information

Online Companion to Using Simulation to Help Manage the Pace of Play in Golf

Online Companion to Using Simulation to Help Manage the Pace of Play in Golf Online Companion to Using Simulation to Help Manage the Pace of Play in Golf MoonSoo Choi Industrial Engineering and Operations Research, Columbia University, New York, NY, USA {moonsoo.choi@columbia.edu}

More information

C. Mokkapati 1 A PRACTICAL RISK AND SAFETY ASSESSMENT METHODOLOGY FOR SAFETY- CRITICAL SYSTEMS

C. Mokkapati 1 A PRACTICAL RISK AND SAFETY ASSESSMENT METHODOLOGY FOR SAFETY- CRITICAL SYSTEMS C. Mokkapati 1 A PRACTICAL RISK AND SAFETY ASSESSMENT METHODOLOGY FOR SAFETY- CRITICAL SYSTEMS Chinnarao Mokkapati Ansaldo Signal Union Switch & Signal Inc. 1000 Technology Drive Pittsburgh, PA 15219 Abstract

More information

Purpose. Scope. Process flow OPERATING PROCEDURE 07: HAZARD LOG MANAGEMENT

Purpose. Scope. Process flow OPERATING PROCEDURE 07: HAZARD LOG MANAGEMENT SYDNEY TRAINS SAFETY MANAGEMENT SYSTEM OPERATING PROCEDURE 07: HAZARD LOG MANAGEMENT Purpose Scope Process flow This operating procedure supports SMS-07-SP-3067 Manage Safety Change and establishes the

More information

2600T Series Pressure Transmitters Plugged Impulse Line Detection Diagnostic. Pressure Measurement Engineered solutions for all applications

2600T Series Pressure Transmitters Plugged Impulse Line Detection Diagnostic. Pressure Measurement Engineered solutions for all applications Application Description AG/266PILD-EN Rev. C 2600T Series Pressure Transmitters Plugged Impulse Line Detection Diagnostic Pressure Measurement Engineered solutions for all applications Increase plant productivity

More information

Using Markov Chains to Analyze a Volleyball Rally

Using Markov Chains to Analyze a Volleyball Rally 1 Introduction Using Markov Chains to Analyze a Volleyball Rally Spencer Best Carthage College sbest@carthage.edu November 3, 212 Abstract We examine a volleyball rally between two volleyball teams. Using

More information

CONTINUING REVIEW CRITERIA FOR RENEWAL

CONTINUING REVIEW CRITERIA FOR RENEWAL 1. POLICY Steering Committee approved / Effective Date: 9/2/15 The IRB conducts continuing review of research taking place within its jurisdiction at intervals appropriate to the degree of risk, but not

More information

A SURVEY OF 1997 COLORADO ANGLERS AND THEIR WILLINGNESS TO PAY INCREASED LICENSE FEES

A SURVEY OF 1997 COLORADO ANGLERS AND THEIR WILLINGNESS TO PAY INCREASED LICENSE FEES Executive Summary of research titled A SURVEY OF 1997 COLORADO ANGLERS AND THEIR WILLINGNESS TO PAY INCREASED LICENSE FEES Conducted by USDA Forest Service Rocky Mountain Research Station Fort Collins,

More information

Lesson 14: Games of Chance and Expected Value

Lesson 14: Games of Chance and Expected Value Student Outcomes Students use expected payoff to compare strategies for a simple game of chance. Lesson Notes This lesson uses examples from the previous lesson as well as some new examples that expand

More information

Author s Name Name of the Paper Session. Positioning Committee. Marine Technology Society. DYNAMIC POSITIONING CONFERENCE September 18-19, 2001

Author s Name Name of the Paper Session. Positioning Committee. Marine Technology Society. DYNAMIC POSITIONING CONFERENCE September 18-19, 2001 Author s Name Name of the Paper Session PDynamic Positioning Committee Marine Technology Society DYNAMIC POSITIONING CONFERENCE September 18-19, 2001 POWER PLANT SESSION A New Concept for Fuel Tight DP

More information

Analysis of Shear Lag in Steel Angle Connectors

Analysis of Shear Lag in Steel Angle Connectors University of New Hampshire University of New Hampshire Scholars' Repository Honors Theses and Capstones Student Scholarship Spring 2013 Analysis of Shear Lag in Steel Angle Connectors Benjamin Sawyer

More information

CENG 466 Artificial Intelligence. Lecture 4 Solving Problems by Searching (II)

CENG 466 Artificial Intelligence. Lecture 4 Solving Problems by Searching (II) CENG 466 Artificial Intelligence Lecture 4 Solving Problems by Searching (II) Topics Search Categories Breadth First Search Uniform Cost Search Depth First Search Depth Limited Search Iterative Deepening

More information

APPLICATION OF TOOL SCIENCE TECHNIQUES TO IMPROVE TOOL EFFICIENCY FOR A DRY ETCH CLUSTER TOOL. Robert Havey Lixin Wang DongJin Kim

APPLICATION OF TOOL SCIENCE TECHNIQUES TO IMPROVE TOOL EFFICIENCY FOR A DRY ETCH CLUSTER TOOL. Robert Havey Lixin Wang DongJin Kim Proceedings of the 2011 Winter Simulation Conference S. Jain, R.R. Creasey, J. Himmelspach, K.P. White, and M. Fu, eds. APPLICATION OF TOOL SCIENCE TECHNIQUES TO IMPROVE TOOL EFFICIENCY FOR A DRY ETCH

More information

A REVIEW OF AGE ADJUSTMENT FOR MASTERS SWIMMERS

A REVIEW OF AGE ADJUSTMENT FOR MASTERS SWIMMERS A REVIEW OF ADJUSTMENT FOR MASTERS SWIMMERS Written by Alan Rowson Copyright 2013 Alan Rowson Last Saved on 28-Apr-13 page 1 of 10 INTRODUCTION In late 2011 and early 2012, in conjunction with Anthony

More information

Modeling of Hydraulic Hose Paths

Modeling of Hydraulic Hose Paths Mechanical Engineering Conference Presentations, Papers, and Proceedings Mechanical Engineering 9-2002 Modeling of Hydraulic Hose Paths Kurt A. Chipperfield Iowa State University Judy M. Vance Iowa State

More information

EEC 686/785 Modeling & Performance Evaluation of Computer Systems. Lecture 6. Wenbing Zhao. Department of Electrical and Computer Engineering

EEC 686/785 Modeling & Performance Evaluation of Computer Systems. Lecture 6. Wenbing Zhao. Department of Electrical and Computer Engineering EEC 686/785 Modeling & Performance Evaluation of Computer Systems Lecture 6 Department of Electrical and Computer Engineering Cleveland State University wenbing@ieee.org Outline 2 Review of lecture 5 The

More information

CHAPTER 1 INTRODUCTION TO RELIABILITY

CHAPTER 1 INTRODUCTION TO RELIABILITY i CHAPTER 1 INTRODUCTION TO RELIABILITY ii CHAPTER-1 INTRODUCTION 1.1 Introduction: In the present scenario of global competition and liberalization, it is imperative that Indian industries become fully

More information

Thank you for downloading. Help Guide For The Lay Back Horse Racing Backing Spreadsheet.

Thank you for downloading. Help Guide For The Lay Back Horse Racing Backing Spreadsheet. Thank you for downloading Help Guide For The Lay Back Horse Racing Backing Spreadsheet www.laybackandgetrich.com version 2.0 Please save this ebook to your computer now. CityMan Business Solutions Ltd

More information

Outline. Terminology. EEC 686/785 Modeling & Performance Evaluation of Computer Systems. Lecture 6. Steps in Capacity Planning and Management

Outline. Terminology. EEC 686/785 Modeling & Performance Evaluation of Computer Systems. Lecture 6. Steps in Capacity Planning and Management EEC 686/785 Modeling & Performance Evaluation of Computer Systems Lecture 6 Department of Electrical and Computer Engineering Cleveland State University wenbing@ieee.org Outline Review of lecture 5 The

More information

Traffic circles. February 9, 2009

Traffic circles. February 9, 2009 Traffic circles February 9, 2009 Abstract The use of a traffic circle is a relatively common means of controlling traffic in an intersection. Smaller Traffic circles can be especially effective in routing

More information

Handicapping Process Series

Handicapping Process Series Frandsen Publishing Presents Favorite ALL-Ways TM Newsletter Articles Handicapping Process Series Part 1 of 6: Toolbox vs. Black Box Handicapping Plus Isolating the Contenders and the Most Likely Horses

More information

A study on the relation between safety analysis process and system engineering process of train control system

A study on the relation between safety analysis process and system engineering process of train control system A study on the relation between safety analysis process and system engineering process of train control system Abstract - In this paper, the relationship between system engineering lifecycle and safety

More information

ECE 697B (667) Spring 2003

ECE 697B (667) Spring 2003 ECE 667 - Synthesis & Verification - Lecture 2 ECE 697 (667) Spring 23 Synthesis and Verification of Digital Systems unctional Decomposition Slides adopted (with permission) from. Mishchenko, 23 Overview

More information

The Scrum Guide. The Definitive Guide to Scrum: The Rules of the Game. October Developed and sustained by Ken Schwaber and Jeff Sutherland

The Scrum Guide. The Definitive Guide to Scrum: The Rules of the Game. October Developed and sustained by Ken Schwaber and Jeff Sutherland The Scrum Guide The Definitive Guide to Scrum: The Rules of the Game October 2011 Developed and sustained by Ken Schwaber and Jeff Sutherland Table of Contents Purpose of the Scrum Guide... 3 Scrum Overview...

More information

An Application of Signal Detection Theory for Understanding Driver Behavior at Highway-Rail Grade Crossings

An Application of Signal Detection Theory for Understanding Driver Behavior at Highway-Rail Grade Crossings An Application of Signal Detection Theory for Understanding Driver Behavior at Highway-Rail Grade Crossings Michelle Yeh and Jordan Multer United States Department of Transportation Volpe National Transportation

More information

Opleiding Informatica

Opleiding Informatica Opleiding Informatica Determining Good Tactics for a Football Game using Raw Positional Data Davey Verhoef Supervisors: Arno Knobbe Rens Meerhoff BACHELOR THESIS Leiden Institute of Advanced Computer Science

More information

Questions & Answers About the Operate within Operate within IROLs Standard

Questions & Answers About the Operate within Operate within IROLs Standard Index: Introduction to Standard...3 Expansion on Definitions...5 Questions and Answers...9 Who needs to comply with this standard?...9 When does compliance with this standard start?...10 For a System Operator

More information

Illinois Institute of Technology Institutional Review Board Handbook of Procedures for the Protection of Human Research Subjects

Illinois Institute of Technology Institutional Review Board Handbook of Procedures for the Protection of Human Research Subjects Illinois Institute of Technology Institutional Review Board Handbook of Procedures for the Protection of Human Research Subjects The Basics of Human Subject Research Protection Federal regulation (Title

More information

/435 Artificial Intelligence Fall 2015

/435 Artificial Intelligence Fall 2015 Final Exam 600.335/435 Artificial Intelligence Fall 2015 Name: Section (335/435): Instructions Please be sure to write both your name and section in the space above! Some questions will be exclusive to

More information

Scheduling the Brazilian Soccer Championship. Celso C. Ribeiro* Sebastián Urrutia

Scheduling the Brazilian Soccer Championship. Celso C. Ribeiro* Sebastián Urrutia Scheduling the Brazilian Soccer Championship Celso C. Ribeiro* Sebastián Urrutia Motivation Problem statement Solution approach Summary Phase 1: create all feasible Ps Phase 2: assign Ps to elite teams

More information

Better Search Improved Uninformed Search CIS 32

Better Search Improved Uninformed Search CIS 32 Better Search Improved Uninformed Search CIS 32 Functionally PROJECT 1: Lunar Lander Game - Demo + Concept - Open-Ended: No One Solution - Menu of Point Options - Get Started NOW!!! - Demo After Spring

More information

Non Functional Requirement (NFR)

Non Functional Requirement (NFR) Non Functional Requirement (NFR) Balasubramanian Swaminathan PMP, ACP, CSM, CSP, SPC4.0, AHF Director Global Programs, Digital Operations [Enterprise Agile Coach and Leader] GE Healthcare Digital Copyright

More information

CMPUT680 - Winter 2001

CMPUT680 - Winter 2001 CMPUT680 - Winter 2001 Topic 6: Register Allocation and Instruction Scheduling José Nelson Amaral http://www.cs.ualberta.ca/~amaral/courses/680 CMPUT 680 - Compiler Design and Optimization 1 Reading List

More information

Software Reliability 1

Software Reliability 1 Software Reliability 1 Software Reliability What is software reliability? the probability of failure-free software operation for a specified period of time in a specified environment input sw output We

More information

Service & Support. Questions and Answers about the Proof Test Interval. Proof Test According to IEC FAQ August Answers for industry.

Service & Support. Questions and Answers about the Proof Test Interval. Proof Test According to IEC FAQ August Answers for industry. Cover sheet Questions and Answers about the Proof Test Interval Proof Test According to IEC 62061 FAQ August 2012 Service & Support Answers for industry. Contents This entry originates from the Siemens

More information

Safety assessments for Aerodromes (Chapter 3 of the PANS-Aerodromes, 1 st ed)

Safety assessments for Aerodromes (Chapter 3 of the PANS-Aerodromes, 1 st ed) Safety assessments for Aerodromes (Chapter 3 of the PANS-Aerodromes, 1 st ed) ICAO MID Seminar on Aerodrome Operational Procedures (PANS-Aerodromes) Cairo, November 2017 Avner Shilo, Technical officer

More information

Evaluating the Influence of R3 Treatments on Fishing License Sales in Pennsylvania

Evaluating the Influence of R3 Treatments on Fishing License Sales in Pennsylvania Evaluating the Influence of R3 Treatments on Fishing License Sales in Pennsylvania Prepared for the: Pennsylvania Fish and Boat Commission Produced by: PO Box 6435 Fernandina Beach, FL 32035 Tel (904)

More information

The Incidence of Daytime Road Hunting During the Dog and No-Dog Deer Seasons in Mississippi: Comparing Recent Data to Historical Data

The Incidence of Daytime Road Hunting During the Dog and No-Dog Deer Seasons in Mississippi: Comparing Recent Data to Historical Data The Incidence of Daytime Road Hunting During the Dog and No-Dog Deer Seasons in Mississippi: Comparing Recent Data to Historical Data Preston G. Sullivan, Coalition for Ethical Deer Hunting, ethicaldeerhunting@gmail.com

More information

Introduction to Pattern Recognition

Introduction to Pattern Recognition Introduction to Pattern Recognition Jason Corso SUNY at Buffalo 12 January 2009 J. Corso (SUNY at Buffalo) Introduction to Pattern Recognition 12 January 2009 1 / 28 Pattern Recognition By Example Example:

More information

A statistical model of Boy Scout disc golf skills by Steve West December 17, 2006

A statistical model of Boy Scout disc golf skills by Steve West December 17, 2006 A statistical model of Boy Scout disc golf skills by Steve West December 17, 2006 Abstract: In an attempt to produce the best designs for the courses I am designing for Boy Scout Camps, I collected data

More information

If you need to reinstall FastBreak Pro you will need to do a complete reinstallation and then install the update.

If you need to reinstall FastBreak Pro you will need to do a complete reinstallation and then install the update. Using this Beta Version of FastBreak Pro First, this new beta version (Version 6.X) will only work with users who have version 5.X of FastBreak Pro. We recommend you read this entire addendum before trying

More information

A IMPROVED VOGEL S APPROXIMATIO METHOD FOR THE TRA SPORTATIO PROBLEM. Serdar Korukoğlu 1 and Serkan Ballı 2.

A IMPROVED VOGEL S APPROXIMATIO METHOD FOR THE TRA SPORTATIO PROBLEM. Serdar Korukoğlu 1 and Serkan Ballı 2. Mathematical and Computational Applications, Vol. 16, No. 2, pp. 370-381, 2011. Association for Scientific Research A IMPROVED VOGEL S APPROXIMATIO METHOD FOR THE TRA SPORTATIO PROBLEM Serdar Korukoğlu

More information

Rescue Rover. Robotics Unit Lesson 1. Overview

Rescue Rover. Robotics Unit Lesson 1. Overview Robotics Unit Lesson 1 Overview In this challenge students will be presented with a real world rescue scenario. The students will need to design and build a prototype of an autonomous vehicle to drive

More information

SIL Safety Manual. ULTRAMAT 6 Gas Analyzer for the Determination of IR-Absorbing Gases. Supplement to instruction manual ULTRAMAT 6 and OXYMAT 6

SIL Safety Manual. ULTRAMAT 6 Gas Analyzer for the Determination of IR-Absorbing Gases. Supplement to instruction manual ULTRAMAT 6 and OXYMAT 6 ULTRAMAT 6 Gas Analyzer for the Determination of IR-Absorbing Gases SIL Safety Manual Supplement to instruction manual ULTRAMAT 6 and OXYMAT 6 ULTRAMAT 6F 7MB2111, 7MB2117, 7MB2112, 7MB2118 ULTRAMAT 6E

More information

CASE STUDY. Compressed Air Control System. Industry. Application. Background. Challenge. Results. Automotive Assembly

CASE STUDY. Compressed Air Control System. Industry. Application. Background. Challenge. Results. Automotive Assembly Compressed Air Control System Industry Automotive Assembly Application Savigent Platform and Industrial Compressed Air Systems Background This automotive assembly plant was using over 40,000 kilowatt hours

More information

Dealing with Dependent Failures in Distributed Systems

Dealing with Dependent Failures in Distributed Systems Dealing with Dependent Failures in Distributed Systems Ranjita Bhagwan, Henri Casanova, Alejandro Hevia, Flavio Junqueira, Yanhua Mao, Keith Marzullo, Geoff Voelker, Xianan Zhang University California

More information

TERMINATION FOR HYBRID TABLEAUS

TERMINATION FOR HYBRID TABLEAUS TERMINATION FOR HYBRID TABLEAUS THOMAS BOLANDER AND PATRICK BLACKBURN Abstract. This article extends and improves work on tableau-based decision methods for hybrid logic by Bolander and Braüner [5]. Their

More information

Lab 4: Root Locus Based Control Design

Lab 4: Root Locus Based Control Design Lab 4: Root Locus Based Control Design References: Franklin, Powell and Emami-Naeini. Feedback Control of Dynamic Systems, 3 rd ed. Addison-Wesley, Massachusetts: 1994. Ogata, Katsuhiko. Modern Control

More information

PEPSE Modeling of Triple Trains of Turbines Sending Extraction Steam to Triple Trains of Feedwater Heaters

PEPSE Modeling of Triple Trains of Turbines Sending Extraction Steam to Triple Trains of Feedwater Heaters PEPSE Modeling of Triple Trains of Turbines Sending Extraction Steam to Triple Trains of Feedwater Heaters Gene L. Minner, PhD SCIENTECH, Inc 440 West Broadway Idaho Falls, ID 83402 ABSTRACT This paper

More information

Lane changing and merging under congested conditions in traffic simulation models

Lane changing and merging under congested conditions in traffic simulation models Urban Transport 779 Lane changing and merging under congested conditions in traffic simulation models P. Hidas School of Civil and Environmental Engineering, University of New South Wales, Australia Abstract

More information

by Robert Gifford and Jorge Aranda University of Victoria, British Columbia, Canada

by Robert Gifford and Jorge Aranda University of Victoria, British Columbia, Canada Manual for FISH 4.0 by Robert Gifford and Jorge Aranda University of Victoria, British Columbia, Canada Brief Introduction FISH 4.0 is a microworld exercise designed by University of Victoria professor

More information

Analysis of the Article Entitled: Improved Cube Handling in Races: Insights with Isight

Analysis of the Article Entitled: Improved Cube Handling in Races: Insights with Isight Analysis of the Article Entitled: Improved Cube Handling in Races: Insights with Isight Michelin Chabot (michelinchabot@gmail.com) February 2015 Abstract The article entitled Improved Cube Handling in

More information

Basic CPM Calculations

Basic CPM Calculations Overview Core Scheduling Papers: #7 Basic CPM Calculations Time Analysis calculations in a Precedence Diagramming Method (PDM) Critical Path Method (CPM) network can be done in a number of different ways.

More information

Uninformed search methods II.

Uninformed search methods II. CS 2710 Foundations of AI Lecture 4 Uninformed search methods II. Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Announcements Homework assignment 1 is out Due on Tuesday, September 12, 2017 before

More information

CRICKET ONTOLOGY. Project Description :- Instructor :- Prof: Navjyothi Singh

CRICKET ONTOLOGY. Project Description :- Instructor :- Prof: Navjyothi Singh Instructor :- Prof: Navjyothi Singh CRICKET ONTOLOGY Project Team :- PV Sai Krishna (200402036) Abhilash I (200501004) Phani Chaitanya (200501076) Kranthi Reddy (200502008) Vidyadhar Rao (200601100) Project

More information

Become Expert at Something

Become Expert at Something Frandsen Publishing Presents Favorite ALL-Ways TM Newsletter Articles Become Expert at Something To achieve reasonably consistent profitable play, it is critically important to become expert at handling

More information

The Safety Case. The safety case

The Safety Case. The safety case The Safety Case Structure of safety cases Safety argument notation Budapest University of Technology and Economics Department of Measurement and Information Systems The safety case Definition (core): The

More information

User Help. Fabasoft Scrum

User Help. Fabasoft Scrum User Help Fabasoft Scrum Copyright Fabasoft R&D GmbH, Linz, Austria, 2018. All rights reserved. All hardware and software names used are registered trade names and/or registered trademarks of the respective

More information

CS 4649/7649 Robot Intelligence: Planning

CS 4649/7649 Robot Intelligence: Planning CS 4649/7649 Robot Intelligence: Planning Roadmap Approaches Sungmoon Joo School of Interactive Computing College of Computing Georgia Institute of Technology S. Joo (sungmoon.joo@cc.gatech.edu) 1 *Slides

More information

MoPac South: Impact on Cesar Chavez Street and the Downtown Network

MoPac South: Impact on Cesar Chavez Street and the Downtown Network MoPac South: Impact on Cesar Chavez Street and the Downtown Network Prepared by: The University of Texas at Austin Center for Transportation Research Prepared for: Central Texas Regional Mobility Authority

More information

Kochi University of Technology Aca Study on Dynamic Analysis and Wea Title stem for Golf Swing Author(s) LI, Zhiwei Citation 高知工科大学, 博士論文. Date of 2015-03 issue URL http://hdl.handle.net/10173/1281 Rights

More information

Automating Injection Molding Simulation using Autonomous Optimization

Automating Injection Molding Simulation using Autonomous Optimization Automating Injection Molding Simulation using Autonomous Optimization Matt Proske, Rodrigo Gutierrez, & Gabriel Geyne SIGMASOFT Virtual Molding Autonomous optimization is coupled to injection molding simulation

More information

Ammonia Synthesis with Aspen Plus V8.0

Ammonia Synthesis with Aspen Plus V8.0 Ammonia Synthesis with Aspen Plus V8.0 Part 2 Closed Loop Simulation of Ammonia Synthesis 1. Lesson Objectives Review Aspen Plus convergence methods Build upon the open loop Ammonia Synthesis process simulation

More information

ISOLATION OF NON-HYDROSTATIC REGIONS WITHIN A BASIN

ISOLATION OF NON-HYDROSTATIC REGIONS WITHIN A BASIN ISOLATION OF NON-HYDROSTATIC REGIONS WITHIN A BASIN Bridget M. Wadzuk 1 (Member, ASCE) and Ben R. Hodges 2 (Member, ASCE) ABSTRACT Modeling of dynamic pressure appears necessary to achieve a more robust

More information

Lecture 1 Temporal constraints: source and characterization

Lecture 1 Temporal constraints: source and characterization Real-Time Systems Lecture 1 Temporal constraints: source and characterization Basic concepts about real-time Requirements of Real-Time Systems Adapted from the slides developed by Prof. Luís Almeida for

More information

Decision Trees. Nicholas Ruozzi University of Texas at Dallas. Based on the slides of Vibhav Gogate and David Sontag

Decision Trees. Nicholas Ruozzi University of Texas at Dallas. Based on the slides of Vibhav Gogate and David Sontag Decision Trees Nicholas Ruozzi University of Texas at Dallas Based on the slides of Vibhav Gogate and David Sontag Announcements Course TA: Hao Xiong Office hours: Friday 2pm-4pm in ECSS2.104A1 First homework

More information

A Game Theoretic Study of Attack and Defense in Cyber-Physical Systems

A Game Theoretic Study of Attack and Defense in Cyber-Physical Systems The First International Workshop on Cyber-Physical Networking Systems A Game Theoretic Study of Attack and Defense in Cyber-Physical Systems ChrisY.T.Ma Advanced Digital Sciences Center Illinois at Singapore

More information

ECO 199 GAMES OF STRATEGY Spring Term 2004 Precept Materials for Week 3 February 16, 17

ECO 199 GAMES OF STRATEGY Spring Term 2004 Precept Materials for Week 3 February 16, 17 ECO 199 GAMES OF STRATEGY Spring Term 2004 Precept Materials for Week 3 February 16, 17 Illustration of Rollback in a Decision Problem, and Dynamic Games of Competition Here we discuss an example whose

More information

Helicopter Safety Recommendation Summary for Small Operators

Helicopter Safety Recommendation Summary for Small Operators Helicopter Safety Recommendation Summary for Small Operators Prepared by the International Helicopter Safety Team September 2009 Introduction This document is intended to provide a summary of the initial

More information

Primary Objectives. Content Standards (CCSS) Mathematical Practices (CCMP) Materials

Primary Objectives. Content Standards (CCSS) Mathematical Practices (CCMP) Materials ODDSBALLS When is it worth buying a owerball ticket? Mathalicious 204 lesson guide Everyone knows that winning the lottery is really, really unlikely. But sometimes those owerball jackpots get really,

More information

TOPIC 10: BASIC PROBABILITY AND THE HOT HAND

TOPIC 10: BASIC PROBABILITY AND THE HOT HAND TOPIC 0: BASIC PROBABILITY AND THE HOT HAND The Hot Hand Debate Let s start with a basic question, much debated in sports circles: Does the Hot Hand really exist? A number of studies on this topic can

More information

arxiv: v1 [math.co] 11 Apr 2018

arxiv: v1 [math.co] 11 Apr 2018 arxiv:1804.04504v1 [math.co] 11 Apr 2018 Scheduling Asynchronous Round-Robin Tournaments Warut Suksompong Abstract. We study the problem of scheduling asynchronous round-robin tournaments. We consider

More information