MINISTRY OF EDUCATION & TRAINING
THE UNIVERSITY OF DANANG

LE THANH LONG

AUTOMATIC TESTING OF INTERACTIVE MULTIMODAL APPLICATIONS

Major: Computer Science
Code of Major: 62 48 01 01

ENGINEERING DOCTORAL THESIS

Supervisors:
Prof. Dr. Ioannis Parissis
Assoc. Prof. Dr. Nguyễn Thanh Bình

Da Nang, 11/2017

REASSURANCES

I hereby certify that this thesis is my own work, carried out under the guidance of Prof. Dr. Ioannis Parissis and Assoc. Prof. Dr. Nguyễn Thanh Bình. I certify that the research results presented in the thesis are true and are not copied from any other documents. The origins of all quotations are recorded clearly and completely.

PhD Student
Lê Thanh Long

TABLE OF CONTENTS

INTRODUCTION
Chapter 1. INTERACTIVE MULTIMODAL APPLICATIONS 12
1.1 Multimodality 12
1.2 Features of Multimodal Interaction 13
1.3 Multimodal Fusion 14
1.4 Design Spaces and Theoretical Frameworks 14
1.4.1 The TYCOON Theoretical Framework 14
1.4.2 CASE Design Space 15
1.4.3 The CARE Properties 16
1.5 Introduction to Software Testing 18
1.5.1 Model-Based Testing 18
1.5.2 Operational Profile-Based Testing 19
1.5.3 Requirement-Based Testing 20
1.6 Testing Interactive Multimodal Applications 21
1.6.1 ICO Method 21
1.6.2 Event B Method 26
1.6.3 Synchronous Approach 30
1.7 Conclusion 34
Chapter 2. BACKGROUND OF A NEW TEST MODELING LANGUAGE 35
2.1 Task Trees 35
2.2 The Interactive Multimodal Application Memo 38
2.3 Operational Profiles 42
2.4 Probabilistic Finite State Machines 44
2.5 Setting a Level of Abstraction for Testing 45
2.6 Generation of Tests at Dialog Controller Level 46
2.7 Taking into Account Conditional Probabilities 49
2.8 Evaluation of the Results of Extending the CTT with Conditional Probabilities 55
2.9 Conclusion 57
Chapter 3. TTT: A NEW TEST MODELING LANGUAGE FOR TESTING INTERACTIVE MULTIMODAL APPLICATIONS 59
3.1 Introduction 59
3.2 The User Actions Traces 61
3.3 Definition of the TTT Language 62
3.4 Basic Structure of a TTT Model 62
3.5 Supporting Conditional Probability Specifications for all the CTT Operators 65
3.6 Storing the Traces of the User Actions 73
3.7 Transformation Rules from CTT to Test Model by Using the TTT Language 76
3.8 Taking into Account Multimodality 77
3.8.1 Generating Tests for Multimodal Events 77
3.8.2 Checking the Validity of CARE Properties 80
3.8.2.1 Equivalence Property 81
3.8.2.2 Redundancy-Equivalence Property 82
3.8.2.3 Complementarity Property 84
3.9 Modeling the Interactive Multimodal Application Memo by the TTT Language 86
3.10 Advantages and Disadvantages of the TTT Language 91
3.11 Conclusion 91
Chapter 4. TTTEST: THE SUPPORT TOOL FOR TESTING INTERACTIVE MULTIMODAL APPLICATIONS 93
4.1 Introduction 93
4.2 Test Execution Environment 93
4.3 The TTTEST Tool 94
4.4 Translating TESTCTT Model into C Program 95
4.4.1 Translation Problems 95
4.4.2 Automatic Translation Solution 98
4.5 Experimentation 100
4.5.1 Modeling the NotePad Application by the TTT Language 100
4.5.2 Testing the Memo Application 103
4.5.3 Testing the Map Navigator Application 105
4.6 Evaluation of the Resulted Test Cases 112
CONCLUSIONS AND FUTURE WORKS 116
PUBLICATIONS 118
REFERENCES 119

ACRONYMS

1. API: Application Programming Interface
2. AUT: Application Under Test
3. CARE: Complementarity, Assignment, Redundancy and Equivalence
4. CASE: Concurrent, Alternate, Synergistic and Exclusive
5. CTT: ConcurTaskTrees
6. DFA: Deterministic Finite State Automaton
7. FSMs: Finite State Machines
8. HMD: Head Mounted Display
9. IMA: Interactive Multimodal Application
10. ICO: Interactive Cooperative Objects
11. MBT: Model-Based Testing
12. NFA: Nondeterministic Finite State Automaton
13. ObCS: Object Control Structure
14. OPBT: Operational Profile-Based Testing
15. PFSM: Probabilistic Finite State Machine
16. RBT: Requirement-Based Testing
17. SQL: Structured Query Language
18. TW: Temporal Window
19. UML: Unified Modeling Language
20. TTT: Task Tree-based Test
21. TTTEST: Testing IMA by means of the TTT language

LIST OF FIGURES

Figure 1.1 The TYCOON Theoretical Framework for studying multimodality 15
Figure 1.2 The CASE design space 16
Figure 1.3 A window of Tuple editor 21
Figure 1.4 The class Editor 23
Figure 1.5 The ObCS of the class Editor 24
Figure 2.1 The interactive multimodal application "Memo" 38
Figure 2.2 The Memo application structure [4] 40
Figure 2.3 The CTT for the Memo application 42
Figure 2.4 The CTT with unconditional probabilities for the Memo application 43
Figure 2.5 A multimodal application organized along the PAC-Amodeus model 45
Figure 2.6 FSM example for the Memo application [20] 48
Figure 2.7 The behavior of the choice operator 51
Figure 2.8 The behavior of the concurrency operator 51
Figure 2.9 The behavior of the deactivation operator 52
Figure 2.10 The behavior of the option operator 52
Figure 2.11 The behavior of the suspend-resume operator 53
Figure 2.12 The extended CTT with conditional probabilities for the Memo application 55
Figure 4.1 The TTTEST testing environment 93
Figure 4.2 The TTTEST tool interface 94
Figure 4.3 Transformation diagrams from TESTCTT model into C program 98
Figure 4.4 Translating TESTCTT to C program with Lex/Yacc 99
Figure 4.5 The extended CTT for the NotePad application 101
Figure 4.6 Multimodal interaction with a map 105
Figure 4.7 The extended CTT for the Map Navigator application 106

LIST OF TABLES

Table 2.1 The CTT operators [27] 36
Table 2.2 Test data generated with unconditional probabilities and conditional probabilities 57
Table 2.3 Extensions of CTT operators with conditional probabilities 57
Table 3.1 The TTT syntax 62
Table 3.2 A basic structure of a TESTCTT model 63
Table 3.3 A structure of a function 65
Table 3.4 The CTT syntax 65
Table 3.5 The behavior of the choice operator 67
Table 3.6 The behavior of the concurrency operator 68
Table 3.7 The behavior of the deactivation operator 70
Table 3.8 The behavior of the suspend-resume operator 71
Table 3.9 The behavior of the option operator 72
Table 3.10 The SQL-like syntax 74
Table 3.11 The conditional constructs syntax 74
Table 3.12 Transformation rules from augmented CTT to test model 76
Table 3.13 The semantics of the Modalities operator 77
Table 3.14 Events generated by the Modalities operator 80
Table 3.15 The behavior of the TestEquivalence operator 81
Table 3.16 The result of the TestEquivalence operator 82
Table 3.17 The behavior of the TestRedundant_EquivalenceEarly operator 83
Table 3.18 The result of the TestRedundant_EquivalenceEarly operator 84
Table 3.19 The behavior of the TestComplementaryEarly operator 85
Table 3.20 The result of the TestComplementaryEarly operator 86
Table 3.21 TESTCTT model for the Memo application 87
Table 4.1 Lexical substitutions 95
Table 4.2 Syntactic transformations 96
Table 4.3 Transformation of the choice operator 97
Table 4.4 Transformations from the create table statement in the TTT language into the C language 97
Table 4.5 High-level NotePad model 102
Table 4.6 The result of the TestEquivalence operator 103
Table 4.7 The result of the TestRedundantEquivalenceEarly operator 103
Table 4.8 The result of the TestComplementaryEarly operator 104
Table 4.9 The TESTCTT model for the Map Navigator application 107
Table 4.10 Multimodal events generated for the Map Navigator 112
Table 4.11 Results of Experiment 1 113
Table 4.12 Results of Experiment 2 113
Table 4.13 Results of Experiment 3 113

117 function touch_nb()
118 begin
119   touch_nb = select count(*) from u_actions
120     where EM1 = "touch_point(x,y)";
121 end

In line 2, we declare the state variables that store the states of the model. The variables qi (i = 0, ..., 3) define the corresponding status of the application (lines 7, 8, 9, 10 and 11). If the application and the user have nothing to do, the state is q0. If the map size is normal, or if the state of the model is q2 and the user zooms out the map, or if the state of the model is q3 and the user zooms in the map, the model is in state q1. If the state of the model is q1 and the user zooms in the map, the model is in state q2. If the state of the model is q1 and the user zooms out the map, the model is in state q3. If q1 is true, users can zoom in or zoom out the map; if q2 is true, users can zoom out the map; if q3 is true, users can zoom in the map.

The TESTCTT model generates multimodal events such as Mouse_point(x,y), Speed_zoom_in, Touch_point(x,y) and Balloom_zoom_in. We create the u_actions table to store the traces of the user actions; from line 30 to line 32, user actions are inserted into the u_actions table. We also create a function that counts the number of zoom actions, that is, the number of times the users have zoomed the map.

The model issues multimodal events from its internal state. These events are translated into input data for the Map Navigator application. The Map Navigator application receives and processes the input events and generates outputs. The model receives those outputs, updates its internal state variables and continues. We use the TestComplementary operator to test the complementarity of events. Table 4.10 shows an extract of the execution trace and the result of the TestComplementary operator.

Table 4.10 Multimodal events generated for the Map Navigator (extract)

No.  EM1               EM2            EM3               EM4                TOUT
1    Mouse_point(x,y)  -              -                 -                  -
2    -                 Speed_zoom_in  -                 -                  Zoom_in
3    Mouse_point(x,y)  -              -                 -                  -
4    -                 -              -                 Balloom_zoom_out   Zoom_out
5    -                 -              Touch_point(x,y)  -                  -
6    -                 Speed_zoom_in  -                 -                  Zoom_in
7    -                 -              Touch_point(x,y)  -                  -
8    -                 -              -                 Balloom_zoom_out   Zoom_out

The test data in Table 4.10 are suitable for testing the Map Navigator application. For instance, when the user uses the mouse to choose the point (x,y) (line 1) and the Speed_zoom_in event follows (line 2), the application merges the two events {Mouse_point(x,y), Speed_zoom_in} in order to generate the zoom command for point (x,y), and the map is zoomed in. The TESTCTT model then generates the event Mouse_point(x,y) (line 3) and the event Balloom_zoom_out (line 4); the Map Navigator application merges the two events {Mouse_point(x,y), Balloom_zoom_out} to create the interactive Zoom_out command, and the application zooms out the map (line 4). The TESTCTT model then computes the application state and determines that it is in q1. In state q1, the model generates the events Touch_point(x,y) and Speed_zoom_in to enlarge the map (lines 5-6). At this point the map is enlarged, and the user zooms the map out again by the two multimodal events Touch_point(x,y) and Balloom_zoom_out (lines 7-8).
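Section 4.4 describes how TESTCTT models are translated into C with Lex/Yacc. As an illustration only, the following minimal C sketch shows one plausible shape of such generated code for the u_actions trace table and the SQL-like counting function above; the fixed-size array, the row type and the helper names are assumptions made for this example, not the actual output of TTTEST (the actual translation rules are given in Tables 4.1-4.4).

#include <stdio.h>
#include <string.h>

/* Assumed in-memory representation of the u_actions trace table:
   one row per user action, one column per modality (EM1..EM4). */
#define MAX_ACTIONS 1000

typedef struct {
    char em1[32];   /* e.g. "touch_point(x,y)" or "mouse_point(x,y)" */
    char em2[32];
    char em3[32];
    char em4[32];
} u_action_row;

static u_action_row u_actions[MAX_ACTIONS];
static int u_actions_count = 0;

/* Corresponds to an SQL-like "insert into u_actions" in the TTT model. */
static void insert_u_action(const char *em1, const char *em2,
                            const char *em3, const char *em4)
{
    if (u_actions_count < MAX_ACTIONS) {
        u_action_row *r = &u_actions[u_actions_count++];
        strncpy(r->em1, em1 ? em1 : "", sizeof r->em1 - 1);
        strncpy(r->em2, em2 ? em2 : "", sizeof r->em2 - 1);
        strncpy(r->em3, em3 ? em3 : "", sizeof r->em3 - 1);
        strncpy(r->em4, em4 ? em4 : "", sizeof r->em4 - 1);
    }
}

/* One possible translation of the read-only function of model lines 117-121:
     touch_nb = select count(*) from u_actions
       where EM1 = "touch_point(x,y)";
   i.e. count the trace rows whose EM1 column holds a touch event. */
static int touch_nb(void)
{
    int count = 0;
    for (int i = 0; i < u_actions_count; i++) {
        if (strcmp(u_actions[i].em1, "touch_point(x,y)") == 0)
            count++;
    }
    return count;
}

int main(void)
{
    /* Record a few user actions and count the touch events. */
    insert_u_action("touch_point(x,y)", "", "", "");
    insert_u_action("mouse_point(x,y)", "speed_zoom_in", "", "");
    insert_u_action("touch_point(x,y)", "", "", "balloom_zoom_out");
    printf("touch_nb = %d\n", touch_nb());
    return 0;
}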
4.6 Evaluation of the Resulted Test Cases

The purpose of the following experiments is to compare and assess the time needed to test the Memo application manually, by means of a hand-written C program, and by means of a model written in the TTT language. We compare the time for two different tasks when testing the Memo application: (1) generating tests for multimodal events, and (2) checking the validity of the CARE properties.

To create tests for multimodal events, user actions such as move, turn, get, set and remove are repeated 100 times and sent to the Memo application. To check the validity of the CARE properties, user actions such as get, set and remove are performed repeatedly within a temporal window (TW) and transferred to the Memo application. We conducted three experiments as follows:

Experiment 1: We test the Memo application manually. We collected the total time of the manual testing. Table 4.11 gives the result of this experiment.

Experiment 2: We test the Memo application by writing a C program. The C program generates test data and checks the validity of the CARE properties automatically. We collected the total time of the testing. Table 4.12 gives the result of this experiment.

Experiment 3: We test the Memo application by writing a model in the TTT language. The model is translated into a C program that generates test data and checks the validity of the CARE properties automatically. We collected the total time of the testing. Table 4.13 gives the result of this experiment.

Table 4.11 Results of Experiment 1

Activity                                    Completion Time (Minutes)
Checking the validity of CARE properties    123
Test Data Generation                        30
Failure Analysis Report                     35
Total                                       188

Table 4.12 Results of Experiment 2

Activity                                    Completion Time (Minutes)
Writing the C program                       545
Running the C program                       2
Total                                       547

Table 4.13 Results of Experiment 3

Activity                                    Completion Time (Minutes)
Writing the TESTCTT model                   173
Converting it to a C program
Running the C program
Total                                       178

In Table 4.11, the completion time of the experiment is 188 minutes, while in Table 4.12 it is 547 minutes and in Table 4.13 it is 178 minutes. In summary, the results of the three experiments confirm that the TTT language may help testers to test the Memo application faster. Although testing with the TTT language seems only slightly faster than manual testing, it should be kept in mind that using the TTT language makes it possible to replay tests as many times as wanted (when debugging the application) without additional effort, while manual testing requires testers to do the work again. Of course, for a better evaluation, this experiment should be repeated with other users. In the future, we need more testers in order to compare the time they need to test an IMA manually, by writing C programs and by writing models in the TTT language.
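For illustration only, the following minimal sketch suggests what a hand-written test driver in the spirit of Experiment 2 might look like when checking an Equivalence-style CARE property; the memo_send_event and memo_get_output stubs, the "get_note" event name and the simulated behaviour are assumptions made for this example, not the program actually used in the experiment.

#include <stdio.h>
#include <string.h>

/* Stand-ins for the real interface to the Memo application; in the actual
   experiment these would send events to the application under test and
   read back its observable output. */
static char memo_output[64];

static void memo_send_event(const char *modality, const char *event)
{
    (void)modality;  /* simulated: the output depends only on the task,
                        which is exactly what Equivalence demands */
    snprintf(memo_output, sizeof memo_output, "done:%s", event);
}

static const char *memo_get_output(void)
{
    return memo_output;
}

/* Equivalence check (sketch): triggering the same "get" task through two
   different modalities should lead the application to the same output. */
static int check_equivalence_get(void)
{
    char out_speech[64], out_mouse[64];

    memo_send_event("speech", "get_note");
    snprintf(out_speech, sizeof out_speech, "%s", memo_get_output());

    memo_send_event("mouse", "get_note");
    snprintf(out_mouse, sizeof out_mouse, "%s", memo_get_output());

    return strcmp(out_speech, out_mouse) == 0;   /* 1 = property holds */
}

int main(void)
{
    int failures = 0;
    for (int i = 0; i < 100; i++) {   /* user actions repeated 100 times */
        if (!check_equivalence_get())
            failures++;
    }
    printf("Equivalence violations: %d out of 100 runs\n", failures);
    return failures != 0;
}

In the real experiment, such checks would be repeated for the other CARE properties and the generated events would be sent to the actual Memo application rather than to stubs.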
4.7 Comparing the TTTEST Tool with Some Other Available Methods

To evaluate the effectiveness of the TTTEST tool, we compare it with some of the other available methods.

- The synchronous approach has been proposed to model and verify, by model checking, some properties of IMA, but its applicability is limited to small pieces of software. In contrast, applications tested with TTTEST can be of any size and complexity.
- The ICO approach uses several notations, inspired by existing modeling languages, to build test models. The variety of notations makes the modeling process hard. The TESTCTT model is written in a single specification and modeling language, making it possible to express, in a single and consistent syntax, scenarios and conditional operational profiles for multimodal applications, test oracles and the expected properties of the application.
- The Event B method makes the specification process harder for users who are not familiar with formal languages. Moreover, operational profiles cannot be defined using conditions (in general, an occurrence probability is assigned to an event according to a condition). The TESTCTT model supports conditional operational profiles for interactive multimodal applications.

4.8 Conclusion

We built the TTTEST tool, which supports automatic testing of IMA. The tool supports creating the TESTCTT model in the TTT language and translating the TESTCTT model into a C program. We have described the features of TTTEST and how the tool was built. Given an IMA, we need to write, in the TTT language, a model with which the IMA interacts. The model represents the target and the way we want to test the IMA. In the three case studies, the models are built in steps: selection of the test target, definition of the notations of the testing activity and of the state variables and their types, and writing of the test scripts for each activity in the TTT language. The resulting test model makes it possible to generate test data for multimodal events and to check the validity of the CARE properties.

CONCLUSIONS AND FUTURE WORKS

The Thesis Results

With respect to the initial objectives, the thesis titled "Automatic testing of interactive multimodal applications" has achieved the following results:

- On the basis of analyzing the characteristics of the CTT operators, we proposed enhancing the task tree with conditional operational profiles, by assigning conditional probabilities to the user actions involved in the tree operators.
- We defined the TTT language, a new test modeling language for interactive applications, based on task trees and supporting automatic test generation and conditional operational profiles. We have outlined the main characteristics and the operational semantics of this test modeling language. The language supports conditional probability specifications for all the CTT operators, stores the "traces" of the user actions and supports read-only functions on these traces, and it is formally defined to support automatic test generation.
- We proposed the transformation rules from CTT into a test model in the TTT language, and a solution to generate test data for interactive applications by means of the TTT language.
- We extended the TTT language to solve two problems: generating test data and checking the CARE properties. We defined a new operator, Modalities, to generate tests for multimodal events. The CARE properties are tested by the TestEquivalence, TestRedundant_EquivalenceEarly and TestComplementaryEarly operators.
- We also built the TTTEST tool, which supports automatic testing of these applications. We have described the features of the tool and its underlying algorithms.

Perspectives and Future Work

In addition to the results achieved in the thesis, some problems may be posed for further research:

- Although testing IMA by means of the TTT language achieves a high level of testing automation, testers usually do not regard themselves as programmers, so they would prefer to construct models using graphical notations. Our future work therefore includes the automatic translation of such notations into the TTT language.
- We plan to provide communication channels among test managers. As future work, we intend to support the construction of reports for test management, covering test cost, test coverage and the defects found.
- TTT-based testing methods need more time to construct the model of the IMA. We therefore intend to derive techniques for reverse engineering existing IMA by automatic exploration, leading to a partially automatic generation of the TESTCTT model.
- We will research and improve the test data generation algorithms and the checking of the validity of the CARE properties that we proposed.
- The TTTEST tool developed in this work can be further extended in the future to study test data generation for testing safety properties of this kind of IMA.

In summary, TTT-based testing methods can be fully used to test interactive multimodal applications, both to generate test data automatically and to check the validity of the CARE properties. The method contributes to improving the quality of interactive multimodal applications, reduces the workload of testers, and reduces the time and cost of the testing process.

PUBLICATIONS

Le Thanh Long, Nguyen Thanh Binh, Ioannis Parissis, "A New Test Modeling Language for Interactive Applications Based on Task Trees", In Proceedings of the 4th International Symposium on Information and Communication Technology (SoICT 2013), ACM, ISBN: 978-1-5403-2454-0, pp. 285-293.

Le Thanh Long, Nguyen Thanh Binh, Ioannis Parissis, "TTT: A Test Modeling Language for Interactive Applications Based on Task Trees", In Proceedings of the 16th National Conference: Selected Problems about IT and Telecommunication (@ 2013), ISBN: 978-604-67-0251-1, pp. 333-338.

Le Thanh Long, Nguyen Thanh Binh, Ioannis Parissis, "A solution of generate test data for interactive applications", In Proceedings of the 7th National Conference on Fundamental and Applied Information Technology Research (FAIR 2014), ISBN: 978-604-913-300-8, pp. 134-143.

Le Thanh Long, Nguyen Thanh Binh, Ioannis Parissis, "Model-to-C program translation in TTTEST", In Proceedings of the 17th National Conference: Selected Problems about IT and Telecommunication, Ho Chi Minh City (@ 2015), 05-06/11/2015, ISBN: 978-604-67-0645-8, pp. 142-149.

Le Thanh Long, Nguyen Thanh Binh, Ioannis Parissis, "Testing Multimodal Interactive Applications By Means of The TTT Language", Domain Specific Model-Based Approaches To Verification And Validation (Amaretto 2016), in conjunction with the 4th International Conference on Model-Driven Engineering and Software Development (MODELSWARD 2016), 19 February 2016, Rome, Italy, ISBN: 978-989-758-166-3, pp. 23-32.

Le Thanh Long, Nguyen Thanh Binh, Ioannis Parissis, "TTTEST: The Tool Support For Testing Interactive Multimodal Applications", In Proceedings of the International Conference on Electronic, Information and Communication (ICEIC 2016), 27-30/01/2016, IEEE, pp. 78-81.

REFERENCES

[1] Baptiste, C., Nicola, M., Atau, T., Frédéric, B. (2015), "Adaptive Gesture Recognition with Variation Estimation for Interactive Systems", ACM Transactions on Interactive Intelligent Systems (TiiS), Special Issue on Activity Recognition for Interaction.
[2] Benoit, C., Martin, J.-C., Pelachaud, C., Schomaker, L., Suhm, B. (2000), "Audio-visual and multimodal speech-based systems", Handbook of Multimodal and Spoken Dialogue Systems: Resources, Terminology and Product Evaluation, Kluwer, pp. 102-203.
[3] Boehm, B., Gray, T., and Seewaldt, T. (1984), "Prototyping versus specifying: A multiproject experiment", IEEE Transactions on Software Engineering, SE-10(3), pp. 290-303.
[4] Bouchet, J., and Nigay, L. (2004), "ICARE: a component-based approach for the design and development of multimodal interfaces", In Extended Abstracts of the 2004 Conference on Human Factors in Computing Systems (CHI 2004), Vienna, Austria, pp. 1325-1328.
[5] Bouchet, J., Madani, L., Nigay, L., Oriat, C., and Parissis, I. (2007), "Formal testing of multimodal interactive systems", In EIS'2007 Engineering Interactive Systems, Salamanca, Spain, pp. 36-52.
[6] Brooks, P. A., and Memon, A. (2009), "Introducing a test suite similarity metric for event sequence-based test cases", In ICSM, IEEE International Conference on Software Maintenance, pp. 243-252.
[7] Bruno Dumas (2010), "Frameworks, description languages and fusion engines for multimodal interactive systems", Thesis No. 1695, UniPrint, Fribourg.
[8] Coutaz, J., Nigay, L., Salber, D., Blandford, A., May, J., and Young, R. M. (1995), "Four easy pieces for assessing the usability of multimodal interaction: the CARE properties", In INTERACT, Chapman & Hall, pp. 115-120.
[9] Clémentine, N., Pickin, S., Yves, L. T., and Jean-Marc, J. (2003), "Automated Requirements-based Generation of Test Cases for Product Families", In Proceedings of the 18th IEEE International Conference on Automated Software Engineering (ASE'03).
[10] Dietz, P., and Leigh, D. (2001), "DiamondTouch: a multi-user touch technology", In Proceedings of UIST'01, ACM Press, pp. 219-226.
[11] Dittmar, A. (2000), "More precise descriptions of temporal relations within task models", In Interactive Systems: Design, Specification, and Verification, 7th International Workshop DSV-IS, Proceedings, Limerick, Ireland, 5-6 June, pp. 151-168.
[12] du Bousquet, L., Ouabdesselam, F., and Richier, J.-L. (1998), "Expressing and implementing operational profiles for reactive software validation", In Proceedings of the 9th International Symposium on Software Reliability Engineering, Paderborn, Germany.
[13] Fredy, C., Jan, V. D. B., Kris, L., Karin, C. (2014), "A domain-specific textual language for rapid prototyping of multimodal interactive systems", In Proceedings of the 2014 ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Rome, Italy, pp. 97-106.
[14] Gomez-Barrero, M., Galbally, J., Fierrez, J., Ortega-Garcia, J. (2013), "Multimodal Biometric Fusion: A Study on Vulnerabilities to Indirect Attacks", In Ruiz-Shulcloper, J., Sanniti di Baja, G. (eds), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Lecture Notes in Computer Science, vol. 8259, Springer, Berlin, Heidelberg.
[15] Horstmann, M., Prenninger, W., and El-Ramly, M. (2005), "Model-Based Testing of Reactive Systems", Springer-Verlag, pp. 439-461.
[16] Julien, E., Fang, C., Natalie, R., Eric, C., Asif, K., Ronnie, T., Bo, Y., Yang, W. (2012), "Multimodal behavior and interaction as indicators of cognitive load", ACM Transactions on Interactive Intelligent Systems (TiiS), Special Issue on Highlights of the Decade in Interactive Intelligent Systems.
[17] Ismail, A. W., Sunar, M. S. (2015), "Multimodal Fusion: Gesture and Speech Input in Augmented Reality Environment", In Phon-Amnuaisuk, S., Au, T. (eds), Computational Intelligence in Information Systems, Advances in Intelligent Systems and Computing, vol. 331, Springer, Cham.
[18] Ljuin, K., Horiuchi, Y., Umata, I., Yamamoto, S. (2015), "Eye Gaze Analyses in L1 and L2 Conversations: Difference in Interaction Structures", In Lecture Notes in Computer Science, vol. 9302, Springer, Cham.
[19] Madani, L., Oriat, C., Parissis, I., Bouchet, J., and Nigay, L. (2005), "Synchronous testing of multimodal systems: An operational profile-based approach", In ISSRE, IEEE Computer Society, pp. 325-334.
[20] Madani, L., and Parissis, I. (2009), "Automatically testing interactive applications using extended task trees", J. Log. Algebr. Program., 78(6), pp. 454-471.
[21] Madani, L., Oriat, C., Parissis, I., Bouchet, J., and Nigay, L. (2005), "Synchronous testing of multimodal systems: An operational profile-based approach", In 16th International Symposium on Software Reliability Engineering (ISSRE 2005), Chicago, IL, USA, 8-11, pp. 325-334.
[22] Madani, L., and Parissis, I. (2011), "Automatically testing interactive multimodal systems using task trees and fusion models", In 6th International Workshop on Automation of Software Test (AST '11), Hawaii, USA.
[23] Mariani, L., Pezzè, M., Riganelli, O., and Santoro, M. (2011), "AutoBlackTest: a tool for automatic black-box testing", In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11), ACM, New York, NY, USA, pp. 1013-1015.
[24] Mark, U., and Bruno, L. (2007), Practical Model-Based Testing - A Tools Approach, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
[25] Martin, J.-C. (1997), "TYCOON: Theoretical Framework and Software Tools for Multimodal Interfaces", In Intelligence and Multimodality in Multimedia Interfaces, AAAI Press.
[26] Martin, F., Dennis, W., Marc, E. L. (2016), "Semantics-based Software Techniques for Maintainable Multimodal Input Processing in Real-time Interactive Systems", In IEEE 9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), Greenville, SC, USA.
[27] Mori, G., Paternò, F., and Santoro, C. (2002), "CTTE: Support for developing and analyzing task models for interactive system design", IEEE Transactions on Software Engineering (TSE), 28(8), pp. 797-813.
[28] Mohand-Oussaïd, L., Aït-Sadoune, I., and Aït Ameur, Y. (2011), "Modelling Information Fission in Output Multi-modal Interactive Systems Using Event-B", In Proceedings of the 1st International Conference on Model and Data Engineering (MEDI'11), Óbidos, Portugal (L. Bellatreche and F. Mota Pinto, eds.), Lecture Notes in Computer Science 6918, Springer, Berlin, pp. 200-213.
[29] Mohand-Oussaïd, L., Aït-Sadoune, I., Aït Ameur, Y., and Ahmed-Nacer, M. (2015), "A formal model for output multimodal HCI - An Event-B formalization", Computing, 97, pp. 713-740.
[30] Musa, J. (1993), "Operational Profiles in Software-Reliability Engineering", IEEE Software, pp. 14-32.
[31] Nigay, L., and Coutaz, J. (1995), "A generic platform for addressing the multimodal challenge", In CHI, ACM Press, pp. 98-105.
[32] Nigay, L., Coutaz, J. (1993), "A design space for multimodal systems: concurrent processing and data fusion", In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, ACM, New York, NY, pp. 172-178.
[33] Nigay, L., Coutaz, J. (1993), "Conceptual Software Architecture Models for Interactive Systems", ACM, New York, NY.
[34] Nigay, L., and Coutaz, J. (1997), "Multifeature Systems: The CARE Properties and Their Impact on Software Design", In Multimedia Interfaces: Research and Applications, chapter 9, AAAI Press.
[35] Oviatt, S. L. (2003), "Advances in Robust Multimodal Interface Design", IEEE Computer Graphics and Applications, vol. 23.
[36] Oviatt, S. L., Cohen, P. R., Wu, L., Vergo, J., Duncan, L., Suhm, B., Bers, J., Holzman, T., Winograd, T., Landay, J., Larson, J., Ferro, D. (2000), "Designing the user interface for multimodal speech and gesture applications: State-of-the-art systems and research directions", Human Computer Interaction, vol. 15, no. 4, pp. 263-322.
[37] Palanque, P. (1992), Modelling User-Driven Interfaces by Means of Interactive Cooperative Objects, Ph.D. Dissertation, University of Toulouse, France.
[38] Palanque, P. (1994), "Petri Net Based Design Of User-Driven Interfaces Using Interactive Cooperative Objects Formalism", In Proceedings of the Design, Specification and Verification of Interactive Systems (DSV-IS'94).
[39] Palanque, P., Bastide, R., Sibertin, C., Dourte, L. (1993), "Design of User-Driven Interfaces using Petri nets and Objects", In Proceedings of the 5th Conference on Advanced Information Systems Engineering (CAISE'93), Lecture Notes in Computer Science 685, Springer-Verlag.
[40] Perakakis, M., Potamianos, A. (2012), "Affective evaluation of a mobile multimodal dialogue system using brain signals", In Spoken Language Technology Workshop (SLT), IEEE.
[41] Pretschner, A., Prenninger, W., Wagner, S. (2005), "One evaluation of model-based testing and its automation", In Proceedings of the 27th International Conference on Software Engineering, ACM Press, pp. 392-401.
[42] Ratzka, A. (2013), "User Interface Patterns for Multimodal Interaction", In Lecture Notes in Computer Science, vol. 7840, Springer, Berlin, Heidelberg.
[43] Robinson, H. (2003), "Obstacles and opportunities for model-based testing in an industrial software environment", In First European Conference on Model-Driven Software Engineering.
[44] Russ, G., Sallans, B., Hareter, H. (2005), "Semantic Based Information Fusion in a Multimodal Interface", International Conference on Human-Computer Interaction (HCI'05), Las Vegas, Nevada, USA, 20-23 June, pp. 94-100.
[45] Simon, F., Helena, M., Pushmeet, K., Sebastian, N. (2012), "Instructing people for training gestural interactive systems", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, Texas, USA, pp. 1737-1746.
[46] Stéphane, C., Mathieu, M., Daniel, P. (2015), "Verification of properties of interactive components from their executable code", In Proceedings of the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Duisburg, Germany, pp. 276-285.
[47] Serena, Z., Sergio, C., Antonio, R., Antonio, C., Gualtiero, V. (2013), "Entertaining listening by means of the Stanza Logo-Motoria: an Interactive Multimodal Environment", Entertainment Computing, Volume 4, Issue 3, pp. 213-220.
[48] Urbain, J., et al. (2013), "Multimodal Analysis of Laughter for an Interactive System", In Mancas, M., d'Alessandro, N., Siebert, X., Gosselin, B., Valderrama, C., Dutoit, T. (eds), Intelligent Technologies for Interactive Entertainment, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 124, Springer, Cham.
[49] Whittaker, J. A., Thomason, M. G. (1994), "A Markov chain model for statistical software testing", IEEE Transactions on Software Engineering, 20(10), pp. 812-824.
[50] Yang, W., Houqiang, L., Tao, M., Jingdong, W., Shipeng, L. (2013), "Interactive Multimodal Visual Search on Mobile Device", IEEE Transactions on Multimedia, pp. 594-607.
[51] Yamine, A., Nadjet, K. (2004), "A Generic Formal Specification of Fusion of Modalities in a Multimodal HCI", In Jacquart, R. (ed), Building the Information Society, IFIP International Federation for Information Processing, vol. 156, Springer, Boston, MA.
[52] Yuan, X., Cohen, M. B., Memon, A. (2011), "GUI Interaction Testing: Incorporating Event Context", IEEE Transactions on Software Engineering, 37(4), pp. 559-574.

... to reach s' from s. In contrast to equivalence, assignment expresses the absence of choice: either there is no choice at all to get from one state to another. Assignment can be defined: Assignment ...
