The International Journal of Advanced Manufacturing Technology, vol. 59, no. 9–12, 2012

Int J Adv Manuf Technol (2012) 59:1245–1259
DOI 10.1007/s00170-011-3575-0

ORIGINAL ARTICLE

Tangible user interface of digital products in multi-displays

Jae Yeol Lee & Min Seok Kim & Jae Sung Kim & Sang Min Lee

Received: 21 February 2011 / Accepted: August 2011 / Published online: September 2011
© Springer-Verlag London Limited 2011

J. Y. Lee (*) · M. S. Kim
Chonnam National University, 300 Yongbong-dong, Buk-gu, Gwangju 500-757, South Korea
e-mail: jaeyeol@chonnam.ac.kr

J. S. Kim · S. M. Lee
KISTI, Daejeon, South Korea

Abstract  Early attempts at supporting interaction with digital products for design review were based on CAD and virtual reality (VR) systems. However, it is not easy to build a virtual environment of fine quality or to achieve tangible and natural interaction with VR-based systems, which are expensive and too inflexible to adapt to typical offices or collaboration rooms. We present a new method for supporting tangible interactions with digital products in immersive and non-immersive multi-display environments using inexpensive and convenient optical tracking. The provided environment is more intuitive and natural, helping participants review digital products through functional behavior modeling and evaluation. Although vision-based image processing has been widely used for interaction tracking, it cannot be used effectively under low illumination, yet most collaborations and meetings take place in exactly such conditions. To overcome this problem, the proposed approach utilizes the Wiimote™ for infrared (IR)-based optical tracking and for capturing users' interactions and intents. Users can thus easily manipulate and evaluate digital products with inexpensive tools called IR tangibles in natural and user-friendly environments such as large displays, tabletops, and situational displays in typical offices and workspaces. Furthermore, a multi-view manager is suggested to effectively support multiple views of digital products among participants by providing public and private views. We show the effectiveness and usefulness of the proposed approach by demonstrating several implementation results and through a user study.

Keywords  Tangible user interface · Human–computer interaction · Multi-display · Wiimote · IR-based optical tracking

1 Introduction

Design review of digital products is required to test their functionality and characteristics, which can result in higher stability, better maintainability, and fewer potential errors before production. The shortening of development cycles demands an intelligent interface for testing human–computer interactions of digital products and an efficient method for evaluating their functional behaviors [1]. Many attempts at supporting digital product design and evaluation were based on traditional visual environments such as virtual reality (VR) and the cave automatic virtual environment (CAVE). However, these environments are very expensive and inflexible. Meanwhile, as displays increase in size and resolution while decreasing in price, various types of inexpensive multi-displays, such as situated displays or tabletops providing high-resolution visual output, will soon be available. These large, high-resolution displays provide the possibility of working up close with detailed information in typical office or workplace environments [2]. For example, when people work collaboratively with a digital product, its related information is often placed on a wall or tabletop, where it is easy to view, annotate, and organize.
The information can be rearranged, annotated, and refined in order to evaluate the functional behavior of the digital product and solve a problem. For this reason, multi-displays have been considered for collaboration, robotics, engineering, and realistic visualization and interaction [3–7]. Furthermore, the availability of smart devices and their interactions has increased dramatically over the last decade, opening up new interaction techniques such as multi-touch and sensor-based interactions [8, 9].

Usually, a single-user design task in a multi-display environment mainly requires the visualization capabilities of a large display and demands long hours. Similarly, in a collaborative discussion where users gather around a large conference room table, various digital contents frequently need to be displayed on a large screen for others to see. However, it is not sufficient to simply move existing graphical user interfaces onto multi-displays. Large displays afford different types of interactions than workstations or desktops for several key reasons. The large visual display can be used to work with large quantities of simultaneously visible material. Interaction takes place directly on the screen with a pen-like device or by touch, rather than with a keyboard and an indirect pointing device. Also, people often work together at a wall, interweaving social and computer interactions. Nevertheless, direct manipulation through pointing and clicking is still the dominant interaction paradigm in conventional user interfaces [10, 11].

Most of the proposals in previous research require an expensive display and considerable space [12]. In addition, they cannot be effectively applied to other types of display for interacting with digital products in a typical office or workspace. Furthermore, vision-based tracking has been widely used to support user-oriented and tangible interactions, but it cannot be used effectively under low illumination, and most collaborations and meetings take place in such conditions. Another problem with most vision algorithms is the difficulty of segmenting objects under varying lighting conditions and shadows, which also requires a large amount of processing time [13].

This paper presents a new method for supporting tangible interactions with digital products in immersive and non-immersive multi-display environments with inexpensive and convenient optical tracking. It is adaptable to a large set of multi-displays, such as a projected display, a tabletop, and a situated remote display, to perform multi-touch interactions with digital products. In addition, the proposed approach lets participants review and evaluate the functional behavior of digital products efficiently with infrared (IR) tangibles. To overcome the generic problem of the vision-based image processing approach, the proposed approach utilizes the Wiimote [14] for optical tracking and for effectively capturing users' various interactions and intentions. This approach supports the tangible user interface of digital products in immersive and non-immersive environments where users can easily manipulate and evaluate digital products, providing more effective and user-friendly circumstances. Moreover, the proposed approach can easily be set up in various environments, such as large displays,
tabletops, and situational displays in typical offices and workspaces. A multi-view manager is suggested to effectively support multiple views of digital products among participants, providing public and private views of the shared digital product. We show the effectiveness and usefulness of the proposed approach by demonstrating several implementation results and through a user study.

Section 2 presents previous work. Section 3 explains tangible user interactions in multi-displays with IR tangibles and IR tracking. Section 4 proposes how to effectively support interactions with digital products in multi-displays for design review. Section 5 presents implementation results. Finally, Section 6 concludes with some remarks.

2 Previous work

Early attempts at supporting interactions with digital products for design review were based on computer-aided design (CAD) and VR systems. Powerful and expensive tools, including stereoscopic display systems, head-mounted displays, data gloves, and haptic devices, have been utilized and combined to construct virtual prototyping systems that provide realistic display of digital products and offer various interaction and evaluation methods [1, 15]. Since it is not easy to build a virtual environment of fine quality or to achieve tangible and natural interaction with VR-based systems, many alternative solutions have been proposed.

Another type of VR known as augmented reality (AR) is considered an excellent user interface. Interacting in AR environments can provide convincing feedback to the user by giving the impression of natural interaction, since virtual scenes are superimposed on physical models in a realistic appearance. Thus, AR is considered to complement VR by providing an intuitive interface to a 3D information space embedded within physical reality. Lee et al. [16] proposed how to provide car maintenance services using AR in ubiquitous and mobile environments. Christian et al. [17] suggested virtual and mixed reality interfaces for e-learning which can be applied to aircraft maintenance. Regenbrecht et al. [18] proposed a collaborative augmented reality system that featured face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 3D desktop applications within the shared 3D space. However, AR depends on marker tracking, so vision-based image processing is used intensively. This is a severe drawback for supporting realistic visualization on a large display or tabletop display, since image processing deteriorates as the resolution increases. In addition, it is very difficult to interact directly with digital objects in AR environments, since interacting with them through the marker-based paddles widely used in AR applications is neither convenient nor natural.

Meanwhile, to support effective and natural interactions with digital objects on various displays, vision-based image processing techniques have been used [4, 5, 7]. The approach in [7] tracks a laser pointer and uses it as an input device that facilitates interactions from a distance. While the laser pointer provides a very intuitive way to randomly access any portion of a wall-sized display, the natural shaking of the human hand makes it difficult to use for precise target acquisition tasks, particularly for smaller targets. The VisionWand [19] uses simple computer vision algorithms to track the colored tips of a plastic wand to interact with large wall displays
both close up and from a distance; a variety of postures and gestures are recognized in order to perform an array of interactions. A number of other systems use vision to track bare, unmarked hands using one or more cameras, with simple hand gestures for arms-reach interactions [20, 21]. Dynamo [21] was proposed and implemented as a communal multi-user interactive surface that supported the cooperative sharing and exchange of a wide range of media brought to the surface by users outside of their familiar organizational settings.

Recently, mobile devices have been considered as complementary tools for interacting with virtual objects in ubiquitous environments, and much work has attempted to bridge the gap between personal devices and multi-displays. Many researchers have tried to augment mobile devices of limited capability with enhanced sensing or communication capabilities, for example as remote controllers [8, 9]. However, this approach can hardly provide visual information to multiple users effectively. To overcome this limitation and integrate the interaction between smartphones and multi-displays, visually controlled interaction has been considered. Little research has dealt with how to effectively support visually controlled views for individual and cooperative interactions in collaborative design review in multi-display and smartphone environments.

Although various ways have been proposed to support the visualization and evaluation of digital products, more research is still needed in the following aspects. The interaction should be more intuitive and natural to help participants in digital product design make a product of interest more complete and malfunction-free before production. The environment should be available at low cost, without strong restrictions on its accessibility, and should be adaptable to various environments and displays. Moreover, for effective evaluation of the digital product, we need to define its functional behavior through forms, functions, and interactions. In this paper, we address these aspects by proposing a natural interaction approach in multi-displays using convenient tangible interfaces. Note that the proposed approach can easily be adapted to various environments at low cost and with much convenience.

3 Proposed approach

This section explains how to effectively support tangible interactions with digital products in multi-displays using low-cost IR tracking, which provides much convenience and effectiveness for the design review of digital products. It first gives an overview of the proposed system and then explains the tangible interfaces for directly interacting with digital products.

3.1 System overview

The proposed approach consists of four layers, as shown in Fig. 1: (1) a tangible interface layer, (2) a resource layer, (3) a collaboration and evaluation layer, and (4) a visualization layer; a minimal code sketch of this decomposition follows.

Fig. 1 System overview
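Before detailing each layer, the four-layer decomposition can be pictured as cooperating components. The following is a minimal Python sketch under our own assumptions; all class and method names are hypothetical illustrations, not the paper's implementation:

```python
class TangibleInterfaceLayer:
    """Tracks IR tangibles and interprets user intent from their motion."""
    def interpret(self, ir_points):
        # In the real system this classifies select/move/rotate/scale gestures.
        return {"event": "select", "points": ir_points}

class ResourceLayer:
    """Stores digital models, FSM-based functional models, and multimedia."""
    def __init__(self):
        self.models = {"phone": "mobile phone model"}

class CollaborationEvaluationLayer:
    """Manages public/private views and evaluates functional behavior."""
    def evaluate(self, event, model):
        return f"updated view of {model} after '{event['event']}'"

class VisualizationLayer:
    """Renders adaptive scenes for each participant's display."""
    def render(self, scene):
        print("rendering:", scene)

# One interaction cycle through the four layers.
tangible, resources = TangibleInterfaceLayer(), ResourceLayer()
collab, viz = CollaborationEvaluationLayer(), VisualizationLayer()

event = tangible.interpret([(512, 384)])            # one IR blob from the tracker
scene = collab.evaluate(event, resources.models["phone"])
viz.render(scene)
```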
The tangible interface layer supports tracking of IR tangibles and interprets user intent by analyzing IR tangible inputs. The result is used for natural and direct interactions with digital products in multi-displays such as a large display, tabletop, and remote display. For natural user interactions, IR tangibles are devised as direct multi-touch interfaces, and a perspective transform is calculated to map the user space to the visualization space. The collaboration and evaluation layer manages multiple views among participants and generates graphic scenes adapted to participants' devices and contexts. In particular, it provides private and public views of the shared digital product among participants. Each view is systematically generated by the multi-view manager, which controls all the views of participants and generates adaptive views considering the display context and the user's preference. To support the design review of digital products, a finite-state machine (FSM)-based functional model is linked to the actions that occur during interaction with digital products [1, 22]. According to the user's actions and the functional evaluation, adaptive rendering of the digital product is executed and the generated scene is sent to the reviewer. In addition, according to the actions related to the FSM, the system loads and renders corresponding virtual objects or guides users in manipulating them on the multi-displays. All the necessary digital models and functional models are stored in the resource layer. Thus, participants can use multi-displays for collaborative and private interactions for design review and discussion.

Fig. 2 Overall process for tangible interactions with digital products in multi-displays

The overall process, shown in Fig. 2, involves three stages: (1) digital product design, (2) tangible interaction in multi-displays, and (3) collaboration and evaluation. In the digital product design stage, the product designer creates a product model using a commercial CAD system and considers the design specification and customers' requirements that correspond to the overall functional behavior and interface. However, this consideration is limited and subjective and therefore cannot guarantee a complete functional evaluation of the digital product. When the product design is completed, it is used to construct the virtual product model in the interaction stage. The tangible digital model consists of geometry, assembly relations, and other attributes for visualization and interaction. In addition, its functional model and related multimedia contents are generated; eventually, the multimedia contents are overlaid onto the virtual model in multi-display environments. The FSM-based functional model is linked to the actions that occur during interaction with the product model, and each action can be linked to multimedia content visualization and interaction such as menu manipulation, movie and music playing, and color changes.

Then, in the collaboration and evaluation stage, participants can evaluate and simulate the functional behavior of the digital product by tangible interaction in multi-displays. To support a new interface that can directly manipulate virtual objects in a multi-touch manner, IR tangibles are provided for cost-effective and convenient interactions. Two Wiimotes are used to track the IR tangibles quickly and robustly under low illumination. Moreover, multiple views are generated, since participants have different views as well as a common view of the digital product. To evaluate the functional behavior of the product, an FSM is embedded into the tangible interaction and visualization, and the concrete execution of each action or activity is conducted during the functional simulation. Finally, participants from different areas share their ideas and collaborate to find design problems and revise the overall shape and its functional behavior according to the results of the design review.
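To make the structure of the tangible digital model described above concrete, the following is a minimal sketch of how such a model could be represented. The field names and the example file name are our own illustration, not the paper's data format:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TangibleDigitalModel:
    """Virtual product model built from the CAD design (illustrative fields)."""
    geometry_file: str                                             # exported mesh for rendering
    assembly: Dict[str, List[str]] = field(default_factory=dict)  # part -> subparts
    attributes: Dict[str, str] = field(default_factory=dict)      # color, texture, ...
    fsm_states: List[str] = field(default_factory=list)           # functional model
    media: Dict[str, str] = field(default_factory=dict)           # action -> multimedia content

phone = TangibleDigitalModel(
    geometry_file="phone.osg",                 # hypothetical OpenSceneGraph file
    assembly={"body": ["screen", "keypad"]},
    attributes={"color": "black"},
    fsm_states=["off", "idle", "menu", "playing"],
    media={"press_play": "demo_video.avi"},    # action linked to multimedia content
)
print(phone.fsm_states)
```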
Figure 2 also illustrates a tangible interaction with a digital mobile phone and its functional evaluation in multi-displays based on the above overall process. Firstly, the designer designs a digital product, here a new mobile phone. Then, its functional behavior is modeled with an FSM, and communication between the digital model and the FSM is established by IR tangible-based interaction; each interaction links an action in the FSM to the actual action in the digital model. Finally, the virtual model is displayed in multi-displays as a private view or a common view, so the user can easily evaluate its functional behavior as well as its appearance on the prototype. During the evaluation, corresponding virtual objects and multimedia contents are overlaid on the digital phone model to help the evaluation. When the collaboration space is synchronized, participants collaborate and evaluate the functional properties using tangible interfaces. Normally, the space shares the visualization model, the functional model, and related multimedia contents. Thus, the proposed tangible interface and visualization give participants a more touchable and tangible experience compared with existing virtual model visualization and simulation [15, 23, 24].

3.2 IR tangibles and multi-displays

Fig. 3 Wiimote and infrared camera: a Wiimote, b IR camera module in Wiimote

The Wiimote, shown in Fig. 3, plays the main role in efficient and robust tracking of IR tangibles in multi-display environments. It integrates a built-in infrared camera with on-chip processing and accelerometers, and it supports Bluetooth communication. This makes it possible to communicate with external hardware that supports Bluetooth, and several open-source libraries are available to capture and process the derived information. In particular, the proposed approach is very flexible, since it is easily adaptable to various displays such as large displays, tabletops, desktops, and situated displays. Moreover, it is robust under the low illumination conditions in which most collaborations and discussions occur, since it utilizes infrared optical tracking rather than vision-based tracking; this has the further advantage of an enhanced sense of presence and increased interaction among participants. Furthermore, the environment setup is quite simple: it is sufficient to mount two Wiimotes on a portable support in front of the multi-displays in a typical office or workplace.

The information derived from the Wiimote camera is used to track an IR tangible and to generate graphics corresponding to the movements of the user. These data are successively shared across multi-display environments. An IR tangible can easily be created from IR light-emitting diodes (LEDs), or alternatively by shining IR light generated via an LED array onto a reflective marker attached to the participant's hand.

Fig. 4 IR tangible interfaces: a tangible wand, b tangible cube, c tangible ring

Figure 4 shows tangible interfaces made with IR LEDs, which can be used depending on the type of display and application. The 3D coordinates of an IR tangible are calculated by a stereo vision technique. Using a real-time optical tracking algorithm that simultaneously tracks multiple IR tangibles, we can explore techniques that allow direct manipulation on multi-displays using multi-touch gestures.
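As a concrete illustration of capturing IR blobs from a Wiimote over Bluetooth, the sketch below uses cwiid, one of the open-source libraries of the kind mentioned above (Linux-only). The pairing step and field names follow cwiid's documented interface; treat this as one possible setup rather than the paper's implementation:

```python
import cwiid

# Press buttons 1+2 on the Wiimote first to put it into discoverable mode.
wiimote = cwiid.Wiimote()
wiimote.rpt_mode = cwiid.RPT_IR  # request IR camera reports

# The IR camera delivers up to four tracked points in a 1024x768 grid.
def read_ir_points(wm):
    points = []
    for src in wm.state.get('ir_src', []):
        if src is not None:
            x, y = src['pos']
            points.append((x / 1024.0, y / 768.0))  # normalize to [0, 1]
    return points

print(read_ir_points(wiimote))
```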
Fig. 5 Multi-displays and tangible interactions: a large display, direct control; b tabletop; c large display, remote control

Fig. 6 Interfaces for remote control: a reflective tape, b LED array

By pointing a Wiimote at a projection screen or large display, we can create different types of interactive multi-displays, as shown in Fig. 5. Since the Wiimote can track up to four points, up to four pens can be used. In particular, using the reflective tape and the LED array shown in Fig. 6a, b, we can control and interact with digital products remotely. This allows us to interact with various applications simply by waving one's hands in the air, similar to the interaction shown in Fig. 5c. Figure 5 demonstrates a variety of interaction techniques that exploit the affordability of the proposed approach, resulting in effective multi-displays such as large displays, tabletops, and remote displays. There are also circumstances where users cannot easily approach the display and can interact only from a distance. Our work therefore also investigates techniques for pointing and clicking from a distance using the proposed approach, as shown in Fig. 5c. This eliminates issues related to acquiring a physical input device and transitions very fluidly to up-close touch-screen interaction.

Fig. 7 Perspective transform

To support interactions with digital products through IR tangible interfaces, we need to convert the coordinates of the IR tangibles into the coordinates in the computer that actually manipulates objects. For example, the coordinate (x, y) from the IR LED should be mapped to the coordinate (X, Y) in the virtual world of digital products by the perspective transform shown in Fig. 7. In other words, we need to find a transform that maps one arbitrary 2D quadrilateral into another [25]. A property of the perspective transform is its ability to map straight lines to straight lines. Thus, given the coordinates of the four corners of the first quadrilateral and the coordinates of the four corners of the second quadrilateral, the task is to compute the perspective transform that maps a new point in the first quadrilateral onto the appropriate position in the second quadrilateral.

Let us assume that the perspective transform is written as X = Hx, where x is the vector of multi-display coordinates and X is the vector of the virtual world of digital products. We can write this form in more detail as:

$$\begin{bmatrix} XW \\ YW \\ W \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad \text{where } W = gx + hy + 1$$

We can rewrite the above equations as follows:

$$X = \frac{ax + by + c}{gx + hy + 1}, \qquad Y = \frac{dx + ey + f}{gx + hy + 1}$$

or, equivalently,

$$X = ax + by + c - gxX - hyX, \qquad Y = dx + ey + f - gxY - hyY$$

Since H contains eight unknown variables, we need four points that are already known. A calibration step therefore obtains from the user, before interaction, four points mapping (x, y) into (X, Y), so that all the unknown variables in H can be found as follows:

$$\begin{bmatrix} X_1 \\ Y_1 \\ X_2 \\ Y_2 \\ X_3 \\ Y_3 \\ X_4 \\ Y_4 \end{bmatrix} =
\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1 X_1 & -y_1 X_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -x_1 Y_1 & -y_1 Y_1 \\
x_2 & y_2 & 1 & 0 & 0 & 0 & -x_2 X_2 & -y_2 X_2 \\
0 & 0 & 0 & x_2 & y_2 & 1 & -x_2 Y_2 & -y_2 Y_2 \\
x_3 & y_3 & 1 & 0 & 0 & 0 & -x_3 X_3 & -y_3 X_3 \\
0 & 0 & 0 & x_3 & y_3 & 1 & -x_3 Y_3 & -y_3 Y_3 \\
x_4 & y_4 & 1 & 0 & 0 & 0 & -x_4 X_4 & -y_4 X_4 \\
0 & 0 & 0 & x_4 & y_4 & 1 & -x_4 Y_4 & -y_4 Y_4
\end{bmatrix}
\begin{bmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{bmatrix}$$
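The calibration above amounts to solving this 8×8 linear system from four point correspondences. A minimal numerical sketch using NumPy (all coordinate values are made-up illustration data, not the paper's calibration):

```python
import numpy as np

def perspective_transform(src_pts, dst_pts):
    """Solve X = Hx for the eight unknowns a..h from four (x, y) -> (X, Y) pairs."""
    rows, rhs = [], []
    for (x, y), (X, Y) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -x * X, -y * X])
        rows.append([0, 0, 0, x, y, 1, -x * Y, -y * Y])
        rhs.extend([X, Y])
    a, b, c, d, e, f, g, h = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return np.array([[a, b, c], [d, e, f], [g, h, 1.0]])

def apply_transform(H, x, y):
    """Map a tracked display coordinate into the virtual world (divide by W)."""
    XW, YW, W = H @ np.array([x, y, 1.0])
    return XW / W, YW / W

# Example calibration: four touched display corners -> a 1920x1080 virtual canvas.
src = [(102, 85), (930, 91), (940, 702), (95, 710)]   # IR camera coordinates
dst = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]    # virtual world coordinates
H = perspective_transform(src, dst)
print(apply_transform(H, 512, 400))                   # map one tracked IR point
```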
3.3 User interfaces with IR tangibles

The user manipulates multiple IR tangibles to select, move, rotate, and scale multimedia and 3D digital products. The system traces the locations of the IR tangibles while they are being moved, as shown in Fig. 8:

- Select: when the IR tangible turns on from the off status and its location is close to the display, the user is considered to be selecting an object.
- Rotate: when one of two IR tangibles rotates around the other, the selected object is rotated.
- Scale: when the distance between the two IR tangibles becomes significantly larger, the selected object is zoomed out; when it becomes closer, the object is zoomed in.
- Translate: when the IR tangible that selected an object moves, the selected object moves along with it.

Fig. 8 Tangible user interfaces using IR tangibles: a select, b move, c rotate, d scale

To support the design review and functional evaluation of a digital product, visualization information as well as its functional model and related multimedia contents are generated and visualized; the multimedia contents are overlaid onto the digital product on the multi-displays. To evaluate the functional behavior of the product, an FSM is embedded into the tangible interaction and visualization, and the concrete execution of each action or activity is conducted during the functional simulation. Finally, participants from different areas share their ideas and collaborate to find design problems and revise the overall shape and its functional behavior according to the results of the design review.

Fig. 9 Tangible user interface of digital products: a rotating, b changing attributes of the digital product, c playing multimedia on the digital product

As shown in Fig. 9, interactions include playing multimedia and changing attributes of a digital product, such as its color or texture. Based on the above manipulation operators, the user can effectively perform design review tasks in immersive and non-immersive multi-display environments.
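The manipulation operators above can be expressed as a simple classifier over consecutive positions of the tracked points. A minimal sketch under our own assumptions; the thresholds and the (x, y, z) point layout are illustrative, not the paper's values:

```python
import math

NEAR_DISPLAY = 0.05   # assumed normalized depth threshold for "close to the display"
SCALE_DELTA = 0.10    # assumed significant change in point separation

def classify(prev, curr):
    """Classify a gesture from the previous and current lists of (x, y, z) points."""
    if len(curr) == 1:
        if not prev:                      # tangible just turned on near the screen
            return "select" if curr[0][2] < NEAR_DISPLAY else None
        return "translate"                # a single selected point is moving
    if len(prev) == 2 and len(curr) == 2:
        d_prev = math.dist(prev[0][:2], prev[1][:2])
        d_curr = math.dist(curr[0][:2], curr[1][:2])
        if d_curr - d_prev > SCALE_DELTA:
            return "zoom_out"             # points spread apart (per the rule above)
        if d_prev - d_curr > SCALE_DELTA:
            return "zoom_in"
        return "rotate"                   # separation constant: one point orbits the other
    return None

# Two tangibles moving apart -> zoom out
print(classify([(0.4, 0.5, 0.02), (0.5, 0.5, 0.02)],
               [(0.3, 0.5, 0.02), (0.6, 0.5, 0.02)]))
```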
4 Collaborative design review of digital products in multi-displays

This section explains how the proposed approach can be utilized for effective design review of digital products among participants in multi-displays. The user can manipulate multiple IR tangibles in front of the multi-displays to interact with digital products. The interaction is analyzed and fed into the design view and visualization of digital products on a large display, tabletop, and remote display.

4.1 Functional evaluation

During the design review, modeling and simulation of digital products are essential to test their functionality and characteristics, which can result in higher stability, better maintainability, and fewer potential errors before manufacturing [23, 24]. To support design review and collaboration in co-location effectively, VR and AR have been widely used. However, there has been no cost-effective way to support a tangible user interface of digital products in multi-displays, because most previous research requires expensive VR systems and inflexible visualization environments [1]. For this reason, we adopt an FSM to simulate the functional behavior of a digital product [22].

Every digital product has part components that are involved in the interaction between the user and the product, such as switches, buttons, sliders, indicators, displays, timers, and speakers. These are called objects, and they make up the basic building blocks of the functional simulation. Every object has a pre-defined set of properties and functions that describe everything it can do in a real-time situation. The overall behavior of a digital product can be broken down into separate units of behavior called states, and the product can change from one state to another, as shown in Fig. 10a [15]. This is called a state transition. Every state transition is triggered by one or more events associated with it. Some tasks, called actions, can be performed before the transition to a new state. In order to define the actual behavior of the product, all the tasks performed in each state are specified; these tasks are called activities, and they occur only while their state is active. Actions and activities are constructed using the objects' properties and functions. Each action or activity consists of a set of statements, and each statement can be an assignment of some value to a variable, a call to a function of an object, or a composite statement with a conditional statement.

Fig. 10 State transition chart: a concept of state transition, b state transition of a mobile device

The functional behavior model for tangible interactions is used to generate a state transition chart, which represents all the states and the possible state transitions between them. Figure 10b shows a state transition chart for a mobile phone [1]. When the user creates an input event using IR tangibles in multi-displays, the proposed approach checks whether or not the event is related to the functional behavior of the product. If so, the FSM module refers to the functional behavior model of the product and determines whether the event triggers a state transition. If the state transition is confirmed, the FSM module quits the activities of the current state, changes the state to the new one, and starts the activities of the new state. Otherwise, it keeps conducting the activities of the current state. These actions and activities include tasks such as changing the position and orientation of components of the digital product, embedding multimedia into the digital product, and playing the multimedia. The execution of the actions and activities yields state-specific visual and auditory data.
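The FSM module's event handling described above can be sketched in a few lines. The states and transitions below are a made-up fragment of a mobile phone's behavior, not the chart from Fig. 10b:

```python
class ProductFSM:
    """Minimal FSM: on a confirmed transition, stop the old state's activities,
    perform the action, switch state, then start the new state's activities."""
    def __init__(self, initial, transitions, activities):
        self.state = initial
        self.transitions = transitions    # (state, event) -> (new_state, action)
        self.activities = activities      # state -> activity description

    def handle(self, event):
        key = (self.state, event)
        if key not in self.transitions:            # event triggers no transition:
            return self.activities[self.state]     # keep the current activities
        new_state, action = self.transitions[key]
        print(f"action before transition: {action}")
        self.state = new_state
        return self.activities[new_state]

phone = ProductFSM(
    initial="idle",
    transitions={
        ("idle", "press_menu"): ("menu", "render menu overlay"),
        ("menu", "press_play"): ("playing", "load movie content"),
        ("playing", "press_back"): ("menu", "stop movie"),
    },
    activities={"idle": "show wallpaper", "menu": "show icons", "playing": "play movie"},
)
print(phone.handle("press_menu"))   # action fires, then "show icons"
print(phone.handle("press_play"))   # action fires, then "play movie"
```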
4.2 Multi-view management

Collaborative or public interaction is considered synchronized interaction, whereas individual or private interaction is considered asynchronized interaction. A collaborative view is generated to synchronize all the views of multiple users. On the other hand, an individual view is generated to provide a specific view to a specific user who wants to perform a private action on the shared space. The combination of both kinds of interaction is very useful for individual and multi-user interactions, and our approach allows different displays to be involved in the collaborative design review and interaction.

Fig. 11 Multi-view manager

To support the multi-visualization interface, the system internally manages all the views of the participants and generates individual views, each of which corresponds to the private view of one user, as well as the public view that can be shared among them, as shown in Fig. 11. For collaborative design review and sharing, the system renders each private view based on the scene graph of 3D objects and multimedia data. When an individual interaction occurs in a private space, the system re-renders the specific view of the scene and transmits it to the corresponding user. On the other hand, when a cooperative interaction occurs in a public space, the scene is sent to all users to keep them synchronized in the public view.

4.3 Tangible interaction process

Through the combination of the output capabilities of multi-displays, participants can share digital products and perform the design review. In particular, this approach has the potential to realize new interaction techniques between multi-displays.

Fig. 12 Interaction process for the design review in multi-displays

Figure 12 shows the overall process of tangible and natural interactions in multi-displays. The proposed approach analyzes the generated events and evaluates the functional behavior of the corresponding digital product. Note that the interactive behavior of the digital product is defined through the three aspects of form, function, and interaction in order to evaluate its functional behaviors effectively. In particular, a remote display plays two roles: remote controller and augmented visualizer. As a remote controller, the display provides a set of icons and menus, each of which generates an event for interacting with a digital product on a shared multi-display. As an augmented visualizer, the remote display provides the same view as the shared multi-display, so that the user can directly manipulate the digital model. The multi-view manager generates an adaptive view according to the capability of the display. Whenever the user touches and performs an action, an event is sent to the multi-view manager, which analyzes the event and evaluates it with respect to the functional behavior of the model. Finally, the adaptive view is created and sent to the user. In particular, the multi-view manager maintains all the different views among participants to provide private and public views.
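The public/private view logic of the multi-view manager can be illustrated with a small sketch. The class name and update policy are our own simplification of the behavior described above, not the paper's implementation:

```python
class MultiViewManager:
    """Keeps one public view synchronized across users, plus per-user private views."""
    def __init__(self, users):
        self.public_scene = "shared product view"
        self.private_scenes = {u: None for u in users}

    def interact(self, user, event, private=False):
        if private:
            # Asynchronized: re-render and send only this user's view.
            self.private_scenes[user] = f"{user}'s view after {event}"
            return {user: self.private_scenes[user]}
        # Synchronized: update the public scene and push it to everyone.
        self.public_scene = f"shared view after {event}"
        return {u: self.public_scene for u in self.private_scenes}

mgr = MultiViewManager(["reviewer_a", "reviewer_b"])
print(mgr.interact("reviewer_a", "rotate model"))                # both users updated
print(mgr.interact("reviewer_b", "inspect part", private=True))  # only reviewer_b
```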
5 System implementation and evaluation

This section explains how the proposed approach can support the tangible user interface of digital products in immersive and non-immersive multi-displays with cost-effective, robust, and efficient optical tracking. To illustrate the benefits of the proposed approach, we present several implementation results applied to multi-displays. These multi-displays can easily be set up in typical office environments with a projector, a TV, and Wiimotes. Furthermore, we present a qualitative usability study that confirms the effectiveness and convenience of the proposed approach.

5.1 System implementation

We show several case studies to demonstrate the visualization and review of digital products using simple but robust IR tangible interactions in multi-displays. In this research, OpenSceneGraph [26] is used to support the realistic rendering of digital products.

Fig. 13 Tangible and direct interactions with digital products on a projected display

Figure 13 shows the tangible user interface of digital products on a projected large display. Two Wiimotes are mounted in a fixed location while a user moves a set of IR tangibles in front of the display. Each IR tangible is captured by both Wiimote cameras, and this information is transmitted to the virtual world of digital products, which runs immersive and non-immersive applications on various multi-displays. Figure 13 also shows how to interact with the digital product of a smartphone: the user can change its color or run various multimedia through the tangible user interface.

Fig. 14 Tangible and direct interactions with digital products on a tabletop (LG LCD TV)

Figure 14 shows a similar environment running on a tabletop display. In this case, the Wiimote is mounted on the ceiling of the office or workspace.

Fig. 15 Tangible and remote interactions on large displays

Figure 15 shows remote interaction with digital products. Using an LED array and some reflective tape, the user can track objects, such as fingers, in 2D space. This makes it possible to interact with digital products by waving one's hands in the air.

Fig. 16 Tangible user interface: a rotating the digital product of a car and multimedia with one or two IR tangibles, b scaling the size of a car and multimedia with two IR tangibles

Figure 16 demonstrates further tangible user interactions with the digital products of a car and another smartphone. As the figure shows, it is possible to use multiple IR tangibles to interact with the digital model. The user can rotate the digital model or multimedia data with one or two IR tangibles, as shown in Fig. 16a. Similarly, the user can zoom in or out with two IR tangibles, as shown in Fig. 16b. These interactions show the ease of use, tangibility, and cost effectiveness of the proposed approach.

Fig. 17 Collaborative design review and view management

Figure 17 shows how participants can collaborate with each other on different multi-displays. The multi-view manager plays the main role in managing all the public and private views of the participants and in generating adaptive scenes considering each user's device context. According to the type of interaction, the result is automatically updated on the participants' displays.

One of the main reasons to use the Wiimote for interaction is that each Wiimote contains a 1,024×768 infrared camera with built-in hardware blob tracking of up to four points at 100 Hz, so the tracking is very fast and effective compared with vision-based image processing; it significantly outperforms any webcam available today. The 2D coordinates of the IR tangible as seen by each Wiimote camera are mapped by the perspective transform. However, the transformed coordinate carries no depth information: it is simply a 2D transformed coordinate rather than a 3D coordinate. For this reason, a depth value must be found, and two Wiimotes are therefore used. In this paper, a standard stereo vision technique is applied to find the depth of the IR tangible in 3D, as shown in Fig. 18, where the Z-axis is the direction towards which the cameras are pointing, T is the distance between the cameras, f is the focal length, and x_l and x_r are the x coordinates of the IR source on the left and right view planes, respectively, so that the disparity is D = x_l − x_r [27]:

$$\frac{T - (x_l - x_r)}{Z - f} = \frac{T}{Z} \;\Rightarrow\; Z = \frac{fT}{x_l - x_r}$$

Fig. 18 Stereo vision for finding the depth value of P

To evaluate the accuracy of the depth value obtained with two Wiimotes, we measured real data and calculated depth values at distances of 1 m (1,000 mm) and 2 m (2,000 mm). As shown in Fig. 19, the mean values are 1,001.547 and 2,007.15 mm, and the standard deviations are 12.12 and 14.76 mm. We found that, considering this accuracy, it is feasible to utilize the Wiimote for tangible interactions. As a further test, we need to find better configurations that minimize errors by changing the distance between the two Wiimotes. Furthermore, it is necessary to calibrate the Wiimotes [25, 27].

Fig. 19 Evaluation of the accuracy of the depth value with two Wiimotes (histograms of measured depth at 1 and 2 m)
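As a worked check of the triangulation formula Z = fT/(x_l − x_r), here is a small sketch. The camera parameters below are invented, Wiimote-like example values, not the paper's calibration:

```python
def depth_from_disparity(f_px, baseline_mm, x_left_px, x_right_px):
    """Z = f*T / (x_l - x_r): depth of one IR blob from its stereo disparity."""
    disparity = x_left_px - x_right_px
    if disparity == 0:
        raise ValueError("zero disparity: point is effectively at infinity")
    return f_px * baseline_mm / disparity

# Hypothetical setup: focal length 1380 px, Wiimotes mounted 300 mm apart.
f_px, T_mm = 1380.0, 300.0
print(depth_from_disparity(f_px, T_mm, 640.0, 226.0))  # disparity 414 px -> 1000.0 mm
```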
5.2 User study

We performed a qualitative user study of the proposed approach. Twelve participants were given a short introduction and performed several tasks in multi-displays. After the introduction and the tasks shown in Figs. 13, 14, 15, and 16, a questionnaire was given that included questions concerning ease of use, tangibility, and usability, as shown in Table 1. All responses were scored on a 5-point scale (ranging from "5, strongly agree" to "1, strongly disagree") with some comments. The collected data were analyzed with the Statistical Package for the Social Sciences (SPSS™).

Firstly, we utilized Cronbach's alpha, a coefficient of reliability commonly used as a measure of the internal consistency or reliability of the statements in a questionnaire [28]. It is generally agreed that a questionnaire is internally reliable if Cronbach's alpha α > 0.7. The calculated Cronbach's alpha was α = 0.841, which implies that the statements in the questionnaire are consistent. In addition, the mean, standard deviation, and significance were collected from the responses to analyze the usability of each statement (Fig. 20). A t test was applied to analyze the participants' responses. A significance level of p …
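For reference, Cronbach's alpha as used above can be computed directly from the item-response matrix. A minimal sketch; the response matrix below is fabricated purely for illustration and is not the study's data:

```python
import numpy as np

def cronbach_alpha(responses):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    responses = np.asarray(responses, dtype=float)  # participants x items
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative 5-point-scale answers from six participants on four statements.
demo = [[5, 4, 4, 5],
        [4, 4, 3, 4],
        [5, 5, 4, 4],
        [3, 3, 3, 4],
        [4, 5, 4, 5],
        [4, 4, 4, 4]]
print(round(cronbach_alpha(demo), 3))
```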
