Industrial Robotics: Theory, Modelling and Control (Part 14)

• finger_shape(G) = {number, shape_i, size_i}, i = 1, ..., p, expresses the shape of the gripper in terms of its number p of fingers and of the shape and dimensions of each finger. Rectangular-shaped fingers are considered; their size is given by "width" and "height".
• fingers_location(G, O) = {x_ci(O), y_ci(O), rz_i(O)}, i = 1, ..., p, indicates the relative location of each finger with respect to the object's centre of mass and minimum inertia axis (MIA). At training time this description is created for the object's model; at run time it is updated by the vision system for any recognized instance of the prototype.
• fingers_viewing_pose(G, context_i), i = 1, ..., p, indicates how "invisible" fingers are to be treated; fingers are "invisible" if they are outside the field of view.
• Gs_m(G, O), grip = 1, ..., k, are the k distinct gripper-object grasping models trained a priori, as possible alternatives to face foreground context situations at run time. A collision-free grasping transformation CF_i(Gs_m, O) will be selected at run time from one of the k grips, after checking that all pixels belonging to FGP_m_i (the projection of the gripper's fingerprints onto the image plane (x_vis, y_vis), in the O-grasping location) cover only background-coloured pixels.

To provide secure, collision-free access to objects, the following robot-vision sequence must be executed:

1. Training the k sets of parameters of the multiple fingerprints model MFGP_m(G, O) for the gripper G and the object class O, relative to the k learned grasping styles Gs_m_i(G, O), i = 1, ..., k.
2. Installing the multiple fingerprint model MFGP_m(G, O), which defines the shape, position and interpretation (viewing) of the robot gripper for clear-grip tests, by including the model parameters in a database available at run time. This must be done at the start of application programs, prior to any image acquisition and object locating.
3. Automatically performing the clear-grip test whenever a prototype is recognized and located at run time and grips FGP_m_i, i = 1, ..., k, have been defined a priori for it.
4. Calling on line the grasping parameters trained in the Gs_m_i(G, O) model that corresponds to the first grip FGP_m_i found to be clear.

The first step of this robot-vision sequence prepares off line the data needed to position at run time two Windows Region of Interest (WROI) around the current object, invariant to its visually computed location and corresponding to the two gripper fingerprints. This data refers to the size, position and orientation of the gripper's fingerprints, and is based on:

• the number and dimensions of the gripper's fingers: 2-parallel-fingered grippers were considered, each finger having a rectangular shape of dimensions wd_g, ht_g;
• the grasping location of the fingers relative to the class model of the objects of interest.

This last information is obtained by learning a grasping transformation for a class of objects (e.g. "LA"), and is described with the help of Fig. 9.
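The trained descriptions above lend themselves to a simple run-time organization. The following Python sketch, with hypothetical names (Finger, GraspingModel, MultipleFingerprintModel, select_clear_grip) that are not taken from the V+/AdeptVision implementation, groups the model parameters and returns the first grip that passes a supplied clear-grip test, mirroring steps 3 and 4 of the sequence.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Finger:
    width: float    # finger_shape: rectangular finger, "width" x "height" [mm]
    height: float
    x_c: float      # fingers_location: position relative to the object's
    y_c: float      # centre of mass C and minimum inertia axis (MIA)
    rz: float       # finger orientation relative to the MIA

@dataclass
class GraspingModel:
    """One a-priori trained grasping style Gs_m_i(G, O)."""
    fingers: List[Finger]
    viewing_pose_context: str  # how "invisible" (out-of-view) fingers are treated

@dataclass
class MultipleFingerprintModel:
    """MFGP_m(G, O): the k alternative grasping styles for one object class."""
    object_class: str
    grips: List[GraspingModel]

def select_clear_grip(model: MultipleFingerprintModel,
                      clear_grip_test: Callable[[GraspingModel], bool]
                      ) -> Optional[GraspingModel]:
    """Steps 3-4 of the robot-vision sequence: return the first grip whose
    fingerprint projections FGP_m_i cover only background-coloured pixels."""
    for grip in model.grips:
        if clear_grip_test(grip):
            return grip
    return None  # no clear grip found: grasping is not authorized
```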
The following frames and relative transformations are considered:

• Frames: (x_0, y_0): the robot's base (world) frame; (x_vis, y_vis): attached to the image plane; (x_g, y_g): attached to the gripper at its end-point T; (x_loc, y_loc): default object-attached frame, with x_loc ≡ MIA (the part's minimum inertia axis); (x_obj, y_obj): rotated object-attached frame, with x_obj ≡ dir(C, G), where C(x_c, y_c) is the object's centre of mass and G = proj_(x_vis, y_vis) T.
• Relative transformations: to.cam[cam]: describes, for the given camera, the location of the vision frame with respect to the robot's base frame; vis.loc: describes the location of the default object-attached frame with respect to the vision frame; vis.obj: describes the location of the rotated object-attached frame with respect to the vision frame; pt.rob: describes the location of the gripper frame with respect to the robot frame; pt.vis: describes the location of the gripper frame with respect to the vision frame.

As a result of this learning stage, which uses vision and the robot's joint encoders as measuring devices, a grasping model GP_m(G, "LA") = {d.cg, alpha, z_off, rz_off} is derived, relative to the object's centre of mass C and minimum inertia axis MIA (C and MIA are also available at run time):

d.cg = dist(C, G),  alpha = ∠(MIA, dir(C, G)),  z_off = dist(T, G),  rz_off = ∠(x_g, dir(C, G)).

A clear grip test is executed at run time to check the collision-free grasping of a recognized and located object, by projecting the gripper's fingerprints onto the image plane (x_vis, y_vis) and verifying whether they "cover" only background pixels, which means that no other object lies close to the area where the gripper's fingers will be positioned by the current robot motion command. A negative result of this test does not authorize the grasping of the object.

For this test, two WROIs are placed in the image plane, exactly over the areas occupied by the projections of the gripper's fingerprints for the desired, object-relative grasping location computed from GP_m(G, "LA"); the position (C) and orientation (MIA) of the recognized object must be available. From the invariant, part-related data alpha, rz_off, wd_LA, wd_g, ht_g and d.cg, the current coordinates x_G, y_G of the point G and the current orientation angle.grasp of the gripper slide axis relative to the vision frame are first computed at run time.

Figure 9. Frames and relative transformations used to teach the GP_m(G, "LA") parameters

The part's orientation angle.aim = ∠(x_vis, MIA) returned by vision is added to the learned alpha:

beta = ∠(x_vis, dir(C, G)) = angle.aim + alpha                                   (5)

Once the part is located, the coordinates x_C, y_C of its gravity centre C are available from vision. Using them and beta, the coordinates x_G, y_G of G are computed as:

x_G = x_C − d.cg · cos(beta),  y_G = y_C − d.cg · sin(beta)                      (6)

Now the value of angle.grasp = ∠(x_g, x_vis), for the object's current orientation and accounting for rz_off from the desired, learned grasping model, is obtained as angle.grasp = beta + rz_off.
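Assuming the learned parameters d.cg, alpha and rz_off and the vision outputs x_C, y_C and angle.aim are all expressed in the vision frame, with angles in radians, relations (5) and (6) and the slide-axis orientation can be sketched as follows (the function name is illustrative, not part of the original software):

```python
import math

def grasp_point_and_orientation(x_c, y_c, angle_aim, d_cg, alpha, rz_off):
    """Eqs. (5)-(6): locate G and the gripper slide-axis angle in the vision frame."""
    beta = angle_aim + alpha                 # (5): angle between x_vis and dir(C, G)
    x_g = x_c - d_cg * math.cos(beta)        # (6): G lies at distance d.cg from C
    y_g = y_c - d_cg * math.sin(beta)
    angle_grasp = beta + rz_off              # current orientation of the gripper slide axis
    return x_g, y_g, angle_grasp
```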
Two image areas, corresponding to the projections of the two fingerprints on the image plane, are next specified using two WROI operations. Using the geometry data from Fig. 9, and denoting by dg the offset between the end-tip point projection G and the fingerprint centres CW_i, i = 1, 2,

dg = wd_g / 2 + wd_LA / 2,

the positions of the rectangular image areas "covered" by the fingerprints projected on the image plane, in the desired part-relative grasping location, are computed at run time according to (7). Their common orientation in the image plane is given by angle.grasp.

x_cw1 = x_G − dg · cos(angle.grasp);  x_cw2 = x_G + dg · cos(angle.grasp)        (7)
y_cw1 = y_G − dg · sin(angle.grasp);  y_cw2 = y_G + dg · sin(angle.grasp)

The image statistics returned are the total numbers of non-zero (background) pixels found in each of the two windows, superposed onto the areas covered by the fingerprint projections in the image plane around the object. The clear grip test checks these values returned by the two WROI-generating operations, corresponding to the number of background pixels not occupied by other objects close to the current one (counted exactly in the gripper's fingerprint projection areas), against the total number of pixels corresponding to the surfaces of the rectangular fingerprints. If the difference between the compared values is less than an imposed error err for both fingerprint windows, the grasping is authorized:

If |ar1 − ar.fngprt| ≤ err AND |ar2 − ar.fngprt| ≤ err, a clear grip of the object is authorized; object tracking proceeds by continuously altering the target location on the vision belt until the robot motion is completed. Otherwise, another object is too close to the current one and grasping is not authorized.

Here, ar.fngprt = (wd_g · ht_g) · (pix.to.mm)^2 / XY_scale is the fingerprint's area [raw pixels], computed from the camera-robot calibration data: pix.to.mm (the number of image pixels per mm) and XY_scale (the x/y ratio of each pixel).
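A compact sketch of the window placement (7) and of the authorization rule is given below; ar1 and ar2 stand for the background-pixel counts returned by the two WROI operations, and the area normalisation by pix.to.mm and XY_scale follows the reconstruction above, so it should be read as an assumption rather than as the original V+ code:

```python
import math

def fingerprint_windows(x_g, y_g, angle_grasp, wd_g, wd_la):
    """Eq. (7): centres CW1, CW2 of the two fingerprint windows in the image plane."""
    dg = wd_g / 2 + wd_la / 2          # offset between G and each fingerprint centre
    dx = dg * math.cos(angle_grasp)
    dy = dg * math.sin(angle_grasp)
    return (x_g - dx, y_g - dy), (x_g + dx, y_g + dy)

def clear_grip(ar1, ar2, wd_g, ht_g, pix_to_mm, xy_scale, err):
    """Authorize grasping only if both windows contain (almost) only background pixels."""
    ar_fngprt = (wd_g * ht_g) * pix_to_mm ** 2 / xy_scale   # fingerprint area [raw pixels]
    return abs(ar1 - ar_fngprt) <= err and abs(ar2 - ar_fngprt) <= err
```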
5. Conclusion

The robot motion control algorithms with guidance vision for tracking and grasping objects moving on conveyor belts, modelled with belt variables and a 1-d.o.f. robotic device, have been tested on a robot-vision system composed of a Cobra 600TT manipulator, a C40 robot controller equipped with an EVI vision processor from Adept Technology, a parallel two-fingered RIP6.2 gripper from CCMOP, a "large-format" stationary camera (1024x1024 pixels) looking down at the conveyor belt, and a GEL-209 magnetic encoder with 1024 pulses per revolution from Leonard Bauer. The encoder's output is fed to one of the EJI cards of the robot controller, the conveyor belt being "seen" as an external device.

Image acquisition used strobe light in synchronous mode to avoid acquiring blurred images of objects moving on the conveyor belt. The strobe light is triggered each time an image acquisition and processing operation is executed at run time. Image acquisitions are synchronised with external events of the type "a part has completely entered the belt window"; because these events generate on-off photocell signals, they trigger the fast digital-interrupt line of the robot controller to which the photocell is physically connected. Hence, the VPICTURE operations always wait on interrupt signals, which significantly improves the response time to external events. Because a fast line was used, the most unfavourable delay between the triggering of this line and the request for image acquisition is only 0.2 milliseconds.

The effects of this most unfavourable 0.2 millisecond time delay upon the integrity of object images have been analysed and tested for two modes of strobe light triggering:

• Asynchronous triggering with respect to the read cycle of the video camera, i.e. as soon as an image acquisition request appears. For a 51.2 cm width of the image field and a line resolution of 512 pixels, the pixel width is 1 mm. For a 2.5 m/sec high-speed motion of objects on the conveyor belt, the most unfavourable delay of 0.2 milliseconds corresponds to a displacement of only half a pixel (so at most one object pixel might disappear during the dist travel defined above): (0.0002 sec) · (2500 mm/sec) / (1 mm/pixel) = 0.5 pixels.
• Synchronous triggering with respect to the read cycle of the camera, inducing a variable time delay between the image acquisition request and the strobe light triggering. The most unfavourable delay was in this case 16.7 milliseconds, which may cause, for the same image field and belt speed, a potential disappearance of 41.75 pixels from the camera's field of view (downstream of the dwnstr_lim limit of the belt window).
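The pixel displacements quoted for the two triggering modes follow from a one-line calculation, sketched here under the stated assumptions of a 1 mm pixel width and a 2.5 m/sec belt speed:

```python
def displacement_in_pixels(delay_s, belt_speed_mm_s, pixel_width_mm=1.0):
    """Object displacement accumulated between the acquisition request
    and the strobe light triggering, expressed in image pixels."""
    return delay_s * belt_speed_mm_s / pixel_width_mm

print(displacement_in_pixels(0.0002, 2500))   # asynchronous triggering -> 0.5 pixels
print(displacement_in_pixels(0.0167, 2500))   # synchronous triggering  -> 41.75 pixels
```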
Consequently, the bigger the dimensions of the parts travelling on the conveyor belt, the higher the risk of losing pixels situated in downstream areas. Fig. 10 shows statistics for the sum of:

• visual locating errors: errors in locating the object relative to the image frame (x_vis, y_vis); in such cases the request for motion planning is not issued;
• motion planning errors: robot destinations evaluated during motion planning as lying downstream of downstr_lim, and hence not authorised, as a function of the object's dimension (length long_max.obj along the minimum inertia axis) and of the belt speed (four high speed values were considered: 0.5 m/sec, 1 m/sec, 2 m/sec and 3 m/sec).

As can be observed, at the very high motion speed of 3 m/sec and for parts longer than 35 cm, more than 16% of object locating attempts and more than 7% of planned robot destinations (destinations outside the CBW) for visually located parts were unsuccessful, out of a total of 250 experiments.

Figure 10. Error statistics for visual object locating and robot motion planning (object locating errors [%] and planning errors [%] versus long_max.obj [cm], for belt speeds of 0.5, 1, 2 and 3 m/sec)

The clear grip check method presented above was implemented in the V+ programming environment with the AVI vision extension, and tested on the same robot-vision platform containing an Adept Cobra 600TT SCARA-type manipulator, a 3-belt flexible feeding system Adept FlexFeeder 250 and a stationary, down-looking matrix camera Panasonic GP MF650 inspecting the vision belt. The vision belt on which parts were travelling and presented to the camera was positioned for convenient robot access within a window of 460 mm.

Experiments for collision-free part access on a randomly populated conveyor belt were carried out at several belt speeds, in the range from 5 to 180 mm/sec. Table 1 shows the correspondence between the belt speed and the maximum time interval from the visual detection of a part to its collision-free grasping, upon checking [#] sets of pre-taught grasping models Gs_m_i(G, O), i = 1, ..., #.

Belt speed [mm/sec]          5     10    30    50    100   180
Grasping time (max) [sec]    1.4   1.6   1.9   2.0   2.3   2.5
Clear grips checked [#]      4     4     4     4     2     1

Table 1. Correspondence between belt speed and collision-free part grasping time

28. Visual Feedback Control of a Robot in an Unknown Environment (Learning Control Using Neural Networks)

Xiao Nan-Feng and Saeid Nahavandi

1. Introduction

When a robot has no a priori knowledge about the object to be traced or about the operating environment, a vision sensor must be attached to the robot so that it can recognize the object and the environment. It is also desirable that the robot have learning ability, so that it can effectively improve the trace operation in the unknown environment.

Many methods (1)-(11) have been proposed so far to control a robot with a camera so that it traces an object and completes a non-contact operation in an unknown environment. For example, in order to automate a sealing operation by a robot, Hosoda, K. (1) proposed a method in which the sealing operation is taught off line beforehand; a CCD camera and slit lasers are then used to detect the taught sealing line and to correct on line the joint angles of the robot during the sealing operation.
However, in those methods (1)-(3), only one or two image feature points of the sealing line were searched per image processing period, and the goal trajectory of the robot was generated using interpolation. Moreover, those methods must perform tedious CCD camera calibration and complicated coordinate transformations. Furthermore, the synchronization problem between the image processing system and the robot control system, as well as the influence of disturbances caused by joint friction and the gravity of the robot, still need to be solved.

In this chapter, a visual feedback control method is presented for a robot to trace a curved line in an unknown environment. Firstly, the necessary conditions are derived for a one-to-one mapping from the image feature domain of the curved line to the joint angle domain of the robot, and a multilayer neural network (abbreviated to NN hereafter) is introduced to learn the mapping. Secondly, a method is proposed to generate the goal trajectory on line by computing the image feature parameters of the curved line. Thirdly, a multilayer neural network-based on-line learning algorithm is developed for the present visual feedback control. Lastly, the present approach is applied to trace a curved line using a 6-DOF industrial robot with a CCD camera installed on its end-effector. The main advantage of the present approach is that it does not require tedious CCD camera calibration or complicated coordinate transformations.

Figure 1. Vision-based trace operation

Figure 2. Image features and mapping relation

[...]
