Abstract:
A method of workflow monitoring and analysis includes: generating at least one three-dimensional joint coordinate according to an image, and generating at least one piece of task posture information according to the three-dimensional joint coordinate; generating at least one piece of three-dimensional track information according to movement information, and generating at least one piece of task track information according to the three-dimensional track information; and generating task semantics according to workpiece posture information, the task posture information, workpiece movement information, and the task track information.
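As a hedged illustration of the final step, the sketch below combines posture and track labels into a single task-semantics record. All names (the function, its parameters, and the record fields) are hypothetical and not taken from the patent itself.

```python
def task_semantics(workpiece_posture, task_posture, workpiece_movement, task_track):
    """Combine workpiece/task posture and movement/track information into
    one task-semantics record, loosely following the abstract's final step."""
    return {
        "posture": (workpiece_posture, task_posture),
        "track": (workpiece_movement, task_track),
    }

# Example: a pick-and-place step described by its posture and track labels.
semantics = task_semantics("upright", "grasp", "linear", "approach")
```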
Abstract:
A full-range image detecting system including a planar light source, an image capturing device, a light sensing device, a processing unit and a measuring module is provided. The planar light source projects a photo image with periodic variations onto an object. The image capturing device captures a reflective photo image reflected from the object. The light sensing device detects the coordinates of at least three measuring points on the object for fitting a plane. The processing unit calculates a phase variation of the reflective photo image after phase shift, a relative altitude of the surface profile of the object according to the phase variation, and an absolute altitude of the surface profile of the object with respect to the plane, to obtain absolute coordinate information. The measuring module detects the surface of the object according to the absolute coordinate information of the object.
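The phase-variation step resembles standard phase-shifting profilometry. The sketch below shows the classic four-step phase formula and a simplified phase-to-height conversion; it is an assumption about how the abstract's "phase variation" and "relative altitude" could be computed, not the patent's actual implementation, and `wavelength_equiv` is a hypothetical calibration constant.

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Four-step phase-shifting formula: i1..i4 are pixel intensities captured
    at projector phase shifts of 0, 90, 180, and 270 degrees."""
    return math.atan2(i4 - i2, i1 - i3)

def relative_height(phase, wavelength_equiv):
    """Simplified model: relative surface height proportional to the phase."""
    return phase * wavelength_equiv / (2 * math.pi)
```

For a fringe signal I(theta) = A + B*cos(phi + theta), the four samples recover phi exactly, which is why four-step schemes are robust to the unknown offset A and contrast B.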
Abstract:
A safety monitoring system for human-machine symbiosis is provided, including a spatial image capturing unit, an image recognition unit, a human-robot-interaction safety monitoring unit, and a process monitoring unit. The spatial image capturing unit, disposed in a working area, acquires at least two skeleton images. The image recognition unit generates at least two spatial gesture images corresponding to the at least two skeleton images, based on information of changes in position of the at least two skeleton images with respect to time. The human-robot-interaction safety monitoring unit generates a gesture distribution based on the at least two spatial gesture images and a safety distance. The process monitoring unit determines whether the gesture distribution meets a safety criterion.
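One plausible reading of the safety criterion is a minimum-distance check between every tracked gesture point and the robot. The sketch below assumes that reading; the function name, the point representation, and the single robot reference point are all illustrative assumptions.

```python
import math

def meets_safety_criterion(gesture_points, robot_point, safety_distance):
    """Assumed criterion: every tracked gesture point (x, y, z) must stay at
    least safety_distance away from the robot reference point."""
    return all(math.dist(p, robot_point) >= safety_distance for p in gesture_points)

# Example: one tracked hand position versus a robot tool-center point.
ok = meets_safety_criterion([(0.0, 0.0, 0.0)], (1.0, 0.0, 0.0), 0.5)
```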
Abstract:
A three-dimensional (3D) interactive device and an operation method thereof are provided. The 3D interactive device includes a projection unit, an image capturing unit, and an image processing unit. The projection unit projects an interactive pattern onto a surface of a body, so that a user performs an interactive trigger operation on the interactive pattern by a gesture. The image capturing unit captures a depth image within an image capturing range. The image processing unit receives the depth image and determines whether the depth image includes a hand region of the user. If so, the image processing unit performs hand geometric recognition on the hand region to obtain gesture interactive semantics. According to the gesture interactive semantics, the image processing unit controls the projection unit and the image capturing unit. Accordingly, the disclosure provides a portable, contact-free 3D interactive device.
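A minimal sketch of the hand-region decision, assuming a naive depth-band test: count depth pixels that fall inside the interactive range and require a minimum blob size. The thresholds and the nested-list depth-image format are illustrative assumptions, not the patent's method.

```python
def contains_hand_region(depth_image, near, far, min_pixels):
    """Naive test: the depth image (rows of per-pixel depths) is judged to
    contain a hand region if enough pixels lie in the [near, far] band."""
    count = sum(1 for row in depth_image for d in row if near <= d <= far)
    return count >= min_pixels

# Example: a tiny 3x3 depth image with a cluster of near pixels.
image = [[0.4, 0.4, 2.0],
         [0.4, 0.4, 2.0],
         [2.0, 2.0, 2.0]]
found = contains_hand_region(image, near=0.2, far=0.8, min_pixels=4)
```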
Abstract:
A detecting device includes a first coil, a third coil, a second coil, and a fourth coil. The first coil generates a first magnetic field on a to-be-measured object. The third coil generates a third magnetic field under the to-be-measured object. The second coil generates a second magnetic field. After the fourth coil receives the second magnetic field, a voltage is induced. The induced voltage is amplified by an amplifier circuit to drive the third coil. The directions of the currents generated by the first coil and the third coil, respectively, are the same.
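The induce-then-amplify chain can be sketched with the standard mutual-inductance relation V = -M * dI/dt followed by a gain stage. This is a textbook model used for illustration; the mutual inductance, gain, and function names are assumptions, not values from the patent.

```python
def induced_voltage(mutual_inductance, di_dt):
    """EMF induced in the pickup (fourth) coil by the second coil's changing
    current: V = -M * dI/dt (Faraday's law for coupled coils)."""
    return -mutual_inductance * di_dt

def drive_voltage(v_induced, gain):
    """Amplifier stage: scaled voltage used to drive the third coil."""
    return v_induced * gain

# Example: 1 mH mutual inductance, current ramping at 100 A/s, gain of 20.
v = drive_voltage(induced_voltage(1e-3, 100.0), 20.0)
```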