Patent classifications
G05B2219/37567
Controlling a robot in the presence of a moving object
A method, system, and one or more computer-readable storage media for controlling a robot in the presence of a moving object are provided herein. The method includes capturing a number of frames from a three-dimensional camera system and analyzing a frame to identify a connected object. The frame is compared to a previous frame to identify a moving connected object (MCO). If an unexpected MCO is in the frame, a determination is made as to whether the unexpected MCO is in an actionable region. If so, the robot is instructed to take an action.
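The frame-differencing pipeline this abstract describes can be illustrated with a minimal sketch, assuming depth frames arrive as 2D NumPy arrays; the function names, the depth threshold, and the rectangular actionable region are all illustrative assumptions, not details from the patent:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling on a boolean mask (BFS flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        queue = deque([start])
        labels[start] = current
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

def moving_objects(frame, prev_frame, depth_delta=0.05):
    """Flag pixels whose depth changed between frames, then group them into
    connected regions -- moving connected objects (MCOs). Returns one
    bounding box (r0, c0, r1, c1) per MCO."""
    moved = np.abs(frame - prev_frame) > depth_delta
    labels, n = label_components(moved)
    boxes = []
    for i in range(1, n + 1):
        rs, cs = np.nonzero(labels == i)
        boxes.append((rs.min(), cs.min(), rs.max(), cs.max()))
    return boxes

def in_actionable_region(box, region):
    """Axis-aligned overlap test between an MCO box and the actionable region."""
    r0, c0, r1, c1 = box
    ar0, ac0, ar1, ac1 = region
    return not (r1 < ar0 or r0 > ar1 or c1 < ac0 or c0 > ac1)
```

An overlap between a detected MCO box and the actionable region would then trigger the robot action the abstract mentions.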
Method of controlling robot, method of teaching robot, and robot system
A robot system includes a robot, a vision sensor, a controller, and an input unit. The vision sensor is configured to measure a feature point and obtain a measured coordinate value. The controller is configured to control the robot. The input unit is configured to receive input from a user directed to the controller. The controller obtains, via the input unit, setting information data on a determination point which is different from the feature point. Using a coordinate value of the determination point and the measured coordinate value, the robot system determines whether the robot has reached a target position and orientation.
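One way to read the determination-point check is sketched below: the determination point is inferred from the measured feature point plus a user-supplied offset, then compared against its expected coordinate. The offset representation and tolerance are assumptions for illustration, not details from the abstract:

```python
import numpy as np

def at_target(measured_feature, offset_to_determination,
              expected_determination, tol=1e-3):
    """Predict the determination point from the measured feature-point
    coordinate plus a known offset, and judge whether the robot has
    reached the target position by comparing against the expected
    determination-point coordinate."""
    predicted = (np.asarray(measured_feature, float)
                 + np.asarray(offset_to_determination, float))
    return bool(np.linalg.norm(predicted - np.asarray(expected_determination, float)) <= tol)
```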
SYSTEM AND METHOD FOR AUGMENTING A VISUAL OUTPUT FROM A ROBOTIC DEVICE
A method for visualizing data generated by a robotic device is presented. The method includes displaying an intended path of the robotic device in an environment. The method also includes displaying a first area in the environment identified as drivable for the robotic device. The method further includes receiving an input to identify a second area in the environment as drivable and transmitting the second area to the robotic device.
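The receive-and-merge step can be sketched on a toy occupancy-grid representation; the rectangle encoding of the user's input and the boolean drivable mask are assumptions for illustration:

```python
import numpy as np

def add_drivable_area(drivable_mask, user_area):
    """Merge a user-identified second area into the robot's drivable-area
    mask before transmitting it back to the robotic device.
    `user_area` is an (r0, c0, r1, c1) rectangle marked in the display."""
    updated = drivable_mask.copy()
    r0, c0, r1, c1 = user_area
    updated[r0:r1 + 1, c0:c1 + 1] = True
    return updated
```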
VIRTUAL TEACH AND REPEAT MOBILE MANIPULATION SYSTEM
A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
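The relative transform between matched descriptors can be sketched as a least-squares rigid alignment (the Kabsch/Procrustes method) over corresponding image points; this is a standard technique standing in for whatever estimator the patent actually uses, and the point correspondences are assumed given:

```python
import numpy as np

def relative_transform(task_pts, teach_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    task-image points onto their matched teaching-image points."""
    task_pts = np.asarray(task_pts, float)
    teach_pts = np.asarray(teach_pts, float)
    mu_a, mu_b = task_pts.mean(axis=0), teach_pts.mean(axis=0)
    # Cross-covariance of the centered correspondences.
    H = (task_pts - mu_a).T @ (teach_pts - mu_b)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_b - R @ mu_a
    return R, t
```

The recovered (R, t) would then parameterize the behaviors replayed from the teaching image.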
TRAINING METHODS FOR DEEP NETWORKS
A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of common objects between the pairs of 3D images. The method also includes using the reference image from the training of the neural network to determine correlations for identifying detected objects in future images.
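The pair-generation step resembles data augmentation: two views of the same scene are produced by artificially perturbing an environment parameter. The abstract leaves the adjusted parameters unspecified; brightness is one assumed example, and the generator below is a toy stand-in with no neural network involved:

```python
import numpy as np

def make_training_pair(base_image, rng, brightness_range=0.2):
    """Produce a training pair: the original view and a copy of the same
    scene with an artificially adjusted environment parameter (here,
    a random brightness scaling, clipped to valid intensities)."""
    scale = 1.0 + rng.uniform(-brightness_range, brightness_range)
    return base_image, np.clip(base_image * scale, 0.0, 1.0)
```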
AUTONOMOUS TASK PERFORMANCE BASED ON VISUAL EMBEDDINGS
A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
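Keyframe selection can be sketched as a nearest-neighbor search over stored views; mean absolute pixel difference is used here as a simple stand-in for the patent's pixel-set matching criterion:

```python
import numpy as np

def closest_keyframe(current, keyframes):
    """Pick the stored keyframe whose pixels best match the current view
    (smallest mean absolute difference), returning its index and score.
    The robot would then perform the task associated with that keyframe."""
    scores = [float(np.mean(np.abs(current - k))) for k in keyframes]
    idx = int(np.argmin(scores))
    return idx, scores[idx]
```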
Optimized Placement Of Product On Flat-Line Conveyor
A computer-controlled system of placing product on a conveyor belt entrance to an industrial tool is based upon the combination of a 3D vision system (used to capture image data defining the surface area of a next product(s) to be placed) and a processor that is configured to determine an optimum location for placing that next product(s) on the conveyor belt. The processor then instructs a robotic arm to pick up and place the product at the processor-defined optimum location. Depending on the specific task to be performed by the industrial tool, the detailed analysis used to determine the optimum location will differ.
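The abstract notes that the placement analysis differs by task; as one illustrative policy, a best-fit gap search across the belt width can be sketched in a one-dimensional form (the interval representation and best-fit criterion are assumptions):

```python
def optimum_placement(belt_width, occupied, item_width):
    """Scan the free gaps across the belt and return the left edge of the
    tightest gap that fits the item (best fit wastes the least belt area).
    `occupied` is a list of (left, right) intervals already in use.
    Returns None when no gap can accept the item."""
    gaps, cursor = [], 0.0
    for left, right in sorted(occupied):
        if left - cursor >= item_width:
            gaps.append((cursor, left))
        cursor = max(cursor, right)
    if belt_width - cursor >= item_width:
        gaps.append((cursor, belt_width))
    if not gaps:
        return None
    best = min(gaps, key=lambda g: g[1] - g[0])
    return best[0]
```

The processor-defined location from such an analysis is what the robotic arm would be instructed to place the product at.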
MACHINE LEARNING DEVICE, ROBOT SYSTEM, AND MACHINE LEARNING METHOD FOR LEARNING WORKPIECE PICKING OPERATION
A machine learning device learns an operation of a robot for picking up, with a hand unit, any of a plurality of workpieces placed in a random fashion, including in a bulk-loaded state. The device includes a state variable observation unit that observes a state variable representing a state of the robot, including data output from a three-dimensional measuring device that obtains a three-dimensional map for each workpiece; an operation result obtaining unit that obtains the result of a picking operation in which the robot picks up a workpiece with the hand unit; and a learning unit that, upon receiving output from the state variable observation unit and the operation result obtaining unit, learns a manipulated variable, including command data for commanding the robot to perform the picking operation, in association with the state variable of the robot and the result of the picking operation.
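The learning unit's associate-state-with-result loop can be sketched as a simple bandit-style success tracker; the discretized state labels, command names, and running-success-rate rule are toy assumptions standing in for whatever learning algorithm the patent actually claims:

```python
from collections import defaultdict

class PickingLearner:
    """Toy stand-in for the abstract's learning unit: associates a
    discretized state variable and a candidate command with the observed
    picking result, keeping a running success rate per (state, command)."""

    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])  # (state, cmd) -> [successes, trials]

    def observe(self, state, command, success):
        """Record one picking result from the operation result obtaining unit."""
        rec = self.counts[(state, command)]
        rec[0] += int(success)
        rec[1] += 1

    def best_command(self, state, commands):
        """Pick the command with the highest observed success rate."""
        def rate(cmd):
            s, n = self.counts[(state, cmd)]
            return s / n if n else 0.5  # optimistic prior for untried commands
        return max(commands, key=rate)
```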
CALIBRATION METHOD AND DEVICE FOR ROBOTIC ARM SYSTEM
A calibration method for a robotic arm system is provided. The method includes: capturing, by a visual device, an image of a calibration object fixed to a front end of the robotic arm, wherein a pedestal of the robotic arm has a pedestal coordinate system, the front end of the robotic arm has a first relative relationship with the pedestal, and the front end of the robotic arm has a second relative relationship with the calibration object; receiving the image and obtaining three-dimensional feature data of the calibration object from the image by a computing device; and computing a third relative relationship between the visual device and the pedestal according to the three-dimensional feature data, the first relative relationship, and the second relative relationship, so as to calibrate the position error between the physical location of the calibration object and the predicted location generated by the visual device.
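The chain of relative relationships can be sketched with 4x4 homogeneous transforms: the pedestal-to-object transform is reachable two ways (through the arm, and through the camera), so equating them yields the camera-to-pedestal relationship. The transform naming below is an interpretation of the abstract, not its exact notation:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_to_pedestal(T_ped_flange, T_flange_obj, T_cam_obj):
    """Chain the first relationship (pedestal->front end) and the second
    (front end->calibration object) through the object seen by the camera:
    T_ped_cam = T_ped_flange @ T_flange_obj @ inv(T_cam_obj)."""
    return T_ped_flange @ T_flange_obj @ np.linalg.inv(T_cam_obj)
```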