Vision
This file presents the Vision API: its command functions, enums, Niryo topics and namedtuples.
Vision - Command functions
This section references all existing functions to control your robot arm, which include:
Getting camera image
Detecting objects
Managing workspaces
All functions to control the robot are accessible via an instance of the class NiryoRobot
robot = NiryoRobot(<robot_ip_address>)
robot.vision.vision_pick("workspace_1", 0.0, ObjectShape.ANY, ObjectColor.ANY)
robot.vision.detect_object("workspace_1", ObjectShape.ANY, ObjectColor.ANY)
...
See the Examples section for more examples.
List of functions subsections:
- class Vision(client, arm=None, tool=None)[source]
Vision robot functions
Example:
ros_instance = NiryoRos("10.10.10.10")  # Hotspot
vision_interface = Vision(ros_instance)
- Parameters
client (NiryoRos) – Niryo ROS client
Camera functions
- property Vision.get_img_compressed
Get the image from the video stream in a compressed format. Use
uncompress_image
from the vision package to uncompress it.
Examples:
import pyniryo

img_compressed = vision.get_img_compressed()
camera_info = vision.get_camera_intrinsics()
img = pyniryo.uncompress_image(img_compressed)
img = pyniryo.undistort_image(img, camera_info.intrinsics, camera_info.distortion)
- Returns
string containing a JPEG compressed image
- Return type
- property Vision.get_camera_intrinsics
Get the calibration object: camera intrinsics and distortion coefficients. The topic returns a namedtuple(intrinsics: list[list[float]], distortion: list[list[float]])
Examples:
vision.get_camera_intrinsics()
vision.get_camera_intrinsics.value

def camera_info_callback(camera_info):
    print(camera_info.intrinsics)
    print(camera_info.distortion)

vision.get_camera_intrinsics.subscribe(camera_info_callback)
vision.get_camera_intrinsics.unsubscribe()
- Return type
- property Vision.get_image_parameters
Return the NiryoTopic to get the last stream image parameters: brightness factor, contrast factor and saturation factor. The topic returns a namedtuple(brightness_factor: float, contrast_factor: float, saturation_factor: float)
Brightness factor: How much to adjust the brightness. 0.5 will give a darkened image, 1 will give the original image while 2 will enhance the brightness by a factor of 2.
Contrast factor: a factor of 1 gives the original image. Moving the factor towards 0 makes the image greyer, while a factor > 1 increases the contrast of the image.
Saturation factor: 0 will give a black and white image, 1 will give the original image while 2 will enhance the saturation by a factor of 2.
Examples:
vision.get_image_parameters()
vision.get_image_parameters.value

def image_parameters_callback(image_parameters):
    print(image_parameters.brightness_factor)
    print(image_parameters.contrast_factor)
    print(image_parameters.saturation_factor)

vision.get_image_parameters.subscribe(image_parameters_callback)
vision.get_image_parameters.unsubscribe()
- Returns
ImageParameters namedtuple containing the brightness factor, contrast factor and saturation factor.
- Return type
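The three factors above behave like standard image-enhancement factors. As an illustration of how such factors act on pixel values (a minimal sketch in pure Python, not the robot's actual stream-processing implementation):

```python
def adjust_brightness(value, factor):
    """Scale a 0-255 channel value; 0.5 darkens, 1.0 is identity, 2.0 doubles brightness."""
    return max(0, min(255, round(value * factor)))

def adjust_contrast(value, factor, mean):
    """Pull the value towards the image mean (factor 0) or away from it (factor > 1)."""
    return max(0, min(255, round(mean + (value - mean) * factor)))

def adjust_saturation(channel, factor, grey):
    """Blend between the greyscale value (factor 0) and the original channel (factor 1)."""
    return max(0, min(255, round(grey + (channel - grey) * factor)))
```

With factor 1.0 each function returns the original value, matching the behaviour described for the stream parameters.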
Detection functions
- Vision.get_target_pose_from_cam(workspace_name, height_offset=0.0, shape=<ObjectShape.ANY: 'ANY'>, color=<ObjectColor.ANY: 'ANY'>)[source]
First detects the specified object using the camera, then returns the robot pose at which the object can be picked with the current tool.
- Parameters
- Returns
object_found, object_pose, object_shape, object_color
- Return type
(bool, PoseObject, ObjectShape, ObjectColor)
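A short usage sketch showing how the returned tuple can be unpacked (the IP address and workspace name are hypothetical placeholders):

```python
from pyniryo2 import NiryoRobot, ObjectShape, ObjectColor

robot = NiryoRobot("10.10.10.10")  # hypothetical robot IP

obj_found, obj_pose, obj_shape, obj_color = robot.vision.get_target_pose_from_cam(
    "workspace_1",               # hypothetical workspace name
    height_offset=0.0,
    shape=ObjectShape.CIRCLE,
    color=ObjectColor.RED,
)
if obj_found:
    # Move the current tool to the computed picking pose
    robot.arm.move_pose(obj_pose)
```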
- Vision.vision_pick(workspace_name, height_offset=0.0, shape=<ObjectShape.ANY: 'ANY'>, color=<ObjectColor.ANY: 'ANY'>)[source]
Picks the specified object from the workspace. This function has multiple phases:
1. detect the object using the camera
2. prepare the current tool for picking
3. approach the object
4. move down to the correct picking pose
5. actuate the current tool
6. lift the object
Example:
robot = NiryoRobot(ip_address="x.x.x.x")
robot.arm.calibrate_auto()
robot.arm.move_pose(<observation_pose>)
obj_found, shape_ret, color_ret = robot.vision.vision_pick(<workspace_name>,
                                                           height_offset=0.0,
                                                           shape=ObjectShape.ANY,
                                                           color=ObjectColor.ANY)
- Parameters
- Returns
object_found, object_shape, object_color
- Return type
(bool, ObjectShape, ObjectColor)
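The phases above can be chained into a simple pick-and-place sequence. A sketch, assuming a reachable robot; the IP address, observation pose, place pose and workspace name are hypothetical:

```python
from pyniryo2 import NiryoRobot, ObjectShape, ObjectColor, PoseObject

robot = NiryoRobot("10.10.10.10")  # hypothetical robot IP

# Hypothetical poses: observe the workspace from above, then drop the object aside
observation_pose = PoseObject(0.2, 0.0, 0.3, 0.0, 1.57, 0.0)
place_pose = PoseObject(0.0, -0.2, 0.1, 0.0, 1.57, 0.0)

robot.arm.calibrate_auto()
robot.arm.move_pose(observation_pose)

obj_found, shape, color = robot.vision.vision_pick("workspace_1",
                                                   shape=ObjectShape.ANY,
                                                   color=ObjectColor.ANY)
if obj_found:
    robot.arm.move_pose(place_pose)
    robot.tool.release_with_tool()  # drop the picked object

robot.end()
```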
- Vision.move_to_object(workspace_name, height_offset, shape, color)[source]
Same as get_target_pose_from_cam, but directly moves to the detected position.
- Parameters
- Returns
object_found, object_shape, object_color
- Return type
(bool, ObjectShape, ObjectColor)
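A brief usage sketch (IP address and workspace name are hypothetical). Note that all four arguments are required for this function:

```python
from pyniryo2 import NiryoRobot, ObjectShape, ObjectColor

robot = NiryoRobot("10.10.10.10")  # hypothetical robot IP

# Hover 5 cm above the first red circle found in the workspace
obj_found, shape, color = robot.vision.move_to_object("workspace_1",
                                                      height_offset=0.05,
                                                      shape=ObjectShape.CIRCLE,
                                                      color=ObjectColor.RED)
```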
- Vision.detect_object(workspace_name, shape=<ObjectShape.ANY: 'ANY'>, color=<ObjectColor.ANY: 'ANY'>)[source]
Detect an object in the workspace and return its pose and characteristics.
- Parameters
workspace_name (str) – name of the workspace
shape (ObjectShape) – shape of the target
color (ObjectColor) – color of the target
- Returns
object_found, object_pose, object_shape, object_color
- Return type
(bool, PoseObject, str, str)
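Unlike vision_pick, this function only reports what it sees, without moving the robot. A usage sketch (IP address and workspace name are hypothetical):

```python
from pyniryo2 import NiryoRobot, ObjectShape, ObjectColor

robot = NiryoRobot("10.10.10.10")  # hypothetical robot IP

obj_found, obj_pose, shape, color = robot.vision.detect_object(
    "workspace_1",               # hypothetical workspace name
    shape=ObjectShape.SQUARE,
    color=ObjectColor.BLUE,
)
if obj_found:
    print("Found a", color, shape, "at", obj_pose)
```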
Workspace functions
- Vision.get_target_pose_from_rel(workspace_name, height_offset, x_rel, y_rel, yaw_rel)[source]
Given a pose (x_rel, y_rel, yaw_rel) relative to a workspace, this function returns the robot pose in which the current tool will be able to pick an object at this pose.
The height_offset argument (in m) defines how high the tool will hover over the workspace. If height_offset = 0, the tool will nearly touch the workspace.
- Parameters
- Returns
target_pose
- Return type
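Here x_rel and y_rel are coordinates relative to the workspace (0 to 1 across its extent) and yaw_rel is an orientation relative to the workspace frame. A sketch, with hypothetical IP address and workspace name:

```python
from pyniryo2 import NiryoRobot

robot = NiryoRobot("10.10.10.10")  # hypothetical robot IP

# Pose above the centre of the workspace, hovering 5 cm over it
center_pose = robot.vision.get_target_pose_from_rel("workspace_1",
                                                    height_offset=0.05,
                                                    x_rel=0.5,
                                                    y_rel=0.5,
                                                    yaw_rel=0.0)
robot.arm.move_pose(center_pose)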
- Vision.save_workspace_from_robot_poses(workspace_name, pose_origin, pose_2, pose_3, pose_4)[source]
Save a workspace by giving the robot poses pointing at its 4 corners with the calibration tip. Corners must be given in the correct order. The markers' poses will be deduced from these poses.
Poses should be either a list [x, y, z, roll, pitch, yaw] or a PoseObject
- Parameters
workspace_name (str) – workspace name
pose_origin (Union[list[float], PoseObject]) –
pose_2 (Union[list[float], PoseObject]) –
pose_3 (Union[list[float], PoseObject]) –
pose_4 (Union[list[float], PoseObject]) –
- Return type
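A typical workflow is to jog the robot's calibration tip to each corner in turn and record the pose there. A sketch, assuming the robot is in a freely movable (learning) mode between prompts; IP address and workspace name are hypothetical:

```python
from pyniryo2 import NiryoRobot

robot = NiryoRobot("10.10.10.10")  # hypothetical robot IP

corner_poses = []
for i in range(4):
    input("Place the calibration tip on corner %d, then press Enter" % (i + 1))
    corner_poses.append(robot.arm.get_pose())  # record the current end-effector pose

# Corners recorded in order: origin, then the remaining three
robot.vision.save_workspace_from_robot_poses("my_workspace", *corner_poses)
```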
- Vision.save_workspace_from_points(workspace_name, point_origin, point_2, point_3, point_4)[source]
Save a workspace by giving the points of the workspace's 4 corners. Points are written as [x, y, z]. Corners must be given in the correct order.
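A usage sketch with hypothetical corner coordinates (in metres, in the robot frame) and a hypothetical workspace name:

```python
from pyniryo2 import NiryoRobot

robot = NiryoRobot("10.10.10.10")  # hypothetical robot IP

robot.vision.save_workspace_from_points(
    "my_workspace",
    [0.20, 0.10, 0.0],   # origin corner
    [0.20, -0.10, 0.0],  # corner 2
    [0.40, -0.10, 0.0],  # corner 3
    [0.40, 0.10, 0.0],   # corner 4
)
```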
Vision - Niryo Topics
The use of these functions is explained in the NiryoTopic section. They allow data to be acquired in real time, either through callbacks or by direct call.
Name | Function | Return type
---|---|---
get_img_compressed | Get the last compressed image from the video stream | compressed image (JPEG string)
get_camera_intrinsics | Get the camera intrinsics and distortion coefficients | CameraInfo
get_image_parameters | Get the brightness, contrast and saturation factors of the stream | ImageParameters
Vision - Enums
List of enums:
ObjectColor
ObjectShape
ManageWorkspace
- class ObjectColor(value)[source]
Enumeration of Colors available for image processing
- RED = 'RED'
- BLUE = 'BLUE'
- GREEN = 'GREEN'
- ANY = 'ANY'
- class ObjectShape(value)[source]
Enumeration of Shapes available for image processing
- SQUARE = 'SQUARE'
- CIRCLE = 'CIRCLE'
- ANY = 'ANY'
- class ManageWorkspace(value)[source]
Enumeration of actions available for workspaces management
- SAVE = 1
- SAVE_WITH_POINTS = 2
- DELETE = -1
Vision - Namedtuple
- class CameraInfo(intrinsics, distortion)
Create new instance of CameraInfo(intrinsics, distortion)
- distortion
Alias for field number 1
- intrinsics
Alias for field number 0
- class ImageParameters(brightness_factor, contrast_factor, saturation_factor)
Create new instance of ImageParameters(brightness_factor, contrast_factor, saturation_factor)
- brightness_factor
Alias for field number 0
- contrast_factor
Alias for field number 1
- saturation_factor
Alias for field number 2