Documentation ¶
Overview ¶
Package vision is the service that allows you to access various computer vision algorithms (like detection, segmentation, tracking, etc.) that usually only require a camera or image input. For more information, see the vision service docs.
Index ¶
- Constants
- Variables
- func Named(name string) resource.Name
- func NewRPCServiceServer(coll resource.APIResourceCollection[Service]) interface{}
- type Properties
- type Service
- func FromDependencies(deps resource.Dependencies, name string) (Service, error)
- func FromRobot(r robot.Robot, name string) (Service, error)
- func NewClientFromConn(ctx context.Context, conn rpc.ClientConn, remoteName string, ...) (Service, error)
- func NewService(name resource.Name, r robot.Robot, c func(ctx context.Context) error, ...) (Service, error)
Constants ¶
const SubtypeName = "vision"
SubtypeName is the name of the type of service.
Variables ¶
var API = resource.APINamespaceRDK.WithServiceType(SubtypeName)
API is a variable that identifies the vision service resource API.
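Named (listed in the index above) builds the full resource.Name that carries this API for a particular vision service on a machine. A minimal sketch, assuming a vision service configured as "my_detector" (the fmt calls are only for illustration):

// Build the full resource name for a vision service configured as "my_detector";
// the resulting name carries the vision service API.
name := vision.Named("my_detector")
fmt.Println(name.API)  // the vision service API, e.g. rdk:service:vision
fmt.Println(name.Name) // "my_detector"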
Functions ¶
func NewRPCServiceServer ¶ added in v0.2.36
func NewRPCServiceServer(coll resource.APIResourceCollection[Service]) interface{}
NewRPCServiceServer constructs a vision gRPC service server. It is intentionally untyped to prevent use outside of tests.
Types ¶
type Properties ¶ added in v0.28.0
type Properties struct {
	ClassificationSupported bool
	DetectionSupported      bool
	ObjectPCDsSupported     bool
}
Properties describes the current vision service, specifically which vision tasks are supported by the resource.
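GetProperties on the Service (documented below) returns these flags. A minimal sketch of checking them, assuming a machine client named machine and a vision service configured as "my_detector":

myDetectorService, err := vision.FromRobot(machine, "my_detector")
if err != nil {
	logger.Error(err)
	return
}

// Ask the service which vision tasks it supports.
props, err := myDetectorService.GetProperties(context.Background(), nil)
if err != nil {
	logger.Error(err)
	return
}

if props.DetectionSupported {
	logger.Info("this vision service supports detections")
}
if props.ClassificationSupported {
	logger.Info("this vision service supports classifications")
}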
type Service ¶
type Service interface {
	resource.Resource
	// DetectionsFromCamera returns a list of detections from the next image from a specified camera using a configured detector.
	DetectionsFromCamera(ctx context.Context, cameraName string, extra map[string]interface{}) ([]objectdetection.Detection, error)
	// Detections returns a list of detections from a given image using a configured detector.
	Detections(ctx context.Context, img image.Image, extra map[string]interface{}) ([]objectdetection.Detection, error)
	// ClassificationsFromCamera returns a list of classifications from the next image from a specified camera using a configured classifier.
	ClassificationsFromCamera(
		ctx context.Context,
		cameraName string,
		n int,
		extra map[string]interface{},
	) (classification.Classifications, error)
	// Classifications returns a list of classifications from a given image using a configured classifier.
	Classifications(
		ctx context.Context,
		img image.Image,
		n int,
		extra map[string]interface{},
	) (classification.Classifications, error)
	// GetObjectPointClouds returns a list of 3D point cloud objects and metadata from the latest 3D camera image using a specified segmenter.
	GetObjectPointClouds(ctx context.Context, cameraName string, extra map[string]interface{}) ([]*viz.Object, error)
	// GetProperties returns which vision tasks (detections, classifications, object point clouds) the service supports.
	GetProperties(ctx context.Context, extra map[string]interface{}) (*Properties, error)
	// CaptureAllFromCamera returns the next image, detections, classifications, and objects all together, given a camera name.
	// Used for visualization.
	CaptureAllFromCamera(
		ctx context.Context,
		cameraName string,
		opts viscapture.CaptureOptions,
		extra map[string]interface{},
	) (viscapture.VisCapture, error)
}
A Service implements various computer vision algorithms like detection and segmentation. For more information, see the vision service docs.
DetectionsFromCamera example:
myDetectorService, err := vision.FromRobot(machine, "my_detector")
if err != nil {
	logger.Error(err)
	return
}

// Get detections from the camera output
detections, err := myDetectorService.DetectionsFromCamera(context.Background(), "my_camera", nil)
if err != nil {
	logger.Fatalf("Could not get detections: %v", err)
}
if len(detections) > 0 {
	logger.Info(detections[0])
}
Detections example:
myCam, err := camera.FromRobot(machine, "my_camera")
if err != nil {
	logger.Error(err)
	return
}

// Get the stream from a camera
camStream, err := myCam.Stream(context.Background())
if err != nil {
	logger.Error(err)
	return
}

// Get an image from the camera stream
img, release, err := camStream.Next(context.Background())
if err != nil {
	logger.Error(err)
	return
}
defer release()

myDetectorService, err := vision.FromRobot(machine, "my_detector")
if err != nil {
	logger.Error(err)
	return
}

// Get the detections from the image
detections, err := myDetectorService.Detections(context.Background(), img, nil)
if err != nil {
	logger.Fatalf("Could not get detections: %v", err)
}
if len(detections) > 0 {
	logger.Info(detections[0])
}
ClassificationsFromCamera example:
myClassifierService, err := vision.FromRobot(machine, "my_classifier")
if err != nil {
	logger.Error(err)
	return
}

// Get the 2 classifications with the highest confidence scores from the camera output
classifications, err := myClassifierService.ClassificationsFromCamera(context.Background(), "my_camera", 2, nil)
if err != nil {
	logger.Fatalf("Could not get classifications: %v", err)
}
if len(classifications) > 0 {
	logger.Info(classifications[0])
}
Classifications example:
myCam, err := camera.FromRobot(machine, "my_camera")
if err != nil {
	logger.Error(err)
	return
}

// Get the stream from a camera
camStream, err := myCam.Stream(context.Background())
if err != nil {
	logger.Error(err)
	return
}

// Get an image from the camera stream
img, release, err := camStream.Next(context.Background())
if err != nil {
	logger.Error(err)
	return
}
defer release()

myClassifierService, err := vision.FromRobot(machine, "my_classifier")
if err != nil {
	logger.Error(err)
	return
}

// Get the 2 classifications with the highest confidence scores from the image
classifications, err := myClassifierService.Classifications(context.Background(), img, 2, nil)
if err != nil {
	logger.Fatalf("Could not get classifications: %v", err)
}
if len(classifications) > 0 {
	logger.Info(classifications[0])
}
GetObjectPointClouds example:
mySegmenterService, err := vision.FromRobot(machine, "my_segmenter")
if err != nil {
	logger.Error(err)
	return
}

// Get the objects from the camera output
objects, err := mySegmenterService.GetObjectPointClouds(context.Background(), "my_camera", nil)
if err != nil {
	logger.Fatalf("Could not get point clouds: %v", err)
}
if len(objects) > 0 {
	logger.Info(objects[0])
}
CaptureAllFromCamera example:
// The data to capture and return from the camera
captOpts := viscapture.CaptureOptions{
	ReturnImage:      true,
	ReturnDetections: true,
}

// Get the captured data for a camera, where visService is a vision.Service
// obtained earlier (for example, via vision.FromRobot)
capture, err := visService.CaptureAllFromCamera(context.Background(), "my_camera", captOpts, nil)
if err != nil {
	logger.Fatalf("Could not get capture data from vision service: %v", err)
}
image := capture.Image
detections := capture.Detections
classifications := capture.Classifications
objects := capture.Objects
func FromDependencies ¶ added in v0.2.47
func FromDependencies(deps resource.Dependencies, name string) (Service, error)
FromDependencies is a helper for getting the named vision service from a collection of dependencies.
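A minimal sketch of using FromDependencies, for example inside a modular resource's constructor or Reconfigure, assuming the vision service "my_detector" was declared as a dependency and that deps, ctx, and logger come from the surrounding function:

// Look up the vision service that this resource depends on.
visionSvc, err := vision.FromDependencies(deps, "my_detector")
if err != nil {
	return err
}

// Use it like any other vision.Service.
detections, err := visionSvc.DetectionsFromCamera(ctx, "my_camera", nil)
if err != nil {
	return err
}
if len(detections) > 0 {
	logger.Info(detections[0])
}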
func NewClientFromConn ¶
func NewClientFromConn(
	ctx context.Context,
	conn rpc.ClientConn,
	remoteName string,
	name resource.Name,
	logger logging.Logger,
) (Service, error)
NewClientFromConn constructs a new Client from the connection passed in.
func NewService ¶ added in v0.2.36
func NewService(
	name resource.Name,
	r robot.Robot,
	c func(ctx context.Context) error,
	cf classification.Classifier,
	df objectdetection.Detector,
	s3f segmentation.Segmenter,
) (Service, error)
NewService wraps the vision model in the struct that fulfills the vision service interface.
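A minimal sketch of wrapping a custom detector function with NewService; the detector body, the nil closer, and passing nil for the classifier and segmenter (when only detection is supported) are illustrative assumptions, and myRobot stands in for an existing robot.Robot:

// A detector is a function that takes an image and returns a list of detections.
myDetector := func(ctx context.Context, img image.Image) ([]objectdetection.Detection, error) {
	// Run custom detection logic here; returning no detections for brevity.
	return []objectdetection.Detection{}, nil
}

// Wrap the detector in a vision service. The closer, classifier, and segmenter
// are left nil here because this model only supports detections (assumption).
svc, err := vision.NewService(vision.Named("my_custom_detector"), myRobot, nil, nil, myDetector, nil)
if err != nil {
	logger.Error(err)
	return
}
_ = svc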
Directories ¶
Path | Synopsis
---|---
colordetector | Package colordetector uses a heuristic based on hue and connected components to create bounding boxes around objects of a specified color.
detectionstosegments | Package detectionstosegments uses a 2D segmenter and a camera that can project its images to 3D to project the bounding boxes to 3D in order to create a segmented point cloud.
fake | Package fake implements a fake vision service which always returns the user-specified detections/classifications.
mlvision | Package mlvision uses an underlying model from the ML model service as a vision model, and wraps the ML model with the vision service methods.
obstaclesdepth | Package obstaclesdepth uses an underlying depth camera to fulfill GetObjectPointClouds, projecting its depth map to a point cloud and then applying a point cloud clustering algorithm.
obstaclesdistance | Package obstaclesdistance uses an underlying camera to fulfill vision service methods, specifically GetObjectPointClouds, which performs several queries of NextPointCloud and returns a median point.
obstaclespointcloud | Package obstaclespointcloud uses the 3D radius clustering algorithm as defined in the RDK vision/segmentation package as a vision model.
register | Package register registers all relevant vision models and also API-specific functions.