Documentation ¶
Index ¶
- func ElementTypeSliceToTensor(data [][]ElementType, shape []int64) (*tf.Tensor, error)
- func Float32SliceToTensor(data [][]float32, shape []int64) (*tf.Tensor, error)
- func Float64SliceToTensor(data [][]float64, shape []int64) (*tf.Tensor, error)
- func Int16SliceToTensor(data [][]int16, shape []int64) (*tf.Tensor, error)
- func Int32SliceToTensor(data [][]int32, shape []int64) (*tf.Tensor, error)
- func Int64SliceToTensor(data [][]int64, shape []int64) (*tf.Tensor, error)
- func Int8SliceToTensor(data [][]int8, shape []int64) (*tf.Tensor, error)
- func NewGeneralPredictor(model dlframework.ModelManifest, os ...options.Option) (common.Predictor, error)
- func Uint16SliceToTensor(data [][]uint16, shape []int64) (*tf.Tensor, error)
- func Uint32SliceToTensor(data [][]uint32, shape []int64) (*tf.Tensor, error)
- func Uint64SliceToTensor(data [][]uint64, shape []int64) (*tf.Tensor, error)
- func Uint8SliceToTensor(data [][]uint8, shape []int64) (*tf.Tensor, error)
- type Device
- type ElementType
- type GeneralPredictor
- func (p *GeneralPredictor) Close() error
- func (p *GeneralPredictor) Load(ctx context.Context, model dlframework.ModelManifest, opts ...options.Option) (common.Predictor, error)
- func (p *GeneralPredictor) Modality() (dlframework.Modality, error)
- func (p *GeneralPredictor) Predict(ctx context.Context, data interface{}, opts ...options.Option) error
- func (p *GeneralPredictor) ReadPredictedFeaturesAsMap(ctx context.Context) (map[string]interface{}, error)
- func (p *GeneralPredictor) Reset(ctx context.Context) error
- func (p *GeneralPredictor) SetDesiredOutput(modality dlframework.Modality)
- type Graph
- type Operation
- type Output
- type Session
- type SessionOptions
- type Tensor
- type Trace
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ElementTypeSliceToTensor ¶
func ElementTypeSliceToTensor(data [][]ElementType, shape []int64) (*tf.Tensor, error)
func Float32SliceToTensor ¶
func Float32SliceToTensor(data [][]float32, shape []int64) (*tf.Tensor, error)
func Float64SliceToTensor ¶
func Float64SliceToTensor(data [][]float64, shape []int64) (*tf.Tensor, error)
func Int16SliceToTensor ¶
func Int16SliceToTensor(data [][]int16, shape []int64) (*tf.Tensor, error)
func Int32SliceToTensor ¶
func Int32SliceToTensor(data [][]int32, shape []int64) (*tf.Tensor, error)
func Int64SliceToTensor ¶
func Int64SliceToTensor(data [][]int64, shape []int64) (*tf.Tensor, error)
func Int8SliceToTensor ¶
func Int8SliceToTensor(data [][]int8, shape []int64) (*tf.Tensor, error)
func NewGeneralPredictor ¶ added in v1.1.0
func NewGeneralPredictor(model dlframework.ModelManifest, os ...options.Option) (common.Predictor, error)
NewGeneralPredictor ...
func Uint16SliceToTensor ¶
func Uint16SliceToTensor(data [][]uint16, shape []int64) (*tf.Tensor, error)
func Uint32SliceToTensor ¶
func Uint32SliceToTensor(data [][]uint32, shape []int64) (*tf.Tensor, error)
func Uint64SliceToTensor ¶
func Uint64SliceToTensor(data [][]uint64, shape []int64) (*tf.Tensor, error)
func Uint8SliceToTensor ¶
func Uint8SliceToTensor(data [][]uint8, shape []int64) (*tf.Tensor, error)
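The SliceToTensor helpers all follow the same pattern: they take a 2-D Go slice of a given element type plus a target shape and return a *tf.Tensor. The sketch below shows how Float32SliceToTensor might be called; the import path and package alias are assumptions, not part of this documentation.

package main

import (
	"fmt"
	"log"

	tensorflow "github.com/rai-project/tensorflow" // assumed import path
)

func main() {
	// A batch of two rows with three float32 features each.
	data := [][]float32{
		{0.1, 0.2, 0.3},
		{0.4, 0.5, 0.6},
	}

	// Convert the nested slice into a *tf.Tensor with shape [2, 3].
	tensor, err := tensorflow.Float32SliceToTensor(data, []int64{2, 3})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tensor.Shape()) // [2 3]
}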
Types ¶
type Device ¶
Device structure contains information about a device associated with a session, as returned by ListDevices()
type ElementType ¶
type GeneralPredictor ¶ added in v1.1.0
GeneralPredictor ...
func (*GeneralPredictor) Load ¶ added in v1.1.0
func (p *GeneralPredictor) Load(ctx context.Context, model dlframework.ModelManifest, opts ...options.Option) (common.Predictor, error)
Load ...
func (*GeneralPredictor) Modality ¶ added in v1.1.0
func (p *GeneralPredictor) Modality() (dlframework.Modality, error)
Modality ...
func (*GeneralPredictor) Predict ¶ added in v1.1.0
func (p *GeneralPredictor) Predict(ctx context.Context, data interface{}, opts ...options.Option) error
Predict ...
func (*GeneralPredictor) ReadPredictedFeaturesAsMap ¶ added in v1.1.0
func (p *GeneralPredictor) ReadPredictedFeaturesAsMap(ctx context.Context) (map[string]interface{}, error)
ReadPredictedFeaturesAsMap ...
func (*GeneralPredictor) Reset ¶ added in v1.1.0
func (p *GeneralPredictor) Reset(ctx context.Context) error
Reset ...
func (*GeneralPredictor) SetDesiredOutput ¶ added in v1.2.1
func (p *GeneralPredictor) SetDesiredOutput(modality dlframework.Modality)
SetDesiredOutput allows post-processing to use a different output modality. However, the model has to produce output in a format that the desired modality's post-processing can handle.
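Putting the GeneralPredictor methods together, a typical load/predict/read cycle might look like the sketch below. The type assertion on the value returned by NewGeneralPredictor, the manifest and input parameters, and the supporting imports (context, errors, fmt, dlframework, and the assumed tensorflow import path) are illustrative assumptions, not part of this documentation.

// runPrediction is a hypothetical helper showing the usual call order:
// create the predictor, run Predict, then read the predicted features.
func runPrediction(ctx context.Context, manifest dlframework.ModelManifest, input interface{}) error {
	p, err := tensorflow.NewGeneralPredictor(manifest)
	if err != nil {
		return err
	}

	// Assumed: the returned common.Predictor is backed by *GeneralPredictor.
	predictor, ok := p.(*tensorflow.GeneralPredictor)
	if !ok {
		return errors.New("unexpected predictor type")
	}
	defer predictor.Close()

	// input is assumed to be preprocessed data in a format the model accepts.
	if err := predictor.Predict(ctx, input); err != nil {
		return err
	}

	features, err := predictor.ReadPredictedFeaturesAsMap(ctx)
	if err != nil {
		return err
	}
	for name, value := range features {
		fmt.Println(name, value)
	}
	return nil
}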
type Session ¶
type Session struct {
// contains filtered or unexported fields
}
Session drives a TensorFlow graph computation.
When a Session is created with a given target, a new Session object is bound to the universe of resources specified by that target. Those resources are available to this session to perform computation described in the GraphDef. After creating the session with a graph, the caller uses the Run() API to perform the computation and potentially fetch outputs as Tensors. A Session allows concurrent calls to Run().
func NewSession ¶
func NewSession(graph *Graph, options *SessionOptions) (*Session, error)
NewSession creates a new execution session with the associated graph. options may be nil to use the default options.
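For example, a session over an already-constructed graph can be created and torn down as sketched below; `graph` is assumed to be a *Graph built or imported elsewhere, and passing nil options selects the defaults as documented above.

// Nil options select the default (local runtime, default config).
session, err := tensorflow.NewSession(graph, nil)
if err != nil {
	log.Fatal(err)
}
defer session.Close()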
func (*Session) Close ¶
func (s *Session) Close() error
Close a session. This contacts any other processes associated with this session, if applicable. Blocks until all previous calls to Run have returned.
func (*Session) ListDevices ¶
func (s *Session) ListDevices() ([]Device, error)
ListDevices returns the list of devices associated with a Session.
type SessionOptions ¶
type SessionOptions struct {
	// Target indicates the TensorFlow runtime to connect to.
	//
	// If 'target' is empty or unspecified, the local TensorFlow runtime
	// implementation will be used. Otherwise, the TensorFlow engine
	// defined by 'target' will be used to perform all computations.
	//
	// "target" can be either a single entry or a comma separated list
	// of entries. Each entry is a resolvable address of one of the
	// following formats:
	//   local
	//   ip:port
	//   host:port
	//   ... other system-specific formats to identify tasks and jobs ...
	//
	// NOTE: at the moment 'local' maps to an in-process service-based
	// runtime.
	//
	// Upon creation, a single session affines itself to one of the
	// remote processes, with possible load balancing choices when the
	// "target" resolves to a list of possible processes.
	//
	// If the session disconnects from the remote process during its
	// lifetime, session calls may fail immediately.
	Target string

	// Config is a binary-serialized representation of the
	// tensorflow.ConfigProto protocol message
	// (https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto).
	Config []byte
}
SessionOptions contains configuration information for a session.
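A sketch of pointing a session at a remote runtime via SessionOptions; the address below is a placeholder, and the Config field would carry a binary-serialized tensorflow.ConfigProto if one were needed.

opts := &tensorflow.SessionOptions{
	// One of the documented target formats: "local", "ip:port", or "host:port".
	Target: "worker0.example.com:2222",
	// Config: serializedConfigProto, // optional tensorflow.ConfigProto bytes
}
session, err := tensorflow.NewSession(graph, opts)
if err != nil {
	log.Fatal(err)
}
defer session.Close()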
type Trace ¶
type Trace struct {
// contains filtered or unexported fields
}
func NewTrace ¶
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/client/timeline.py
func (*Trace) Publish ¶
func (t *Trace) Publish(ctx context.Context, opts ...opentracing.StartSpanOption) error
Notes about start and end time from the NodeExecStats proto: For GPU, there is no difference between op_end_rel_micros and all_end_rel_micros. All are kernel times. For CPU, op_end_rel is the kernel time, while all_end_rel_micros includes some post-processing. Besides, currently, there is no way to measure the execution time of async ops accurately.
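A minimal sketch of publishing a collected trace as spans; it assumes `trace` is a *Trace built from run metadata (see NewTrace) and that an opentracing tracer is available to the context.

if err := trace.Publish(ctx); err != nil {
	log.Printf("failed to publish trace: %v", err)
}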