Documentation ¶
Index ¶
- type Alternative
- type AudioEvent
- type AudioStream
- type AudioStreamMemberAudioEvent
- type AudioStreamMemberConfigurationEvent
- type BadRequestException
- type CallAnalyticsEntity
- type CallAnalyticsItem
- type CallAnalyticsLanguageCode
- type CallAnalyticsTranscriptResultStream
- type CallAnalyticsTranscriptResultStreamMemberCategoryEvent
- type CallAnalyticsTranscriptResultStreamMemberUtteranceEvent
- type CategoryEvent
- type ChannelDefinition
- type CharacterOffsets
- type ConfigurationEvent
- type ConflictException
- type ContentIdentificationType
- type ContentRedactionOutput
- type ContentRedactionType
- type Entity
- type InternalFailureException
- type IssueDetected
- type Item
- type ItemType
- type LanguageCode
- type LanguageWithScore
- type LimitExceededException
- type MediaEncoding
- type MedicalAlternative
- type MedicalContentIdentificationType
- type MedicalEntity
- type MedicalItem
- type MedicalResult
- type MedicalTranscript
- type MedicalTranscriptEvent
- type MedicalTranscriptResultStream
- type MedicalTranscriptResultStreamMemberTranscriptEvent
- type PartialResultsStability
- type ParticipantRole
- type PointsOfInterest
- type PostCallAnalyticsSettings
- type Result
- type Sentiment
- type ServiceUnavailableException
- type Specialty
- type TimestampRange
- type Transcript
- type TranscriptEvent
- type TranscriptResultStream
- type TranscriptResultStreamMemberTranscriptEvent
- type Type
- type UnknownUnionMember
- type UtteranceEvent
- type VocabularyFilterMethod
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Alternative ¶
type Alternative struct {

    // Contains entities identified as personally identifiable information (PII) in
    // your transcription output.
    Entities []Entity

    // Contains words, phrases, or punctuation marks in your transcription output.
    Items []Item

    // Contains transcribed text.
    Transcript *string
    // contains filtered or unexported fields
}
A list of possible alternative transcriptions for the input audio. Each alternative may contain one or more of Items, Entities, or Transcript.
type AudioEvent ¶
type AudioEvent struct {

    // An audio blob that contains the next part of the audio that you want to
    // transcribe. The maximum audio chunk size is 32 KB.
    AudioChunk []byte
    // contains filtered or unexported fields
}
A wrapper for your audio chunks. Your audio stream consists of one or more audio events, which consist of one or more audio chunks.
For more information, see Event stream encoding.
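Because AudioChunk is capped at 32 KB, larger audio buffers must be split across multiple audio events before streaming. A minimal sketch of that chunking, using a hypothetical chunkAudio helper (not part of the SDK):

```go
package main

import "fmt"

// chunkAudio splits raw audio bytes into slices of at most maxLen bytes,
// one slice per audio event. chunkAudio is an illustrative helper, not an SDK API.
func chunkAudio(audio []byte, maxLen int) [][]byte {
	var chunks [][]byte
	for len(audio) > 0 {
		n := maxLen
		if len(audio) < n {
			n = len(audio)
		}
		chunks = append(chunks, audio[:n])
		audio = audio[n:]
	}
	return chunks
}

func main() {
	raw := make([]byte, 70000) // e.g. ~70 KB of PCM samples
	chunks := chunkAudio(raw, 32*1024)
	fmt.Println(len(chunks)) // 70000 bytes -> 3 chunks (32768 + 32768 + 4464)
}
```

Each resulting slice would then be wrapped in an AudioEvent and sent as one AudioStreamMemberAudioEvent.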
type AudioStream ¶
type AudioStream interface {
// contains filtered or unexported methods
}
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
For more information, see Transcribing streaming audio.
The following types satisfy this interface:
AudioStreamMemberAudioEvent
AudioStreamMemberConfigurationEvent
Example (OutputUsage) ¶
package main

import (
    "e.coding.net/g-nnjn4981/aito/aws-sdk-go-v2/service/transcribestreaming/types"
    "fmt"
)

func main() {
    var union types.AudioStream
    // type switches can be used to check the union value
    switch v := union.(type) {
    case *types.AudioStreamMemberAudioEvent:
        _ = v.Value // Value is types.AudioEvent
    case *types.AudioStreamMemberConfigurationEvent:
        _ = v.Value // Value is types.ConfigurationEvent
    case *types.UnknownUnionMember:
        fmt.Println("unknown tag:", v.Tag)
    default:
        fmt.Println("union is nil or unknown type")
    }
}
Output:
type AudioStreamMemberAudioEvent ¶
type AudioStreamMemberAudioEvent struct {
    Value AudioEvent
    // contains filtered or unexported fields
}
A blob of audio from your application. Your audio stream consists of one or more audio events.
For more information, see Event stream encoding.
type AudioStreamMemberConfigurationEvent ¶
type AudioStreamMemberConfigurationEvent struct {
    Value ConfigurationEvent
    // contains filtered or unexported fields
}
Contains audio channel definitions and post-call analytics settings.
type BadRequestException ¶
type BadRequestException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
One or more arguments to the StartStreamTranscription, StartMedicalStreamTranscription, or StartCallAnalyticsStreamTranscription operation was not valid. For example, MediaEncoding or LanguageCode contained invalid values. Check the specified parameters and try your request again.
func (*BadRequestException) Error ¶
func (e *BadRequestException) Error() string
func (*BadRequestException) ErrorCode ¶
func (e *BadRequestException) ErrorCode() string
func (*BadRequestException) ErrorFault ¶
func (e *BadRequestException) ErrorFault() smithy.ErrorFault
func (*BadRequestException) ErrorMessage ¶
func (e *BadRequestException) ErrorMessage() string
type CallAnalyticsEntity ¶
type CallAnalyticsEntity struct {

    // The time, in milliseconds, from the beginning of the audio stream to the start
    // of the identified entity.
    BeginOffsetMillis *int64

    // The category of information identified. For example, PII.
    Category *string

    // The confidence score associated with the identification of an entity in your
    // transcript.
    //
    // Confidence scores are values between 0 and 1. A larger value indicates a higher
    // probability that the identified entity correctly matches the entity spoken in
    // your media.
    Confidence *float64

    // The word or words that represent the identified entity.
    Content *string

    // The time, in milliseconds, from the beginning of the audio stream to the end of
    // the identified entity.
    EndOffsetMillis *int64

    // The type of PII identified. For example, NAME or CREDIT_DEBIT_NUMBER.
    Type *string
    // contains filtered or unexported fields
}
Contains entities identified as personally identifiable information (PII) in your transcription output, along with various associated attributes. Examples include category, confidence score, content, type, and start and end times.
type CallAnalyticsItem ¶
type CallAnalyticsItem struct {

    // The time, in milliseconds, from the beginning of the audio stream to the start
    // of the identified item.
    BeginOffsetMillis *int64

    // The confidence score associated with a word or phrase in your transcript.
    //
    // Confidence scores are values between 0 and 1. A larger value indicates a higher
    // probability that the identified item correctly matches the item spoken in your
    // media.
    Confidence *float64

    // The word or punctuation that was transcribed.
    Content *string

    // The time, in milliseconds, from the beginning of the audio stream to the end of
    // the identified item.
    EndOffsetMillis *int64

    // If partial result stabilization is enabled, Stable indicates whether the
    // specified item is stable (true) or if it may change when the segment is
    // complete (false).
    Stable *bool

    // The type of item identified. Options are: PRONUNCIATION (spoken words) and
    // PUNCTUATION.
    Type ItemType

    // Indicates whether the specified item matches a word in the vocabulary filter
    // included in your Call Analytics request. If true, there is a vocabulary filter
    // match.
    VocabularyFilterMatch bool
    // contains filtered or unexported fields
}
A word, phrase, or punctuation mark in your Call Analytics transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
type CallAnalyticsLanguageCode ¶
type CallAnalyticsLanguageCode string
const (
    CallAnalyticsLanguageCodeEnUs CallAnalyticsLanguageCode = "en-US"
    CallAnalyticsLanguageCodeEnGb CallAnalyticsLanguageCode = "en-GB"
    CallAnalyticsLanguageCodeEsUs CallAnalyticsLanguageCode = "es-US"
    CallAnalyticsLanguageCodeFrCa CallAnalyticsLanguageCode = "fr-CA"
    CallAnalyticsLanguageCodeFrFr CallAnalyticsLanguageCode = "fr-FR"
    CallAnalyticsLanguageCodeEnAu CallAnalyticsLanguageCode = "en-AU"
    CallAnalyticsLanguageCodeItIt CallAnalyticsLanguageCode = "it-IT"
    CallAnalyticsLanguageCodeDeDe CallAnalyticsLanguageCode = "de-DE"
    CallAnalyticsLanguageCodePtBr CallAnalyticsLanguageCode = "pt-BR"
)
Enum values for CallAnalyticsLanguageCode
func (CallAnalyticsLanguageCode) Values ¶
func (CallAnalyticsLanguageCode) Values() []CallAnalyticsLanguageCode
Values returns all known values for CallAnalyticsLanguageCode. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type CallAnalyticsTranscriptResultStream ¶
type CallAnalyticsTranscriptResultStream interface {
// contains filtered or unexported methods
}
Contains detailed information about your Call Analytics streaming session. These details are provided in the UtteranceEvent and CategoryEvent objects.
The following types satisfy this interface:
CallAnalyticsTranscriptResultStreamMemberCategoryEvent
CallAnalyticsTranscriptResultStreamMemberUtteranceEvent
Example (OutputUsage) ¶
package main

import (
    "e.coding.net/g-nnjn4981/aito/aws-sdk-go-v2/service/transcribestreaming/types"
    "fmt"
)

func main() {
    var union types.CallAnalyticsTranscriptResultStream
    // type switches can be used to check the union value
    switch v := union.(type) {
    case *types.CallAnalyticsTranscriptResultStreamMemberCategoryEvent:
        _ = v.Value // Value is types.CategoryEvent
    case *types.CallAnalyticsTranscriptResultStreamMemberUtteranceEvent:
        _ = v.Value // Value is types.UtteranceEvent
    case *types.UnknownUnionMember:
        fmt.Println("unknown tag:", v.Tag)
    default:
        fmt.Println("union is nil or unknown type")
    }
}
Output:
type CallAnalyticsTranscriptResultStreamMemberCategoryEvent ¶
type CallAnalyticsTranscriptResultStreamMemberCategoryEvent struct {
    Value CategoryEvent
    // contains filtered or unexported fields
}
Provides information on matched categories that were used to generate real-time supervisor alerts.
type CallAnalyticsTranscriptResultStreamMemberUtteranceEvent ¶
type CallAnalyticsTranscriptResultStreamMemberUtteranceEvent struct {
    Value UtteranceEvent
    // contains filtered or unexported fields
}
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters. This can include information relating to channel definitions, partial result stabilization, sentiment, issue detection, and other transcription-related data.
type CategoryEvent ¶
type CategoryEvent struct {

    // Lists the categories that were matched in your audio segment.
    MatchedCategories []string

    // Contains information about the matched categories, including category names and
    // timestamps.
    MatchedDetails map[string]PointsOfInterest
    // contains filtered or unexported fields
}
Provides information on any TranscriptFilterType categories that matched your transcription output. Matches are identified for each segment upon completion of that segment.
type ChannelDefinition ¶
type ChannelDefinition struct {

    // Specify the audio channel you want to define.
    //
    // This member is required.
    ChannelId int32

    // Specify the speaker you want to define. Omitting this parameter is equivalent
    // to specifying both participants.
    //
    // This member is required.
    ParticipantRole ParticipantRole
    // contains filtered or unexported fields
}
Makes it possible to specify which speaker is on which audio channel. For example, if your agent is the first participant to speak, you would set ChannelId to 0 (to indicate the first channel) and ParticipantRole to AGENT (to indicate that it's the agent speaking).
type CharacterOffsets ¶
type CharacterOffsets struct {

    // Provides the character count of the first character where a match is
    // identified. For example, the first character associated with an issue or a
    // category match in a segment transcript.
    Begin *int32

    // Provides the character count of the last character where a match is identified.
    // For example, the last character associated with an issue or a category match in
    // a segment transcript.
    End *int32
    // contains filtered or unexported fields
}
Provides the location, using character count, in your transcript where a match is identified. For example, the location of an issue or a category match within a segment.
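Since Begin and End are character positions within the segment transcript, the matched text can be recovered by slicing the transcript. A sketch with a local stand-in for CharacterOffsets (this assumes End is exclusive, like a Go slice bound; the exact semantics should be confirmed against real responses):

```go
package main

import "fmt"

// offsets is a local stand-in for types.CharacterOffsets, using plain ints.
type offsets struct {
	Begin, End int
}

// matchedText returns the span of the segment transcript identified by the
// offsets. It converts to runes so character counts survive non-ASCII text.
func matchedText(transcript string, o offsets) string {
	r := []rune(transcript)
	if o.Begin < 0 || o.End > len(r) || o.Begin > o.End {
		return ""
	}
	return string(r[o.Begin:o.End])
}

func main() {
	seg := "I want to cancel my subscription today"
	fmt.Println(matchedText(seg, offsets{Begin: 10, End: 32}))
	// prints "cancel my subscription"
}
```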
type ConfigurationEvent ¶
type ConfigurationEvent struct {

    // Indicates which speaker is on which audio channel.
    ChannelDefinitions []ChannelDefinition

    // Provides additional optional settings for your Call Analytics post-call
    // request, including encryption and output locations for your redacted and
    // unredacted transcript.
    PostCallAnalyticsSettings *PostCallAnalyticsSettings
    // contains filtered or unexported fields
}
Allows you to set audio channel definitions and post-call analytics settings.
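Putting ChannelDefinition and ConfigurationEvent together: for a two-channel call with the agent on channel 0, the configuration sent at the start of the stream could be built like this. The sketch uses local stand-ins for the SDK types so it runs without the module:

```go
package main

import "fmt"

// Local stand-ins for types.ParticipantRole, types.ChannelDefinition,
// and types.ConfigurationEvent.
type participantRole string

const (
	roleAgent    participantRole = "AGENT"
	roleCustomer participantRole = "CUSTOMER"
)

type channelDefinition struct {
	ChannelID int32
	Role      participantRole
}

type configurationEvent struct {
	ChannelDefinitions []channelDefinition
}

func main() {
	// Agent speaks first, so the agent is on channel 0 and the
	// customer on channel 1, mirroring the ChannelDefinition docs.
	cfg := configurationEvent{
		ChannelDefinitions: []channelDefinition{
			{ChannelID: 0, Role: roleAgent},
			{ChannelID: 1, Role: roleCustomer},
		},
	}
	for _, cd := range cfg.ChannelDefinitions {
		fmt.Printf("channel %d -> %s\n", cd.ChannelID, cd.Role)
	}
}
```

With the real SDK, the equivalent value would be wrapped in an AudioStreamMemberConfigurationEvent and sent as the first event on the audio stream.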
type ConflictException ¶
type ConflictException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
A new stream started with the same session ID. The current stream has been terminated.
func (*ConflictException) Error ¶
func (e *ConflictException) Error() string
func (*ConflictException) ErrorCode ¶
func (e *ConflictException) ErrorCode() string
func (*ConflictException) ErrorFault ¶
func (e *ConflictException) ErrorFault() smithy.ErrorFault
func (*ConflictException) ErrorMessage ¶
func (e *ConflictException) ErrorMessage() string
type ContentIdentificationType ¶
type ContentIdentificationType string
const (
ContentIdentificationTypePii ContentIdentificationType = "PII"
)
Enum values for ContentIdentificationType
func (ContentIdentificationType) Values ¶
func (ContentIdentificationType) Values() []ContentIdentificationType
Values returns all known values for ContentIdentificationType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ContentRedactionOutput ¶
type ContentRedactionOutput string
const (
    ContentRedactionOutputRedacted              ContentRedactionOutput = "redacted"
    ContentRedactionOutputRedactedAndUnredacted ContentRedactionOutput = "redacted_and_unredacted"
)
Enum values for ContentRedactionOutput
func (ContentRedactionOutput) Values ¶
func (ContentRedactionOutput) Values() []ContentRedactionOutput
Values returns all known values for ContentRedactionOutput. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ContentRedactionType ¶
type ContentRedactionType string
const (
ContentRedactionTypePii ContentRedactionType = "PII"
)
Enum values for ContentRedactionType
func (ContentRedactionType) Values ¶
func (ContentRedactionType) Values() []ContentRedactionType
Values returns all known values for ContentRedactionType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Entity ¶
type Entity struct {

    // The category of information identified. The only category is PII.
    Category *string

    // The confidence score associated with the identified PII entity in your audio.
    //
    // Confidence scores are values between 0 and 1. A larger value indicates a higher
    // probability that the identified entity correctly matches the entity spoken in
    // your media.
    Confidence *float64

    // The word or words identified as PII.
    Content *string

    // The end time, in milliseconds, of the utterance that was identified as PII.
    EndTime float64

    // The start time, in milliseconds, of the utterance that was identified as PII.
    StartTime float64

    // The type of PII identified. For example, NAME or CREDIT_DEBIT_NUMBER.
    Type *string
    // contains filtered or unexported fields
}
Contains entities identified as personally identifiable information (PII) in your transcription output, along with various associated attributes. Examples include category, confidence score, type, stability score, and start and end times.
type InternalFailureException ¶
type InternalFailureException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
A problem occurred while processing the audio. Amazon Transcribe terminated processing.
func (*InternalFailureException) Error ¶
func (e *InternalFailureException) Error() string
func (*InternalFailureException) ErrorCode ¶
func (e *InternalFailureException) ErrorCode() string
func (*InternalFailureException) ErrorFault ¶
func (e *InternalFailureException) ErrorFault() smithy.ErrorFault
func (*InternalFailureException) ErrorMessage ¶
func (e *InternalFailureException) ErrorMessage() string
type IssueDetected ¶
type IssueDetected struct {

    // Provides the timestamps that identify when in an audio segment the specified
    // issue occurs.
    CharacterOffsets *CharacterOffsets
    // contains filtered or unexported fields
}
Lists the issues that were identified in your audio segment.
type Item ¶
type Item struct {

    // The confidence score associated with a word or phrase in your transcript.
    //
    // Confidence scores are values between 0 and 1. A larger value indicates a higher
    // probability that the identified item correctly matches the item spoken in your
    // media.
    Confidence *float64

    // The word or punctuation that was transcribed.
    Content *string

    // The end time, in milliseconds, of the transcribed item.
    EndTime float64

    // If speaker partitioning is enabled, Speaker labels the speaker of the specified
    // item.
    Speaker *string

    // If partial result stabilization is enabled, Stable indicates whether the
    // specified item is stable (true) or if it may change when the segment is
    // complete (false).
    Stable *bool

    // The start time, in milliseconds, of the transcribed item.
    StartTime float64

    // The type of item identified. Options are: PRONUNCIATION (spoken words) and
    // PUNCTUATION.
    Type ItemType

    // Indicates whether the specified item matches a word in the vocabulary filter
    // included in your request. If true, there is a vocabulary filter match.
    VocabularyFilterMatch bool
    // contains filtered or unexported fields
}
A word, phrase, or punctuation mark in your transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
type ItemType ¶
type ItemType string
type LanguageCode ¶
type LanguageCode string
const (
    LanguageCodeEnUs LanguageCode = "en-US"
    LanguageCodeEnGb LanguageCode = "en-GB"
    LanguageCodeEsUs LanguageCode = "es-US"
    LanguageCodeFrCa LanguageCode = "fr-CA"
    LanguageCodeFrFr LanguageCode = "fr-FR"
    LanguageCodeEnAu LanguageCode = "en-AU"
    LanguageCodeItIt LanguageCode = "it-IT"
    LanguageCodeDeDe LanguageCode = "de-DE"
    LanguageCodePtBr LanguageCode = "pt-BR"
    LanguageCodeJaJp LanguageCode = "ja-JP"
    LanguageCodeKoKr LanguageCode = "ko-KR"
    LanguageCodeZhCn LanguageCode = "zh-CN"
    LanguageCodeHiIn LanguageCode = "hi-IN"
    LanguageCodeThTh LanguageCode = "th-TH"
)
Enum values for LanguageCode
func (LanguageCode) Values ¶
func (LanguageCode) Values() []LanguageCode
Values returns all known values for LanguageCode. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type LanguageWithScore ¶
type LanguageWithScore struct {

    // The language code of the identified language.
    LanguageCode LanguageCode

    // The confidence score associated with the identified language code. Confidence
    // scores are values between zero and one; larger values indicate a higher
    // confidence in the identified language.
    Score float64
    // contains filtered or unexported fields
}
The language code that represents the language identified in your audio, including the associated confidence score. If you enabled channel identification in your request and each channel contained a different language, you will have more than one LanguageWithScore result.
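When several LanguageWithScore values come back, a consumer typically treats the highest-scoring entry as the dominant language. A sketch with a local stand-in for the SDK type (the dominant helper is illustrative, not an SDK function):

```go
package main

import "fmt"

// languageWithScore is a local stand-in for types.LanguageWithScore.
type languageWithScore struct {
	LanguageCode string
	Score        float64
}

// dominant returns the language code with the highest confidence score,
// and false if the slice is empty.
func dominant(langs []languageWithScore) (string, bool) {
	if len(langs) == 0 {
		return "", false
	}
	best := langs[0]
	for _, l := range langs[1:] {
		if l.Score > best.Score {
			best = l
		}
	}
	return best.LanguageCode, true
}

func main() {
	code, ok := dominant([]languageWithScore{
		{LanguageCode: "en-US", Score: 0.91},
		{LanguageCode: "fr-CA", Score: 0.42},
	})
	fmt.Println(code, ok) // en-US true
}
```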
type LimitExceededException ¶
type LimitExceededException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
Your client has exceeded one of the Amazon Transcribe limits. This is typically the audio length limit. Break your audio stream into smaller chunks and try your request again.
func (*LimitExceededException) Error ¶
func (e *LimitExceededException) Error() string
func (*LimitExceededException) ErrorCode ¶
func (e *LimitExceededException) ErrorCode() string
func (*LimitExceededException) ErrorFault ¶
func (e *LimitExceededException) ErrorFault() smithy.ErrorFault
func (*LimitExceededException) ErrorMessage ¶
func (e *LimitExceededException) ErrorMessage() string
type MediaEncoding ¶
type MediaEncoding string
const (
    MediaEncodingPcm     MediaEncoding = "pcm"
    MediaEncodingOggOpus MediaEncoding = "ogg-opus"
    MediaEncodingFlac    MediaEncoding = "flac"
)
Enum values for MediaEncoding
func (MediaEncoding) Values ¶
func (MediaEncoding) Values() []MediaEncoding
Values returns all known values for MediaEncoding. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalAlternative ¶
type MedicalAlternative struct {

    // Contains entities identified as personal health information (PHI) in your
    // transcription output.
    Entities []MedicalEntity

    // Contains words, phrases, or punctuation marks in your transcription output.
    Items []MedicalItem

    // Contains transcribed text.
    Transcript *string
    // contains filtered or unexported fields
}
A list of possible alternative transcriptions for the input audio. Each alternative may contain one or more of Items, Entities, or Transcript.
type MedicalContentIdentificationType ¶
type MedicalContentIdentificationType string
const (
MedicalContentIdentificationTypePhi MedicalContentIdentificationType = "PHI"
)
Enum values for MedicalContentIdentificationType
func (MedicalContentIdentificationType) Values ¶
func (MedicalContentIdentificationType) Values() []MedicalContentIdentificationType
Values returns all known values for MedicalContentIdentificationType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalEntity ¶
type MedicalEntity struct {

    // The category of information identified. The only category is PHI.
    Category *string

    // The confidence score associated with the identified PHI entity in your audio.
    //
    // Confidence scores are values between 0 and 1. A larger value indicates a higher
    // probability that the identified entity correctly matches the entity spoken in
    // your media.
    Confidence *float64

    // The word or words identified as PHI.
    Content *string

    // The end time, in milliseconds, of the utterance that was identified as PHI.
    EndTime float64

    // The start time, in milliseconds, of the utterance that was identified as PHI.
    StartTime float64
    // contains filtered or unexported fields
}
Contains entities identified as personal health information (PHI) in your transcription output, along with various associated attributes. Examples include category, confidence score, type, stability score, and start and end times.
type MedicalItem ¶
type MedicalItem struct {

    // The confidence score associated with a word or phrase in your transcript.
    //
    // Confidence scores are values between 0 and 1. A larger value indicates a higher
    // probability that the identified item correctly matches the item spoken in your
    // media.
    Confidence *float64

    // The word or punctuation that was transcribed.
    Content *string

    // The end time, in milliseconds, of the transcribed item.
    EndTime float64

    // If speaker partitioning is enabled, Speaker labels the speaker of the specified
    // item.
    Speaker *string

    // The start time, in milliseconds, of the transcribed item.
    StartTime float64

    // The type of item identified. Options are: PRONUNCIATION (spoken words) and
    // PUNCTUATION.
    Type ItemType
    // contains filtered or unexported fields
}
A word, phrase, or punctuation mark in your transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
type MedicalResult ¶
type MedicalResult struct {

    // A list of possible alternative transcriptions for the input audio. Each
    // alternative may contain one or more of Items, Entities, or Transcript.
    Alternatives []MedicalAlternative

    // Indicates the channel identified for the Result.
    ChannelId *string

    // The end time, in milliseconds, of the Result.
    EndTime float64

    // Indicates if the segment is complete.
    //
    // If IsPartial is true, the segment is not complete. If IsPartial is false, the
    // segment is complete.
    IsPartial bool

    // Provides a unique identifier for the Result.
    ResultId *string

    // The start time, in milliseconds, of the Result.
    StartTime float64
    // contains filtered or unexported fields
}
The Result associated with a MedicalTranscriptEvent.
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters. This can include information relating to alternative transcriptions, channel identification, partial result stabilization, language identification, and other transcription-related data.
type MedicalTranscript ¶
type MedicalTranscript struct {

    // Contains a set of transcription results from one or more audio segments, along
    // with additional information per your request parameters. This can include
    // information relating to alternative transcriptions, channel identification,
    // partial result stabilization, language identification, and other
    // transcription-related data.
    Results []MedicalResult
    // contains filtered or unexported fields
}
The MedicalTranscript associated with a MedicalTranscriptEvent.
MedicalTranscript contains Results , which contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type MedicalTranscriptEvent ¶
type MedicalTranscriptEvent struct {

    // Contains Results, which contains a set of transcription results from one or
    // more audio segments, along with additional information per your request
    // parameters. This can include information relating to alternative transcriptions,
    // channel identification, partial result stabilization, language identification,
    // and other transcription-related data.
    Transcript *MedicalTranscript
    // contains filtered or unexported fields
}
The MedicalTranscriptEvent associated with a MedicalTranscriptResultStream .
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type MedicalTranscriptResultStream ¶
type MedicalTranscriptResultStream interface {
// contains filtered or unexported methods
}
Contains detailed information about your streaming session.
The following types satisfy this interface:
MedicalTranscriptResultStreamMemberTranscriptEvent
Example (OutputUsage) ¶
package main

import (
    "e.coding.net/g-nnjn4981/aito/aws-sdk-go-v2/service/transcribestreaming/types"
    "fmt"
)

func main() {
    var union types.MedicalTranscriptResultStream
    // type switches can be used to check the union value
    switch v := union.(type) {
    case *types.MedicalTranscriptResultStreamMemberTranscriptEvent:
        _ = v.Value // Value is types.MedicalTranscriptEvent
    case *types.UnknownUnionMember:
        fmt.Println("unknown tag:", v.Tag)
    default:
        fmt.Println("union is nil or unknown type")
    }
}
Output:
type MedicalTranscriptResultStreamMemberTranscriptEvent ¶
type MedicalTranscriptResultStreamMemberTranscriptEvent struct {
    Value MedicalTranscriptEvent
    // contains filtered or unexported fields
}
The MedicalTranscriptEvent associated with a MedicalTranscriptResultStream .
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters. This can include information relating to alternative transcriptions, channel identification, partial result stabilization, language identification, and other transcription-related data.
type PartialResultsStability ¶
type PartialResultsStability string
const (
    PartialResultsStabilityHigh   PartialResultsStability = "high"
    PartialResultsStabilityMedium PartialResultsStability = "medium"
    PartialResultsStabilityLow    PartialResultsStability = "low"
)
Enum values for PartialResultsStability
func (PartialResultsStability) Values ¶
func (PartialResultsStability) Values() []PartialResultsStability
Values returns all known values for PartialResultsStability. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ParticipantRole ¶
type ParticipantRole string
const (
    ParticipantRoleAgent    ParticipantRole = "AGENT"
    ParticipantRoleCustomer ParticipantRole = "CUSTOMER"
)
Enum values for ParticipantRole
func (ParticipantRole) Values ¶
func (ParticipantRole) Values() []ParticipantRole
Values returns all known values for ParticipantRole. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type PointsOfInterest ¶
type PointsOfInterest struct {

    // Contains the timestamp ranges (start time through end time) of matched
    // categories and rules.
    TimestampRanges []TimestampRange
    // contains filtered or unexported fields
}
Contains the timestamps of matched categories.
type PostCallAnalyticsSettings ¶
type PostCallAnalyticsSettings struct {

    // The Amazon Resource Name (ARN) of an IAM role that has permissions to access
    // the Amazon S3 bucket that contains your input files. If the role that you
    // specify doesn't have the appropriate permissions to access the specified Amazon
    // S3 location, your request fails.
    //
    // IAM role ARNs have the format
    // arn:partition:iam::account:role/role-name-with-path. For example:
    // arn:aws:iam::111122223333:role/Admin. For more information, see [IAM ARNs].
    //
    // [IAM ARNs]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns
    //
    // This member is required.
    DataAccessRoleArn *string

    // The Amazon S3 location where you want your Call Analytics post-call
    // transcription output stored. You can use any of the following formats to
    // specify the output location:
    //
    //   - s3://DOC-EXAMPLE-BUCKET
    //
    //   - s3://DOC-EXAMPLE-BUCKET/my-output-folder/
    //
    //   - s3://DOC-EXAMPLE-BUCKET/my-output-folder/my-call-analytics-job.json
    //
    // This member is required.
    OutputLocation *string

    // Specify whether you want only a redacted transcript or both a redacted and an
    // unredacted transcript. If you choose redacted and unredacted, two JSON files
    // are generated and stored in the Amazon S3 output location you specify.
    //
    // Note that to include ContentRedactionOutput in your request, you must enable
    // content redaction (ContentRedactionType).
    ContentRedactionOutput ContentRedactionOutput

    // The KMS key you want to use to encrypt your Call Analytics post-call output.
    //
    // If using a key located in the current Amazon Web Services account, you can
    // specify your KMS key in one of four ways:
    //
    //   - Use the KMS key ID itself. For example,
    //     1234abcd-12ab-34cd-56ef-1234567890ab.
    //
    //   - Use an alias for the KMS key ID. For example, alias/ExampleAlias.
    //
    //   - Use the Amazon Resource Name (ARN) for the KMS key ID. For example,
    //     arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.
    //
    //   - Use the ARN for the KMS key alias. For example,
    //     arn:aws:kms:region:account-ID:alias/ExampleAlias.
    //
    // If using a key located in a different Amazon Web Services account than the
    // current Amazon Web Services account, you can specify your KMS key in one of
    // two ways:
    //
    //   - Use the ARN for the KMS key ID. For example,
    //     arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.
    //
    //   - Use the ARN for the KMS key alias. For example,
    //     arn:aws:kms:region:account-ID:alias/ExampleAlias.
    //
    // Note that the user making the request must have permission to use the
    // specified KMS key.
    OutputEncryptionKMSKeyId *string
    // contains filtered or unexported fields
}
Allows you to specify additional settings for your streaming Call Analytics post-call request, including output locations for your redacted and unredacted transcript, which IAM role to use, and, optionally, which encryption key to use.
DataAccessRoleArn and OutputLocation are required fields.
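The three accepted OutputLocation shapes above (bare bucket, folder prefix, or explicit .json object key) can be sanity-checked before starting a stream. The sketch below is an illustrative helper, not part of the SDK; the isValidOutputLocation name and its rules are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// isValidOutputLocation performs a rough structural check on an S3 URI
// of the kind accepted by PostCallAnalyticsSettings.OutputLocation:
// it must use the s3:// scheme and name a bucket. This is a local
// illustration, not an SDK function; the service performs its own
// validation.
func isValidOutputLocation(loc string) bool {
	if !strings.HasPrefix(loc, "s3://") {
		return false
	}
	rest := strings.TrimPrefix(loc, "s3://")
	if rest == "" || strings.HasPrefix(rest, "/") {
		return false // missing bucket name
	}
	return true
}

func main() {
	for _, loc := range []string{
		"s3://DOC-EXAMPLE-BUCKET",
		"s3://DOC-EXAMPLE-BUCKET/my-output-folder/",
		"s3://DOC-EXAMPLE-BUCKET/my-output-folder/my-call-analytics-job.json",
		"https://example.com/not-s3",
	} {
		fmt.Printf("%-70s valid=%v\n", loc, isValidOutputLocation(loc))
	}
}
```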
type Result ¶
type Result struct { // A list of possible alternative transcriptions for the input audio. Each // alternative may contain one or more of Items , Entities , or Transcript . Alternatives []Alternative // Indicates which audio channel is associated with the Result . ChannelId *string // The end time, in milliseconds, of the Result . EndTime float64 // Indicates if the segment is complete. // // If IsPartial is true , the segment is not complete. If IsPartial is false , the // segment is complete. IsPartial bool // The language code that represents the language spoken in your audio stream. LanguageCode LanguageCode // The language code of the dominant language identified in your stream. // // If you enabled channel identification and each channel of your audio contains a // different language, you may have more than one result. LanguageIdentification []LanguageWithScore // Provides a unique identifier for the Result . ResultId *string // The start time, in milliseconds, of the Result . StartTime float64 // contains filtered or unexported fields }
The Result associated with a TranscriptEvent.
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters. This can include information relating to alternative transcriptions, channel identification, partial result stabilization, language identification, and other transcription-related data.
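Because partial segments (IsPartial == true) are later replaced by their completed versions, consumers typically display partials as they arrive but persist only completed segments. The sketch below uses a minimal local struct mirroring the relevant Result fields (a stand-in for illustration, not the SDK type):

```go
package main

import "fmt"

// result is a local stand-in mirroring the Result fields used here;
// it is not the SDK type.
type result struct {
	ResultId   string
	IsPartial  bool
	Transcript string
}

// collectFinal keeps only completed segments (IsPartial == false). In
// a live stream, each segment's final version arrives once, replacing
// its earlier partial versions, so this yields one transcript per
// segment.
func collectFinal(results []result) []string {
	var finals []string
	for _, r := range results {
		if !r.IsPartial {
			finals = append(finals, r.Transcript)
		}
	}
	return finals
}

func main() {
	stream := []result{
		{ResultId: "a", IsPartial: true, Transcript: "hello wor"},
		{ResultId: "a", IsPartial: false, Transcript: "hello world"},
		{ResultId: "b", IsPartial: true, Transcript: "how are"},
	}
	fmt.Println(collectFinal(stream))
}
```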
type Sentiment ¶
type Sentiment string
type ServiceUnavailableException ¶
type ServiceUnavailableException struct { // contains filtered or unexported fields }
The service is currently unavailable. Try your request later.
func (*ServiceUnavailableException) Error ¶
func (e *ServiceUnavailableException) Error() string
func (*ServiceUnavailableException) ErrorCode ¶
func (e *ServiceUnavailableException) ErrorCode() string
func (*ServiceUnavailableException) ErrorFault ¶
func (e *ServiceUnavailableException) ErrorFault() smithy.ErrorFault
func (*ServiceUnavailableException) ErrorMessage ¶
func (e *ServiceUnavailableException) ErrorMessage() string
type Specialty ¶
type Specialty string
type TimestampRange ¶
type TimestampRange struct { // The time, in milliseconds, from the beginning of the audio stream to the start // of the category match. BeginOffsetMillis *int64 // The time, in milliseconds, from the beginning of the audio stream to the end of // the category match. EndOffsetMillis *int64 // contains filtered or unexported fields }
Contains the timestamp range (start time through end time) of a matched category.
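Because BeginOffsetMillis and EndOffsetMillis are millisecond offsets from the start of the audio stream, converting them to time.Duration makes them easier to format and compare. A small sketch (the matchDuration helper is an assumption, not an SDK function):

```go
package main

import (
	"fmt"
	"time"
)

// matchDuration converts a pair of millisecond offsets, as carried by
// TimestampRange, into the match's start offset and length as
// time.Durations.
func matchDuration(beginMillis, endMillis int64) (start, length time.Duration) {
	start = time.Duration(beginMillis) * time.Millisecond
	length = time.Duration(endMillis-beginMillis) * time.Millisecond
	return start, length
}

func main() {
	start, length := matchDuration(61500, 64250)
	fmt.Printf("category match at %v lasting %v\n", start, length)
}
```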
type Transcript ¶
type Transcript struct { // Contains a set of transcription results from one or more audio segments, along // with additional information per your request parameters. This can include // information relating to alternative transcriptions, channel identification, // partial result stabilization, language identification, and other // transcription-related data. Results []Result // contains filtered or unexported fields }
The Transcript associated with a TranscriptEvent.
Transcript contains Results , which contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type TranscriptEvent ¶
type TranscriptEvent struct { // Contains Results , which contains a set of transcription results from one or // more audio segments, along with additional information per your request // parameters. This can include information relating to alternative transcriptions, // channel identification, partial result stabilization, language identification, // and other transcription-related data. Transcript *Transcript // contains filtered or unexported fields }
The TranscriptEvent associated with a TranscriptResultStream .
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type TranscriptResultStream ¶
type TranscriptResultStream interface {
// contains filtered or unexported methods
}
Contains detailed information about your streaming session.
The following types satisfy this interface:
TranscriptResultStreamMemberTranscriptEvent
Example (OutputUsage) ¶
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
	var union types.TranscriptResultStream
	// type switches can be used to check the union value
	switch v := union.(type) {
	case *types.TranscriptResultStreamMemberTranscriptEvent:
		_ = v.Value // Value is types.TranscriptEvent
	case *types.UnknownUnionMember:
		fmt.Println("unknown tag:", v.Tag)
	default:
		fmt.Println("union is nil or unknown type")
	}
}
Output:
type TranscriptResultStreamMemberTranscriptEvent ¶
type TranscriptResultStreamMemberTranscriptEvent struct { Value TranscriptEvent // contains filtered or unexported fields }
Contains Transcript , which contains Results . The object contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type UnknownUnionMember ¶
type UnknownUnionMember struct { Tag string Value []byte // contains filtered or unexported fields }
UnknownUnionMember is returned when a union member is returned over the wire, but has an unknown tag.
type UtteranceEvent ¶
type UtteranceEvent struct { // The time, in milliseconds, from the beginning of the audio stream to the start // of the UtteranceEvent . BeginOffsetMillis *int64 // The time, in milliseconds, from the beginning of the audio stream to the end // of the UtteranceEvent . EndOffsetMillis *int64 // Contains entities identified as personally identifiable information (PII) in // your transcription output. Entities []CallAnalyticsEntity // Indicates whether the segment in the UtteranceEvent is complete ( FALSE ) or // partial ( TRUE ). IsPartial bool // Provides the issue that was detected in the specified segment. IssuesDetected []IssueDetected // Contains words, phrases, or punctuation marks that are associated with the // specified UtteranceEvent . Items []CallAnalyticsItem // Provides the role of the speaker for each audio channel, either CUSTOMER or // AGENT . ParticipantRole ParticipantRole // Provides the sentiment that was detected in the specified segment. Sentiment Sentiment // Contains transcribed text. Transcript *string // The unique identifier that is associated with the specified UtteranceEvent . UtteranceId *string // contains filtered or unexported fields }
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters, such as channel definitions, partial result stabilization, sentiment, and issue detection.
type VocabularyFilterMethod ¶
type VocabularyFilterMethod string
const ( VocabularyFilterMethodRemove VocabularyFilterMethod = "remove" VocabularyFilterMethodMask VocabularyFilterMethod = "mask" VocabularyFilterMethodTag VocabularyFilterMethod = "tag" )
Enum values for VocabularyFilterMethod
func (VocabularyFilterMethod) Values ¶
func (VocabularyFilterMethod) Values() []VocabularyFilterMethod
Values returns all known values for VocabularyFilterMethod. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
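Because Values can gain members in future SDK releases, code validating input against an enum should check membership dynamically rather than hard-coding the list. The sketch below reproduces the same pattern with a self-contained local string-typed enum (filterMethod and isKnown are illustrative names, not SDK identifiers):

```go
package main

import "fmt"

// filterMethod mimics the VocabularyFilterMethod pattern: a string
// type with a Values method listing the currently known members.
type filterMethod string

const (
	filterMethodRemove filterMethod = "remove"
	filterMethodMask   filterMethod = "mask"
	filterMethodTag    filterMethod = "tag"
)

// Values returns all known values; like the SDK's, it is only as up
// to date as this definition.
func (filterMethod) Values() []filterMethod {
	return []filterMethod{filterMethodRemove, filterMethodMask, filterMethodTag}
}

// isKnown checks a candidate against Values, so the check stays
// correct if the enum gains members later.
func isKnown(v filterMethod) bool {
	for _, known := range v.Values() {
		if v == known {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKnown("mask"), isKnown("delete"))
}
```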