System.Speech
Enumerates values that describe the bits-per-sample characteristic of an audio format.
The audio format has 8 bits per sample.
The audio format has 16 bits per sample.
Enumerates values that indicate the number of channels in the audio format.
The audio format has one channel.
The audio format has two channels.
Enumerates values that describe the encoding format of audio.
The encoding format of the audio is ALaw.
The encoding format of the audio is Pulse Code Modulation (PCM).
The encoding format of the audio is ULaw.
Represents information about an audio format.
Initializes a new instance of the class and specifies the samples per second, bits per sample, and the number of channels.
The value for the samples per second.
The value for the bits per sample.
A member of the AudioChannel enumeration, indicating Mono or Stereo.
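As a sketch, the simple constructor above can describe a common 16 kHz, 16-bit, mono PCM format (the specific values here are illustrative, not required by the API):

```csharp
using System;
using System.Speech.AudioFormat;

class Example
{
    static void Main()
    {
        // 16 kHz, 16 bits per sample, one channel; PCM is the default encoding.
        var format = new SpeechAudioFormatInfo(
            16000,                      // samples per second
            AudioBitsPerSample.Sixteen, // bits per sample
            AudioChannel.Mono);         // channel count

        // Derived characteristics are computed from the constructor arguments.
        Console.WriteLine(format.SamplesPerSecond);
        Console.WriteLine(format.AverageBytesPerSecond);
    }
}
```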
Initializes a new instance of the class and specifies the encoding format, samples per second, bits per sample, number of channels, average bytes per second, block alignment value, and an array containing format-specific data.
The encoding format.
The value for the samples per second.
The value for the bits per sample.
The value for the channel count.
The value for the average bytes per second.
The value for the block alignment.
A byte array containing the format-specific data.
Gets the average bytes per second of the audio.
The value for the average bytes per second.
Gets the bits per sample of the audio.
The value for the bits per sample.
Gets or sets the block alignment in bytes.
The value for the block alignment.
Gets the channel count of the audio.
The value for the channel count.
Gets the encoding format of the audio.
The encoding format of the audio.
Returns whether a given object is an instance of SpeechAudioFormatInfo and equal to the current instance of SpeechAudioFormatInfo.
The object to be compared.
Returns true if the current instance of SpeechAudioFormatInfo and the one obtained from the argument are equal; otherwise, returns false.
Returns the format-specific data of the audio format.
A byte array containing the format-specific data.
Returns the hash code of the audio format.
The value for the hash code.
Gets the samples per second of the audio format.
The value for the samples per second.
Provides data for the AudioLevelUpdated event of the SpeechRecognizer or the SpeechRecognitionEngine class.
Gets the new level of audio input after the SpeechRecognizer.AudioLevelUpdated or the SpeechRecognitionEngine.AudioLevelUpdated event is raised.
The new level of audio input.
Contains a list of possible problems in the audio signal coming in to a speech recognition engine.
No problems with audio input.
Audio input is not detected.
Audio input is too fast.
Audio input is too loud.
Audio input has too much background noise.
Audio input is too slow.
Audio input is too quiet.
Provides data for the AudioSignalProblemOccurred event of a SpeechRecognizer or a SpeechRecognitionEngine.
Gets the audio level associated with the event.
The level of audio input when the AudioSignalProblemOccurred event was raised.
Gets the position in the input device's audio stream that indicates where the problem occurred.
The position in the input device's audio stream when the AudioSignalProblemOccurred event was raised.
Gets the audio signal problem.
The audio signal problem that caused the AudioSignalProblemOccurred event to be raised.
Gets the position in the audio input that the recognizer has received that indicates where the problem occurred.
The position in the audio input that the recognizer has received when the AudioSignalProblemOccurred event was raised.
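A minimal handler for this event might log the problem and where it occurred; the recognizer setup below is an illustrative sketch:

```csharp
using System;
using System.Speech.Recognition;

class Example
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.SetInputToDefaultAudioDevice();
            recognizer.LoadGrammar(new DictationGrammar());

            recognizer.AudioSignalProblemOccurred += (sender, e) =>
            {
                // e.AudioSignalProblem is one of the values above,
                // e.g. TooLoud, TooQuiet, TooNoisy, NoSignal.
                Console.WriteLine($"{e.AudioSignalProblem} at {e.AudioPosition} " +
                                  $"(level {e.AudioLevel})");
            };

            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine(); // keep the process alive while recognizing
        }
    }
}
```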
Contains a list of possible states for the audio input to a speech recognition engine.
Receiving silence or non-speech background noise.
Receiving speech input.
Not processing audio input.
Provides data for the AudioStateChanged event of the SpeechRecognizer or the SpeechRecognitionEngine class.
Gets the new state of audio input to the recognizer.
The state of audio input after a SpeechRecognizer.AudioStateChanged or a SpeechRecognitionEngine.AudioStateChanged event is raised.
Represents a set of alternatives in the constraints of a speech recognition grammar.
Initializes a new instance of the class that contains an empty set of alternatives.
Initializes a new instance of the class from an array containing one or more GrammarBuilder objects.
An array of GrammarBuilder objects containing the set of alternatives.
Initializes a new instance of the class from an array containing one or more String objects.
An array of strings containing the set of alternatives.
Adds an array containing one or more GrammarBuilder objects to the set of alternatives.
The GrammarBuilder objects to add to this Choices object.
Adds an array containing one or more String objects to the set of alternatives.
The strings to add to this Choices object.
Returns a GrammarBuilder object from this Choices object.
A GrammarBuilder that matches this Choices object.
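A sketch of how Choices supplies alternatives to a grammar (the phrases and names are placeholder values):

```csharp
using System.Speech.Recognition;

class Example
{
    static void Main()
    {
        // Matches "call Anna", "call Ben", or "call Clara".
        var names = new Choices(new string[] { "Anna", "Ben", "Clara" });

        var builder = new GrammarBuilder("call");
        builder.Append(names);

        // ToGrammarBuilder produces the equivalent element explicitly.
        GrammarBuilder equivalent = names.ToGrammarBuilder();

        var grammar = new Grammar(builder) { Name = "CallSomeone" };
    }
}
```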
Represents a speech recognition grammar used for free text dictation.
Initializes a new instance of the class for the default dictation grammar provided by Windows Desktop Speech Technology.
Initializes a new instance of the class with a specific dictation grammar.
An XML-compliant Universal Resource Identifier (URI) that specifies the dictation grammar, either grammar:dictation or grammar:dictation#spelling.
Adds a context to a dictation grammar that has been loaded by a SpeechRecognizer or a SpeechRecognitionEngine object.
Text that indicates the start of a dictation context.
Text that indicates the end of a dictation context.
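A sketch of loading a dictation grammar and adding a context to it (the context strings are illustrative):

```csharp
using System.Speech.Recognition;

class Example
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.SetInputToDefaultAudioDevice();

            // "grammar:dictation#spelling" would select the spelling grammar instead.
            var dictation = new DictationGrammar();
            recognizer.LoadGrammar(dictation);

            // The grammar must be loaded before a context can be added.
            // Passing null for one bound leaves that side of the context open.
            dictation.SetDictationContext("Dear", null);
        }
    }
}
```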
Lists the options that the object can use to specify white space for the display of a word or punctuation mark.
The item has no spaces preceding it.
The item does not specify how white space is handled.
The item has one space following it.
The item has two spaces following it.
The item has no spaces following it.
Provides data for the EmulateRecognizeCompleted event of the SpeechRecognizer and SpeechRecognitionEngine classes.
Gets the results of emulated recognition.
Detailed information about the results of recognition, or null if an error occurred.
A runtime object that references a speech recognition grammar, which an application can use to define the constraints for speech recognition.
Initializes a new instance of the class.
Initializes a new instance of the class from a .
A stream that describes a speech recognition grammar in a supported format.
The stream describes a grammar that does not contain a root rule.
The stream is null.
The stream does not contain a valid description of a grammar, or describes a grammar that contains a rule reference that cannot be resolved.
Initializes a new instance of the class from a and specifies a root rule.
A stream that describes a speech recognition grammar in a supported format.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the grammar description.
The specified rule cannot be resolved or is not public, or the rule name is null and the grammar description does not define a root rule.
The stream is null.
The stream does not contain a valid description or describes a grammar that contains a rule reference that cannot be resolved.
Initializes a new instance of the class from a and specifies a root rule.
A Stream connected to an input/output object (including files, Visual Studio resources, and DLLs) that contains a grammar specification.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the grammar description.
Parameters to be passed to the initialization handler specified by the property for the entry point or the root rule of the to be created. This parameter may be null.
The stream is connected to a grammar that:
Does not contain the specified rule;
Requires initialization parameters different from those specified;
Contains a relative rule reference that cannot be resolved by the default base rule for grammars.
Initializes a new instance of the class from a stream, specifies a root rule, and defines a base Uniform Resource Identifier (URI) to resolve relative rule references.
A stream that describes a speech recognition grammar in a supported format.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the grammar description.
The base URI to use to resolve any relative rule reference in the grammar description, or null.
The specified rule cannot be resolved or is not public, or the rule name is null and the grammar description does not define a root rule.
The stream is null.
The stream does not contain a valid description or describes a grammar that contains a rule reference that cannot be resolved.
Initializes a new instance of the class from a stream, and specifies a root rule and a base URI to resolve relative references.
A connected to an input/output object (including files, VisualStudio Resources, and DLLs) that contains a grammar specification.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the grammar description.
The base URI to use to resolve any relative rule reference in the grammar description, or null.
Parameters to be passed to the initialization handler specified by the property for the entry point or the root rule of the to be created. This parameter may be null.
Any of the parameters contain an invalid value.
The stream is connected to a grammar that does not contain the specified rule.
The contents of the array parameters do not match the arguments of any of the rule's initialization handlers.
The grammar contains a relative rule reference that cannot be resolved by the default base rule for grammars or the supplied base URI.
Initializes a new instance of the class from a object.
An instance of that contains the constraints for the speech recognition grammar.
Initializes a new instance of a class from an object.
The constraints for the speech recognition grammar.
The SrgsDocument does not contain a root rule.
The SrgsDocument is null.
The SrgsDocument contains a rule reference that cannot be resolved.
Initializes a new instance of a class from an object and specifies a root rule.
The constraints for the speech recognition grammar.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the SrgsDocument.
The specified rule cannot be resolved or is not public, or the rule name is null and the SrgsDocument does not contain a root rule.
The SrgsDocument is null.
The SrgsDocument contains a rule reference that cannot be resolved.
Initializes a new instance of the class from an instance of , and specifies the name of a rule to be the entry point to the grammar.
An instance of that contains the constraints for the speech recognition grammar.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the grammar description.
Parameters to be passed to the initialization handler specified by the property for the entry point or the root rule of the to be created. This parameter may be null.
Any of the parameters contain an invalid value.
The specified by does not contain the rule specified by .
The contents of the array parameters do not match the arguments of any of the rule's initialization handlers.
Initializes a new instance of a class from an object, specifies a root rule, and defines a base Uniform Resource Identifier (URI) to resolve relative rule references.
The constraints for the speech recognition grammar.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the SrgsDocument.
The base URI to use to resolve any relative rule reference in the SrgsDocument, or null.
The specified rule cannot be resolved or is not public, or the rule name is null and the SrgsDocument does not contain a root rule.
The SrgsDocument is null.
The SrgsDocument contains a rule reference that cannot be resolved.
Initializes a new instance of the class from an instance of , and specifies the name of a rule to be the entry point to the grammar and a base URI to resolve relative references.
An instance of that contains the constraints for the speech recognition grammar.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the grammar description.
The base URI to use to resolve any relative rule reference in the grammar description, or null.
Parameters to be passed to the initialization handler specified by the property for the entry point or the root rule of the to be created. This parameter may be null.
Any of the parameters contain an invalid value.
The SrgsDocument does not contain the specified rule.
The contents of the array parameters do not match the arguments of any of the rule's initialization handlers.
The grammar has a relative rule reference that cannot be resolved by the default base rule for grammars or the supplied base URI.
Initializes a new instance of the class from a file.
The path of the file that describes a speech recognition grammar in a supported format.
The path contains the empty string (""), or the file describes a grammar that does not contain a root rule.
The path is null.
The file does not contain a valid description, or describes a grammar that contains a rule reference that cannot be resolved.
Initializes a new instance of the class from a file and specifies a root rule.
The path of the file that describes a speech recognition grammar in a supported format.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the grammar description.
The specified rule cannot be resolved or is not public, the rule name is the empty string (""), or the rule name is null and the grammar description does not define a root rule.
The path is null.
The file does not contain a valid description or describes a grammar that contains a rule reference that cannot be resolved.
Initializes a new instance of the class from a file that contains a grammar definition, and specifies the name of a rule to be the entry point to the grammar.
The path to a file, including DLLs, that contains a grammar specification.
The identifier of the rule to use as the entry point of the speech recognition grammar, or null to use the default root rule of the grammar description.
Parameters to be passed to the initialization handler specified by the property for the entry point or the root rule of the to be created. This parameter may be null.
Any of the parameters contain an invalid value.
The specified file does not contain a valid grammar or the specified rule.
The contents of the array parameters do not match the arguments of any of the rule's initialization handlers.
The grammar has a relative rule reference that cannot be resolved by the default base rule for grammars.
Gets or sets a value that controls whether a can be used by a speech recognizer to perform recognition.
The property returns true if a speech recognizer can perform recognition using the speech recognition grammar; otherwise, the property returns false. The default is true.
Gets whether a grammar is strongly typed.
The property returns true if the grammar is strongly typed; otherwise, the property returns false.
Gets whether a has been loaded by a speech recognizer.
The property returns true if the referenced speech recognition grammar is currently loaded in a speech recognizer; otherwise, the property returns false. The default is false.
The method returns a localized instance of a Grammar object derived from the specified type.
In an assembly, the Type of an object based on Grammar.
Parameters to be passed to an initialization method of the localized object based on the specified type. This parameter may be null.
The method returns a valid Grammar object based on the specified type, or null if there has been an error.
Gets or sets the name of a Grammar object.
The property returns the name of the Grammar object. The default is null.
Gets or sets the priority value of a Grammar object.
The property returns an integer value that represents the relative priority of a specific . The range is from -128 to 127 inclusive. The default is 0.
Gets or sets a value with the name of a binary resource that was used to load the current Grammar.
The property returns the name of the binary resource from which the strongly typed grammar was loaded.
Gets the name of the root rule or entry point of a Grammar object.
The property returns the identifier for the root rule of the referenced speech recognition grammar. The default is null.
Raised when a speech recognizer performs recognition using the Grammar object.
The method initializes a strongly-typed grammar.
Parameters to be passed to initialize the strongly-typed grammar. This parameter may be null.
Gets or sets the weight value of a Grammar object.
The property returns a floating point value indicating the relative weight that a recognition engine instance should assign to the grammar when processing speech input. The range is from 0.0 to 1.0 inclusive. The default is 1.0.
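The constructors and properties above can be exercised together; in this sketch, "cities.grxml" and the rule name "Main" are placeholder values, not part of the API:

```csharp
using System;
using System.Speech.Recognition;

class Example
{
    static void Main()
    {
        var grammar = new Grammar("cities.grxml", "Main")
        {
            Name = "Cities",
            Priority = 10,   // -128 through 127; default 0
            Weight = 0.8f    // 0.0 through 1.0; default 1.0
        };

        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.SetInputToDefaultAudioDevice();
            recognizer.LoadGrammar(grammar);
            Console.WriteLine(grammar.Loaded); // true once loading completes
        }
    }
}
```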
Provides a mechanism for programmatically building the constraints for a speech recognition grammar.
Initializes a new, empty instance of the class.
Initializes a new instance of the class from a set of alternatives.
The set of alternatives.
Initializes a new instance of the class from a repeated element.
The repeated element.
The minimum number of times that input matching the element defined by must occur to constitute a match.
The maximum number of times that input matching the element defined by can occur to constitute a match.
Initializes a new instance of the class from a semantic key.
The semantic key.
Initializes a new instance of the class from a semantic value.
The semantic value or name/value pair.
Initializes a new instance of the class from a sequence of words.
The sequence of words.
Initializes a new instance of the class from the sequence of words in a and specifies how many times the can be repeated.
The repeated sequence of words.
The minimum number of times that input matching the phrase must occur to constitute a match.
The maximum number of times that input matching the phrase can occur to constitute a match.
Initializes a new instance of the class for a subset of a sequence of words.
The sequence of words.
The matching mode the speech recognition grammar uses to recognize the phrase.
Creates a new that contains a object followed by a object.
The first grammar element, which represents a set of alternatives.
The second grammar element.
A for the sequence of the element followed by the element.
Creates a new that contains a object followed by a object.
The first grammar element.
The second grammar element, which represents a set of alternatives.
A for the sequence of the element followed by the element.
Creates a new that contains a sequence of two objects.
The first grammar element.
The second grammar element.
A for the sequence of the element followed by the element.
Creates a new that contains a object followed by a phrase.
The first grammar element.
The second grammar element, which represents a sequence of words.
A for the sequence of the element followed by the element.
Creates a new that contains a phrase followed by a object.
The first grammar element, which represents a sequence of words.
The second grammar element.
A for the sequence of the element followed by the element.
Appends a set of alternatives to the current sequence of grammar elements.
The set of alternatives to append.
Appends a grammar element to the current sequence of grammar elements.
The grammar element to append.
Appends a repeated grammar element to the current sequence of grammar elements.
The repeated grammar element to append.
The minimum number of times that input matching the element defined by must occur to constitute a match.
The maximum number of times that input matching the element defined by can occur to constitute a match.
Appends a semantic key to the current sequence of grammar elements.
The semantic key to append.
Appends a semantic value to the current sequence of grammar elements.
The semantic value to append.
Appends a phrase to the current sequence of grammar elements.
The sequence of words to append.
Appends a repeated phrase to the current sequence of grammar elements.
The repeated sequence of words to append.
The minimum number of times that input matching must occur to constitute a match.
The maximum number of times that input matching can occur to constitute a match.
Appends an element for a subset of a phrase to the current sequence of grammar elements.
The sequence of words to append.
The matching mode the grammar uses to recognize the phrase.
Appends the default dictation grammar to the current sequence of grammar elements.
Appends the specified dictation grammar to the current sequence of grammar elements.
The category of the dictation grammar to append.
Appends a grammar definition file to the current sequence of grammar elements.
The path or Universal Resource Identifier (URI) of the file that describes a speech recognition grammar in a supported format.
Appends the specified rule of a grammar definition file to the current sequence of grammar elements.
The file path or Universal Resource Identifier (URI) of the file that describes a speech recognition grammar in a supported format.
The identifier of the rule to append, or null to append the default root rule of the grammar file.
Appends a recognition grammar element that matches any input to the current sequence of grammar elements.
Gets or sets the culture of the speech recognition grammar.
The culture of the GrammarBuilder. The default is the executing thread's CurrentCulture property.
Gets a string that shows the contents and structure of the grammar contained by the GrammarBuilder.
The current content and structure of the GrammarBuilder.
Creates a new that contains a object followed by a object.
The first grammar element, which represents a set of alternatives.
The second grammar element.
Returns a for the sequence of the parameter followed by the parameter.
Creates a new that contains a followed by a .
The first grammar element.
The second grammar element, which represents a set of alternative elements.
Returns a for the sequence of the parameter followed by the parameter.
Creates a new that contains a sequence of two objects.
The first grammar element.
The second grammar element.
Returns a for the sequence of the parameter followed by the parameter.
Creates a new that contains a followed by a phrase.
The first grammar element.
The second grammar element, which represents a sequence of words.
Returns a for the sequence of the parameter followed by the parameter.
Creates a new that contains a phrase followed by a .
The first grammar element, which represents a sequence of words.
The second grammar element.
Returns a for the sequence of the parameter followed by the parameter.
Converts a object to a object.
The set of alternatives to convert.
The converted object.
Converts a object to a object.
The semantic key to convert.
The converted object.
Converts a object to a object.
The object to convert.
The converted object.
Converts a string to a object.
The string to convert.
The converted string.
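The Append methods, the + operator, and the implicit conversions above can build the same grammar in several ways; this sketch uses placeholder phrases:

```csharp
using System.Speech.Recognition;

class Example
{
    static void Main()
    {
        // Element by element: "please {open|close} the {door|window}".
        var builder = new GrammarBuilder();
        builder.Append("please");
        builder.Append(new Choices("open", "close"));
        builder.Append("the");
        builder.Append(new Choices("door", "window"));

        // The + operator and the implicit conversions express the same
        // sequence: a string or a Choices converts to a GrammarBuilder.
        GrammarBuilder same =
            new GrammarBuilder("please") + new Choices("open", "close");
        same += "the";
        same.Append(new Choices("door", "window"));
    }
}
```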
Provides data for the LoadGrammarCompleted event of a SpeechRecognizer or SpeechRecognitionEngine object.
The Grammar object that has completed loading.
The Grammar that was loaded by the recognizer.
Provides information about speech recognition events.
Gets the recognition result data associated with the speech recognition event.
The property returns the RecognitionResult that contains the information about the recognition.
Contains detailed information about input that was recognized by instances of SpeechRecognizer or SpeechRecognitionEngine.
Gets the collection of possible matches for input to the speech recognizer.
A read-only collection of the recognition alternates.
Gets the audio associated with the recognition result.
The audio associated with the recognition result, or null if the recognizer generated the result from a call to the EmulateRecognize or EmulateRecognizeAsync methods of a SpeechRecognizer or SpeechRecognitionEngine instance.
Gets a section of the audio that is associated with a specific range of words in the recognition result.
The first word in the range.
The last word in the range.
The section of audio associated with the word range.
The recognizer generated the result from a call to the EmulateRecognize or EmulateRecognizeAsync methods of the SpeechRecognizer or SpeechRecognitionEngine objects.
Populates a instance with the data needed to serialize the target object.
The object to populate with data.
The destination for the serialization.
Provides data for the event raised by a or a object.
Gets the location in the input device's audio stream associated with the event.
The location in the input device's audio stream associated with the event.
Gets a value that indicates whether a babble timeout generated the event.
true if the SpeechRecognitionEngine has detected only background noise for longer than was specified by its BabbleTimeout property; otherwise, false.
Gets a value that indicates whether an initial silence timeout generated the event.
true if the SpeechRecognitionEngine has detected only silence for a longer time period than was specified by its InitialSilenceTimeout property; otherwise, false.
Gets a value indicating whether the input stream ended.
true if the recognizer no longer has audio input; otherwise, false.
Gets the recognition result.
The recognition result if the recognition operation succeeded; otherwise, null.
Represents audio input that is associated with a RecognitionResult.
Gets the location in the input audio stream for the start of the recognized audio.
The location in the input audio stream for the start of the recognized audio.
Gets the duration of the input audio stream for the recognized audio.
The duration within the input audio stream for the recognized audio.
Gets the format of the audio processed by a recognition engine.
The format of the audio processed by the speech recognizer.
Selects and returns a section of the current recognized audio as binary data.
The starting point of the audio data to be returned.
The length of the segment to be returned.
Returns a subsection of the recognized audio, as defined by the specified starting point and length.
The starting point and length define a segment of audio outside the range of the current segment.
The current recognized audio contains no data.
Gets the system time at the start of the recognition operation.
The system time at the start of the recognition operation.
Writes the entire audio to a stream as raw data.
The stream that will receive the audio data.
Writes audio to a stream in Wave format.
The stream that will receive the audio data.
Contains detailed information, generated by the speech recognizer, about the recognized input.
Gets a value, assigned by the recognizer, that represents the likelihood that a RecognizedPhrase matches a given input.
A relative measure of the certainty of correct recognition of a phrase. The value is from 0.0 to 1.0, for low to high confidence, respectively.
Returns a semantic markup language (SML) document for the semantic information in the RecognizedPhrase object.
Returns an SML description of the semantics of the RecognizedPhrase as an XPath-navigable object.
Gets the Grammar that the speech recognizer used to return the RecognizedPhrase.
The grammar object that the speech recognizer used to identify the input.
Gets the identifier for the homophone group for the phrase.
The identifier for the homophone group for the phrase.
Gets a collection of the recognition alternates that have the same pronunciation as this recognized phrase.
A read-only collection of the recognition alternates that have the same pronunciation as this recognized phrase.
Gets information about the text that the speech recognizer changed as part of speech-to-text normalization.
A collection of objects that describe sections of text that the speech recognizer replaced when it normalized the recognized input.
Gets the semantic information that is associated with the recognized phrase.
The semantic information associated with the recognized phrase.
Gets the normalized text generated by a speech recognizer from recognized input.
The normalized text generated by a speech recognizer from recognized input.
Gets the words generated by a speech recognizer from recognized input.
The collection of objects generated by a speech recognizer for recognized input.
Provides the atomic unit of recognized speech.
Initializes a new instance of the class.
The normalized text for a recognized word.
This value can be null or the empty string ("").
A value from 0.0 through 1.0 indicating the certainty of word recognition.
The phonetic spelling of a recognized word.
This value can be null or the empty string ("").
The unnormalized text for a recognized word.
This argument is required and may not be null or the empty string ("").
Defines the use of white space to display recognized words.
The location of the recognized word in the audio input stream.
This value can be .
The length of the audio input corresponding to the recognized word.
This value can be .
Gets a value, assigned by the recognizer, that represents the likelihood that a recognized word matches a given input.
A relative measure of the certainty of correct recognition for a word. The value is from 0.0 to 1.0, for low to high confidence, respectively.
Gets formatting information used to create the text output from the current instance.
Specifies the use of white space to display the contents of a RecognizedWordUnit object.
Gets the unnormalized text of a recognized word.
Returns a string containing the text of a recognized word, without any normalization.
Gets the phonetic spelling of a recognized word.
A string of characters from a supported phonetic alphabet, such as the International Phonetic Alphabet (IPA) or the Universal Phone Set (UPS).
Gets the normalized text for a recognized word.
A string that contains the normalized text output for a given input word.
Enumerates values of the recognition mode.
Specifies that recognition does not terminate after completion.
Specifies that recognition terminates after completion.
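The result, word-unit, and mode members above can be exercised together in a handler sketch; the dictation setup here is illustrative:

```csharp
using System;
using System.Speech.Recognition;

class Example
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.SetInputToDefaultAudioDevice();
            recognizer.LoadGrammar(new DictationGrammar());

            recognizer.SpeechRecognized += (sender, e) =>
            {
                RecognitionResult result = e.Result;
                Console.WriteLine($"\"{result.Text}\" ({result.Confidence:F2})");

                // Each alternate is a RecognizedPhrase with its own text.
                foreach (RecognizedPhrase alternate in result.Alternates)
                    Console.WriteLine($"  alternate: {alternate.Text}");

                // Word-level detail for the recognized input.
                foreach (RecognizedWordUnit word in result.Words)
                    Console.WriteLine($"  word: {word.Text}");
            };

            // Multiple keeps recognizing; Single stops after one result.
            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine();
        }
    }
}
```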
Represents information about a SpeechRecognizer or SpeechRecognitionEngine instance.
Gets additional information about a SpeechRecognizer or SpeechRecognitionEngine instance.
Returns a collection containing information about the configuration of a SpeechRecognizer or SpeechRecognitionEngine object.
Gets the culture supported by a SpeechRecognizer or SpeechRecognitionEngine instance.
Returns information about the culture supported by a given SpeechRecognizer or SpeechRecognitionEngine instance.
Gets the description of a SpeechRecognizer or SpeechRecognitionEngine instance.
Returns a string that describes the configuration for a specific SpeechRecognizer or SpeechRecognitionEngine instance.
Disposes the RecognizerInfo object.
Gets the identifier of a SpeechRecognizer or SpeechRecognitionEngine instance.
Returns the identifier for a specific SpeechRecognizer or SpeechRecognitionEngine instance.
Gets the friendly name of a SpeechRecognizer or SpeechRecognitionEngine instance.
Returns the friendly name for a specific SpeechRecognizer or SpeechRecognitionEngine instance.
Gets the audio formats supported by a SpeechRecognizer or SpeechRecognitionEngine instance.
Returns a list of audio formats supported by a specific SpeechRecognizer or SpeechRecognitionEngine instance.
Enumerates values of the recognizer's state.
The recognition engine is available to receive and analyze audio input.
The recognition engine is not receiving or analyzing audio input.
Provides data for a SpeechRecognizer.RecognizerUpdateReached or a SpeechRecognitionEngine.RecognizerUpdateReached event.
Gets the audio position associated with the event.
Returns the location within the speech buffer of a SpeechRecognizer or a SpeechRecognitionEngine when it pauses and raises a RecognizerUpdateReached event.
Gets the UserToken passed to the system when an application calls RequestRecognizerUpdate.
Returns an object that contains the UserToken.
Contains information about a speech normalization procedure that has been performed on recognition results.
Gets the number of recognized words replaced by the speech normalization procedure.
Returns the number of recognized words replaced by the speech normalization procedure.
Gets information about the leading and trailing spaces for the text replaced by the speech normalization procedure.
Returns a object that specifies the use of white space to display text replaced by normalization.
Gets the location of the first recognized word replaced by the speech normalization procedure.
Returns the location of the first recognized word replaced by the speech normalization procedure.
Gets the recognized text replaced by the speech normalization procedure.
Returns the recognized text replaced by the speech normalization procedure.
Associates a key string with values to define objects.
Assigns a semantic key to one or more GrammarBuilder objects used to create a speech recognition grammar.
The tag to be used as a semantic key to access the instance associated with the objects specified by the argument.
An array of grammar components that will be associated with a object accessible with the tag defined in .
Assigns a semantic key to one or more String instances used to create a speech recognition grammar.
The tag to be used to access the instance associated with the objects specified by the argument.
One or more objects, whose concatenated text will be associated with a object accessible with the tag defined in .
Returns an instance of GrammarBuilder constructed from the current SemanticResultKey instance.
Represents a semantic value and optionally associates the value with a component of a speech recognition grammar.
Initializes a new instance of the class and specifies a semantic value.
The value managed by SemanticResultValue. Must be of type bool, int, float, or string.
Initializes a new instance of the class and associates a semantic value with a object.
A grammar component to be used in recognition.
The value managed by SemanticResultValue. Must be of type bool, int, float, or string.
Initializes a new instance of the class and associates a semantic value with a object.
A phrase to be used in recognition.
The value managed by SemanticResultValue. Must be of type bool, int, float, or string.
Returns an instance of constructed from the current instance.
Returns an instance of constructed from the current instance.
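As a sketch of how `SemanticResultKey` and `SemanticResultValue` combine when building a grammar (the city names and the key string `"origin"` below are illustrative choices, not part of this reference):

```csharp
using System.Speech.Recognition;

// Each recognizable phrase carries a semantic value; the key "origin"
// makes the selected value retrievable from the recognition result.
Choices cities = new Choices(
    new SemanticResultValue("Seattle", "SEA"),
    new SemanticResultValue("Boston", "BOS"));

GrammarBuilder builder = new GrammarBuilder("Fly from");
builder.Append(new SemanticResultKey("origin", cities));

Grammar grammar = new Grammar(builder);
```

Saying "Fly from Boston" against this grammar would yield a semantic value of `"BOS"` under the key `"origin"`.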
Represents the semantic organization of a recognized phrase.
Initializes a new instance of the class and specifies a semantic value.
The information to be stored in the object.
Initializes a new instance of the class and specifies a semantic value, a key name, and a confidence level.
A key that can be used to reference this instance.
An object containing information to be stored in the object.
A containing an estimate of the certainty of semantic analysis.
Returns a relative measure of the certainty as to the correctness of the semantic parsing that returned the current instance of .
Returns a that is a relative measure of the certainty of semantic parsing that returned the current instance of .
Indicates whether the current instance collection contains a specific key and a specific instance of expressed as a key/value pair.
An instance of instantiated for a given value of a key string and a instance.
Returns a which is if the current contains an instance of KeyValuePair<String, SemanticValue> for a specified value of the key string and the . Otherwise, is returned.
Indicates whether the current instance collection contains a child instance with a given key string.
containing the key string used to identify a child instance of under the current .
Returns a , if a child instance tagged with the string is found, if not.
Returns the number of child objects under the current instance.
The number of child objects under the current .
Determines whether a specified object is an instance of SemanticValue and equal to the current instance of SemanticValue.
The object to evaluate.
if the specified Object is equal to the current Object; otherwise, .
Provides a hash code for a SemanticValue object.
A hash code for the current object.
Returns child instances that belong to the current .
A key for a contained in the current instance of .
Returns a child of the current that can be indexed as part of a key value pair: KeyValuePair<String,SemanticValue>.
Thrown if no child member of the current instance of has the key matching the parameter.
Thrown if code attempts to change the at a given index.
Adds the specified key and to the collection.
A key for a .
Removes all key/value pairs from the collection.
Copies a key/value pair to a specific location in a targeted array.
The array of key/value pairs that is the target of the operation.
An integer that specifies the location in the array to which the key/value pair will be copied.
Gets a value that indicates whether the collection is read-only.
Returns a value that indicates whether the collection is read-only.
Removes the specified key and from the collection.
A key for a .
if the key/value pair was successfully removed from the collection; otherwise, . This method also returns if the key/value pair is not found in the collection.
Adds the specified key and to the dictionary.
A key for a .
The to add.
Gets a collection that contains the keys from a dictionary of key/value pairs.
A collection that contains the keys from a dictionary of key/value pairs.
Removes the specified key and from the dictionary.
A key for a .
if the key/value pair was successfully removed from the dictionary; otherwise, . This method also returns if the key/value pair is not found in the dictionary.
Gets the associated with the specified key.
A key for a .
The to get.
if the dictionary contains a key/value pair with the specified key; otherwise, .
Gets a collection that contains the values from a dictionary of key/value pairs.
A collection that contains the values from a dictionary of key/value pairs.
Returns an enumerator that iterates through a collection.
An enumerator that iterates through a collection.
Returns an enumerator that iterates through a collection.
Returns an enumerator that iterates through a collection.
A read-only property that returns the information contained in the current .
Returns an instance containing the information stored in the current instance.
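A minimal sketch of reading a `SemanticValue` from a recognition result, assuming a loaded grammar that defines a key named `"origin"` (the key name is hypothetical):

```csharp
using System;
using System.Speech.Recognition;

void Recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    SemanticValue semantics = e.Result.Semantics;
    if (semantics.ContainsKey("origin"))
    {
        // The indexer returns a child SemanticValue; Value holds its data
        // and Confidence estimates the certainty of the semantic parse.
        object origin = semantics["origin"].Value;
        float confidence = semantics["origin"].Confidence;
        Console.WriteLine($"origin = {origin} ({confidence:F2})");
    }
}
```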
Returns data from or events.
Gets the position in the audio stream where speech was detected.
Returns the location of a detected phrase within a recognition engine's speech buffer.
Returns notification from or events.
This class supports the .NET Framework infrastructure and is not intended to be used directly from application code.
Provides the means to access and manage an in-process speech recognition engine.
Initializes a new instance of the class using the default speech recognizer for the system.
Initializes a new instance of the class using the default speech recognizer for a specified locale.
The locale that the speech recognizer must support.
None of the installed speech recognizers support the specified locale, or is the invariant culture.
is .
Initializes a new instance of the using the information in a object to specify the recognizer to use.
The information for the specific speech recognizer.
Initializes a new instance of the class with a string parameter that specifies the name of the recognizer to use.
The token name of the speech recognizer to use.
No speech recognizer with that token name is installed, or is the empty string ("").
is .
Gets the format of the audio being received by the .
The format of audio at the input to the instance, or if the input is not configured or set to the null input.
Gets the level of the audio being received by the .
The audio level of the input to the speech recognizer, from 0 through 100.
Raised when the reports the level of its audio input.
Gets the current location in the audio stream being generated by the device that is providing input to the .
The current location in the audio stream being generated by the input device.
Raised when the detects a problem in the audio signal.
Gets the state of the audio being received by the .
The state of the audio input to the speech recognizer.
Raised when the state changes in the audio being received by the .
Gets or sets the time interval during which a accepts input containing only background noise, before finalizing recognition.
The duration of the time interval.
This property is set to less than 0 seconds.
Disposes the object.
Disposes the object and releases resources used during the session.
to release both managed and unmanaged resources; to release only unmanaged resources.
Emulates input of specific words to the speech recognizer, using text in place of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the words and the loaded speech recognition grammars.
An array of word units that contains the input for the recognition operation.
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
The result for the recognition operation, or if the operation is not successful or the recognizer is not enabled.
The recognizer has no speech recognition grammars loaded.
is .
contains one or more elements.
contains the , , or flag.
Emulates input of a phrase to the speech recognizer, using text in place of audio for synchronous speech recognition.
The input for the recognition operation.
The result for the recognition operation, or if the operation is not successful or the recognizer is not enabled.
The recognizer has no speech recognition grammars loaded.
is .
is the empty string ("").
Emulates input of a phrase to the speech recognizer, using text in place of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the phrase and the loaded speech recognition grammars.
The input phrase for the recognition operation.
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
The result for the recognition operation, or if the operation is not successful or the recognizer is not enabled.
The recognizer has no speech recognition grammars loaded.
is .
is the empty string ("").
contains the , , or flag.
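A sketch of synchronous emulated recognition with a Unicode comparison option; the grammar contents are illustrative, and the result depends on the installed recognizer:

```csharp
using System;
using System.Globalization;
using System.Speech.Recognition;

using (var recognizer = new SpeechRecognitionEngine(new CultureInfo("en-US")))
{
    recognizer.LoadGrammar(
        new Grammar(new GrammarBuilder(new Choices("red", "green"))));

    // Emulated input uses text in place of audio, so no input device
    // needs to be configured. IgnoreCase lets "GREEN" match "green".
    RecognitionResult result =
        recognizer.EmulateRecognize("GREEN", CompareOptions.IgnoreCase);

    Console.WriteLine(result != null ? result.Text : "(no match)");
}
```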
Emulates input of specific words to the speech recognizer, using an array of objects in place of audio for asynchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the words and the loaded speech recognition grammars.
An array of word units that contains the input for the recognition operation.
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
The recognizer has no speech recognition grammars loaded, or the recognizer has an asynchronous recognition operation that is not yet complete.
is .
contains one or more elements.
contains the , , or flag.
Emulates input of a phrase to the speech recognizer, using text in place of audio for asynchronous speech recognition.
The input for the recognition operation.
The recognizer has no speech recognition grammars loaded, or the recognizer has an asynchronous recognition operation that is not yet complete.
is .
is the empty string ("").
Emulates input of a phrase to the speech recognizer, using text in place of audio for asynchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the phrase and the loaded speech recognition grammars.
The input phrase for the recognition operation.
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
The recognizer has no speech recognition grammars loaded, or the recognizer has an asynchronous recognition operation that is not yet complete.
is .
is the empty string ("").
contains the , , or flag.
Raised when the finalizes an asynchronous recognition operation of emulated input.
Gets or sets the interval of silence that the will accept at the end of unambiguous input before finalizing a recognition operation.
The duration of the interval of silence.
This property is set to less than 0 seconds or greater than 10 seconds.
Gets or sets the interval of silence that the will accept at the end of ambiguous input before finalizing a recognition operation.
The duration of the interval of silence.
This property is set to less than 0 seconds or greater than 10 seconds.
Gets a collection of the objects that are loaded in this instance.
The collection of objects.
Gets or sets the time interval during which a accepts input containing only silence before finalizing recognition.
The duration of the interval of silence.
This property is set to less than 0 seconds.
Returns information for all of the installed speech recognizers on the current system.
A read-only collection of the objects that describe the installed recognizers.
Synchronously loads a object.
The grammar object to load.
is .
is not in a valid state.
Asynchronously loads a speech recognition grammar.
The speech recognition grammar to load.
is .
is not in a valid state.
The asynchronous operation was canceled.
Raised when the finishes the asynchronous loading of a object.
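A sketch of asynchronous grammar loading with a completion handler; the grammar phrase is illustrative:

```csharp
using System;
using System.Speech.Recognition;

var recognizer = new SpeechRecognitionEngine();

// The handler fires once the asynchronous load finishes; check Error
// before assuming the grammar is usable.
recognizer.LoadGrammarCompleted += (s, e) =>
{
    if (e.Error == null)
        Console.WriteLine("Grammar loaded.");
    else
        Console.WriteLine($"Load failed: {e.Error.Message}");
};

recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder("start")));
```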
Gets or sets the maximum number of alternate recognition results that the returns for each recognition operation.
The number of alternate results to return.
is set to a value less than 0.
Returns the values of settings for the recognizer.
The name of the setting to return.
The value of the setting.
is .
is the empty string ("").
The recognizer does not have a setting by that name.
Performs a synchronous speech recognition operation.
The recognition result for the input, or if the operation is not successful or the recognizer is not enabled.
Performs a synchronous speech recognition operation with a specified initial silence timeout period.
The interval of time a speech recognizer accepts input containing only silence before finalizing recognition.
The recognition result for the input, or if the operation is not successful or the recognizer is not enabled.
Performs a single, asynchronous speech recognition operation.
Performs one or more asynchronous speech recognition operations.
Indicates whether to perform one or multiple recognition operations.
Terminates asynchronous recognition without waiting for the current recognition operation to complete.
Stops asynchronous recognition after the current recognition operation completes.
Raised when the finalizes an asynchronous recognition operation.
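A sketch of continuous asynchronous recognition from the default audio device; the grammar contents are illustrative, and a microphone plus an installed recognizer are assumed:

```csharp
using System;
using System.Speech.Recognition;

using (var recognizer = new SpeechRecognitionEngine())
{
    recognizer.LoadGrammar(
        new Grammar(new GrammarBuilder(new Choices("yes", "no"))));
    recognizer.SetInputToDefaultAudioDevice();

    recognizer.SpeechRecognized += (s, e) =>
        Console.WriteLine(e.Result.Text);
    recognizer.RecognizeCompleted += (s, e) =>
        Console.WriteLine("Recognition stopped.");

    // Multiple: keep recognizing until explicitly stopped or canceled.
    recognizer.RecognizeAsync(RecognizeMode.Multiple);
    Console.ReadLine();               // run until Enter is pressed
    recognizer.RecognizeAsyncStop();  // let the in-flight operation finish
}
```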
Gets the current location of the in the audio input that it is processing.
The position of the recognizer in the audio input that it is processing.
Gets information about the current instance of .
Information about the current speech recognizer.
Raised when a running pauses to accept modifications.
Requests that the recognizer pauses to update its state.
Requests that the recognizer pauses to update its state and provides a user token for the associated event.
User-defined information that contains information for the operation.
Requests that the recognizer pauses to update its state and provides an offset and a user token for the associated event.
User-defined information that contains information for the operation.
The offset from the current to delay the request.
Configures the object to receive input from an audio stream.
The audio input stream.
The format of the audio input.
Configures the object to receive input from the default audio device.
Disables the input to the speech recognizer.
Configures the object to receive input from a Waveform audio format (.wav) file.
The path of the file to use as input.
Configures the object to receive input from a stream that contains Waveform audio format (.wav) data.
The stream containing the audio data.
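A sketch of recognizing from a Waveform audio file instead of a microphone; the file path and grammar contents are placeholders:

```csharp
using System;
using System.Speech.Recognition;

using (var recognizer = new SpeechRecognitionEngine())
{
    recognizer.LoadGrammar(
        new Grammar(new GrammarBuilder(new Choices("hello", "goodbye"))));

    // Read input from a .wav file; the path below is a placeholder.
    recognizer.SetInputToWaveFile(@"C:\temp\input.wav");

    RecognitionResult result = recognizer.Recognize();
    Console.WriteLine(result != null ? result.Text : "(no match)");
}
```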
Raised when the detects input that it can identify as speech.
Raised when the has recognized a word or words that may be a component of multiple complete phrases in a grammar.
Raised when the receives input that does not match any of its loaded and enabled objects.
Raised when the receives input that matches any of its loaded and enabled objects.
Unloads all objects from the recognizer.
Unloads a specified object from the instance.
The grammar object to unload.
is .
The grammar is not loaded in this recognizer, or this recognizer is currently loading the grammar asynchronously.
Updates the specified setting for the with the specified integer value.
The name of the setting to update.
The new value for the setting.
is .
is the empty string ("").
The recognizer does not have a setting by that name.
Updates the specified speech recognition engine setting with the specified string value.
The name of the setting to update.
The new value for the setting.
is .
is the empty string ("").
The recognizer does not have a setting by that name.
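A sketch of querying and updating a recognizer setting. The setting name below is one commonly documented for Windows desktop recognizers, but availability depends on the installed engine, so treat it as an assumption:

```csharp
using System;
using System.Speech.Recognition;

var recognizer = new SpeechRecognitionEngine();

// "CFGConfidenceRejectionThreshold" controls how readily low-confidence
// matches are rejected; it may not exist on every recognizer.
object current =
    recognizer.QueryRecognizerSetting("CFGConfidenceRejectionThreshold");
Console.WriteLine($"Current threshold: {current}");

recognizer.UpdateRecognizerSetting("CFGConfidenceRejectionThreshold", 40);
```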
Provides information for the and events.
Provides information for the , , and events.
Provides access to the shared speech recognition service available on the Windows desktop.
Initializes a new instance of the class.
Gets the format of the audio being received by the speech recognizer.
The audio input format for the speech recognizer, or if the input to the recognizer is not configured.
Gets the level of the audio being received by the speech recognizer.
The audio level of the input to the speech recognizer, from 0 through 100.
Occurs when the shared recognizer reports the level of its audio input.
Gets the current location in the audio stream being generated by the device that is providing input to the speech recognizer.
The current location in the speech recognizer's audio input stream through which it has received input.
Occurs when the recognizer encounters a problem in the audio signal.
Gets the state of the audio being received by the speech recognizer.
The state of the audio input to the speech recognizer.
Occurs when the state changes in the audio being received by the recognizer.
Disposes the object.
Disposes the object and releases resources used during the session.
to release both managed and unmanaged resources; to release only unmanaged resources.
Emulates input of specific words to the shared speech recognizer, using text instead of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the words and the loaded speech recognition grammars.
An array of word units that contains the input for the recognition operation.
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
The recognition result for the recognition operation, or , if the operation is not successful or Windows Speech Recognition is in the Sleeping state.
Emulates input of a phrase to the shared speech recognizer, using text instead of audio for synchronous speech recognition.
The input for the recognition operation.
The recognition result for the recognition operation, or , if the operation is not successful or Windows Speech Recognition is in the Sleeping state.
Emulates input of a phrase to the shared speech recognizer, using text instead of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the phrase and the loaded speech recognition grammars.
The input phrase for the recognition operation.
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
The recognition result for the recognition operation, or , if the operation is not successful or Windows Speech Recognition is in the Sleeping state.
Emulates input of specific words to the shared speech recognizer, using text instead of audio for asynchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the words and the loaded speech recognition grammars.
An array of word units that contains the input for the recognition operation.
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
Emulates input of a phrase to the shared speech recognizer, using text instead of audio for asynchronous speech recognition.
The input for the recognition operation.
Emulates input of a phrase to the shared speech recognizer, using text instead of audio for asynchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the phrase and the loaded speech recognition grammars.
The input phrase for the recognition operation.
A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.
Occurs when the shared recognizer finalizes an asynchronous recognition operation for emulated input.
Gets or sets a value that indicates whether this object is ready to process speech.
if this object is performing speech recognition; otherwise, .
Gets a collection of the objects that are loaded in this instance.
A collection of the objects that the application loaded into the current instance of the shared recognizer.
Loads a speech recognition grammar.
The speech recognition grammar to load.
Asynchronously loads a speech recognition grammar.
The speech recognition grammar to load.
Occurs when the recognizer finishes the asynchronous loading of a speech recognition grammar.
Gets or sets the maximum number of alternate recognition results that the shared recognizer returns for each recognition operation.
The maximum number of alternate results that the speech recognizer returns for each recognition operation.
Gets or sets a value that indicates whether the shared recognizer pauses recognition operations while an application is handling a event.
if the shared recognizer waits to process input while any application is handling the event; otherwise, .
Gets the current location of the recognizer in the audio input that it is processing.
The position of the recognizer in the audio input that it is processing.
Gets information about the shared speech recognizer.
Information about the shared speech recognizer.
Occurs when the recognizer pauses to synchronize recognition and other operations.
Requests that the shared recognizer pause and update its state.
Requests that the shared recognizer pause and update its state and provides a user token for the associated event.
User-defined information that contains information for the operation.
Requests that the shared recognizer pause and update its state and provides an offset and a user token for the associated event.
User-defined information that contains information for the operation.
The offset from the current to delay the request.
Occurs when the recognizer detects input that it can identify as speech.
Occurs when the recognizer has recognized a word or words that may be a component of multiple complete phrases in a grammar.
Occurs when the recognizer receives input that does not match any of the speech recognition grammars it has loaded.
Occurs when the recognizer receives input that matches one of its speech recognition grammars.
Gets the state of a object.
The state of the object.
Occurs when the running state of the Windows Desktop Speech Technology recognition engine changes.
Unloads all speech recognition grammars from the shared recognizer.
Unloads a specified speech recognition grammar from the shared recognizer.
The grammar to unload.
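A sketch of attaching to the shared desktop recognizer; unlike the in-process engine, no audio input is configured because Windows Speech Recognition owns the microphone. The grammar contents are illustrative:

```csharp
using System;
using System.Speech.Recognition;

// SpeechRecognizer connects to the shared Windows Speech Recognition
// service, which must be running for events to be delivered.
var shared = new SpeechRecognizer();

shared.LoadGrammar(
    new Grammar(new GrammarBuilder(new Choices("open", "close"))));

shared.SpeechRecognized += (s, e) =>
    Console.WriteLine($"Heard: {e.Result.Text}");
shared.StateChanged += (s, e) =>
    Console.WriteLine($"Recognizer state: {e.RecognizerState}");
```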
Provides text and status information on recognition operations to be displayed in the Speech platform user interface.
Sends status and descriptive text to the Speech platform user interface about the status of a recognition operation.
A valid instance.
A containing a comment about the recognition operation that produced the .
A indicating whether the application deemed the recognition operation a success.
if the information provided to the method (, and ) was successfully made available to the Speech platform user interface, and if the operation failed.
Defines a design-time object that is used to build strongly-typed runtime grammars that conform to the Speech Recognition Grammar Specification (SRGS) Version 1.0.
Initializes a new instance of the class.
Initializes a new instance of the class from a object.
The object used to create the instance.
is .
Initializes a new instance of the class and specifies an object to be the root rule of the grammar.
The in the object.
is .
Initializes a new instance of the class specifying the location of the XML document that is used to fill in the instance.
The location of the SRGS XML file.
is .
is an empty string.
Initializes a new instance of the class from an instance of that references an XML-format grammar file.
The object that was created with the XML instance.
is .
Gets the assembly reference information for the instance.
The property returns a string collection containing a list of the assembly references.
Gets the code-behind information for the instance.
The property returns a string collection that contains a list of the code-behind documents.
Gets or sets the culture information for the instance.
A object that contains the current culture information for .
The value being assigned to is .
The value being assigned to is .
Gets or sets whether line numbers should be added to inline scripts.
The property returns if line numbers should be added for debugging purposes; otherwise the property returns .
Gets the related namespaces for the current instance.
The property returns a string collection that contains a list of the related namespaces in the instance.
Gets or sets the programming language used for inline code in the class.
The property returns the programming language to which is currently set.
Gets or sets the mode for the class.
The recognition mode of the .
Gets or sets the namespace of the class.
The property returns the namespace for the current .
Gets or sets the phonetic alphabet of the class.
Returns the phonetic alphabet that must be used to specify custom pronunciations in the object.
Gets or sets the root rule of the class.
Returns the rule that is designated as the root rule of the .
Gets the collection of rules that are currently defined for the class.
Returns the rules defined for the object.
Gets or sets the .NET scripting language for the class.
The property returns the current .NET scripting language for the class.
An attempt is made to set the property to null.
An attempt is made to set the property to an empty string.
Writes the contents of the object to an XML-format grammar file that conforms to the Speech Recognition Grammar Specification (SRGS) Version 1.0.
The that is used to write the instance.
is .
Gets or sets the base URI of the class.
The current base URI of .
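A sketch of building a small SRGS grammar in code and serializing it to an XML-format grammar file; the rule identifier and word list are illustrative:

```csharp
using System.Speech.Recognition.SrgsGrammar;
using System.Xml;

// A one-rule grammar; passing the rule to the constructor makes it
// the root rule of the document.
var colorRule = new SrgsRule("color",
    new SrgsOneOf("red", "green", "blue"));
var doc = new SrgsDocument(colorRule);

// WriteSrgs emits SRGS 1.0 XML through the supplied XmlWriter.
using (XmlWriter writer = XmlWriter.Create("colors.grxml"))
    doc.WriteSrgs(writer);
```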
Defines the base class for classes in the namespace that correspond to the elements in an SRGS grammar.
Initializes a new instance of the class.
Compiles objects and XML-format grammar files into binary grammar files with the .cfg extension and sends the output to a stream.
Compiles an object into a binary grammar file with the .cfg extension and sends the output to a stream.
The grammar to compile.
The stream that receives the results of compilation.
is .
is .
Compiles an XML-format grammar file into a binary grammar file with the .cfg extension and sends the output to a stream.
The path of the file to compile.
The stream that receives the results of compilation.
is .
is .
is an empty string.
Compiles data for an XML-format grammar file referenced by an into a binary grammar file with the .cfg extension and sends the output to a stream.
The that reads the grammar. The grammar can reside in a physical file or in memory.
The stream that will receive the results of compilation.
is .
is .
Compiles an SRGS document into a DLL.
The that contains the grammar to compile.
The path of the output DLL.
A list of the assemblies referenced from the input grammars.
The name of the file that contains a pair of keys, thereby enabling the output DLL to be signed.
is .
is .
is an empty string.
Compiles multiple SRGS grammars into a DLL.
A list of the grammars to compile.
The path of the output DLL.
A list of the assemblies referenced from the input grammars.
The name of the file that contains a pair of keys, thereby enabling the output DLL to be signed.
is .
is .
is an empty string.
Any element of the array is .
Compiles an SRGS grammar into a DLL.
The that reads the grammar.
The path of the output DLL.
A list of the assemblies referenced from the input grammars.
The name of the file that contains a pair of keys, thereby enabling the output DLL to be signed.
is .
is .
is an empty string.
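A sketch of compiling an in-memory grammar into a binary .cfg stream; the rule contents and output file name are placeholders:

```csharp
using System.IO;
using System.Speech.Recognition.SrgsGrammar;

var doc = new SrgsDocument(
    new SrgsRule("cmd", new SrgsItem("stop")));

// Compile is static; the stream receives the binary .cfg output.
using (var stream = new FileStream("cmd.cfg", FileMode.Create))
    SrgsGrammarCompiler.Compile(doc, stream);
```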
Indicates the type of input that the grammar, defined by the , will match.
The object will match DTMF tones similar to those found on a telephone, instead of speech.
The object will match speech input.
Represents a grammar element that contains phrases or other entities that a user can speak to produce a successful recognition.
Initializes a new instance of the class.
Initializes a new instance of the class and specifies the number of times that its contents must be spoken.
The number of times that the item must be spoken.
is negative or is larger than 255.
Initializes a new instance of the class and specifies minimum and maximum repetition counts.
The minimum number of times that the text in the item must be repeated.
The maximum number of times that the text in the item can be repeated.
is negative or larger than 255.
is negative or larger than 255.
is larger than .
Initializes a new instance of the class, specifies an array of objects to add to this instance, and sets minimum and maximum repetition counts.
The minimum number of times that the contents of the object must be repeated.
The maximum number of times that the contents of the object can be repeated.
The array of objects to add to the instance.
is .
Any member of the array is .
Initializes a new instance of the class, specifies the text associated with the item, and specifies minimum and maximum repetition counts.
The minimum number of times that the item must be repeated.
The maximum number of times that the item can be repeated.
The text associated with the item.
is negative or larger than 255.
is negative or larger than 255.
is larger than .
Initializes a new instance of the class and specifies an array of objects to add to this instance.
The array of objects to add to the instance.
is .
Any member of the array is .
Initializes a new instance of the class and specifies its textual contents.
The text associated with the item.
is .
is an empty string.
Adds an object to the collection of objects contained in this instance.
The object to add.
is .
Gets the collection of objects contained by the instance.
The collection of objects contained by the instance.
Gets the maximum number of times that a user can speak the contents of the .
The maximum number of times that a user can speak the contents of the item.
Gets the minimum number of times that a user must speak the contents of the .
The minimum number of times that a user can speak the contents of the item.
Gets or sets the probability that a user will repeat the contents of this instance.
The probability, as a floating point value, that the contents of this item will be repeatedly spoken.
An attempt is made to set to a value that is negative or larger than 1.0.
Sets the number of times that the contents of an must be spoken.
The number of times that the item must be spoken.
is less than 0 or greater than 255.
Sets the minimum number of times and the maximum number of times that an item can be spoken.
The minimum number of times that the item must be spoken.
The maximum number of times that the item can be spoken.
is less than zero or larger than 255.
is less than zero or larger than 255.
is larger than .
Gets or sets a multiplying factor that adjusts the likelihood that an in a object will be spoken.
A floating point value that adjusts the likelihood of this item being spoken.
An attempt is made to set to a negative value.
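A sketch of the repetition-count constructors described above; the words are illustrative, and both counts must stay within the 0 to 255 range the exceptions enforce:

```csharp
using System.Speech.Recognition.SrgsGrammar;

// "very" may be spoken zero to three times, so the rule matches
// "good", "very good", "very very good", and "very very very good".
var optionalIntensifier = new SrgsItem(0, 3, "very");
var praiseRule = new SrgsRule("praise",
    optionalIntensifier, new SrgsItem("good"));
```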
Represents an element for associating a semantic value with a phrase in a grammar.
Initializes a new instance of the class.
Initializes a new instance of the class, specifying a value for the instance.
The value used to set the property.
is .
Initializes a new instance of the class, specifying a name and a value for the instance.
The string used to set the property on the object.
The object used to set the property on the object.
is .
is .
is an empty string.
Gets or sets the name of the instance.
A string that contains the name of the instance.
An attempt is made to set to .
An attempt is made to set to an empty string.
Gets or sets the value contained in the instance.
The value contained in the instance.
An attempt is made to set to .
An attempt is made to set to an invalid type.
Represents a list of alternative words or phrases, any one of which may be used to match speech input.
Initializes a new instance of the class.
Initializes a new instance of the class from an array of objects.
The alternative items to add.
is .
Any element in the array is .
Initializes a new instance of the class from an array of objects.
The alternative items to add.
is .
Any element in the array is .
Adds an containing a word or a phrase to the list of alternatives.
The item to add to the list of alternatives.
is .
Gets the list of all the alternatives contained in the element.
Returns the list of alternatives.
Enumerates the supported phonetic alphabets.
International Phonetic Alphabet phoneme set.
Speech API phoneme set.
Universal Phone Set (UPS) phoneme set, an ASCII encoding of IPA phonemes.
Represents a grammar rule.
Initializes a new instance of the class and specifies the identifier for the rule.
The identifier of the rule.
is .
is empty.
is not a proper rule identifier.
Initializes a new instance of the class from an array of objects.
The identifier of the rule.
An array of elements.
is .
is .
is empty.
is not a proper rule identifier.
Adds an to an object.
An object that inherits from and specifies what can be recognized.
is .
This property is not currently supported.
Not supported.
Gets the collection of objects in the instance.
The collection of elements in the rule.
Gets or sets the identifier for the rule.
The identifier for the rule.
An attempt is made to set to an invalid value.
This property is not currently supported.
Not supported.
This property is not currently supported.
Not supported.
This property is not currently supported.
Not supported.
This property is not currently supported.
Not supported.
Gets or sets whether a rule can be activated for recognition and when the rule can be referenced by other rules.
A value that sets the scope for the rule.
This property is not currently supported.
Not supported.
Represents the grammar element that specifies a reference to a rule.
Initializes a new instance of the class and specifies the rule to reference.
The object to reference.
is .
Initializes a new instance of the class, specifying the rule to reference and a string that contains a semantic key.
The object to reference.
The semantic key.
Initializes a new instance of the class, specifying the rule to reference, the string alias of the semantic dictionary, and initialization parameters.
The object to reference.
The semantic key.
The initialization parameters for a object.
Initializes a new instance of the class and specifies the location of the external grammar file to reference.
The location of a grammar file outside the containing grammar.
is .
Initializes a new instance of the class, specifying the location of the external grammar file and the identifier of the rule to reference.
The location of a grammar file outside the containing grammar.
The identifier of the rule to reference.
is .
is .
is empty.
Initializes a new instance of the class, specifying the location of the external grammar file, the identifier of the rule, and the string alias of the semantic dictionary.
The location of a grammar file outside the containing grammar.
The identifier of the rule to reference.
An alias string for the semantic dictionary.
is .
is .
is empty.
Initializes a new instance of the class, specifying the location of the external grammar file, the identifier of the rule, the string alias of the semantic dictionary, and initialization parameters.
The location of a grammar file outside the containing grammar.
The identifier of the rule to reference.
The semantic key.
The initialization parameters for a object.
Defines a rule that can match spoken input as defined by the dictation topic associated with this grammar.
Defines a rule that can match any speech up to the next rule match, the next token, or until the end of spoken input.
Indicates that speech input can contain spelled-out letters of a word, and that spelled-out letters can be recognized as a word.
Defines a rule that is automatically matched in the absence of any audio input.
Gets the initialization parameters for a element.
The initialization parameters for a element.
Gets an alias string for the semantic dictionary.
An alias string for the semantic dictionary.
Gets the URI for the rule that this element references.
The location of the rule to reference.
Defines a rule that can never be spoken. Inserting VOID into a sequence automatically makes that sequence unspeakable.
Represents a collection of objects.
Initializes a new instance of the class.
Adds the contents of an array of objects to the object.
The array of rule objects to add to the object.
is .
Any object in the array is .
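A minimal sketch of a rule reference, using the standard `SrgsRule`, `SrgsRuleRef`, and `SrgsRulesCollection` types (the rule names and phrases are illustrative only):

```csharp
using System.Speech.Recognition.SrgsGrammar;

class RuleRefExample
{
    static void Main()
    {
        // A rule listing city names, constructed from an array of elements.
        SrgsRule city = new SrgsRule("city",
            new SrgsOneOf("Seattle", "Boston", "Dallas"));

        // The root rule embeds the city rule twice through rule references.
        SrgsRule route = new SrgsRule("route");
        route.Add(new SrgsItem("fly from"));
        route.Add(new SrgsRuleRef(city));
        route.Add(new SrgsItem("to"));
        route.Add(new SrgsRuleRef(city));

        SrgsDocument grammar = new SrgsDocument();
        grammar.Rules.Add(city, route);  // adds an array of rules to the collection
        grammar.Root = route;
    }
}
```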
Enumerates values for the scope of a object.
The rule cannot be the target of a rule reference from an external grammar unless it is the root rule of its containing grammar.
The rule can be the target of a rule reference from an external grammar, which can use the rule to perform recognition. A public rule can always be activated for recognition.
Represents a tag that contains ECMAScript that is run when the rule is matched.
Creates an instance of the class.
Creates an instance of the class, specifying the script contents of the tag.
A string that contains the ECMAScript for the tag.
is .
Gets or sets the ECMAScript for the tag.
A string that contains the semantic interpretation script for the tag.
An attempt is made to set Script to .
Defines methods and properties that can be used to match a given string with a spoken phrase.
Initializes a new instance of the class, specifying the portion of the phrase to be matched.
The portion of the phrase to be matched.
is .
Initializes a new instance of the class, specifying the portion to be matched and the mode in which the text should be matched.
The portion of the phrase to be matched.
The mode in which should be matched with the spoken phrase.
is .
is empty.
contains only white space characters (that is, ' ', '\t', '\n', '\r').
is set to a value in the enumeration.
Gets or sets the matching mode for the subset.
A member of the enumeration.
An attempt is made to set to a value that is not a member of the enumeration.
Gets or sets a string that contains the portion of a spoken phrase to be matched.
A string that contains the portion of a spoken phrase to be matched.
An attempt is made to set to or to an empty string.
An attempt is made to set using a string that contains only white space characters (' ', '\t', '\n', '\r').
Represents the textual content of grammar elements defined by the World Wide Web Consortium (W3C) Speech Recognition Grammar Specification (SRGS) Version 1.0.
Initializes a new instance of the class.
Initializes a new instance of the class, specifying the text of the instance.
The value used to set the property on the instance.
is .
Gets or sets the text contained within the class instance.
The text contained within the instance.
An attempt is made to set to .
Represents a word or short phrase that can be recognized.
Initializes a new instance of the class and specifies the text to be recognized.
The text of the new class instance.
is .
is empty.
Gets or sets the display form of the text to be spoken.
A representation of the token as it should be displayed.
An attempt is made to set to .
An attempt is made to assign an empty string to .
Gets or sets the string that defines the pronunciation for the token.
Returns a string containing phones from the phonetic alphabet specified in .
An attempt is made to set to .
An attempt is made to assign an empty string to .
Gets or sets the written form of the word that should be spoken.
The text contained within the class instance.
An attempt is made to set to .
An attempt is made to assign an empty string to .
An attempt is made to assign a string that contains a quotation mark (") to .
Returns data from the event.
Gets the current state of the shared speech recognition engine in Windows.
A instance that indicates whether the state of a shared speech recognition engine is or .
Enumerates values of subset matching mode.
Indicates that subset matching mode is OrderedSubset.
Indicates that subset matching mode is OrderedSubsetContentRequired.
Indicates that subset matching mode is Subsequence.
Indicates that subset matching mode is SubsequenceContentRequired.
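As a sketch of how a matching mode is applied, using the standard `SrgsSubset` constructor that takes a `SubsetMatchingMode` value (the phrase and rule name are illustrative only):

```csharp
using System.Speech.Recognition;
using System.Speech.Recognition.SrgsGrammar;

class SubsetExample
{
    static void Main()
    {
        // OrderedSubset matches when the speaker says any in-order subset of
        // the phrase, for example "twenty yard line" or "the twenty line".
        SrgsSubset subset = new SrgsSubset("the twenty yard line",
                                           SubsetMatchingMode.OrderedSubset);

        SrgsRule rule = new SrgsRule("position", subset);
        SrgsDocument grammar = new SrgsDocument();
        grammar.Rules.Add(rule);
        grammar.Root = rule;
    }
}
```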
Returns data from the event.
Gets the time offset at which the bookmark was reached.
Returns the location in the audio input stream of a synthesis engine when the event was raised.
Gets the name of the bookmark that was reached.
Returns a value for the name of the bookmark.
Represents a prompt created from a file.
Creates a new instance of the class, and specifies the path to the file and its media type.
The path of the file containing the prompt content.
The media type of the file.
Creates a new instance of the class, and specifies the location of the file and its media type.
The URI of the file containing the prompt content.
The media type of the file.
Contains information about a speech synthesis voice installed in Windows.
Gets or sets whether a voice can be used to generate speech.
Returns a that represents the enabled state of the voice.
Determines if a given object is an instance of and equal to the current instance of .
An object that can be cast to an instance of .
Returns if the current instance of and that obtained from the argument are equal, otherwise returns .
Provides a hash code for an InstalledVoice object.
A hash code for the current object.
Gets information about a voice, such as culture, name, gender, and age.
The information about an installed voice.
Returns data from the event.
Gets the audio position of the phoneme.
A object indicating the audio position.
Gets the duration of the phoneme.
A object indicating the duration.
Gets the emphasis of the phoneme.
A member indicating the level of emphasis.
Gets the phoneme following the phoneme associated with the event.
A string containing the next phoneme.
Gets the phoneme associated with the event.
A string containing the phoneme.
Represents information about what can be rendered, either text or an audio file, by the .
Creates a new instance of the class from a object.
The content to be spoken.
Creates a new instance of the class and specifies the text to be spoken.
The text to be spoken.
Creates a new instance of the class and specifies the text to be spoken and whether its format is plain text or markup language.
The text to be spoken.
A value that specifies the format of the text.
Gets whether the has finished playing.
Returns if the prompt has completed; otherwise .
Enumerates values for intervals of prosodic separation (breaks) between word boundaries.
Indicates an extra-large break.
Indicates an extra-small break.
Indicates a large break.
Indicates a medium break.
Indicates no break.
Indicates a small break.
Creates an empty object and provides methods for adding content, selecting voices, controlling voice attributes, and controlling the pronunciation of spoken words.
Creates a new instance of the class.
Creates a new instance of the class and specifies a culture.
Provides information about a specific culture, such as its language, the name of the culture, the writing system, the calendar used, and how to format dates and sort strings.
Appends the specified audio file to the .
A fully qualified path to the audio file.
Appends the audio file at the specified URI to the .
URI for the audio file.
Appends the specified audio file and alternate text to the .
URI for the audio file.
A string containing alternate text representing the audio.
Appends a bookmark to the object.
A string containing the name of the appended bookmark.
Appends a break to the object.
Appends a break to the object and specifies its strength (duration).
A value that indicates the duration of the break.
Appends a break of the specified duration to the object.
The time in ticks, where one tick equals 100 nanoseconds.
Appends a object to another object.
The content to append.
Appends the SSML file at the specified path to the object.
A fully qualified path to the SSML file to append.
Appends the SSML file at the specified URI to the object.
A fully qualified URI to the SSML file to append.
Appends an XmlReader object that references an SSML prompt to the object.
A fully qualified name to the XML file to append.
Appends the specified string containing SSML markup to the object.
A string containing SSML markup.
Specifies text to append to the object.
A string containing the text to be spoken.
Appends text to the object and specifies the degree of emphasis for the text.
A string containing the text to be spoken.
The value for the emphasis or stress to apply to the text.
Appends text to the object and specifies the speaking rate for the text.
A string containing the text to be spoken.
The value for the speaking rate to apply to the text.
Appends text to the object and specifies the volume to speak the text.
A string containing the text to be spoken.
The value for the speaking volume (loudness) to apply to the text.
Appends text to the object and specifies the alias text to be spoken in place of the appended text.
A string containing the text representation.
A string containing the text to be spoken.
Appends text to the object and specifies the content type using a member of the enumeration.
A string containing the text to be spoken.
The content type of the text.
Appends text to the object and a that specifies the content type of the text.
A string containing the text to be spoken.
The content type of the text.
Appends text to the object and specifies the pronunciation for the text.
A string containing the written form of the word using the conventional alphabet for a language.
A string containing phones to be spoken from the International Phonetic Alphabet (IPA).
Clears the content from the object.
Gets or sets the culture information for the object.
Specifies the end of a paragraph in the object.
Specifies the end of a sentence in the object.
Specifies the end of a style in the object.
Specifies the end of use of a voice in the object.
Gets whether the is empty.
Specifies the start of a paragraph in the object.
Specifies the start of a paragraph in the specified culture in the object.
Provides information about a specific culture, such as the language, the name of the culture, the writing system, the calendar used, and how to format dates and sort strings.
Specifies the start of a sentence in the object.
Specifies the start of a sentence in the specified culture in the object.
Provides information about a specific culture, such as the language, the name of the culture, the writing system, the calendar used, and how to format dates and sort strings.
Specifies the start of a style in the object.
The style to start.
Instructs the synthesizer to change the voice in the object and specifies the culture of the voice to use.
Provides information about a specific culture, such as the language, the name of the culture, the writing system, the calendar used, and how to format dates and sort strings.
Instructs the synthesizer to change the voice in the object and specifies the gender of the voice to use.
The gender of the voice to use.
Instructs the synthesizer to change the voice in the object and specifies the gender and the age of the new voice.
The gender of the new voice to use.
The age of the voice to use.
Instructs the synthesizer to change the voice in the object and specifies its gender, age, and a preferred voice that matches the specified gender and age.
The gender of the voice to use.
The age of the voice to use.
An integer that specifies a preferred voice when more than one voice matches the and parameters.
Instructs the synthesizer to change the voice in the object and specifies criteria for the new voice.
The criteria for the voice to use.
Instructs the synthesizer to change the voice in the object and specifies the name of the voice to use.
The name of the voice to use.
Returns the SSML generated from the object.
Returns the SSML generated from the object as a single line.
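The append, style, and break methods above can be combined as follows (a sketch using the standard `System.Speech.Synthesis.PromptBuilder` API; the spoken text is illustrative only):

```csharp
using System;
using System.Speech.Synthesis;

class PromptBuilderExample
{
    static void Main()
    {
        PromptBuilder builder = new PromptBuilder();
        builder.AppendText("Welcome.");
        builder.AppendBreak(PromptBreak.Medium);

        // Emphasize one phrase by bracketing it in a style.
        builder.StartStyle(new PromptStyle(PromptEmphasis.Strong));
        builder.AppendText("Please listen carefully.");
        builder.EndStyle();

        // Inspect the SSML that the builder generates.
        Console.WriteLine(builder.ToXml());
    }
}
```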
Enumerates values for levels of emphasis in prompts.
Indicates a moderate level of emphasis.
Indicates no emphasis.
Indicates that no emphasis value is specified.
Indicates a reduced level of emphasis.
Indicates a strong level of emphasis.
Represents the base class for classes in the namespace.
Gets the prompt associated with the event.
The object associated with the event.
Enumerates values for the speaking rate of prompts.
Indicates an extra-fast rate.
Indicates an extra-slow rate.
Indicates a fast rate.
Indicates a medium rate.
Indicates no rate is specified.
Indicates a slow rate.
Defines a style for speaking prompts that consists of settings for emphasis, rate, and volume.
Initializes a new instance of the class.
Initializes a new instance of the class and specifies the setting for the emphasis of the style.
The setting for the emphasis of the style.
Initializes a new instance of the class and specifies the setting for the speaking rate of the style.
The setting for the speaking rate of the style.
Initializes a new instance of the class and specifies the setting for the speaking volume of the style.
The setting for the volume (loudness) of the style.
Gets or sets the setting for the emphasis of the style.
Returns the setting for the emphasis of the style.
Gets or sets the setting for the speaking rate of the style.
Returns the setting for the speaking rate of the style.
Gets or sets the setting for the volume (loudness) of the style.
Returns the setting for the volume (loudness) of the style.
Enumerates values for volume levels (loudness) in prompts.
Indicates the engine-specific default volume level.
Indicates an extra loud volume level.
Indicates an extra soft volume level.
Indicates a loud volume level.
Indicates a medium volume level.
Indicates that the volume level is not set.
Indicates a muted volume level.
Indicates a soft volume level.
Enumerates the content types for the speaking of elements such as times, dates, and currency.
Speak a number sequence as a date. For example, speak "05/19/2004" or "19.5.2004" as "May nineteenth two thousand four".
Speak a number as the day in a date. For example, speak "3rd" as "third".
Speak a number sequence as a day and month. For example, speak "12/05" as "May twelfth", and speak "05/12" as "December fifth".
Speak a number sequence as a date including the day, month, and year. For example, speak "12/05/2004" as "May twelfth two thousand four".
Speak a word as a month. For example, speak "June" as "June".
Speak a number sequence as a month and day. For example, speak "05/12" as "May twelfth", and speak "12/5" as "December fifth".
Speak a number sequence as a date including the day, month, and year. For example, speak "12/05/2004" as "December fifth two thousand four".
Speak a number sequence as a month and year. For example, speak "05/2004" as "May two thousand four".
Speak a number as a cardinal number. For example, speak "3" as "three".
Speak a number as an ordinal number. For example, speak "3rd" as "third".
Spell the word or phrase. For example, say "clock" as "C L O C K".
Speak a number sequence as a U.S. telephone number. For example, speak "(306) 555-1212" as "Area code three zero six five five five one two one two".
Speak the word or phrase as text. For example, speak "timeline" as "timeline".
Speak a number sequence as a time. For example, speak "9:45" as "nine forty-five", and speak "9:45am" as "nine forty-five A M".
Speak a number sequence as a time using the 12-hour clock. For example, speak "03:25" as "three twenty-five".
Speak a number sequence as a time using the 24-hour clock. For example, speak "18:00" as "eighteen hundred hours".
Speak a number as a year. For example, speak "1998" as "nineteen ninety-eight".
Speak a number sequence as a year and month. For example, speak "2004/05" as "May two thousand four".
Speak a number sequence as a date including the day, month, and year. For example, speak "2004/05/12" as "May twelfth two thousand four".
Returns notification from the event.
Returns data from the event.
Gets the audio position of the event.
Returns the position of the event in the audio output stream.
Gets the number of characters in the word that was spoken just before the event was raised.
Returns the number of characters in the word that was spoken just before the event was raised.
Gets the number of characters and spaces from the beginning of the prompt to the position before the first letter of the word that was just spoken.
Returns the number of characters and spaces from the beginning of the prompt to the position before the first letter of the word that was just spoken.
Gets the text that was just spoken when the event was raised.
Returns the text that was just spoken when the event was raised.
Returns notification from the event.
Provides access to the functionality of an installed speech synthesis engine.
Initializes a new instance of the class.
Adds a lexicon to the object.
The location of the lexicon information.
The media type of the lexicon. Media type values are not case sensitive.
Raised when the encounters a bookmark in a prompt.
Disposes the object and releases resources used during the session.
Acts as a safeguard to clean up resources in the event that the method is not called.
Gets the prompt that the is speaking.
Returns the prompt object that is currently being spoken.
Returns all of the installed speech synthesis (text-to-speech) voices.
Returns a read-only collection of the voices currently installed on the system.
Returns all of the installed speech synthesis (text-to-speech) voices that support a specific locale.
The locale that the voice must support.
Returns a read-only collection of the voices currently installed on the system that support the specified locale.
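A sketch of enumerating the installed voices and the information each one exposes, using the standard `SpeechSynthesizer.GetInstalledVoices` method:

```csharp
using System;
using System.Speech.Synthesis;

class VoicesExample
{
    static void Main()
    {
        using (SpeechSynthesizer synth = new SpeechSynthesizer())
        {
            // Each InstalledVoice pairs an enabled flag with a VoiceInfo.
            foreach (InstalledVoice voice in synth.GetInstalledVoices())
            {
                VoiceInfo info = voice.VoiceInfo;
                Console.WriteLine("{0} ({1}, {2}, {3}) enabled={4}",
                    info.Name, info.Culture, info.Gender, info.Age,
                    voice.Enabled);
            }
        }
    }
}
```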
Pauses the object.
Raised when a phoneme is reached.
Gets or sets the speaking rate of the object.
Returns the speaking rate of the object, from -10 through 10.
Removes a lexicon from the object.
The location of the lexicon document.
Resumes the object after it has been paused.
Selects a specific voice by name.
The name of the voice to select.
Selects a voice with a specific gender.
The gender of the voice to select.
Selects a voice with a specific gender and age.
The gender of the voice to select.
The age of the voice to select.
Selects a voice with a specific gender and age, based on the position in which the voices are ordered.
The gender of the voice to select.
The age of the voice to select.
The position of the voice to select.
Selects a voice with a specific gender, age, and locale, based on the position in which the voices are ordered.
The gender of the voice to select.
The age of the voice to select.
The position of the voice to select.
The locale of the voice to select.
Configures the object to append output to an audio stream.
The stream to which to append synthesis output.
The format to use for the synthesis output.
Configures the object to send output to the default audio device.
Configures the object to not send output from synthesis operations to a device, file, or stream.
Configures the object to append output to a file that contains Waveform format audio.
The path to the file.
Configures the object to append output to a Waveform audio format file in a specified format.
The path to the file.
The audio format information.
Configures the object to append output to a stream that contains Waveform format audio.
The stream to which to append synthesis output.
Synchronously speaks the contents of a object.
The content to speak.
Synchronously speaks the contents of a object.
The content to speak.
Synchronously speaks the contents of a string.
The text to speak.
Asynchronously speaks the contents of a object.
The content to speak.
Asynchronously speaks the contents of a object.
The content to speak.
Returns the object that contains the content to speak.
Asynchronously speaks the contents of a string.
The text to speak.
Returns the object that contains the content to speak.
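The synchronous and asynchronous speak methods differ in when they return, as the following sketch shows (standard `SpeechSynthesizer` API; the spoken strings are illustrative only):

```csharp
using System.Speech.Synthesis;

class SpeakExample
{
    static void Main()
    {
        using (SpeechSynthesizer synth = new SpeechSynthesizer())
        {
            synth.SetOutputToDefaultAudioDevice();

            // Synchronous: blocks until the prompt has been spoken.
            synth.Speak("This call returns when speaking finishes.");

            // Asynchronous: queues the prompt and returns it immediately.
            Prompt queued = synth.SpeakAsync("This call returns at once.");

            // A queued prompt can be canceled before it completes.
            if (!queued.IsCompleted)
                synth.SpeakAsyncCancel(queued);
        }
    }
}
```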
Cancels the asynchronous synthesis operation for a queued prompt.
The content for which to cancel a speak operation.
Cancels all queued, asynchronous, speech synthesis operations.
Raised when the completes the speaking of a prompt.
Raised after the speaks each individual word of a prompt.
Synchronously speaks a that contains SSML markup.
The SSML string to speak.
Asynchronously speaks a that contains SSML markup.
The SSML markup to speak.
Raised when the begins the speaking of a prompt.
Gets the current speaking state of the object.
Returns the current speaking state of the object.
Raised when the state of the changes.
Raised when a viseme is reached.
Gets information about the current voice of the object.
Returns information about the current voice of the object.
Raised when the voice of the changes.
Gets or sets the output volume of the object.
Returns the volume of the , from 0 through 100.
Returns data from the event.
Gets the state of the before the event.
Returns the state of the synthesizer before the state changed.
Gets the state of the after the event.
The state of the synthesizer after the state changed.
Enumerates the types of media files.
Indicates that the media type is SSML.
Indicates that the media type is Text.
Indicates that the media type is WaveAudio.
Enumerates the types of text formats that may be used to construct a object.
Indicates that the text format is SSML.
Indicates that the text format is Text.
Enumerates levels of synthesizer emphasis.
Indicates a low level of synthesizer emphasis.
Indicates a high level of synthesizer emphasis.
Enumerates values for the state of the .
Indicates that the is paused.
Indicates that the is ready to generate speech from a prompt.
Indicates that the is speaking.
Represents changes in pitch for the speech content of a .
Creates a new instance of the class.
A that specifies the point at which to apply the pitch change in the . This is expressed as the elapsed percentage of the duration of the at that point.
A that specifies the amount to raise or lower the pitch.
A member of that specifies the unit to use for the number specified in the parameter.
Gets the value that represents the amount to raise or lower the pitch at a point in a .
Gets a member of that specifies the unit to use for the number specified in the parameter of a object.
Determines if a given object is an instance of and equal to the current instance of .
An object that can be cast to an instance of .
Returns if the current instance of and that obtained from the argument are equal, otherwise returns .
Determines if a given instance of is equal to the current instance of .
An instance of that will be compared to the current instance.
Returns if both the current instance of and that supplied through the argument are equal, otherwise returns .
Returns a hash code for this instance.
A 32-bit signed integer hash code.
Determines if two instances of are equal.
An instance of to compare against the instance of provided by the argument.
An instance of to compare against the instance of provided by the argument.
Returns if the instances referenced by and are equal, otherwise returns .
Determines if two instances of are NOT equal.
An instance of to compare against the instance of provided by the argument.
An instance of to compare against the instance of provided by the argument.
Returns if the instances referenced by and are NOT equal, otherwise returns .
Gets a that specifies the point at which to apply the pitch change in a . This is expressed as the elapsed percentage of the duration of the at that point.
Enumerates values for the types of change.
Indicates a change of the pitch value.
Indicates a change of the time value.
Enumerates values for lengths of breaks between spoken words.
Normal word break.
Longest word break.
Very small word break.
Moderate word break.
No word break.
Long word break.
Small word break.
Enumerates the values of for a specific .
Indicates an engine-specific default level of emphasis.
Indicates moderate emphasis.
Indicates no emphasis specified.
Indicates reduced emphasis.
Indicates strong emphasis.
Enumerates the types of data pointers passed to speech synthesis events.
Currently not supported.
Currently not supported.
Indicates that the argument to the is a created using referencing a object; may take on any value.
Indicates that the argument to the is a
Indicates that the argument to the is undefined.
Provides detailed information about a .
Constructs a new instance of .
A member of the enumeration that specifies a speech synthesis action.
The id of the language being used. Corresponds to the XML xml:lang attribute.
The emphasis to be applied to speech output or pauses.
The time allotted to speak the text of the .
A member of the class, indicating the type of text of the and the level of detail required for accurate rendering of the contained text.
Corresponds to the <say-as> XML tag in the SSML specification.
The argument may be
A object indicating characteristics of the speech output such as pitch, speaking rate and volume.
Corresponds to the <prosody> XML tag in the SSML specification.
An array of objects providing the phonetic pronunciation for text contained in the , using the International Phonetic Alphabet (IPA) specification.
Corresponds to the <phoneme> XML tag in the SSML specification.
This argument may be .
Returns the requested speech synthesizer action.
A member of indicating the speech synthesis action requested by SSML input.
Returns the desired time for rendering a
Returns an containing a value in milliseconds of the desired time for rendering a .
Returns instructions on how to emphasize a .
Returns an value indicating how to emphasize a .
Determines if a given object is an instance equal to the current instance of .
An object that can be cast to an instance of
Returns if the current instance of and that obtained from the provided by the argument describe the same state. Returns if the current and the argument do not describe the same state.
Determines if a given instance of is equal to the current instance of .
An instance of that will be compared to the current instance.
Returns if both the current instance of and that supplied through the argument describe the same state. Returns if the current and the argument do not describe the same state.
Returns the hash code for this instance.
A 32-bit signed integer that is the hash code for this instance.
Returns the language supported by the current .
Returns an containing an identifier for the language used by the current .
Determines if two instances of describe the same state.
An instance of whose described state is compared against the instance of provided by the argument.
An instance of whose described state is compared against the instance of provided by the argument.
Returns if both instances of , and , describe the same state, otherwise is returned.
Determines if two instances of describe different states.
An instance of whose described state is compared against the instance of provided by the argument.
An instance of whose described state is compared against the instance of provided by the argument.
Returns if both instances of , and , do not describe the same state, otherwise is returned.
Returns phonetic information for a
Returns detailed information about the pitch, speaking rate, and volume of speech output.
Returns a valid instance of containing the pitch, speaking rate, and volume settings, and changes to those settings, for speech output.
Returns information about the context for the generation of speech from text.
Returns a value instance if the SSML used by a speech synthesis engine contains detailed information about the context to be used to generate speech, otherwise .
Provides methods for writing audio data and events.
Determines the action or actions the engine should perform.
An containing the sum of one or more members of the enumeration.
Adds one or more events to the property.
An array of objects.
The size of the array.
Returns the number of items skipped.
The number of items skipped.
Determines the events the engine should raise.
An containing the sum of one or more members of the enumeration.
Returns the number and type of items to be skipped.
Loads the resource at the specified URI.
The URI of the resource.
The media type of the resource.
Gets the speaking rate of the engine.
An containing the speaking rate.
Gets the speaking volume of the engine.
An containing the speaking volume.
Outputs audio data.
The location of the output audio data.
The number of items in the output audio stream.
Represents a collection of settings for voice properties such as , and .
Constructs a new instance of the class.
Gets or sets the duration of the in milliseconds.
A value in milliseconds for the desired time to speak the text.
Returns an array containing the of the .
Gets or sets the baseline pitch of the .
Gets or sets the pitch range of the .
Gets or sets the speaking rate of the .
Sets the of the .
A byte array of objects.
Gets or sets the speaking volume (loudness) of the .
Specifies prosody attributes and their values.
Creates a new instance of the ProsodyNumber class and specifies the identifier for a prosody attribute.
The identifier for a prosody attribute.
Creates a new instance of the ProsodyNumber class and specifies a value for a prosody attribute.
A value for a prosody attribute.
Holds a value that represents a setting for a prosody attribute.
Determines whether a specified object is an instance of ProsodyNumber and equal to the current instance of ProsodyNumber.
The to evaluate.
if is equal to the current object; otherwise, .
Determines whether a specified ProsodyNumber object is equal to the current instance of ProsodyNumber.
The object to evaluate.
if is equal to the current object; otherwise, .
Provides a hash code for a ProsodyNumber object.
A hash code for a object.
Gets whether the Number property represents a percent value.
if the represents a percent value; otherwise, .
Gets a numeric value for an SSML prosody attribute.
The numerical value for an SSML prosody attribute.
Determines whether two instances of ProsodyNumber are equal.
The object to compare to .
The object to compare to .
if is the same as ; otherwise, .
Determines whether two instances of ProsodyNumber are not equal.
The object to compare to .
The object to compare to .
if is different from ; otherwise, .
Gets the identifier for an SSML prosody attribute.
The identifier for an SSML prosody attribute.
Gets the unit in which the amount of change is specified.
The unit in which the amount of change is specified, for example Hz (Hertz) or semitone.
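The two constructors above cover the two ways a prosody value can be expressed. A hedged sketch of both forms, assuming the ProsodyNumber struct described here:

```csharp
using System.Speech.Synthesis.TtsEngine;

class ProsodyNumberExample
{
    static void Demo()
    {
        // Attribute-id form: wraps a predefined SSML prosody setting.
        var byId = new ProsodyNumber((int)ProsodyPitch.High);

        // Numeric form: wraps an explicit value; SsmlAttributeId then
        // reports the ProsodyNumber.AbsoluteNumber constant.
        var byValue = new ProsodyNumber(180.0f);

        // The equality operators compare the two instances field by field.
        bool same = byId == byValue;
    }
}
```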
Enumerates values for the property of a object.
Indicates a normal pitch range.
Indicates an extra-high pitch range.
Indicates an extra-low pitch range.
Indicates a high pitch range.
Indicates a low pitch range.
Indicates a medium pitch range.
Enumerates values for the property of a object.
Indicates a normal prosody range.
Indicates an extra-high prosody range.
Indicates an extra-low prosody range.
Indicates a high prosody range.
Indicates a low prosody range.
Indicates a medium prosody range.
Enumerates values for the property of a object.
Indicates the engine-specific default rate.
Indicates an extra-fast rate.
Indicates an extra-slow rate.
Indicates a fast rate.
Indicates a medium rate.
Indicates a slow rate.
Enumerates values for the property on the object.
Indicates the engine-specific default value.
Indicates the Unit value is Hz.
Indicates the Unit value is semitone.
Enumerates values for the property of a object.
The current default volume value; the same as the value returned by the property on the site supplied to the engine.
Maximum volume.
Approximately 20% of maximum volume.
Approximately 80% of maximum volume.
Approximately 60% of maximum volume.
Volume off.
Approximately 40% of maximum volume.
Contains information about the content type (such as currency, date, or address) or the language construct that determines how text should be spoken.
Creates a new instance of the SayAs class.
Gets or sets the value of the detail attribute for a say-as element in the SSML markup language of a prompt.
Gets or sets the value of the format attribute for a say-as element in the SSML markup language of a prompt.
Gets or sets the value of the interpret-as attribute for a say-as element in the SSML markup language of a prompt.
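The three properties above map directly onto the attributes of the SSML say-as element. A small sketch, assuming the SayAs class from this namespace:

```csharp
using System.Speech.Synthesis.TtsEngine;

class SayAsExample
{
    static SayAs DateSayAs()
    {
        // Mirrors <say-as interpret-as="date" format="mdy"> in SSML,
        // telling the engine to read the text as a month-day-year date.
        return new SayAs
        {
            InterpretAs = "date",
            Format = "mdy",
        };
    }
}
```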
Provides information about text stream items to be skipped.
Creates a new instance of the object.
Gets or sets the number of items to be skipped.
An containing the number of items.
Gets or sets the type of object to skip.
An representing the type of the object.
Enumerates the types of speech output formats.
Indicates text output.
Indicates wave (audio) output.
Used to specify the type of event, and its arguments (if any) to be generated as part of the rendering of text to speech by a custom synthetic speech engine.
Constructs an appropriate .
An instance of indicating the sort of Speech platform event the object is to handle.
An instance of indicating how the reference of is to be interpreted, and, by implication, the use of .
An integer value to be passed to the Speech platform when the event requested by the instance of to be constructed is generated.
The exact meaning of this integer is implicitly determined by the value of .
A instance referencing an object to be passed to the Speech platform when the event requested by the instance of to be constructed is generated.
The type of the object that must be referenced is implicitly determined by the value of .
Determines whether a specified object is an instance of SpeechEventInfo and equal to the current instance of SpeechEventInfo.
The object to evaluate.
if is equal to the current object; otherwise, .
Determines whether a specified SpeechEventInfo object is equal to the current instance of SpeechEventInfo.
The object to evaluate.
if is equal to the current object; otherwise, .
Gets and sets the Speech platform event that an instance of is used to request.
Returns a member of as a , indicating the event type the object is to generate.
Provides a hash code for a SpeechEventInfo object.
A hash code for a object.
Determines whether two instances of SpeechEventInfo are equal.
The object to compare to .
The object to compare to .
if is the same as ; otherwise, .
Determines whether two instances of SpeechEventInfo are not equal.
The object to compare to .
The object to compare to .
if is different from ; otherwise, .
Gets and sets the value ( in the constructor) to be passed to the Speech platform to generate an event the current instance of is used to request.
Returns the to be passed to the Speech platform when the event specified by the current instance of is generated.
Gets and sets the instance ( in the constructor) referencing the object to be passed to the Speech platform to generate an event the current instance of is used to request.
Returns the referencing the object to be passed to the Speech platform when the event specified by the current instance of is generated.
Returns the data type of the object pointed to by the returned by the parameter on the current object.
A value corresponding to a member of the enumeration and indicating the data type of the object pointed to by the returned by the parameter and used as the second argument for the constructor of the current object.
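As a concrete illustration of the members above, a hedged sketch of queuing a word-boundary event back to the platform, assuming the SpeechEventInfo struct, the TtsEventId and EventParameterType enumerations, and the ITtsEngineSite interface from this namespace:

```csharp
using System;
using System.Speech.Synthesis.TtsEngine;

class EventExample
{
    // 'site' would be the ITtsEngineSite passed to the engine's Speak call.
    static void ReportWordBoundary(ITtsEngineSite site, int offset, int length)
    {
        var info = new SpeechEventInfo(
            (short)TtsEventId.WordBoundary,
            (short)EventParameterType.Undefined,
            length,          // param1: meaning depends on the event id
            (IntPtr)offset); // param2: here, the character offset
        site.AddEvents(new[] { info }, 1);
    }
}
```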
Contains text and speech attribute information for consumption by a speech synthesizer engine.
Constructs a new instance of .
Gets or sets speech attribute information for a .
A instance that is returned or used to set speech attribute information for a .
Gets or sets the length of the speech text in the fragment.
An is returned or can be used to set the length, in characters, of the text string associated with this fragment to be spoken.
Gets or sets the starting location of the text in the fragment.
An is returned or can be used to set the start location, in characters, of the part of the text string associated with this fragment to be spoken.
Sets or gets the speech text of the fragment.
A is returned or can be used to set the speech text to be used by a speech synthesis engine to generate audio output.
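The four members above come together when the platform builds fragments for an engine. A sketch of constructing one by hand, assuming the TextFragment and FragmentState types from this namespace (the language id shown is an illustrative value):

```csharp
using System.Speech.Synthesis.TtsEngine;

class FragmentExample
{
    static TextFragment MakeFragment(string text)
    {
        // The state carries the action, language, and prosody for the text.
        var state = new FragmentState(
            TtsEngineAction.Speak,
            0x409,  // language id for en-US, an illustrative value
            0,      // emphasis
            0,      // duration
            null,   // SayAs
            null,   // Prosody
            null);  // phoneme characters
        return new TextFragment
        {
            State = state,
            TextToSpeak = text,
            TextOffset = 0,
            TextLength = text.Length,
        };
    }
}
```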
Specifies the Speech Synthesis Markup Language (SSML) action to be taken in rendering a given .
Indicates that is to be used as the contents of a bookmark.
Indicates that no action has been determined from SSML input.
Requests that input text be interpreted as phonemes.
Indicates that a contains no text to be rendered as speech.
Requests that the associated should be processed and spoken.
Indicates that text values provided by a through its property are to be synthesized as individual characters.
Indicates the state of a paragraph.
Indicates the start of a sentence.
Abstract base class to be implemented by all text-to-speech synthesis engines.
Constructs a new instance of based on an appropriate Voice Token registry key.
Full name of the registry key for the Voice Token associated with the engine implementation.
Adds a lexicon to the implemented by the current instance.
A valid instance of indicating the location of the lexicon information.
A string containing the media type of the lexicon. Media types are case insensitive.
A reference to an interface used to interact with the platform infrastructure.
Returns the best matching audio output format supported by a given synthesis engine, in response to a request to the synthesizer engine for support of a particular output format.
Valid member of the enumeration indicating the type of requested audio output format.
A pointer to a containing detailed settings for the audio format type requested by the argument.
Returns a valid instance referring to a containing detailed information about the output format.
Removes a lexicon currently loaded by the implemented by the current instance.
A valid instance of indicating the location of the lexicon information.
A reference to an interface passed in by the platform infrastructure to allow access to the infrastructure resources.
Renders the specified array in the specified output format.
An array of instances containing the text to be rendered into speech.
An pointing to a structure containing the audio output format.
A reference to an interface passed in by the platform infrastructure to allow access to the infrastructure resources.
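A skeleton, not a working engine, of what a derived class implementing the four abstract members above might look like, assuming the TtsEngineSsml base class and ITtsEngineSite interface described here:

```csharp
using System;
using System.Speech.Synthesis.TtsEngine;

class MyTtsEngine : TtsEngineSsml
{
    public MyTtsEngine(string registryKey) : base(registryKey) { }

    public override IntPtr GetOutputFormat(
        SpeakOutputFormat format, IntPtr targetWaveFormat)
    {
        // Return a pointer to the wave format the engine will produce;
        // echoing the requested format back is the simplest choice.
        return targetWaveFormat;
    }

    public override void AddLexicon(Uri location, string mediaType,
        ITtsEngineSite site) { /* load pronunciation data */ }

    public override void RemoveLexicon(Uri location, ITtsEngineSite site) { }

    public override void Speak(TextFragment[] fragments, IntPtr waveHeader,
        ITtsEngineSite site)
    {
        foreach (TextFragment f in fragments)
        {
            if (f.State.Action != TtsEngineAction.Speak) continue;
            // Synthesize f.TextToSpeak into audio bytes, then hand the
            // buffer to the platform with site.Write(buffer, length).
        }
    }
}
```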
Enumerates types of speech synthesis events.
Identifies events generated when a speech synthesis engine completes an audio level change while speaking.
Identifies events generated when a speech synthesis engine encounters a bookmark while speaking.
Identifies events generated when a speech synthesis engine encounters the end of its input stream while speaking.
Identifies events generated when a speech synthesis engine completes a phoneme while speaking.
Identifies events generated when a speech synthesis engine completes a sentence while speaking.
Identifies events generated when a speech synthesis engine begins speaking a stream.
Identifies events generated when a speech synthesis engine completes a viseme while speaking.
Identifies events generated when a speech synthesis engine encounters a change of Voice while speaking.
Identifies events generated when a speech synthesis engine completes a word while speaking.
Returns data from the event.
Gets the position of the viseme in the audio stream.
A object that represents the position of the viseme.
Gets the duration of the viseme.
A object that represents the duration of the viseme.
Gets a object that describes the emphasis of the viseme.
A object that represents the emphasis of the viseme.
Gets the value of the next viseme.
An object that contains the value of the next viseme.
Gets the value of the viseme.
An object that contains the value of the viseme.
Defines the values for the age of a synthesized voice.
Indicates an adult voice (age 30).
Indicates a child voice (age 10).
Indicates that no voice age is specified.
Indicates a senior voice (age 65).
Indicates a teenage voice (age 15).
Returns data from the event.
Gets the object of the new voice.
Returns information that describes and identifies the new voice.
Defines the values for the gender of a synthesized voice.
Indicates a female voice.
Indicates a male voice.
Indicates a gender-neutral voice.
Indicates no voice gender specification.
Represents an installed speech synthesis engine.
Gets additional information about the voice.
Returns a collection of name/value pairs that describe and identify the voice.
Gets the age of the voice.
Returns the age of the voice.
Gets the culture of the voice.
Returns a object that provides information about a specific culture, such as the names of the culture, the writing system, the calendar used, and how to format dates and sort strings.
Gets the description of the voice.
Returns the description of the voice.
Compares the fields of the voice with the specified object to determine whether they contain the same values.
The specified object.
if the fields of the two objects are equal; otherwise, .
Gets the gender of the voice.
Returns the gender of the voice.
Provides a hash code for a VoiceInfo object.
A hash code for the current object.
Gets the ID of the voice.
Returns the identifier for the voice.
Gets the name of the voice.
Returns the name of the voice.
Gets the collection of audio formats that the voice supports.
Returns a collection of the audio formats that the voice supports.
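The voice properties above can be inspected for every installed voice through the synthesizer. A sketch, assuming the System.Speech.Synthesis types (Windows-only):

```csharp
using System;
using System.Speech.Synthesis;

class VoiceListing
{
    static void Main()
    {
        // Enumerate installed voices and print the VoiceInfo
        // fields described above.
        using (var synth = new SpeechSynthesizer())
        {
            foreach (InstalledVoice voice in synth.GetInstalledVoices())
            {
                VoiceInfo info = voice.VoiceInfo;
                Console.WriteLine($"{info.Name}: {info.Culture}, " +
                                  $"{info.Gender}, {info.Age}");
            }
        }
    }
}
```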