NAudio a-law decoder based on code from: http://hazelware.luggle.com/tutorials/mulawcompression.html only 512 bytes required, so just use a lookup Converts an a-law encoded byte to a 16 bit linear sample a-law encoded byte Linear sample A-law encoder Encodes a single 16 bit sample to a-law 16 bit PCM sample a-law encoded byte SpanDSP - a series of DSP components for telephony g722_decode.c - The ITU G.722 codec, decode part. Written by Steve Underwood <steveu@coppice.org> Copyright (C) 2005 Steve Underwood Ported to C# by Mark Heath 2011 Despite my general liking of the GPL, I place my own contributions to this code in the public domain for the benefit of all mankind - even the slimy ones who might try to proprietize my work and use it to my detriment. Based in part on a single channel G.722 codec which is: Copyright (c) CMU 1993 Computer Science, Speech Group Chengxiang Lu and Alex Hauptmann hard limits to 16 bit samples Decodes a buffer of G722 Codec state Output buffer (to contain decompressed PCM samples) Number of bytes in input G722 data to decode Number of samples written into output buffer Encodes a buffer of G722 Codec state Output buffer (to contain encoded G722) PCM 16 bit samples to encode Number of samples in the input buffer to encode Number of encoded bytes written into output buffer Stores state to be used between calls to Encode or Decode ITU Test Mode TRUE if operating in the special ITU test mode, with the band split filters disabled. TRUE if the G.722 data is packed 8kHz Sampling TRUE if encoding from 8k samples/second Bits Per Sample 6 for 48000 bps, 7 for 56000 bps, or 8 for 64000 bps. Signal history for the QMF (x) Band In bit buffer Number of bits in InBuffer Out bit buffer Number of bits in OutBuffer Creates a new instance of G722 Codec State for a new encode or decode session Bitrate (typically 64000) Special options Band data for G722 Codec s sp sz r a ap p d b bp sg nb det G722 Flags None Using a G722 sample rate of 8000 Packed mu-law decoder based on code from: http://hazelware.luggle.com/tutorials/mulawcompression.html only 512 bytes required, so just use a lookup Converts a mu-law encoded byte to a 16 bit linear sample mu-law encoded byte Linear sample mu-law encoder based on code from: http://hazelware.luggle.com/tutorials/mulawcompression.html Encodes a single 16 bit sample to mu-law 16 bit PCM sample mu-law encoded byte Audio Capture Client Gets a pointer to the buffer Pointer to the buffer Gets a pointer to the buffer Number of frames to read Buffer flags Pointer to the buffer Gets the size of the next packet Release buffer Number of frames written Release the COM object Windows CoreAudio AudioClient Retrieves the stream format that the audio engine uses for its internal processing of shared-mode streams. Can be called before initialize Initializes the Audio Client Share Mode Stream Flags Buffer Duration Periodicity Wave Format Audio Session GUID (can be null) Retrieves the size (maximum capacity) of the audio buffer associated with the endpoint. (must initialize first) Retrieves the maximum latency for the current stream and can be called any time after the stream has been initialized. Retrieves the number of frames of padding in the endpoint buffer (must initialize first) Retrieves the length of the periodic interval separating successive processing passes by the audio engine on the data in the endpoint buffer (can be called before initialize)
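The companded codecs described above are exposed as stateless lookup-based helpers, while G.722 carries state between calls. A minimal sketch of round-tripping a single PCM sample through a-law and mu-law (assuming the NAudio.Codecs namespace and the static encoder/decoder helpers documented here; the sample value is arbitrary):

    using NAudio.Codecs;

    short pcm = 1000;
    byte aLaw = ALawEncoder.LinearToALawSample(pcm);          // compand to 8 bits
    short decodedALaw = ALawDecoder.ALawToLinearSample(aLaw); // expand back to 16 bits
    byte muLaw = MuLawEncoder.LinearToMuLawSample(pcm);
    short decodedMuLaw = MuLawDecoder.MuLawToLinearSample(muLaw);

G.722, by contrast, needs a G722CodecState created for the chosen bitrate and passed to every Encode or Decode call so that the band-split filter history survives between buffers.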
Gets the minimum device period (can be called before initialize) Returns the AudioStreamVolume service for this AudioClient. This returns the AudioStreamVolume object ONLY for shared audio streams. This is thrown when an exclusive audio stream is being used. Gets the AudioClockClient service Gets the AudioRenderClient service Gets the AudioCaptureClient service Determines whether the specified output format is supported The share mode. The desired format. True if the format is supported Determines if the specified output format is supported in shared mode Share Mode Desired Format Output The closest match format. True if the format is supported Starts the audio stream Stops the audio stream. Set the Event Handle for buffer synchronization. The Wait Handle to set up Resets the audio stream Reset is a control method that the client calls to reset a stopped audio stream. Resetting the stream flushes all pending data and resets the audio clock stream position to 0. This method fails if it is called on a stream that is not stopped. Dispose Audio Client Buffer Flags None AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY AUDCLNT_BUFFERFLAGS_SILENT AUDCLNT_BUFFERFLAGS_TIMESTAMP_ERROR The AudioClientProperties structure is used to set the parameters that describe the properties of the client's audio stream. http://msdn.microsoft.com/en-us/library/windows/desktop/hh968105(v=vs.85).aspx The size of the buffer for the audio stream. Boolean value to indicate whether or not the audio stream is hardware-offloaded. An enumeration that is used to specify the category of the audio stream. A bit-field describing the characteristics of the stream. Supported in Windows 8.1 and later. AUDCLNT_SHAREMODE AUDCLNT_SHAREMODE_SHARED, AUDCLNT_SHAREMODE_EXCLUSIVE AUDCLNT_STREAMFLAGS None AUDCLNT_STREAMFLAGS_CROSSPROCESS AUDCLNT_STREAMFLAGS_LOOPBACK AUDCLNT_STREAMFLAGS_EVENTCALLBACK AUDCLNT_STREAMFLAGS_NOPERSIST Defines values that describe the characteristics of an audio stream. No stream options. The audio stream is a 'raw' stream that bypasses all signal processing except for endpoint specific, always-on processing in the APO, driver, and hardware. Audio Clock Client Characteristics Frequency Get Position Adjusted Position Can Adjust Position Dispose Audio Endpoint Volume GUID to pass to AudioEndpointVolumeCallback On Volume Notification Volume Range Hardware Support Step Information Channels Master Volume Level Master Volume Level Scalar Mute Volume Step Up Volume Step Down Creates a new Audio endpoint volume IAudioEndpointVolume COM interface Dispose Finalizer Audio Endpoint Volume Channel GUID to pass to AudioEndpointVolumeCallback Volume Level Volume Level Scalar Audio Endpoint Volume Channels Channel Count Indexer - get a specific channel Audio Endpoint Volume Notification Delegate Audio Volume Notification Data Audio Endpoint Volume Step Information Step StepCount Audio Endpoint Volume Volume Range Minimum Decibels Maximum Decibels Increment Decibels Audio Meter Information Peak Values Hardware Support Master Peak Value Audio Meter Information Channels Metering Channel Count Get Peak value Channel index Peak value Audio Render Client Gets a pointer to the buffer Number of frames requested Pointer to the buffer Release buffer Number of frames written Buffer flags Release the COM object
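Before Initialize is called, the AudioClient members described above can be used to discover the engine's mix format and check whether a desired format will be accepted. A hedged sketch (types and members as documented in this section; error handling omitted):

    using NAudio.CoreAudioApi;
    using NAudio.Wave;

    using var enumerator = new MMDeviceEnumerator();
    using var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
    var audioClient = device.AudioClient;            // note: a new instance on each access, caller disposes
    WaveFormat mixFormat = audioClient.MixFormat;    // shared-mode engine format
    bool supported = audioClient.IsFormatSupported(AudioClientShareMode.Shared, new WaveFormat(44100, 16, 2));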
AudioSessionControl object for information regarding an audio session Constructor. Dispose Finalizer Audio meter information of the audio session. Simple audio volume of the audio session (for volume and mute status). The current state of the audio session. The name of the audio session. The path to the icon shown in the mixer. The session identifier of the audio session. The session instance identifier of the audio session. The process identifier of the audio session. Is the session a system sounds session. The grouping param for an audio session grouping. For changing the grouping param and supplying the context of said change. Registers an event client for callbacks Unregisters an event client from receiving callbacks AudioSessionEvents callback implementation Constructor. Notifies the client that the display name for the session has changed. The new display name for the session. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the display icon for the session has changed. The path for the new display icon for the session. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the volume level or muting state of the session has changed. The new volume level for the audio session. The new muting state. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the volume level of an audio channel in the session submix has changed. The channel count. An array of volumes corresponding with each channel index. The number of the channel whose volume level changed. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the grouping parameter for the session has changed. The new grouping parameter for the session. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the stream-activity state of the session has changed. The new session state. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the session has been disconnected. The reason that the audio session was disconnected. An HRESULT code indicating whether the operation succeeded or failed. AudioSessionManager Designed to manage audio sessions and in particular the SimpleAudioVolume interface to adjust a session volume Session created delegate Occurs when an audio session has been added (for example, when another program that uses audio playback is run). SimpleAudioVolume object for adjusting the volume for the user session AudioSessionControl object for registering for callbacks and other session information Refresh sessions of current device. Returns list of sessions of current device. Dispose. Finalizer.
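Per-application volume is adjusted by walking the sessions returned by the device's AudioSessionManager and using each session's SimpleAudioVolume, as described above. A hedged sketch (member names as documented in this section; the 0.5f level is arbitrary):

    using NAudio.CoreAudioApi;

    using var enumerator = new MMDeviceEnumerator();
    using var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
    var sessions = device.AudioSessionManager.Sessions;   // SessionCollection
    for (int i = 0; i < sessions.Count; i++)
    {
        var session = sessions[i];                        // AudioSessionControl
        session.SimpleAudioVolume.Volume = 0.5f;          // normalized 0.0 to 1.0
        session.SimpleAudioVolume.Mute = false;
    }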
Specifies the category of an audio stream. Other audio stream. Media that will only stream when the app is in the foreground. Media that can be streamed when the app is in the background. Real-time communications, such as VOIP or chat. Alert sounds. Sound effects. Game sound effects. Background audio for games. Manages the AudioStreamVolume for the AudioClient. Verify that the channel index is valid. Return the current stream volumes for all channels An array of volume levels between 0.0 and 1.0 for each channel in the audio stream. Returns the current number of channels in this audio stream. Return the current volume for the requested channel. The 0 based index into the channels. The volume level for the channel between 0.0 and 1.0. Set the volume level for each channel of the audio stream. An array of volume levels (between 0.0 and 1.0) one for each channel. A volume level MUST be supplied for each channel in the audio stream. Thrown when the supplied array does not contain one element for each channel. Sets the volume level for one channel in the audio stream. The 0-based index into the channels to adjust the volume of. The volume level between 0.0 and 1.0 for this channel of the audio stream. Dispose Release/cleanup objects during Dispose/finalization. True if disposing and false if being finalized. Audio Volume Notification Data Event Context Muted Guid that raised the event Master Volume Channels Channel Volume Audio Volume Notification Data Connector Connects this connector to a connector in another device-topology object Retrieves the type of this connector Retrieves the data flow of this connector Disconnects this connector from its connected connector (if connected) Indicates whether this connector is connected to another connector Retrieves the connector this connector is connected to (if connected) Retrieves the global ID of the connector this connector is connected to (if connected) Retrieves the device ID of the audio device this connector is connected to (if connected) Connector Type The connector is part of a connection of unknown type. The connector is part of a physical connection to an auxiliary device that is installed inside the system chassis. The connector is part of a physical connection to an external device. The connector is part of a software-configured I/O connection (typically a DMA channel) between system memory and an audio hardware device on an audio adapter. The connector is part of a permanent connection that is fixed and cannot be configured under software control. The connector is part of a connection to a network. The EDataFlow enumeration defines constants that indicate the direction in which audio data flows between an audio endpoint device and an application. Audio rendering stream. Audio data flows from the application to the audio endpoint device, which renders the stream. Audio capture stream. Audio data flows from the audio endpoint device that captures the stream, to the application. Audio rendering or capture stream. Audio data can flow either from the application to the audio endpoint device, or from the audio endpoint device to the application. Device State DEVICE_STATE_ACTIVE DEVICE_STATE_DISABLED DEVICE_STATE_NOTPRESENT DEVICE_STATE_UNPLUGGED DEVICE_STATEMASK_ALL Windows CoreAudio DeviceTopology Retrieves the number of connections associated with this device-topology object Retrieves the connector at the supplied index Retrieves the device id of the device represented by this device-topology object Endpoint Hardware Support Volume Mute Meter Representation of binary large object container. Length of binary object. Pointer to buffer storing data. Is defined in WTypes.h Audio Client WASAPI Error Codes (HResult) AUDCLNT_E_NOT_INITIALIZED AUDCLNT_E_UNSUPPORTED_FORMAT AUDCLNT_E_DEVICE_IN_USE AUDCLNT_E_RESOURCES_INVALIDATED
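The AudioStreamVolume service described above (shared-mode streams only) gives per-channel control over a single stream rather than over the whole endpoint. A sketch under the assumption that the Initialize parameters documented earlier map onto the call shown here (buffer duration in 100-nanosecond units; levels are arbitrary):

    using System;
    using NAudio.CoreAudioApi;
    using NAudio.Wave;

    using var enumerator = new MMDeviceEnumerator();
    using var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
    var audioClient = device.AudioClient;
    audioClient.Initialize(AudioClientShareMode.Shared, AudioClientStreamFlags.None,
        10_000_000, 0, audioClient.MixFormat, Guid.Empty);     // 1 second buffer, default session
    var streamVolume = audioClient.AudioStreamVolume;          // throws for exclusive-mode streams
    for (int ch = 0; ch < streamVolume.ChannelCount; ch++)
    {
        streamVolume.SetChannelVolume(ch, 0.8f);               // each level between 0.0 and 1.0
    }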
Windows CoreAudio IAudioClient interface Defined in AudioClient.h The GetBufferSize method retrieves the size (maximum capacity) of the endpoint buffer. The GetService method accesses additional services from the audio client object. The interface ID for the requested service. Pointer to a pointer variable into which the method writes the address of an instance of the requested interface. Defined in AudioClient.h Defined in AudioClient.h Windows CoreAudio IAudioSessionControl interface Defined in AudioPolicy.h Retrieves the current state of the audio session. Receives the current session state. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the display name for the audio session. Receives a string that contains the display name. An HRESULT code indicating whether the operation succeeded or failed. Assigns a display name to the current audio session. A string that contains the new display name for the session. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the path for the display icon for the audio session. Receives a string that specifies the fully qualified path of the file that contains the icon. An HRESULT code indicating whether the operation succeeded or failed. Assigns a display icon to the current session. A string that specifies the fully qualified path of the file that contains the new icon. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the grouping parameter of the audio session. Receives the grouping parameter ID. An HRESULT code indicating whether the operation succeeded or failed. Assigns a session to a grouping of sessions. The new grouping parameter ID. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Registers the client to receive notifications of session events, including changes in the session state. A client-implemented interface. An HRESULT code indicating whether the operation succeeded or failed. Deletes a previous registration by the client to receive notifications. A client-implemented interface. An HRESULT code indicating whether the operation succeeded or failed. Windows CoreAudio IAudioSessionControl2 interface Defined in AudioPolicy.h Retrieves the current state of the audio session. Receives the current session state. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the display name for the audio session. Receives a string that contains the display name. An HRESULT code indicating whether the operation succeeded or failed. Assigns a display name to the current audio session. A string that contains the new display name for the session. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the path for the display icon for the audio session. Receives a string that specifies the fully qualified path of the file that contains the icon. An HRESULT code indicating whether the operation succeeded or failed. Assigns a display icon to the current session. A string that specifies the fully qualified path of the file that contains the new icon. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the grouping parameter of the audio session. Receives the grouping parameter ID. An HRESULT code indicating whether the operation succeeded or failed. Assigns a session to a grouping of sessions. The new grouping parameter ID. A user context value that is passed to the notification callback.
An HRESULT code indicating whether the operation succeeded or failed. Registers the client to receive notifications of session events, including changes in the session state. A client-implemented interface. An HRESULT code indicating whether the operation succeeded or failed. Deletes a previous registration by the client to receive notifications. A client-implemented interface. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the identifier for the audio session. Receives the session identifier. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the identifier of the audio session instance. Receives the identifier of a particular instance. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the process identifier of the audio session. Receives the process identifier of the audio session. An HRESULT code indicating whether the operation succeeded or failed. Indicates whether the session is a system sounds session. An HRESULT code indicating whether the operation succeeded or failed. Enables or disables the default stream attenuation experience (auto-ducking) provided by the system. A variable that enables or disables system auto-ducking. An HRESULT code indicating whether the operation succeeded or failed. Defines constants that indicate the current state of an audio session. MSDN Reference: http://msdn.microsoft.com/en-us/library/dd370792.aspx The audio session is inactive. The audio session is active. The audio session has expired. Defines constants that indicate a reason for an audio session being disconnected. MSDN Reference: Unknown The user removed the audio endpoint device. The Windows audio service has stopped. The stream format changed for the device that the audio session is connected to. The user logged off the WTS session that the audio session was running in. The WTS session that the audio session was running in was disconnected. The (shared-mode) audio session was disconnected to make the audio endpoint device available for an exclusive-mode connection. Windows CoreAudio IAudioSessionEvents interface Defined in AudioPolicy.h Notifies the client that the display name for the session has changed. The new display name for the session. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the display icon for the session has changed. The path for the new display icon for the session. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the volume level or muting state of the session has changed. The new volume level for the audio session. The new muting state. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the volume level of an audio channel in the session submix has changed. The channel count. An array of volumes corresponding with each channel index. The number of the channel whose volume level changed. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the grouping parameter for the session has changed. The new grouping parameter for the session. A user context value that is passed to the notification callback.
An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the stream-activity state of the session has changed. The new session state. An HRESULT code indicating whether the operation succeeded or failed. Notifies the client that the session has been disconnected. The reason that the audio session was disconnected. An HRESULT code indicating whether the operation succeeded or failed. Interface to receive session related events. Notification of volume changes including muting of audio session. The current volume. The current mute state, true muted, false otherwise. Notification of display name changed. The current display name. Notification of icon path changed. The current icon path. Notification of the client that the volume level of an audio channel in the session submix has changed. The channel count. An array of volumes corresponding with each channel index. The number of the channel whose volume level changed. Notification of the client that the grouping parameter for the session has changed. The new grouping parameter for the session. Notification of the client that the stream-activity state of the session has changed. The new session state. Notification of the client that the session has been disconnected. The reason that the audio session was disconnected. Windows CoreAudio IAudioSessionManager interface Defined in AudioPolicy.h Retrieves an audio session control. A new or existing session ID. Audio session flags. Receives an interface for the audio session. An HRESULT code indicating whether the operation succeeded or failed. Retrieves a simple audio volume control. A new or existing session ID. Audio session flags. Receives an interface for the audio session. An HRESULT code indicating whether the operation succeeded or failed. Retrieves an audio session control. A new or existing session ID. Audio session flags. Receives an interface for the audio session. An HRESULT code indicating whether the operation succeeded or failed. Retrieves a simple audio volume control. A new or existing session ID. Audio session flags. Receives an interface for the audio session. An HRESULT code indicating whether the operation succeeded or failed. Windows CoreAudio IAudioSessionNotification interface Defined in AudioPolicy.h Session being added. An HRESULT code indicating whether the operation succeeded or failed. Windows CoreAudio IConnector interface Defined in devicetopology.h Windows CoreAudio IDeviceTopology interface Defined in devicetopology.h Defined in MMDeviceAPI.h IMMNotificationClient Device State Changed Device Added Device Removed Default Device Changed Property Value Changed Windows CoreAudio IPart interface Defined in devicetopology.h Windows CoreAudio IPartsList interface Defined in devicetopology.h Is defined in propsys.h Windows CoreAudio ISimpleAudioVolume interface Defined in AudioClient.h Sets the master volume level for the audio session. The new volume level expressed as a normalized value between 0.0 and 1.0. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed. Retrieves the client volume level for the audio session. Receives the volume level expressed as a normalized value between 0.0 and 1.0. An HRESULT code indicating whether the operation succeeded or failed. Sets the muting state for the audio session. The new muting state. A user context value that is passed to the notification callback. An HRESULT code indicating whether the operation succeeded or failed.
Retrieves the current muting state for the audio session. Receives the muting state. An HRESULT code indicating whether the operation succeeded or failed. Implements IMMDeviceEnumerator MMDevice STGM enumeration Read-only access mode. Write-only access mode. Read-write access mode. From Propidl.h. http://msdn.microsoft.com/en-us/library/aa380072(VS.85).aspx contains a union so we have to do an explicit layout Value type tag. Reserved1. Reserved2. Reserved3. cVal. bVal. iVal. uiVal. lVal. ulVal. intVal. uintVal. hVal. uhVal. fltVal. dblVal. boolVal. scode. Date time. Binary large object. Pointer value. Creates a new PropVariant containing a long value Helper method to get blob data Interprets a blob as an array of structs Gets the type of data in this PropVariant Property value Allows freeing up memory, might turn this into a Dispose method? Clears with a known pointer MM Device Initializes the device's property store. The storage-access mode to open store for. Administrative client is required for Write and ReadWrite modes. Audio Client Makes a new one each call to allow caller to manage when to dispose n.b. should probably not be a property anymore Audio Meter Information Audio Endpoint Volume AudioSessionManager instance DeviceTopology instance Properties Friendly name for the endpoint Friendly name of device Icon path of device Device Instance Id of Device Device ID Data Flow Device State To string Dispose Finalizer Multimedia Device Collection Device count Get device by index Device index Device at the specified index Get Enumerator Device enumerator MM Device Enumerator Creates a new MM Device Enumerator Enumerate Audio Endpoints Desired DataFlow State Mask Device Collection Get Default Endpoint Data Flow Role Device Check to see if a default audio endpoint exists without needing an exception. Data Flow Role True if one exists, and false if one does not exist. Get device by ID Device ID Device Registers a callback for Device Events Object implementing IMMNotificationClient, cast to the IMMNotificationClient interface Unregisters a callback for Device Events Object implementing IMMNotificationClient, cast to the IMMNotificationClient interface Called to dispose/finalize contained objects. True if disposing, false if called from a finalizer. PROPERTYKEY is defined in wtypes.h Format ID Property ID Property Keys PKEY_DeviceInterface_FriendlyName PKEY_AudioEndpoint_FormFactor PKEY_AudioEndpoint_ControlPanelPageProvider PKEY_AudioEndpoint_Association PKEY_AudioEndpoint_PhysicalSpeakers PKEY_AudioEndpoint_GUID PKEY_AudioEndpoint_Disable_SysFx PKEY_AudioEndpoint_FullRangeSpeakers PKEY_AudioEndpoint_Supports_EventDriven_Mode PKEY_AudioEndpoint_JackSubType PKEY_AudioEngine_DeviceFormat PKEY_AudioEngine_OEMFormat PKEY_Device_FriendlyName PKEY_Device_IconPath Device description property. Id of controller device for endpoint device property. Device interface key property. System-supplied device instance identification string, assigned by PnP manager, persistent across system restarts. Property Store class, only supports reading properties at the moment. Property Count Gets property by index Property index The property Contains property guid Looks for a specific key True if found Indexer by guid Property Key Property or null if not found Gets property key at specified index Index Property key Gets property value at specified index Index Property value Sets property value at specified key. Key of property to set. Value to write. Saves a property change.
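The MMDeviceEnumerator described above is the usual entry point for endpoint discovery. A short hedged sketch that lists the active render devices by friendly name (types and members as documented in this section):

    using System;
    using NAudio.CoreAudioApi;

    using var enumerator = new MMDeviceEnumerator();
    foreach (var device in enumerator.EnumerateAudioEndPoints(DataFlow.Render, DeviceState.Active))
    {
        Console.WriteLine($"{device.FriendlyName} ({device.State})");
        device.Dispose();
    }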
Creates a new property store IPropertyStore COM interface Property Store Property Property Key Property Value The ERole enumeration defines constants that indicate the role that the system has assigned to an audio endpoint device Games, system notification sounds, and voice commands. Music, movies, narration, and live music recording Voice communications (talking to another person). Collection of sessions. Returns session at index. Number of current sessions. Windows CoreAudio SimpleAudioVolume Creates a new Audio endpoint volume ISimpleAudioVolume COM interface Dispose Finalizer Allows the user to adjust the volume from 0.0 to 1.0 Mute Represents state of a capture device Not recording Beginning to record Recording in progress Requesting stop Audio Capture using Wasapi See http://msdn.microsoft.com/en-us/library/dd370800%28VS.85%29.aspx Indicates recorded data is available Indicates that all recorded data has now been received. Initialises a new instance of the WASAPI capture class Initialises a new instance of the WASAPI capture class Capture device to use Initializes a new instance of the class. The capture device. true if sync is done with event. false use sleep. Initializes a new instance of the class. The capture device. true if sync is done with event. false use sleep. Length of the audio buffer in milliseconds. A lower value means lower latency but increased CPU usage. Share Mode - set before calling StartRecording Current Capturing State Capturing wave format Gets the default audio capture device The default audio capture device To allow overrides to specify different flags (e.g. loopback) Start Capturing Stop Capturing (requests a stop, wait for RecordingStopped event to know it has finished) Dispose Contains the name and CLSID of a DirectX Media Object Name CLSID Initializes a new instance of DmoDescriptor DirectX Media Object Enumerator Get audio effect names Audio effect names Get audio encoder names Audio encoder names Get audio decoder names Audio decoder names DMO Guids for use with DMOEnum dmoreg.h MediaErr.h DMO Inplace Process Flags DMO_INPLACE_NORMAL DMO_INPLACE_ZERO Return value when Process is executed with IMediaObjectInPlace Success. There is no remaining data to process. Success. There is still data to process. 
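Tying the capture pieces together, the WasapiCapture class described earlier raises DataAvailable with raw buffers in its WaveFormat until a stop is requested and RecordingStopped fires. A minimal sketch (default capture device, buffer handling elided; members as documented above):

    using NAudio.CoreAudioApi;

    var capture = new WasapiCapture();                 // default capture device
    capture.DataAvailable += (s, e) =>
    {
        // e.Buffer holds e.BytesRecorded bytes in capture.WaveFormat
    };
    capture.RecordingStopped += (s, e) => capture.Dispose();
    capture.StartRecording();
    // ... later, when done:
    capture.StopRecording();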
DMO Input Data Buffer Flags None DMO_INPUT_DATA_BUFFERF_SYNCPOINT DMO_INPUT_DATA_BUFFERF_TIME DMO_INPUT_DATA_BUFFERF_TIMELENGTH http://msdn.microsoft.com/en-us/library/aa929922.aspx DMO_MEDIA_TYPE Major type Major type name Subtype Subtype name Fixed size samples Sample size Format type Format type name Gets the structure as a Wave format (if it is one) Sets this object up to point to a wave format Wave format structure DMO Output Data Buffer Creates a new DMO Output Data Buffer structure Maximum buffer size Dispose Media Buffer Length of data in buffer Status Flags Timestamp Duration Retrives the data in this buffer Buffer to receive data Offset into buffer Is more data available If true, ProcessOuput should be called again DMO Output Data Buffer Flags None DMO_OUTPUT_DATA_BUFFERF_SYNCPOINT DMO_OUTPUT_DATA_BUFFERF_TIME DMO_OUTPUT_DATA_BUFFERF_TIMELENGTH DMO_OUTPUT_DATA_BUFFERF_INCOMPLETE DMO_PARTIAL_MEDIATYPE DMO Process Output Flags None DMO_PROCESS_OUTPUT_DISCARD_WHEN_NO_BUFFER Chorus Phase DSFXCHORUS_PHASE_NEG_180 DSFXCHORUS_PHASE_NEG_90 DSFXCHORUS_PHASE_ZERO DSFXCHORUS_PHASE_90 DSFXCHORUS_PHASE_180 Chorus Wave Form DSFXCHORUS_WAVE_TRIANGLE DSFXCHORUS_WAVE_SIN DMO Chorus Effect DMO Chorus Params DSFXCHORUS_WETDRYMIX_MIN DSFXCHORUS_WETDRYMIX_MAX DSFXCHORUS_WETDRYMIX_DEFAULT DSFXCHORUS_DEPTH_MIN DSFXCHORUS_DEPTH_MAX DSFXCHORUS_DEPTH_DEFAULT DSFXCHORUS_FEEDBACK_MIN DSFXCHORUS_FEEDBACK_MAX DSFXCHORUS_FEEDBACK_DEFAULT DSFXCHORUS_FREQUENCY_MIN DSFXCHORUS_FREQUENCY_MAX DSFXCHORUS_FREQUENCY_DEFAULT DSFXCHORUS_WAVE_DEFAULT DSFXCHORUS_DELAY_MIN DSFXCHORUS_DELAY_MAX DSFXCHORUS_DELAY_DEFAULT DSFXCHORUS_PHASE_DEFAULT Ratio of wet (processed) signal to dry (unprocessed) signal. Percentage by which the delay time is modulated by the low-frequency oscillator, in hundredths of a percentage point. Percentage of output signal to feed back into the effect's input. Frequency of the LFO. Waveform shape of the LFO. Number of milliseconds the input is delayed before it is played back. Phase differential between left and right LFOs. Media Object Media Object InPlace Effect Parameter Create new DMO Chorus Dispose code DMO Compressor Effect DMO Compressor Params DSFXCOMPRESSOR_GAIN_MIN DSFXCOMPRESSOR_GAIN_MAX DSFXCOMPRESSOR_GAIN_DEFAULT DSFXCOMPRESSOR_ATTACK_MIN DSFXCOMPRESSOR_ATTACK_MAX DSFXCOMPRESSOR_ATTACK_DEFAULT DSFXCOMPRESSOR_RELEASE_MIN DSFXCOMPRESSOR_RELEASE_MAX DSFXCOMPRESSOR_RELEASE_DEFAULT DSFXCOMPRESSOR_THRESHOLD_MIN DSFXCOMPRESSOR_THRESHOLD_MAX DSFXCOMPRESSOR_THRESHOLD_DEFAULT DSFXCOMPRESSOR_RATIO_MIN DSFXCOMPRESSOR_RATIO_MAX DSFXCOMPRESSOR_RATIO_DEFAULT DSFXCOMPRESSOR_PREDELAY_MIN DSFXCOMPRESSOR_PREDELAY_MAX DSFXCOMPRESSOR_PREDELAY_DEFAULT Output gain of signal after compression. Time before compression reaches its full value. Speed at which compression is stopped after input drops below Threshold. Point at which compression begins, in decibels. Compression ratio Time after Threshold is reached before attack phase is started, in milliseconds. 
Media Object Media Object InPlace Effect Parameter Create new DMO Compressor Dispose code DMO Distortion Effect DMO Distortion Params DSFXDISTORTION_GAIN_MIN DSFXDISTORTION_GAIN_MAX DSFXDISTORTION_GAIN_DEFAULT DSFXDISTORTION_EDGE_MIN DSFXDISTORTION_EDGE_MAX DSFXDISTORTION_EDGE_DEFAULT DSFXDISTORTION_POSTEQCENTERFREQUENCY_MIN DSFXDISTORTION_POSTEQCENTERFREQUENCY_MAX DSFXDISTORTION_POSTEQCENTERFREQUENCY_DEFAULT DSFXDISTORTION_POSTEQBANDWIDTH_MIN DSFXDISTORTION_POSTEQBANDWIDTH_MAX DSFXDISTORTION_POSTEQBANDWIDTH_DEFAULT DSFXDISTORTION_PRELOWPASSCUTOFF_MIN DSFXDISTORTION_PRELOWPASSCUTOFF_MAX DSFXDISTORTION_PRELOWPASSCUTOFF_DEFAULT Amount of signal change after distortion. Percentage of distortion intensity. Center frequency of harmonic content addition. Width of frequency band that determines range of harmonic content addition. Filter cutoff for high-frequency harmonics attenuation. Media Object Media Object InPlace Effect Parameter Create new DMO Distortion Dispose code Dmo Echo Effect DMO Echo Params DSFXECHO_WETDRYMIX_MIN DSFXECHO_WETDRYMIX_MAX DSFXECHO_WETDRYMIX_DEFAULT DSFXECHO_FEEDBACK_MIN DSFXECHO_FEEDBACK_MAX DSFXECHO_FEEDBACK_DEFAULT DSFXECHO_LEFTDELAY_MIN DSFXECHO_LEFTDELAY_MAX DSFXECHO_LEFTDELAY_DEFAULT DSFXECHO_RIGHTDELAY_MIN DSFXECHO_RIGHTDELAY_MAX DSFXECHO_RIGHTDELAY_DEFAULT DSFXECHO_PANDELAY_DEFAULT Ratio of wet (processed) signal to dry (unprocessed) signal. Percentage of output fed back into input. Delay for left channel, in milliseconds. Delay for right channel, in milliseconds. Value that specifies whether to swap left and right delays with each successive echo. Media Object Media Object InPlace Effect Parameter Create new DMO Echo Dispose code DMO Flanger Effect DMO Flanger Params DSFXFLANGER_WETDRYMIX_MIN DSFXFLANGER_WETDRYMIX_MAX DSFXFLANGER_WETDRYMIX_DEFAULT DSFXFLANGER_DEPTH_MIN DSFXFLANGER_DEPTH_MAX DSFXFLANGER_DEPTH_DEFAULT DSFXFLANGER_FEEDBACK_MIN DSFXFLANGER_FEEDBACK_MAX DSFXFLANGER_FEEDBACK_DEFAULT DSFXFLANGER_FREQUENCY_MIN DSFXFLANGER_FREQUENCY_MAX DSFXFLANGER_FREQUENCY_DEFAULT DSFXFLANGER_WAVE_DEFAULT DSFXFLANGER_DELAY_MIN DSFXFLANGER_DELAY_MAX DSFXFLANGER_DELAY_DEFAULT DSFXFLANGER_PHASE_DEFAULT Ratio of wet (processed) signal to dry (unprocessed) signal. Percentage by which the delay time is modulated by the low-frequency oscillator, in hundredths of a percentage point. Percentage of output signal to feed back into the effect's input. Frequency of the LFO. Waveform shape of the LFO. Number of milliseconds the input is delayed before it is played back. Phase differential between left and right LFOs. 
Media Object Media Object InPlace Effect Parameter Create new DMO Flanger Dispose code DMO Gargle Effect DMO Gargle Params DSFXGARGLE_RATEHZ_MIN DSFXGARGLE_RATEHZ_MAX DSFXGARGLE_RATEHZ_DEFAULT DSFXGARGLE_WAVE_DEFAULT Rate of modulation in hz Gargle Wave Shape Media Object Media Object InPlace Effect Parameter Create new DMO Gargle Dispose code DMO I3DL2Reverb Effect DMO I3DL2Reverb Params DSFX_I3DL2REVERB_ROOM_MIN DSFX_I3DL2REVERB_ROOM_MAX DSFX_I3DL2REVERB_ROOM_DEFAULT DSFX_I3DL2REVERB_ROOMHF_MIN DSFX_I3DL2REVERB_ROOMHF_MAX DSFX_I3DL2REVERB_ROOMHF_DEFAULT DSFX_I3DL2REVERB_ROOMROLLOFFFACTOR_MIN DSFX_I3DL2REVERB_ROOMROLLOFFFACTOR_MAX DSFX_I3DL2REVERB_ROOMROLLOFFFACTOR_DEFAULT DSFX_I3DL2REVERB_DECAYTIME_MIN DSFX_I3DL2REVERB_DECAYTIME_MAX DSFX_I3DL2REVERB_DECAYTIME_DEFAULT DSFX_I3DL2REVERB_DECAYHFRATIO_MIN DSFX_I3DL2REVERB_DECAYHFRATIO_MAX DSFX_I3DL2REVERB_DECAYHFRATIO_DEFAULT DSFX_I3DL2REVERB_REFLECTIONS_MIN DSFX_I3DL2REVERB_REFLECTIONS_MAX DSFX_I3DL2REVERB_REFLECTIONS_DEFAULT DSFX_I3DL2REVERB_REFLECTIONSDELAY_MIN DSFX_I3DL2REVERB_REFLECTIONSDELAY_MAX DSFX_I3DL2REVERB_REFLECTIONSDELAY_DEFAULT DSFX_I3DL2REVERB_REVERB_MIN DSFX_I3DL2REVERB_REVERB_MAX DSFX_I3DL2REVERB_REVERB_DEFAULT DSFX_I3DL2REVERB_REVERBDELAY_MIN DSFX_I3DL2REVERB_REVERBDELAY_MAX DSFX_I3DL2REVERB_REVERBDELAY_DEFAULT DSFX_I3DL2REVERB_DIFFUSION_MIN DSFX_I3DL2REVERB_DIFFUSION_MAX DSFX_I3DL2REVERB_DIFFUSION_DEFAULT DSFX_I3DL2REVERB_DENSITY_MIN DSFX_I3DL2REVERB_DENSITY_MAX DSFX_I3DL2REVERB_DENSITY_DEFAULT DSFX_I3DL2REVERB_HFREFERENCE_MIN DSFX_I3DL2REVERB_HFREFERENCE_MAX DSFX_I3DL2REVERB_HFREFERENCE_DEFAULT DSFX_I3DL2REVERB_QUALITY_MIN DSFX_I3DL2REVERB_QUALITY_MAX DSFX_I3DL2REVERB_QUALITY_DEFAULT Attenuation of the room effect, in millibels (mB) Attenuation of the room high-frequency effect, in mB. Rolloff factor for the reflected signals. Decay time, in seconds. Ratio of the decay time at high frequencies to the decay time at low frequencies. Attenuation of early reflections relative to lRoom, in mB. Delay time of the first reflection relative to the direct path, in seconds. Attenuation of late reverberation relative to lRoom, in mB. Time limit between the early reflections and the late reverberation relative to the time of the first reflection. Echo density in the late reverberation decay, in percent. Modal density in the late reverberation decay, in percent. Reference high frequency, in hertz. the quality of the environmental reverberation effect. Higher values produce better quality at the expense of processing time. Sets standard reverberation parameters of a buffer. I3DL2EnvironmentPreset retrieves an identifier for standard reverberation parameters of a buffer. I3DL2EnvironmentPreset Media Object Media Object InPlace Effect Parameter Create new DMO I3DL2Reverb Dispose code DMO Parametric Equalizer Effect DMO ParamEq Params DSFXPARAMEQ_CENTER_MIN DSFXPARAMEQ_CENTER_MAX DSFXPARAMEQ_CENTER_DEFAULT DSFXPARAMEQ_BANDWIDTH_MIN DSFXPARAMEQ_BANDWIDTH_MAX DSFXPARAMEQ_BANDWIDTH_DEFAULT DSFXPARAMEQ_GAIN_MIN DSFXPARAMEQ_GAIN_MAX DSFXPARAMEQ_GAIN_DEFAULT Center frequency, in hertz Bandwidth, in semitones. 
Gain Media Object Media Object InPlace Effect Parameter Create new DMO ParamEq Dispose code DMO Reverb Effect DMO Reverb Params DSFX_WAVESREVERB_INGAIN_MIN DSFX_WAVESREVERB_INGAIN_MAX DSFX_WAVESREVERB_INGAIN_DEFAULT DSFX_WAVESREVERB_REVERBMIX_MIN DSFX_WAVESREVERB_REVERBMIX_MAX DSFX_WAVESREVERB_REVERBMIX_DEFAULT DSFX_WAVESREVERB_REVERBTIME_MIN DSFX_WAVESREVERB_REVERBTIME_MAX DSFX_WAVESREVERB_REVERBTIME_DEFAULT DSFX_WAVESREVERB_HIGHFREQRTRATIO_MIN DSFX_WAVESREVERB_HIGHFREQRTRATIO_MAX DSFX_WAVESREVERB_HIGHFREQRTRATIO_DEFAULT Input gain of signal, in decibels (dB). Reverb mix, in dB. Reverb time, in milliseconds. High-frequency reverb time ratio. Media Object Media Object InPlace Effect Parameter Create new DMO WavesReverb Dispose code DSFXECHO_PANDELAY DSFXECHO_PANDELAY_MIN DSFXECHO_PANDELAY_MAX Flanger Phase DSFXFLANGER_PHASE_NEG_180 DSFXFLANGER_PHASE_NEG_90 DSFXFLANGER_PHASE_ZERO DSFXFLANGER_PHASE_90 DSFXFLANGER_PHASE_180 Flanger Wave Form DSFXFLANGER_WAVE_TRIANGLE DSFXFLANGER_WAVE_SIN Gargle Wave Shape DSFXGARGLE_WAVE_TRIANGLE DSFXGARGLE_WAVE_SQUARE I3DL2 Reverberation Presets DSFX_I3DL2_ENVIRONMENT_PRESET_DEFAULT DSFX_I3DL2_ENVIRONMENT_PRESET_GENERIC DSFX_I3DL2_ENVIRONMENT_PRESET_PADDEDCELL DSFX_I3DL2_ENVIRONMENT_PRESET_ROOM DSFX_I3DL2_ENVIRONMENT_PRESET_BATHROOM DSFX_I3DL2_ENVIRONMENT_PRESET_LIVINGROOM DSFX_I3DL2_ENVIRONMENT_PRESET_STONEROOM DSFX_I3DL2_ENVIRONMENT_PRESET_AUDITORIUM DSFX_I3DL2_ENVIRONMENT_PRESET_CONCERTHALL DSFX_I3DL2_ENVIRONMENT_PRESET_CAVE DSFX_I3DL2_ENVIRONMENT_PRESET_ARENA DSFX_I3DL2_ENVIRONMENT_PRESET_HANGAR DSFX_I3DL2_ENVIRONMENT_PRESET_CARPETEDHALLWAY DSFX_I3DL2_ENVIRONMENT_PRESET_HALLWAY DSFX_I3DL2_ENVIRONMENT_PRESET_STONECORRIDOR DSFX_I3DL2_ENVIRONMENT_PRESET_ALLEY DSFX_I3DL2_ENVIRONMENT_PRESET_FOREST DSFX_I3DL2_ENVIRONMENT_PRESET_CITY DSFX_I3DL2_ENVIRONMENT_PRESET_MOUNTAINS DSFX_I3DL2_ENVIRONMENT_PRESET_QUARRY DSFX_I3DL2_ENVIRONMENT_PRESET_PLAIN DSFX_I3DL2_ENVIRONMENT_PRESET_PARKINGLOT DSFX_I3DL2_ENVIRONMENT_PRESET_SEWERPIPE DSFX_I3DL2_ENVIRONMENT_PRESET_UNDERWATER DSFX_I3DL2_ENVIRONMENT_PRESET_SMALLROOM DSFX_I3DL2_ENVIRONMENT_PRESET_MEDIUMROOM DSFX_I3DL2_ENVIRONMENT_PRESET_LARGEROOM DSFX_I3DL2_ENVIRONMENT_PRESET_MEDIUMHALL DSFX_I3DL2_ENVIRONMENT_PRESET_LARGEHALL DSFX_I3DL2_ENVIRONMENT_PRESET_PLATE Interface of DMO Effectors Parameters of the effect to be used Media Object Media Object InPlace Effect Parameter IMediaBuffer Interface Set Length Length HRESULT Get Max Length Max Length HRESULT Get Buffer and Length Pointer to variable into which to write the Buffer Pointer Pointer to variable into which to write the Valid Data Length HRESULT defined in mediaobj.h defined in mediaobj.h defined in Medparam.h Windows Media Resampler Props wmcodecdsp.h Range is 1 to 60 Specifies the channel matrix. 
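The resampler these properties belong to is normally consumed through the ResamplerDmoStream mentioned later in this section rather than by driving IMediaObject directly. A hedged sketch of converting a PCM file to 16 kHz mono (file names are illustrative; this relies on the Resampler DMO, so desktop Windows only):

    using NAudio.Wave;

    using var reader = new WaveFileReader("input.wav");
    using var resampled = new ResamplerDmoStream(reader, new WaveFormat(16000, 16, 1));
    WaveFileWriter.CreateWaveFile("output-16k.wav", resampled);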
Attempting to implement the COM IMediaBuffer interface as a .NET object Not sure what will happen when I pass this to an unmanaged object Creates a new Media Buffer Maximum length in bytes Dispose and free memory for buffer Finalizer Set length of valid data in the buffer length HRESULT Gets the maximum length of the buffer Max length (output parameter) HRESULT Gets buffer and / or length Pointer to variable into which buffer pointer should be written Pointer to variable into which valid data length should be written HRESULT Length of data in the media buffer Loads data into this buffer Data to load Number of bytes to load Retrieves the data in the output buffer buffer to retrieve into offset within that buffer Media Object Creates a new Media Object Media Object COM interface Number of input streams Number of output streams Gets the input media type for the specified input stream Input stream index Input type index DMO Media Type or null if there are no more input types Gets the DMO Media Output type The output stream Output type index DMO Media Type or null if no more available retrieves the media type that was set for an output stream, if any Output stream index DMO Media Type or null if no more available Enumerates the supported input types Input stream index Enumeration of input types Enumerates the output types Output stream index Enumeration of supported output types Querys whether a specified input type is supported Input stream index Media type to check true if supports Sets the input type helper method Input stream index Media type Flags (can be used to test rather than set) Sets the input type Input stream index Media Type Sets the input type to the specified Wave format Input stream index Wave format Requests whether the specified Wave format is supported as an input Input stream index Wave format true if supported Helper function to make a DMO Media Type to represent a particular WaveFormat Checks if a specified output type is supported n.b. you may need to set the input type first Output stream index Media type True if supported Tests if the specified Wave Format is supported for output n.b. may need to set the input type first Output stream index Wave format True if supported Helper method to call SetOutputType Sets the output type n.b. may need to set the input type first Output stream index Media type to set Set output type to the specified wave format n.b. may need to set input type first Output stream index Wave format Get Input Size Info Input Stream Index Input Size Info Get Output Size Info Output Stream Index Output Size Info Process Input Input Stream index Media Buffer Flags Timestamp Duration Process Output Flags Output buffer count Output buffers Gives the DMO a chance to allocate any resources needed for streaming Tells the DMO to free any resources needed for streaming Gets maximum input latency input stream index Maximum input latency as a ref-time Flushes all buffered data Report a discontinuity on the specified input stream Input Stream index Is this input stream accepting data? Input Stream index true if accepting data Experimental code, not currently being called Not sure if it is necessary anyway Media Object InPlace Creates a new Media Object InPlace Media Object InPlace COM Interface Processes a block of data. The application supplies a pointer to a block of input data. The DMO processes the data in place. Size of the data, in bytes. offset into buffer In/Out Data Buffer Start time of the data. 
DmoInplaceProcessFlags Return value when Process is executed with IMediaObjectInPlace Creates a copy of the DMO in its current state. Copied MediaObjectInPlace Retrieves the latency introduced by this DMO. The latency, in 100-nanosecond units Get Media Object Media Object Dispose code Media Object Size Info Minimum Buffer Size, in bytes Max Lookahead Alignment Media Object Size Info ToString MP_PARAMINFO MP_TYPE MPT_INT MPT_FLOAT MPT_BOOL MPT_ENUM MPT_MAX MP_CURVE_TYPE uuids.h, ksuuids.h From wmcodecsdp.h Implements: - IMediaObject - IMFTransform (Media foundation - we will leave this for now as there is loads of MF stuff) - IPropertyStore - IWMResamplerProps Can resample PCM or IEEE float DMO Resampler Creates a new Resampler based on the DMO Resampler Media Object Dispose code - experimental at the moment Was added trying to track down why Resampler crashes NUnit This code not currently being called by ResamplerDmoStream implements IMediaObject (DirectX Media Object) implements IMFTransform (Media Foundation Transform) On Windows XP, it is always an MM (if present at all) Windows Media MP3 Decoder (as a DMO) WORK IN PROGRESS - DO NOT USE! Creates a new MP3 decoder based on the Windows Media MP3 Decoder DMO Media Object Dispose code - experimental at the moment Was added trying to track down why Resampler crashes NUnit This code not currently being called by ResamplerDmoStream BiQuad filter Passes a single sample through the filter Input sample Output sample Set this up as a low pass filter Sample Rate Cut-off Frequency Bandwidth Set this up as a peaking EQ Sample Rate Centre Frequency Bandwidth (Q) Gain in decibels Set this as a high pass filter Create a low pass filter Create a high pass filter Create a bandpass filter with constant skirt gain Create a bandpass filter with constant peak gain Creates a notch filter Creates an all pass filter Create a Peaking EQ H(s) = A * (s^2 + (sqrt(A)/Q)*s + A)/(A*s^2 + (sqrt(A)/Q)*s + 1) a "shelf slope" parameter (for shelving EQ only). When S = 1, the shelf slope is as steep as it can be and remain monotonically increasing or decreasing gain with frequency. The shelf slope, in dB/octave, remains proportional to S for all other values for a fixed f0/Fs and dBgain. Gain in decibels H(s) = A * (A*s^2 + (sqrt(A)/Q)*s + 1)/(s^2 + (sqrt(A)/Q)*s + A) Type to represent complex number Real Part Imaginary Part Envelope generator (ADSR) Envelope State Idle Attack Decay Sustain Release Creates and Initializes an Envelope Generator Attack Rate (seconds * SamplesPerSecond) Decay Rate (seconds * SamplesPerSecond) Release Rate (seconds * SamplesPerSecond) Sustain Level (1 = 100%) Sets the attack curve Sets the decay release curve Read the next volume multiplier from the envelope generator A volume multiplier Trigger the gate If true, enter attack phase, if false enter release phase (unless already idle) Current envelope state Reset to idle state Get the current output level Summary description for FastFourierTransform. This computes an in-place complex-to-complex FFT x and y are the real and imaginary arrays of 2^m points. Applies a Hamming Window Index into frame Frame size (e.g. 1024) Multiplier for Hamming window Applies a Hann Window Index into frame Frame size (e.g. 1024) Multiplier for Hann window Applies a Blackman-Harris Window Index into frame Frame size (e.g. 1024) Multiplier for Blackman-Harris window
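The FastFourierTransform helper described above works in place on a power-of-two buffer of Complex values, with a window function applied per sample before the transform. A short sketch (assuming the NAudio.Dsp namespace; the test tone simply stands in for real input samples):

    using System;
    using NAudio.Dsp;

    int m = 10;                         // 2^10 = 1024 point FFT
    int frameSize = 1 << m;
    var buffer = new Complex[frameSize];
    for (int n = 0; n < frameSize; n++)
    {
        float sample = (float)Math.Sin(2 * Math.PI * 440 * n / 44100.0);  // stand-in 440 Hz tone
        buffer[n].X = sample * (float)FastFourierTransform.HammingWindow(n, frameSize);
        buffer[n].Y = 0;
    }
    FastFourierTransform.FFT(true, m, buffer);  // forward, in-place transform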
Summary description for ImpulseResponseConvolution. A very simple mono convolution algorithm. This will be very slow. This is actually a downwards normalize for data that will clip. SMB Pitch Shifter Pitch Shift Pitch Shift Short Time Fourier Transform Fully managed resampler, based on Cockos WDL Resampler Creates a new Resampler Sets the mode. If sinc is set, it overrides interp or filtercnt. Sets the filter parameters used for filtercnt>0 but not sinc Set feed mode. If true, that means the first parameter to ResamplePrepare will specify however much input you have, not how much you want. Reset Prepare. Note that it is safe to call ResamplePrepare without calling ResampleOut (the next call of ResamplePrepare will function as normal). nb inbuffer was WDL_ResampleSample **, returning a place to put the in buffer, so we return a buffer and offset req_samples is output samples desired if !wantInputDriven, or if wantInputDriven is input samples that we have returns number of samples desired (put these into *inbuffer) Channel Mode Stereo Joint Stereo Dual Channel Mono An ID3v2 Tag Reads an ID3v2 tag from a stream Creates a new ID3v2 tag from a collection of key-value pairs. A collection of key-value pairs containing the tags to include in the ID3v2 tag. A new ID3v2 tag Convert the frame size to a byte array. The frame body size. Creates an ID3v2 frame for the given key-value pair. Gets the Id3v2 Header size. The size is encoded so that only 7 bits per byte are actually used. Creates the Id3v2 tag header and returns it as a byte array. The Id3v2 frames that will be included in the file. This is used to calculate the ID3v2 tag size. Creates the Id3v2 tag for the given key-value pairs and returns it in a stream. Raw data from this tag Interface for MP3 frame by frame decoder Decompress a single MP3 frame Frame to decompress Output buffer Offset within output buffer Bytes written to output buffer Tell the decoder that we have repositioned PCM format that we are converting into Represents an MP3 Frame Reads an MP3 frame from a stream input stream A valid MP3 frame, or null if none found Reads an MP3Frame from a stream http://mpgedit.org/mpgedit/mpeg_format/mpeghdr.htm has some good info also see http://www.codeproject.com/KB/audio-video/mpegaudioinfo.aspx A valid MP3 frame, or null if none found Constructs an MP3 frame Checks if the four bytes represent a valid header; if they do, parses the values into the Mp3Frame Sample rate of this frame Frame length in bytes Bit Rate Raw frame data (includes header bytes) MPEG Version MPEG Layer Channel Mode The number of samples in this frame The channel extension bits The bitrate index (directly from the header) Whether the Copyright bit is set Whether a CRC is present Not part of the MP3 frame itself - indicates where in the stream we found this header MP3 Frame Decompressor using ACM Creates a new ACM frame decompressor The MP3 source format Output format (PCM) Decompresses a frame The MP3 frame Destination buffer Offset within destination buffer Bytes written into destination buffer Resets the MP3 Frame Decompressor after a reposition operation Disposes of this MP3 frame decompressor Finalizer ensuring that resources get released properly MPEG Layer flags Reserved Layer 3 Layer 2 Layer 1 MPEG Version Flags Version 2.5 Reserved Version 2 Version 1 Represents a Xing VBR header Load Xing Header Frame Xing Header Sees if a frame contains a Xing header Number of frames Number of bytes VBR Scale property The MP3 frame
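The Mp3Frame and IMp3FrameDecompressor types above support frame-by-frame MP3 handling. A hedged sketch that walks the frames in a file and reports their properties (file name illustrative; LoadFromStream returns null when no further frame is found):

    using System;
    using System.IO;
    using NAudio.Wave;

    using var stream = File.OpenRead("song.mp3");
    Mp3Frame frame;
    while ((frame = Mp3Frame.LoadFromStream(stream)) != null)
    {
        Console.WriteLine($"{frame.SampleRate} Hz, {frame.BitRate} bps, {frame.FrameLength} bytes");
    }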
ASIO 64 bit value Unfortunately the ASIO API was implemented before compilers consistently supported 64 bit integer types. By using the structure the data layout on a little-endian system like the Intel x86 architecture will result in a "non native" storage of the 64 bit data. The most significant 32 bits are stored first in memory, the least significant bits are stored in the higher memory space. However, each 32 bit half is stored in the native little-endian fashion. Most significant bits (Bits 32..63) Least significant bits (Bits 0..31) ASIO Callbacks ASIO Buffer Switch Callback ASIO Sample Rate Did Change Callback ASIO Message Callback ASIO Buffer Switch Time Info Callback Buffer switch callback void (*bufferSwitch) (long doubleBufferIndex, AsioBool directProcess); Sample Rate Changed callback void (*sampleRateDidChange) (AsioSampleRate sRate); ASIO Message callback long (*asioMessage) (long selector, long value, void* message, double* opt); ASIO Buffer Switch Time Info Callback AsioTime* (*bufferSwitchTimeInfo) (AsioTime* params, long doubleBufferIndex, AsioBool directProcess); ASIO Channel Info on input, channel index Is Input Is Active Channel Info ASIO Sample Type Name Main AsioDriver Class. To use this class, first query GetAsioDriverNames() and then use GetAsioDriverByName to instantiate the correct AsioDriver. This is the first AsioDriver binding fully implemented in C#! Contributor: Alexandre Mutel - email: alexandre_mutel at yahoo.fr Gets the ASIO driver names installed. A list of driver names. Use one of these names with GetAsioDriverByName. Instantiate an AsioDriver given its name. The name of the driver. An AsioDriver instance. Instantiate the ASIO driver by GUID. The GUID. An AsioDriver instance. Inits the AsioDriver. The sys handle. Gets the name of the driver. Gets the driver version. Gets the error message. Starts this instance. Stops this instance. Gets the number of channels. The num input channels. The num output channels. Gets the latencies (n.b. does not throw an exception) The input latency. The output latency. Gets the size of the buffer. Size of the min. Size of the max. Size of the preferred. The granularity. Determines whether this instance can use the specified sample rate. The sample rate. True if this instance can use the specified sample rate; otherwise, false. Gets the sample rate. Sets the sample rate. The sample rate. Gets the clock sources. The clocks. The num sources. Sets the clock source. The reference. Gets the sample position. The sample pos. The time stamp. Gets the channel info. The channel number. If set to true, gets input channel info; otherwise output channel info. Channel Info Creates the buffers. The buffer infos. The num channels. Size of the buffer. The callbacks. Disposes the buffers. Controls the panel. Futures the specified selector. The selector. The opt. Notifies OutputReady to the AsioDriver. Releases this instance. Handles the exception. Throws an exception based on the error. The error to check. Method name Inits the vTable method from GUID. This is a tricky part of this class. The ASIO GUID. Internal VTable structure to store all the delegates to the C++ COM method. AsioDriverCapability holds all the information from the AsioDriver. Use AsioDriverExt to get the Capabilities. Driver Name Number of Input Channels Number of Output Channels Input Latency Output Latency Buffer Minimum Size Buffer Maximum Size Buffer Preferred Size Buffer Granularity Sample Rate Input Channel Info Output Channel Info Callback used by the AsioDriverExt to get wave data
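As the AsioDriver summary above says, discovery and instantiation are separate steps. A hedged sketch (assuming the NAudio.Wave.Asio namespace and the member names described in this section; the driver name is illustrative and an ASIO device may not be present at all):

    using System;
    using NAudio.Wave.Asio;

    foreach (var name in AsioDriver.GetAsioDriverNames())
    {
        Console.WriteLine(name);
    }
    var driver = AsioDriver.GetAsioDriverByName("ASIO4ALL v2");  // illustrative driver name
    var driverExt = new AsioDriverExt(driver);                   // simplified wrapper described below
    Console.WriteLine(driverExt.Capabilities.SampleRate);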
AsioDriverExt is a simplified version of the AsioDriver. It provides an easier way to access the capabilities of the Driver and implement the callbacks necessary for feeding the driver. Implementation inspired by Rob Philpot's managed C++ ASIO wrapper, BlueWave.Interop.Asio http://www.codeproject.com/KB/mcpp/Asio.Net.aspx Contributor: Alexandre Mutel - email: alexandre_mutel at yahoo.fr Initializes a new instance of the class based on an already instantiated AsioDriver instance. An AsioDriver already instantiated. Allows adjustment of which is the first output channel we write to Output Channel offset Input Channel offset Gets the driver used. The AsioDriver. Starts playing the buffers. Stops playing the buffers. Shows the control panel. Releases this instance. Determines whether the specified sample rate is supported. The sample rate. True if the sample rate is supported; otherwise, false. Sets the sample rate. The sample rate. Gets or sets the fill buffer callback. The fill buffer callback. Gets the capabilities of the AsioDriver. The capabilities. Creates the buffers for playing. The number of output channels. The number of input channels. If set to true, use the maximum buffer size, else use the preferred size. Builds the capabilities internally. Callback called by the AsioDriver on fill buffer demand. Redirect call to external callback. Index of the double buffer. If set to true, direct process. Callback called by the AsioDriver on event "Sample rate changed". The sample rate. Asio message callback. The selector. The value. The message. The opt. Buffers switch time info callback. The asio time param. Index of the double buffer. If set to true, direct process. ASIO Error Codes This value will be returned whenever the call succeeded Unique success return value for ASIOFuture calls Hardware input or output is not present or available Hardware is malfunctioning (can be returned by any ASIO function) Input parameter invalid Hardware is in a bad mode or used in a bad mode Hardware is not running when sample position is inquired Sample clock or rate cannot be determined or is not present Not enough memory for completing the request ASIO Message Selector selector in <value>, returns 1L if supported, returns engine (host) asio implementation version, request driver reset. if accepted, this not yet supported, will currently always return 0L. the driver went out of sync, such that the drivers latencies have changed. The engine if host returns true here, it will expect the supports timecode unused - value: number of commands, message points to mmc commands kAsioSupportsXXX return 1 if host supports this unused and undefined unused and undefined unused and undefined unused and undefined driver detected an overload This class stores convertors for different interleaved WaveFormat to ASIOSampleType separate channel format. Selects the sample convertor based on the input WaveFormat and the output ASIOSampleType. The wave format. The type.
Optimized convertor for 2 channels SHORT Generic convertor for SHORT Optimized convertor for 2 channels FLOAT Generic convertor Float to INT Optimized convertor for 2 channels INT to INT Generic convertor INT to INT Optimized convertor for 2 channels INT to SHORT Generic convertor INT to SHORT Generic convertor INT to FLOAT Optimized convertor for 2 channels SHORT Generic convertor for SHORT Optimized convertor for 2 channels FLOAT Generic convertor SHORT Generic converter 24 LSB Generic convertor for float ASIO Sample Type Int 16 MSB Int 24 MSB (used for 20 bits as well) Int 32 MSB IEEE 754 32 bit float IEEE 754 64 bit double float 32 bit data with 16 bit alignment 32 bit data with 18 bit alignment 32 bit data with 20 bit alignment 32 bit data with 24 bit alignment Int 16 LSB Int 24 LSB used for 20 bits as well Int 32 LSB IEEE 754 32 bit float, as found on Intel x86 architecture IEEE 754 64 bit double float, as found on Intel x86 architecture 32 bit data with 16 bit alignment 32 bit data with 18 bit alignment 32 bit data with 20 bit alignment 32 bit data with 24 bit alignment DSD 1 bit data, 8 samples per byte. First sample in Least significant bit. DSD 1 bit data, 8 samples per byte. First sample in Most significant bit. DSD 8 bit data, 1 sample per byte. No Endianness required. ASIO common Exception. Gets the name of the error. The error. the name of the error Represents an installed ACM Driver Helper function to determine whether a particular codec is installed The short name of the function Whether the codec is installed Attempts to add a new ACM driver from a file Full path of the .acm or dll file containing the driver Handle to the driver Removes a driver previously added using AddLocalDriver Local driver to remove Show Format Choose Dialog Owner window handle, can be null Window title Enumeration flags. None to get everything Enumeration format. 
Only needed with certain enumeration flags The selected format Textual description of the selected format Textual description of the selected format tag True if a format was selected Gets the maximum size needed to store a WaveFormat for ACM interop functions Finds a Driver by its short name Short Name The driver, or null if not found Gets a list of the ACM Drivers installed The callback for acmDriverEnum Creates a new ACM Driver object Driver handle The short name of this driver The full name of this driver The driver ID ToString The list of FormatTags for this ACM Driver Gets all the supported formats for a given format tag Format tag Supported formats Opens this driver Closes this driver Dispose Flags for use with acmDriverAdd ACM_DRIVERADDF_LOCAL ACM_DRIVERADDF_GLOBAL ACM_DRIVERADDF_FUNCTION ACM_DRIVERADDF_NOTIFYHWND Interop structure for ACM driver details (ACMDRIVERDETAILS) http://msdn.microsoft.com/en-us/library/dd742889%28VS.85%29.aspx DWORD cbStruct FOURCC fccType FOURCC fccComp WORD wMid; WORD wPid DWORD vdwACM DWORD vdwDriver DWORD fdwSupport; DWORD cFormatTags DWORD cFilterTags HICON hicon TCHAR szShortName[ACMDRIVERDETAILS_SHORTNAME_CHARS]; TCHAR szLongName[ACMDRIVERDETAILS_LONGNAME_CHARS]; TCHAR szCopyright[ACMDRIVERDETAILS_COPYRIGHT_CHARS]; TCHAR szLicensing[ACMDRIVERDETAILS_LICENSING_CHARS]; TCHAR szFeatures[ACMDRIVERDETAILS_FEATURES_CHARS]; ACMDRIVERDETAILS_SHORTNAME_CHARS ACMDRIVERDETAILS_LONGNAME_CHARS ACMDRIVERDETAILS_COPYRIGHT_CHARS ACMDRIVERDETAILS_LICENSING_CHARS ACMDRIVERDETAILS_FEATURES_CHARS Flags indicating what support a particular ACM driver has ACMDRIVERDETAILS_SUPPORTF_CODEC - Codec ACMDRIVERDETAILS_SUPPORTF_CONVERTER - Converter ACMDRIVERDETAILS_SUPPORTF_FILTER - Filter ACMDRIVERDETAILS_SUPPORTF_HARDWARE - Hardware ACMDRIVERDETAILS_SUPPORTF_ASYNC - Async ACMDRIVERDETAILS_SUPPORTF_LOCAL - Local ACMDRIVERDETAILS_SUPPORTF_DISABLED - Disabled ACM_DRIVERENUMF_NOLOCAL, Only global drivers should be included in the enumeration ACM_DRIVERENUMF_DISABLED, Disabled ACM drivers should be included in the enumeration ACM Format Format Index Format Tag Support Flags WaveFormat WaveFormat Size Format Description ACMFORMATCHOOSE http://msdn.microsoft.com/en-us/library/dd742911%28VS.85%29.aspx DWORD cbStruct; DWORD fdwStyle; HWND hwndOwner; LPWAVEFORMATEX pwfx; DWORD cbwfx; LPCTSTR pszTitle; TCHAR szFormatTag[ACMFORMATTAGDETAILS_FORMATTAG_CHARS]; TCHAR szFormat[ACMFORMATDETAILS_FORMAT_CHARS]; LPTSTR pszName; n.b. can be written into DWORD cchName Should be at least 128 unless name is zero DWORD fdwEnum; LPWAVEFORMATEX pwfxEnum; HINSTANCE hInstance; LPCTSTR pszTemplateName; LPARAM lCustData; ACMFORMATCHOOSEHOOKPROC pfnHook; None ACMFORMATCHOOSE_STYLEF_SHOWHELP ACMFORMATCHOOSE_STYLEF_ENABLEHOOK ACMFORMATCHOOSE_STYLEF_ENABLETEMPLATE ACMFORMATCHOOSE_STYLEF_ENABLETEMPLATEHANDLE ACMFORMATCHOOSE_STYLEF_INITTOWFXSTRUCT ACMFORMATCHOOSE_STYLEF_CONTEXTHELP ACMFORMATDETAILS http://msdn.microsoft.com/en-us/library/dd742913%28VS.85%29.aspx DWORD cbStruct; DWORD dwFormatIndex; DWORD dwFormatTag; DWORD fdwSupport; LPWAVEFORMATEX pwfx; DWORD cbwfx; TCHAR szFormat[ACMFORMATDETAILS_FORMAT_CHARS]; ACMFORMATDETAILS_FORMAT_CHARS Format Enumeration Flags None ACM_FORMATENUMF_CONVERT The WAVEFORMATEX structure pointed to by the pwfx member of the ACMFORMATDETAILS structure is valid. The enumerator will only enumerate destination formats that can be converted from the given pwfx format. 
ACM_FORMATENUMF_HARDWARE The enumerator should only enumerate formats that are supported as native input or output formats on one or more of the installed waveform-audio devices. This flag provides a way for an application to choose only formats native to an installed waveform-audio device. This flag must be used with one or both of the ACM_FORMATENUMF_INPUT and ACM_FORMATENUMF_OUTPUT flags. Specifying both ACM_FORMATENUMF_INPUT and ACM_FORMATENUMF_OUTPUT will enumerate only formats that can be opened for input or output. This is true regardless of whether this flag is specified. ACM_FORMATENUMF_INPUT Enumerator should enumerate only formats that are supported for input (recording). ACM_FORMATENUMF_NCHANNELS The nChannels member of the WAVEFORMATEX structure pointed to by the pwfx member of the ACMFORMATDETAILS structure is valid. The enumerator will enumerate only a format that conforms to this attribute. ACM_FORMATENUMF_NSAMPLESPERSEC The nSamplesPerSec member of the WAVEFORMATEX structure pointed to by the pwfx member of the ACMFORMATDETAILS structure is valid. The enumerator will enumerate only a format that conforms to this attribute. ACM_FORMATENUMF_OUTPUT Enumerator should enumerate only formats that are supported for output (playback). ACM_FORMATENUMF_SUGGEST The WAVEFORMATEX structure pointed to by the pwfx member of the ACMFORMATDETAILS structure is valid. The enumerator will enumerate all suggested destination formats for the given pwfx format. This mechanism can be used instead of the acmFormatSuggest function to allow an application to choose the best suggested format for conversion. The dwFormatIndex member will always be set to zero on return. ACM_FORMATENUMF_WBITSPERSAMPLE The wBitsPerSample member of the WAVEFORMATEX structure pointed to by the pwfx member of the ACMFORMATDETAILS structure is valid. The enumerator will enumerate only a format that conforms to this attribute. ACM_FORMATENUMF_WFORMATTAG The wFormatTag member of the WAVEFORMATEX structure pointed to by the pwfx member of the ACMFORMATDETAILS structure is valid. The enumerator will enumerate only a format that conforms to this attribute. The dwFormatTag member of the ACMFORMATDETAILS structure must be equal to the wFormatTag member. ACM_FORMATSUGGESTF_WFORMATTAG ACM_FORMATSUGGESTF_NCHANNELS ACM_FORMATSUGGESTF_NSAMPLESPERSEC ACM_FORMATSUGGESTF_WBITSPERSAMPLE ACM_FORMATSUGGESTF_TYPEMASK ACM Format Tag Format Tag Index Format Tag Format Size Support Flags Standard Formats Count Format Description DWORD cbStruct; DWORD dwFormatTagIndex; DWORD dwFormatTag; DWORD cbFormatSize; DWORD fdwSupport; DWORD cStandardFormats; TCHAR szFormatTag[ACMFORMATTAGDETAILS_FORMATTAG_CHARS]; ACMFORMATTAGDETAILS_FORMATTAG_CHARS Interop definitions for Windows ACM (Audio Compression Manager) API http://msdn.microsoft.com/en-us/library/dd742910%28VS.85%29.aspx UINT ACMFORMATCHOOSEHOOKPROC acmFormatChooseHookProc( HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam A version with pointers for troubleshooting AcmStream encapsulates an Audio Compression Manager Stream used to convert audio from one format to another Creates a new ACM stream to convert one format to another. 
Note that not all conversions can be done in one step The source audio format The destination audio format Creates a new ACM stream to convert one format to another, using a specified driver identifier and wave filter the driver identifier the source format the wave filter Returns the number of output bytes for a given number of input bytes Number of input bytes Number of output bytes Returns the number of source bytes for a given number of destination bytes Number of destination bytes Number of source bytes Suggests an appropriate PCM format that the compressed format can be converted to in one step The compressed format The PCM format Returns the Source Buffer. Fill this with data prior to calling convert Returns the Destination buffer. This will contain the converted data after a successful call to Convert Report that we have repositioned in the source stream Converts the contents of the SourceBuffer into the DestinationBuffer The number of bytes in the SourceBuffer that need to be converted The number of source bytes actually converted The number of converted bytes in the DestinationBuffer Converts the contents of the SourceBuffer into the DestinationBuffer The number of bytes in the SourceBuffer that need to be converted The number of converted bytes in the DestinationBuffer Frees resources associated with this ACM Stream Frees resources associated with this ACM Stream Frees resources associated with this ACM Stream ACMSTREAMHEADER_STATUSF_DONE ACMSTREAMHEADER_STATUSF_PREPARED ACMSTREAMHEADER_STATUSF_INQUEUE Interop structure for ACM stream headers. ACMSTREAMHEADER http://msdn.microsoft.com/en-us/library/dd742926%28VS.85%29.aspx ACM_STREAMOPENF_QUERY, ACM will be queried to determine whether it supports the given conversion. A conversion stream will not be opened, and no handle will be returned in the phas parameter. ACM_STREAMOPENF_ASYNC, Stream conversion should be performed asynchronously. If this flag is specified, the application can use a callback function to be notified when the conversion stream is opened and closed and after each buffer is converted. In addition to using a callback function, an application can examine the fdwStatus member of the ACMSTREAMHEADER structure for the ACMSTREAMHEADER_STATUSF_DONE flag. ACM_STREAMOPENF_NONREALTIME, ACM will not consider time constraints when converting the data. By default, the driver will attempt to convert the data in real time. For some formats, specifying this flag might improve the audio quality or other characteristics. CALLBACK_TYPEMASK, callback type mask CALLBACK_NULL, no callback CALLBACK_WINDOW, dwCallback is a HWND CALLBACK_TASK, dwCallback is a HTASK CALLBACK_FUNCTION, dwCallback is a FARPROC CALLBACK_THREAD, thread ID replaces 16 bit task CALLBACK_EVENT, dwCallback is an EVENT Handle ACM_STREAMSIZEF_SOURCE ACM_STREAMSIZEF_DESTINATION Summary description for WaveFilter. 
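As a worked example of the AcmStream workflow described above (fill SourceBuffer, call Convert, read DestinationBuffer), the sketch below decodes a block of mu-law bytes to 16 bit PCM. It is only a sketch: the block is assumed to fit in the ACM source buffer in one go, and any unconverted tail bytes are not carried over.

using System;
using NAudio.Wave;
using NAudio.Wave.Compression;

class AcmStreamExample
{
    // Decodes one block of mu-law data to 16 bit PCM and returns the converted bytes
    public static byte[] MuLawToPcm(byte[] muLawBlock)
    {
        var muLawFormat = WaveFormat.CreateMuLawFormat(8000, 1);
        // Ask ACM to suggest a PCM format this compressed format can reach in one step
        var pcmFormat = AcmStream.SuggestPcmFormat(muLawFormat);

        using (var acm = new AcmStream(muLawFormat, pcmFormat))
        {
            // Fill the source buffer prior to calling Convert
            Buffer.BlockCopy(muLawBlock, 0, acm.SourceBuffer, 0, muLawBlock.Length);
            int converted = acm.Convert(muLawBlock.Length, out int sourceBytesConverted);
            // sourceBytesConverted reports how much input was consumed; a real implementation
            // would keep any unconverted tail for the next call
            var pcm = new byte[converted];
            Buffer.BlockCopy(acm.DestinationBuffer, 0, pcm, 0, converted);
            return pcm;
        }
    }
}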
cbStruct dwFilterTag fdwFilter reserved ACM_METRIC_COUNT_DRIVERS ACM_METRIC_COUNT_CODECS ACM_METRIC_COUNT_CONVERTERS ACM_METRIC_COUNT_FILTERS ACM_METRIC_COUNT_DISABLED ACM_METRIC_COUNT_HARDWARE ACM_METRIC_COUNT_LOCAL_DRIVERS ACM_METRIC_COUNT_LOCAL_CODECS ACM_METRIC_COUNT_LOCAL_CONVERTERS ACM_METRIC_COUNT_LOCAL_FILTERS ACM_METRIC_COUNT_LOCAL_DISABLED ACM_METRIC_HARDWARE_WAVE_INPUT ACM_METRIC_HARDWARE_WAVE_OUTPUT ACM_METRIC_MAX_SIZE_FORMAT ACM_METRIC_MAX_SIZE_FILTER ACM_METRIC_DRIVER_SUPPORT ACM_METRIC_DRIVER_PRIORITY ACM_STREAMCONVERTF_BLOCKALIGN ACM_STREAMCONVERTF_START ACM_STREAMCONVERTF_END Wave Callback Strategy Use a function Create a new window (should only be done if on GUI thread) Use an existing window handle Use an event handle WaveHeader interop structure (WAVEHDR) http://msdn.microsoft.com/en-us/library/dd743837%28VS.85%29.aspx pointer to locked data buffer (lpData) length of data buffer (dwBufferLength) used for input only (dwBytesRecorded) for client's use (dwUser) assorted flags (dwFlags) loop control counter (dwLoops) PWaveHdr, reserved for driver (lpNext) reserved for driver Wave Header Flags enumeration WHDR_BEGINLOOP This buffer is the first buffer in a loop. This flag is used only with output buffers. WHDR_DONE Set by the device driver to indicate that it is finished with the buffer and is returning it to the application. WHDR_ENDLOOP This buffer is the last buffer in a loop. This flag is used only with output buffers. WHDR_INQUEUE Set by Windows to indicate that the buffer is queued for playback. WHDR_PREPARED Set by Windows to indicate that the buffer has been prepared with the waveInPrepareHeader or waveOutPrepareHeader function. WaveInCapabilities structure (based on WAVEINCAPS2 from mmsystem.h) http://msdn.microsoft.com/en-us/library/ms713726(VS.85).aspx wMid wPid vDriverVersion Product Name (szPname) Supported formats (bit flags) dwFormats Supported channels (1 for mono 2 for stereo) (wChannels) Seems to be set to -1 on a lot of devices wReserved1 Number of channels supported The product name The device name Guid (if provided) The product name Guid (if provided) The manufacturer guid (if provided) Checks to see if a given SupportedWaveFormat is supported The SupportedWaveFormat true if supported Event Args for WaveInStream event Creates new WaveInEventArgs Buffer containing recorded data. Note that it might not be completely full. The number of recorded bytes in Buffer. 
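Before moving on to the MME interop definitions, here is a minimal recording sketch using the event-callback waveIn implementation (WaveInEvent) together with the WaveInEventArgs described above. The device number, the 44.1 kHz mono format, and the fixed 10 second duration are assumptions chosen purely for illustration.

using System;
using System.Threading;
using NAudio.Wave;

class RecordToWavExample
{
    public static void RecordTenSeconds(string outputPath)
    {
        using (var waveIn = new WaveInEvent
        {
            DeviceNumber = 0,
            BufferMilliseconds = 100,            // recommended buffer size per the docs above
            WaveFormat = new WaveFormat(44100, 1) // 16 bit, 44.1 kHz, mono
        })
        using (var writer = new WaveFileWriter(outputPath, waveIn.WaveFormat))
        {
            // WaveInEventArgs carries the recorded buffer; note it might not be completely full
            waveIn.DataAvailable += (s, e) => writer.Write(e.Buffer, 0, e.BytesRecorded);

            var done = new ManualResetEvent(false);
            waveIn.RecordingStopped += (s, e) => done.Set();

            waveIn.StartRecording();
            Thread.Sleep(TimeSpan.FromSeconds(10));
            waveIn.StopRecording();
            done.WaitOne(); // wait for the final buffers before the writer is disposed
        }
    }
}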
MME Wave function interop CALLBACK_NULL No callback CALLBACK_FUNCTION dwCallback is a FARPROC CALLBACK_EVENT dwCallback is an EVENT handle CALLBACK_WINDOW dwCallback is a HWND CALLBACK_THREAD callback is a thread ID WIM_OPEN WIM_CLOSE WIM_DATA WOM_CLOSE WOM_DONE WOM_OPEN WaveOutCapabilities structure (based on WAVEOUTCAPS2 from mmsystem.h) http://msdn.microsoft.com/library/default.asp?url=/library/en-us/multimed/htm/_win32_waveoutcaps_str.asp wMid wPid vDriverVersion Product Name (szPname) Supported formats (bit flags) dwFormats Supported channels (1 for mono 2 for stereo) (wChannels) Seems to be set to -1 on a lot of devices wReserved1 Optional functionality supported by the device Number of channels supported Whether playback control is supported The product name Checks to see if a given SupportedWaveFormat is supported The SupportedWaveFormat true if supported The device name Guid (if provided) The product name Guid (if provided) The manufacturer guid (if provided) Supported wave formats for WaveOutCapabilities 11.025 kHz, Mono, 8-bit 11.025 kHz, Stereo, 8-bit 11.025 kHz, Mono, 16-bit 11.025 kHz, Stereo, 16-bit 22.05 kHz, Mono, 8-bit 22.05 kHz, Stereo, 8-bit 22.05 kHz, Mono, 16-bit 22.05 kHz, Stereo, 16-bit 44.1 kHz, Mono, 8-bit 44.1 kHz, Stereo, 8-bit 44.1 kHz, Mono, 16-bit 44.1 kHz, Stereo, 16-bit 44.1 kHz, Mono, 8-bit 44.1 kHz, Stereo, 8-bit 44.1 kHz, Mono, 16-bit 44.1 kHz, Stereo, 16-bit 48 kHz, Mono, 8-bit 48 kHz, Stereo, 8-bit 48 kHz, Mono, 16-bit 48 kHz, Stereo, 16-bit 96 kHz, Mono, 8-bit 96 kHz, Stereo, 8-bit 96 kHz, Mono, 16-bit 96 kHz, Stereo, 16-bit Flags indicating what features this WaveOut device supports supports pitch control (WAVECAPS_PITCH) supports playback rate control (WAVECAPS_PLAYBACKRATE) supports volume control (WAVECAPS_VOLUME) supports separate left-right volume control (WAVECAPS_LRVOLUME) (WAVECAPS_SYNC) (WAVECAPS_SAMPLEACCURATE) Sample provider interface to make WaveChannel32 extensible Still a bit ugly, hence internal at the moment - and might even make these into bit depth converting WaveProviders ADSR sample provider allowing you to specify attack, decay, sustain and release values Creates a new AdsrSampleProvider with default values Attack time in seconds Release time in seconds Reads audio from this sample provider Enters the Release phase The output WaveFormat Sample Provider to concatenate multiple sample providers together Creates a new ConcatenatingSampleProvider The source providers to play one after the other. 
Must all share the same sample rate and channel count The WaveFormat of this Sample Provider Read Samples from this sample provider Sample Provider to allow fading in and out Creates a new FadeInOutSampleProvider The source stream with the audio to be faded in or out If true, we start faded out Requests that a fade-in begins (will start on the next call to Read) Duration of fade in milliseconds Requests that a fade-out begins (will start on the next call to Read) Duration of fade in milliseconds Reads samples from this sample provider Buffer to read into Offset within buffer to write to Number of samples desired Number of samples read WaveFormat of this SampleProvider Simple SampleProvider that passes through audio unchanged and raises an event every n samples with the maximum sample value from the period for metering purposes Number of Samples per notification Raised periodically to inform the user of the max volume Initialises a new instance of MeteringSampleProvider that raises 10 stream volume events per second Source sample provider Initialises a new instance of MeteringSampleProvider source sampler provider Number of samples between notifications The WaveFormat of this sample provider Reads samples from this Sample Provider Sample buffer Offset into sample buffer Number of samples required Number of samples read Event args for aggregated stream volume Max sample values array (one for each channel) A sample provider mixer, allowing inputs to be added and removed Creates a new MixingSampleProvider, with no inputs, but a specified WaveFormat The WaveFormat of this mixer. All inputs must be in this format Creates a new MixingSampleProvider, based on the given inputs Mixer inputs - must all have the same waveformat, and must all be of the same WaveFormat. There must be at least one input Returns the mixer inputs (read-only - use AddMixerInput to add an input When set to true, the Read method always returns the number of samples requested, even if there are no inputs, or if the current inputs reach their end. Setting this to true effectively makes this a never-ending sample provider, so take care if you plan to write it out to a file. Adds a WaveProvider as a Mixer input. Must be PCM or IEEE float already IWaveProvider mixer input Adds a new mixer input Mixer input Raised when a mixer input has been removed because it has ended Removes a mixer input Mixer input to remove Removes all mixer inputs The output WaveFormat of this sample provider Reads samples from this sample provider Sample buffer Offset into sample buffer Number of samples required Number of samples read SampleProvider event args Constructs a new SampleProviderEventArgs The Sample Provider No nonsense mono to stereo provider, no volume adjustment, just copies input to left and right. Initializes a new instance of MonoToStereoSampleProvider Source sample provider WaveFormat of this provider Reads samples from this provider Sample buffer Offset into sample buffer Number of samples required Number of samples read Multiplier for left channel (default is 1.0) Multiplier for right channel (default is 1.0) Allows any number of inputs to be patched to outputs Uses could include swapping left and right channels, turning mono into stereo, feeding different input sources to different soundcard outputs etc Creates a multiplexing sample provider, allowing re-patching of input channels to different output channels Input sample providers. 
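The sketch below shows how the MixingSampleProvider and FadeInOutSampleProvider described above are typically combined: a never-ending mixer (ReadFully = true) to which inputs can be added while it is playing, each wrapped in a fade. The class and method names (MixerExample, AddInput), the IEEE float 44.1 kHz stereo format, and the two second fade are illustrative assumptions, not part of NAudio.

using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

class MixerExample
{
    private readonly MixingSampleProvider mixer;

    public MixerExample()
    {
        // All inputs must share this WaveFormat (IEEE float, 44.1 kHz stereo here)
        mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2))
        {
            ReadFully = true // keep returning audio (silence if necessary) when inputs run out
        };
        mixer.MixerInputEnded += (s, e) => Console.WriteLine("A mixer input finished");
    }

    public ISampleProvider Output => mixer;

    // Adds a source faded in over two seconds; returns the fade wrapper so the caller can fade it out later
    public FadeInOutSampleProvider AddInput(ISampleProvider source)
    {
        var fade = new FadeInOutSampleProvider(source, initiallySilent: true);
        fade.BeginFadeIn(2000);
        mixer.AddMixerInput(fade);
        return fade;
    }
}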
Must all be of the same sample rate, but can have any number of channels Desired number of output channels. persistent temporary buffer to prevent creating work for garbage collector Reads samples from this sample provider Buffer to be filled with sample data Offset into buffer to start writing to, usually 0 Number of samples required Number of samples read The output WaveFormat for this SampleProvider Connects a specified input channel to an output channel Input Channel index (zero based). Must be less than InputChannelCount Output Channel index (zero based). Must be less than OutputChannelCount The number of input channels. Note that this is not the same as the number of input wave providers. If you pass in one stereo and one mono input provider, the number of input channels is three. The number of output channels, as specified in the constructor. Simple class that raises an event on every sample Initializes a new instance of NotifyingSampleProvider Source Sample Provider WaveFormat Reads samples from this sample provider Sample buffer Offset into sample buffer Number of samples desired Number of samples read Sample notifier Allows you to: 1. insert a pre-delay of silence before the source begins 2. skip over a certain amount of the beginning of the source 3. only play a set amount from the source 4. insert silence at the end after the source is complete Number of samples of silence to insert before playing source Amount of silence to insert before playing Number of samples in source to discard Amount of audio to skip over from the source before beginning playback Number of samples to read from source (if 0, then read it all) Amount of audio to take from the source (TimeSpan.Zero means play to end) Number of samples of silence to insert after playing source Amount of silence to insert after playing source Creates a new instance of offsetSampleProvider The Source Sample Provider to read from The WaveFormat of this SampleProvider Reads from this sample provider Sample buffer Offset within sample buffer to read to Number of samples required Number of samples read Converts a mono sample provider to stereo, with a customisable pan strategy Initialises a new instance of the PanningSampleProvider Source sample provider, must be mono Pan value, must be between -1 (left) and 1 (right) The pan strategy currently in use The WaveFormat of this sample provider Reads samples from this sample provider Sample buffer Offset into sample buffer Number of samples desired Number of samples read Pair of floating point values, representing samples or multipliers Left value Right value Required Interface for a Panning Strategy Gets the left and right multipliers for a given pan value Pan value from -1 to 1 Left and right multipliers in a stereo sample pair Simplistic "balance" control - treating the mono input as if it was stereo In the centre, both channels full volume. 
Opposite channel decays linearly as balance is turned to to one side Gets the left and right channel multipliers for this pan value Pan value, between -1 and 1 Left and right multipliers Square Root Pan, thanks to Yuval Naveh Gets the left and right channel multipliers for this pan value Pan value, between -1 and 1 Left and right multipliers Sinus Pan, thanks to Yuval Naveh Gets the left and right channel multipliers for this pan value Pan value, between -1 and 1 Left and right multipliers Linear Pan Gets the left and right channel multipliers for this pan value Pan value, between -1 and 1 Left and right multipliers Converts an IWaveProvider containing 16 bit PCM to an ISampleProvider Initialises a new instance of Pcm16BitToSampleProvider Source wave provider Reads samples from this sample provider Sample buffer Offset into sample buffer Samples required Number of samples read Converts an IWaveProvider containing 24 bit PCM to an ISampleProvider Initialises a new instance of Pcm24BitToSampleProvider Source Wave Provider Reads floating point samples from this sample provider sample buffer offset within sample buffer to write to number of samples required number of samples provided Converts an IWaveProvider containing 32 bit PCM to an ISampleProvider Initialises a new instance of Pcm32BitToSampleProvider Source Wave Provider Reads floating point samples from this sample provider sample buffer offset within sample buffer to write to number of samples required number of samples provided Converts an IWaveProvider containing 8 bit PCM to an ISampleProvider Initialises a new instance of Pcm8BitToSampleProvider Source wave provider Reads samples from this sample provider Sample buffer Offset into sample buffer Number of samples to read Number of samples read Utility class that takes an IWaveProvider input at any bit depth and exposes it as an ISampleProvider. Can turn mono inputs into stereo, and allows adjusting of volume (The eventual successor to WaveChannel32) This class also serves as an example of how you can link together several simple Sample Providers to form a more useful class. Initialises a new instance of SampleChannel Source wave provider, must be PCM or IEEE Initialises a new instance of SampleChannel Source wave provider, must be PCM or IEEE force mono inputs to become stereo Reads samples from this sample provider Sample buffer Offset into sample buffer Number of samples desired Number of samples read The WaveFormat of this Sample Provider Allows adjusting the volume, 1.0f = full volume Raised periodically to inform the user of the max volume (before the volume meter) Helper base class for classes converting to ISampleProvider Source Wave Provider Source buffer (to avoid constantly creating small buffers during playback) Initialises a new instance of SampleProviderConverterBase Source Wave provider Wave format of this wave provider Reads samples from the source wave provider Sample buffer Offset into sample buffer Number of samples required Number of samples read Ensure the source buffer exists and is big enough Bytes required Utility class for converting to SampleProvider Helper function to go from IWaveProvider to a SampleProvider Must already be PCM or IEEE float The WaveProvider to convert A sample provider Helper class for when you need to convert back to an IWaveProvider from an ISampleProvider. 
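Tying the converter classes above together, here is a short sketch that wraps a PCM IWaveProvider in a SampleChannel (gaining volume control and pre-volume metering) and then converts back to a 16 bit IWaveProvider with SampleToWaveProvider16 for APIs that need one. The method name BuildChain, the 0.8 volume and the console output are illustrative only.

using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

class SampleChannelExample
{
    // 'pcmSource' must be PCM or IEEE float, as noted above
    public static IWaveProvider BuildChain(IWaveProvider pcmSource)
    {
        var channel = new SampleChannel(pcmSource, forceStereo: true);
        channel.Volume = 0.8f; // 1.0f = full volume
        channel.PreVolumeMeter += (s, e) =>
        {
            // e.MaxSampleValues holds one peak value per channel
            Console.WriteLine("Peak L={0:F2} R={1:F2}", e.MaxSampleValues[0], e.MaxSampleValues[1]);
        };

        // Convert back to a 16 bit IWaveProvider
        return new SampleToWaveProvider16(channel);
    }
}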
Keeps it as IEEE float Initializes a new instance of the WaveProviderFloatToWaveProvider class Source wave provider Reads from this provider The waveformat of this WaveProvider (same as the source) Converts a sample provider to 16 bit PCM, optionally clipping and adjusting volume along the way Converts from an ISampleProvider (IEEE float) to a 16 bit PCM IWaveProvider. Number of channels and sample rate remain unchanged. The input source provider Reads bytes from this wave stream The destination buffer Offset into the destination buffer Number of bytes read Number of bytes read. Volume of this channel. 1.0 = full scale Converts a sample provider to 24 bit PCM, optionally clipping and adjusting volume along the way Converts from an ISampleProvider (IEEE float) to a 24 bit PCM IWaveProvider. Number of channels and sample rate remain unchanged. The input source provider Reads bytes from this wave stream, clipping if necessary The destination buffer Offset into the destination buffer Number of bytes read Number of bytes read. The Format of this IWaveProvider Volume of this channel. 1.0 = full scale, 0.0 to mute Signal Generator Sin, Square, Triangle, SawTooth, White Noise, Pink Noise, Sweep. Possibility to change ISampleProvider Example: WaveOut _waveOutGene = new WaveOut(); var wg = new SignalGenerator(); wg.Type = ... wg.Frequency = ... wg ... _waveOutGene.Init(wg); _waveOutGene.Play(); Initializes a new instance for the Generator (Default :: 44.1 kHz, 2 channels, Sine, Frequency = 440, Gain = 1) Initializes a new instance for the Generator (UserDef SampleRate & Channels) Desired sample rate Number of channels The waveformat of this WaveProvider (same as the source) Frequency for the Generator. (20.0 - 20000.0 Hz) Sin, Square, Triangle, SawTooth, Sweep (Start Frequency). Return Log of Frequency Start (Read only) End Frequency for the Sweep Generator. (Start Frequency in Frequency) Return Log of Frequency End (Read only) Gain for the Generator. (0.0 to 1.0) Channel PhaseReverse Type of Generator. Length in seconds for the Sweep Generator. Reads from this provider. Private :: Random for WhiteNoise & Pink Noise (Value from -1 to 1) Random value from -1 to +1 Signal Generator type Pink noise White noise Sweep Sine wave Square wave Triangle Wave Sawtooth wave Author: Freefall Date: 05.08.16 Based on: the port of Stephan M. Bernsee's pitch shifting class Port site: https://sites.google.com/site/mikescoderama/pitch-shifting Test application and GitHub site: https://github.com/Freefall63/NAudio-Pitchshifter NOTE: I strongly advise adding a Limiter for post-processing. For my needs the FastAttackCompressor1175 provides acceptable results: https://github.com/Jiyuu/SkypeFX/blob/master/JSNet/FastAttackCompressor1175.cs UPDATE: Added a simple Limiter based on the pydirac implementation. https://github.com/echonest/remix/blob/master/external/pydirac225/source/Dirac_LE.cpp Creates a new SMB Pitch Shifting Sample Provider with default settings Source provider Creates a new SMB Pitch Shifting Sample Provider with custom settings Source provider FFT Size (any power of two <= 4096: 4096, 2048, 1024, 512, ...)
Oversampling (number of overlapping windows) Initial pitch (0.5f = octave down, 1.0f = normal, 2.0f = octave up) Read from this sample provider WaveFormat Pitch Factor (0.5f = octave down, 1.0f = normal, 2.0f = octave up) Takes a stereo input and turns it to mono Creates a new mono ISampleProvider based on a stereo input Stereo 16 bit PCM input 1.0 to mix the mono source entirely to the left channel 1.0 to mix the mono source entirely to the right channel Output Wave Format Reads bytes from this SampleProvider Very simple sample provider supporting adjustable gain Initializes a new instance of VolumeSampleProvider Source Sample Provider WaveFormat Reads samples from this sample provider Sample buffer Offset into sample buffer Number of samples desired Number of samples read Allows adjusting the volume, 1.0f = full volume Helper class turning an already 32 bit floating point IWaveProvider into an ISampleProvider - hopefully not needed for most applications Initializes a new instance of the WaveToSampleProvider class Source wave provider, must be IEEE float Reads from this provider Helper class turning an already 64 bit floating point IWaveProvider into an ISampleProvider - hopefully not needed for most applications Initializes a new instance of the WaveToSampleProvider class Source wave provider, must be IEEE float Reads from this provider Fully managed resampling sample provider, based on the WDL Resampler Constructs a new resampler Source to resample Desired output sample rate Reads from this sample provider Output WaveFormat Useful extension methods to make switching between WaveAndSampleProvider easier Converts a WaveProvider into a SampleProvider (only works for PCM) WaveProvider to convert Allows sending a SampleProvider directly to an IWavePlayer without needing to convert back to an IWaveProvider The WavePlayer Turns WaveFormatExtensible into a standard waveformat if possible Input wave format A standard PCM or IEEE waveformat, or the original waveformat Converts a ISampleProvider to a IWaveProvider but still 32 bit float SampleProvider to convert An IWaveProvider Converts a ISampleProvider to a IWaveProvider but and convert to 16 bit SampleProvider to convert A 16 bit IWaveProvider Concatenates one Sample Provider on the end of another The sample provider to play first The sample provider to play next A single sampleprovider to play one after the other Concatenates one Sample Provider on the end of another with silence inserted The sample provider to play first Silence duration to insert between the two The sample provider to play next A single sample provider Skips over a specified amount of time (by consuming source stream) Source sample provider Duration to skip over A sample provider that skips over the specified amount of time Takes a specified amount of time from the source stream Source sample provider Duration to take A sample provider that reads up to the specified amount of time Converts a Stereo Sample Provider to mono, allowing mixing of channel volume Stereo Source Provider Amount of left channel to mix in (0 = mute, 1 = full, 0.5 for mixing half from each channel) Amount of right channel to mix in (0 = mute, 1 = full, 0.5 for mixing half from each channel) A mono SampleProvider Converts a Mono ISampleProvider to stereo Mono Source Provider Amount to mix to left channel (1.0 is full volume) Amount to mix to right channel (1.0 is full volume) Microsoft ADPCM See http://icculus.org/SDL_sound/downloads/external_documentation/wavecomp.htm Empty constructor needed for 
marshalling from a pointer Samples per block Number of coefficients Coefficients Microsoft ADPCM Sample Rate Channels Serializes this wave format Binary writer String Description of this WaveFormat GSM 610 Creates a GSM 610 WaveFormat For now hardcoded to 13kbps Samples per block Writes this structure to a BinaryWriter IMA/DVI ADPCM Wave Format Work in progress parameterless constructor for Marshalling Creates a new IMA / DVI ADPCM Wave Format Sample Rate Number of channels Bits Per Sample MP3 WaveFormat, MPEGLAYER3WAVEFORMAT from mmreg.h Wave format ID (wID) Padding flags (fdwFlags) Block Size (nBlockSize) Frames per block (nFramesPerBlock) Codec Delay (nCodecDelay) Creates a new MP3 WaveFormat Wave Format Padding Flags MPEGLAYER3_FLAG_PADDING_ISO MPEGLAYER3_FLAG_PADDING_ON MPEGLAYER3_FLAG_PADDING_OFF Wave Format ID MPEGLAYER3_ID_UNKNOWN MPEGLAYER3_ID_MPEG MPEGLAYER3_ID_CONSTANTFRAMESIZE DSP Group TrueSpeech DSP Group TrueSpeech WaveFormat Writes this structure to a BinaryWriter Represents a Wave file format format type number of channels sample rate for buffer estimation block size of data number of bits per sample of mono data number of following bytes Creates a new PCM 44.1Khz stereo 16 bit format Creates a new 16 bit wave format with the specified sample rate and channel count Sample Rate Number of channels Gets the size of a wave buffer equivalent to the latency in milliseconds. The milliseconds. Creates a WaveFormat with custom members The encoding Sample Rate Number of channels Average Bytes Per Second Block Align Bits Per Sample Creates an A-law wave format Sample Rate Number of Channels Wave Format Creates a Mu-law wave format Sample Rate Number of Channels Wave Format Creates a new PCM format with the specified sample rate, bit depth and channels Creates a new 32 bit IEEE floating point wave format sample rate number of channels Helper function to retrieve a WaveFormat structure from a pointer WaveFormat structure Helper function to marshal WaveFormat to an IntPtr WaveFormat IntPtr to WaveFormat structure (needs to be freed by callee) Reads in a WaveFormat (with extra data) from a fmt chunk (chunk identifier and length should already have been read) Binary reader Format chunk length A WaveFormatExtraData Reads a new WaveFormat object from a stream A binary reader that wraps the stream Reports this WaveFormat as a string String describing the wave format Compares with another WaveFormat object Object to compare to True if the objects are the same Provides a Hashcode for this WaveFormat A hashcode Returns the encoding type used Writes this WaveFormat object to a stream the output stream Returns the number of channels (1=mono,2=stereo etc) Returns the sample rate (samples per second) Returns the average number of bytes used per second Returns the block alignment Returns the number of bits per sample (usually 16 or 32, sometimes 24 or 8) Can be 0 for some codecs Returns the number of extra bytes used by this waveformat. Often 0, except for compressed formats which store extra data after the WAVEFORMATEX header Summary description for WaveFormatEncoding. WAVE_FORMAT_UNKNOWN, Microsoft Corporation WAVE_FORMAT_PCM Microsoft Corporation WAVE_FORMAT_ADPCM Microsoft Corporation WAVE_FORMAT_IEEE_FLOAT Microsoft Corporation WAVE_FORMAT_VSELP Compaq Computer Corp. 
WAVE_FORMAT_IBM_CVSD IBM Corporation WAVE_FORMAT_ALAW Microsoft Corporation WAVE_FORMAT_MULAW Microsoft Corporation WAVE_FORMAT_DTS Microsoft Corporation WAVE_FORMAT_DRM Microsoft Corporation WAVE_FORMAT_WMAVOICE9 WAVE_FORMAT_OKI_ADPCM OKI WAVE_FORMAT_DVI_ADPCM Intel Corporation WAVE_FORMAT_IMA_ADPCM Intel Corporation WAVE_FORMAT_MEDIASPACE_ADPCM Videologic WAVE_FORMAT_SIERRA_ADPCM Sierra Semiconductor Corp WAVE_FORMAT_G723_ADPCM Antex Electronics Corporation WAVE_FORMAT_DIGISTD DSP Solutions, Inc. WAVE_FORMAT_DIGIFIX DSP Solutions, Inc. WAVE_FORMAT_DIALOGIC_OKI_ADPCM Dialogic Corporation WAVE_FORMAT_MEDIAVISION_ADPCM Media Vision, Inc. WAVE_FORMAT_CU_CODEC Hewlett-Packard Company WAVE_FORMAT_YAMAHA_ADPCM Yamaha Corporation of America WAVE_FORMAT_SONARC Speech Compression WAVE_FORMAT_DSPGROUP_TRUESPEECH DSP Group, Inc WAVE_FORMAT_ECHOSC1 Echo Speech Corporation WAVE_FORMAT_AUDIOFILE_AF36, Virtual Music, Inc. WAVE_FORMAT_APTX Audio Processing Technology WAVE_FORMAT_AUDIOFILE_AF10, Virtual Music, Inc. WAVE_FORMAT_PROSODY_1612, Aculab plc WAVE_FORMAT_LRC, Merging Technologies S.A. WAVE_FORMAT_DOLBY_AC2, Dolby Laboratories WAVE_FORMAT_GSM610, Microsoft Corporation WAVE_FORMAT_MSNAUDIO, Microsoft Corporation WAVE_FORMAT_ANTEX_ADPCME, Antex Electronics Corporation WAVE_FORMAT_CONTROL_RES_VQLPC, Control Resources Limited WAVE_FORMAT_DIGIREAL, DSP Solutions, Inc. WAVE_FORMAT_DIGIADPCM, DSP Solutions, Inc. WAVE_FORMAT_CONTROL_RES_CR10, Control Resources Limited WAVE_FORMAT_MPEG, Microsoft Corporation WAVE_FORMAT_MPEGLAYER3, ISO/MPEG Layer3 Format Tag WAVE_FORMAT_GSM WAVE_FORMAT_G729 WAVE_FORMAT_G723 WAVE_FORMAT_ACELP WAVE_FORMAT_RAW_AAC1 Windows Media Audio, WAVE_FORMAT_WMAUDIO2, Microsoft Corporation Windows Media Audio Professional WAVE_FORMAT_WMAUDIO3, Microsoft Corporation Windows Media Audio Lossless, WAVE_FORMAT_WMAUDIO_LOSSLESS Windows Media Audio Professional over SPDIF WAVE_FORMAT_WMASPDIF (0x0164) Advanced Audio Coding (AAC) audio in Audio Data Transport Stream (ADTS) format. The format block is a WAVEFORMATEX structure with wFormatTag equal to WAVE_FORMAT_MPEG_ADTS_AAC. The WAVEFORMATEX structure specifies the core AAC-LC sample rate and number of channels, prior to applying spectral band replication (SBR) or parametric stereo (PS) tools, if present. No additional data is required after the WAVEFORMATEX structure. http://msdn.microsoft.com/en-us/library/dd317599%28VS.85%29.aspx Source wmCodec.h MPEG-4 audio transport stream with a synchronization layer (LOAS) and a multiplex layer (LATM). The format block is a WAVEFORMATEX structure with wFormatTag equal to WAVE_FORMAT_MPEG_LOAS. The WAVEFORMATEX structure specifies the core AAC-LC sample rate and number of channels, prior to applying spectral SBR or PS tools, if present. No additional data is required after the WAVEFORMATEX structure. http://msdn.microsoft.com/en-us/library/dd317599%28VS.85%29.aspx NOKIA_MPEG_ADTS_AAC Source wmCodec.h NOKIA_MPEG_RAW_AAC Source wmCodec.h VODAFONE_MPEG_ADTS_AAC Source wmCodec.h VODAFONE_MPEG_RAW_AAC Source wmCodec.h High-Efficiency Advanced Audio Coding (HE-AAC) stream. The format block is an HEAACWAVEFORMAT structure. 
http://msdn.microsoft.com/en-us/library/dd317599%28VS.85%29.aspx WAVE_FORMAT_DVM WAVE_FORMAT_VORBIS1 "Og" Original stream compatible WAVE_FORMAT_VORBIS2 "Pg" Have independent header WAVE_FORMAT_VORBIS3 "Qg" Have no codebook header WAVE_FORMAT_VORBIS1P "og" Original stream compatible WAVE_FORMAT_VORBIS2P "pg" Have independent header WAVE_FORMAT_VORBIS3P "qg" Have no codebook header WAVE_FORMAT_EXTENSIBLE WaveFormatExtensible http://www.microsoft.com/whdc/device/audio/multichaud.mspx Parameterless constructor for marshalling Creates a new WaveFormatExtensible for PCM or IEEE WaveFormatExtensible for PCM or floating point can be awkward to work with This creates a regular WaveFormat structure representing the same audio format Returns the WaveFormat unchanged for non PCM or IEEE float SubFormat (may be one of AudioMediaSubtypes) Serialize String representation This class is used for marshalling from unmanaged code Allows the extra data to be read parameterless constructor for marshalling Reads this structure from a BinaryReader Writes this structure to a BinaryWriter The WMA wave format. May not be much use because the WMA codec is a DirectShow DMO, not an ACM codec Generic interface for wave recording Recording WaveFormat Start Recording Stop Recording Indicates recorded data is available Indicates that all recorded data has now been received. WASAPI Loopback Capture based on a contribution from "Pygmy" - http://naudio.codeplex.com/discussions/203605 Initialises a new instance of the WASAPI capture class Initialises a new instance of the WASAPI capture class Capture device to use Gets the default audio loopback capture device The default audio loopback capture device Capturing wave format Specify loopback Recording using the waveIn API with event callbacks. Use this for recording in non-GUI applications Events are raised as recorded buffers are made available Indicates recorded data is available Indicates that all recorded data has now been received. Prepares a Wave input device for recording Returns the number of Wave In devices available in the system Retrieves the capabilities of a waveIn device Device to test The WaveIn device capabilities Milliseconds for the buffer. Recommended value is 100ms Number of Buffers to use (usually 2 or 3) The device number to use Start recording Stop recording Gets the current position in bytes from the wave input device (it calls directly into waveInGetPosition) Position in bytes WaveFormat we are recording in Dispose pattern Microphone Level Dispose method This class writes audio data to a .aif file on disk Creates an Aiff file by reading all the data from a WaveProvider BEWARE: the WaveProvider MUST return 0 from its Read method when it is finished, or the Aiff File will grow indefinitely. The filename to use The source WaveProvider AiffFileWriter that actually writes to a stream Stream to be written to Wave format to use Creates a new AiffFileWriter The filename to write to The Wave Format of the output data The aiff file name or null if not applicable Number of bytes of audio in the data chunk WaveFormat of this aiff file Returns false: Cannot read from an AiffFileWriter Returns true: Can write to an AiffFileWriter Returns false: Cannot seek within an AiffFileWriter Read is not supported for an AiffFileWriter Seek is not supported for an AiffFileWriter SetLength is not supported for AiffFileWriter Gets the Position in the AiffFile (i.e.
number of bytes written so far) Appends bytes to the AiffFile (assumes they are already in the correct format) the buffer containing the wave data the offset from which to start writing the number of bytes to write Writes a single sample to the Aiff file the sample to write (assumed floating point with 1.0f as max value) Writes 32 bit floating point samples to the Aiff file They will be converted to the appropriate bit depth depending on the WaveFormat of the AIF file The buffer containing the floating point samples The offset from which to start writing The number of floating point samples to write Writes 16 bit samples to the Aiff file The buffer containing the 16 bit samples The offset from which to start writing The number of 16 bit samples to write Ensures data is written to disk Actually performs the close,making sure the header contains the correct data True if called from Dispose Updates the header with file size information Finaliser - should only be called if the user forgot to close this AiffFileWriter Raised when ASIO data has been recorded. It is important to handle this as quickly as possible as it is in the buffer callback Initialises a new instance of AsioAudioAvailableEventArgs Pointers to the ASIO buffers for each channel Pointers to the ASIO buffers for each channel Number of samples in each buffer Audio format within each buffer Pointer to a buffer per input channel Pointer to a buffer per output channel Allows you to write directly to the output buffers If you do so, set SamplesPerBuffer = true, and make sure all buffers are written to with valid data Set to true if you have written to the output buffers If so, AsioOut will not read from its source Number of samples in each buffer Converts all the recorded audio into a buffer of 32 bit floating point samples, interleaved by channel The samples as 32 bit floating point, interleaved Audio format within each buffer Most commonly this will be one of, Int32LSB, Int16LSB, Int24LSB or Float32LSB Gets as interleaved samples, allocating a float array The samples as 32 bit floating point values ASIO Out Player. New implementation using an internal C# binding. This implementation is only supporting Short16Bit and Float32Bit formats and is optimized for 2 outputs channels . SampleRate is supported only if AsioDriver is supporting it This implementation is probably the first AsioDriver binding fully implemented in C#! Original Contributor: Mark Heath New Contributor to C# binding : Alexandre Mutel - email: alexandre_mutel at yahoo.fr Playback Stopped When recording, fires whenever recorded audio is available Initializes a new instance of the class with the first available ASIO Driver. Initializes a new instance of the class with the driver name. Name of the device. Opens an ASIO output device Device number (zero based) Releases unmanaged resources and performs other cleanup operations before the is reclaimed by garbage collection. Dispose Gets the names of the installed ASIO Driver. an array of driver names Determines whether ASIO is supported. true if ASIO is supported; otherwise, false. Inits the driver from the asio driver name. Name of the driver. Shows the control panel Starts playback Stops playback Pauses playback Initialises to play Source wave provider Initialises to play, with optional recording Source wave provider - set to null for record only Number of channels to record Specify sample rate here if only recording, ignored otherwise driver buffer update callback to fill the wave buffer. The input channels. 
The output channels. Gets the latency (in ms) of the playback driver Automatically stop when the end of the input stream is reached Disable this if auto-stop is causing hanging issues A flag to let you know that we have reached the end of the input file Useful if AutoStop is set to false You can monitor this yourself and call Stop when it is true Playback State Driver Name The number of output channels we are currently using for playback (Must be less than or equal to DriverOutputChannelCount) The number of input channels we are currently recording from (Must be less than or equal to DriverInputChannelCount) The maximum number of input channels this ASIO driver supports The maximum number of output channels this ASIO driver supports The number of samples per channel, per buffer. By default the first channel on the input WaveProvider is sent to the first ASIO output. This option sends it to the specified channel number. Warning: make sure you don't set it higher than the number of available output channels - the number of source channels. n.b. Future NAudio may modify this Input channel offset (used when recording), allowing you to choose to record from just one specific input rather than them all Sets the volume (1.0 is unity gain) Not supported for ASIO Out. Set the volume on the input stream instead Get the input channel name channel index (zero based) channel name Get the output channel name channel index (zero based) channel name https://tech.ebu.ch/docs/tech/tech3285.pdf Constructs a new BextChunkInfo Description (max 256 chars) Originator (max 32 chars) Originator Reference (max 32 chars) Originator Date Time Origination Date as string Origination as time Time reference (first sample count since midnight) version 2 has loudness stuff which we don't know so using version 1 64 bytes http://en.wikipedia.org/wiki/UMID for version 2 = 180 bytes (10 before are loudness values), using version 1 = 190 bytes Coding history arbitrary length string at end of structure http://www.ebu.ch/CMSimages/fr/tec_text_r98-1999_tcm7-4709.pdf A=PCM,F=48000,W=16,M=stereo,T=original,CR/LF Broadcast WAVE File Writer Creates a new BwfWriter Target filename WaveFormat Chunk information Write audio data to this BWF Flush writer, and fix up header sizes Disposes this writer A wave file writer that adds cue support Writes a wave file, including a cues chunk Adds a cue to the Wave file Sample position Label text Updates the header, and writes the cues out NativeDirectSoundOut using DirectSound COM interop. Contact author: Alexandre Mutel - alexandre_mutel at yahoo.fr Modified by: Graham "Gee" Plumb Playback Stopped Gets the DirectSound output devices in the system Initializes a new instance of the class. Initializes a new instance of the class. Initializes a new instance of the class. Initializes a new instance of the class. (40ms seems to work under Vista). The latency. Selected device Releases unmanaged resources and performs other cleanup operations before the object is reclaimed by garbage collection. Begin playback Stop playback Pause Playback Gets the current position in bytes from the wave output device. (n.b. this is not the same thing as the position within your reader stream) Position in bytes Gets the current position from the wave output device. Initialise playback The waveprovider to be played Current playback state The volume 1.0 is full scale Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources. Determines whether the SecondaryBuffer is lost.
true if [is buffer lost]; otherwise, false. Convert ms to byte size according to WaveFormat The ms Number of bytes Processes the samples in a separate thread. Stop playback Clean up the SecondaryBuffer In DirectSound, when playback is started, the rest of the sound that was played last time is played back as noise. This happens even if the secondary buffer is completely silenced, so it seems that the buffer in the primary buffer or higher is not cleared. To solve this problem fill the secondary buffer with silence data when stopping playback. Feeds the SecondaryBuffer with the WaveStream number of bytes to feed IDirectSound interface IDirectSoundBuffer interface IDirectSoundNotify interface Instantiate DirectSound from the DLL The GUID. The direct sound. The pUnkOuter. DirectSound default playback device GUID DirectSound default capture device GUID DirectSound default device for voice playback DirectSound default device for voice capture The DSEnumCallback function is an application-defined callback function that enumerates the DirectSound drivers. The system calls this function in response to the application's call to the DirectSoundEnumerate or DirectSoundCaptureEnumerate function. Address of the GUID that identifies the device being enumerated, or NULL for the primary device. This value can be passed to the DirectSoundCreate8 or DirectSoundCaptureCreate8 function to create a device object for that driver. Address of a null-terminated string that provides a textual description of the DirectSound device. Address of a null-terminated string that specifies the module name of the DirectSound driver corresponding to this device. Address of application-defined data. This is the pointer passed to DirectSoundEnumerate or DirectSoundCaptureEnumerate as the lpContext parameter. Returns TRUE to continue enumerating drivers, or FALSE to stop. The DirectSoundEnumerate function enumerates the DirectSound drivers installed in the system. callback function User context Gets the HANDLE of the desktop window. HANDLE of the Desktop window Class for enumerating DirectSound devices The device identifier Device description Device module name IWaveBuffer interface used to store wave data. Data can be manipulated with arrays (byte[], float[], short[], int[]) that are pointing to the same memory buffer. This is a requirement for all subclasses. Use the associated Count property based on the type of buffer to get the amount of data in the buffer. See WaveBuffer for the standard implementation using C# unions. Gets the byte buffer. The byte buffer. Gets the float buffer. The float buffer. Gets the short buffer. The short buffer. Gets the int buffer. The int buffer. Gets the max size in bytes of the byte buffer. Maximum number of bytes in the buffer. Gets the byte buffer count. The byte buffer count. Gets the float buffer count. The float buffer count. Gets the short buffer count. The short buffer count. Gets the int buffer count. The int buffer count.
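As a usage sketch for the DirectSoundOut class described above: enumerate the output devices, pick one by its Guid, and play an IWaveProvider at a chosen latency. The first-device choice, the 40 ms latency and the polling loop are illustrative assumptions.

using System;
using System.Linq;
using NAudio.Wave;

class DirectSoundExample
{
    public static void Play(IWaveProvider source)
    {
        // Enumerate the DirectSound output devices in the system
        foreach (var dev in DirectSoundOut.Devices)
        {
            Console.WriteLine("{0}: {1}", dev.Guid, dev.Description);
        }

        var firstDevice = DirectSoundOut.Devices.First();
        using (var dsOut = new DirectSoundOut(firstDevice.Guid, 40)) // device Guid plus latency in ms
        {
            dsOut.PlaybackStopped += (s, e) =>
                Console.WriteLine("Stopped: {0}", e.Exception?.Message ?? "end of stream");
            dsOut.Init(source);
            dsOut.Play();
            while (dsOut.PlaybackState == PlaybackState.Playing)
            {
                System.Threading.Thread.Sleep(100); // simple wait; a real app would use events
            }
        }
    }
}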
Represents the interface to a device that can play a WaveFile Begin playback Stop playback Pause Playback Initialise playback The waveprovider to be played The volume 1.0f is full scale Note that not all implementations necessarily support volume changes Current playback state Indicates that playback has gone into a stopped state due to reaching the end of the input stream or an error has been encountered during playback Interface for IWavePlayers that can report position Position (in terms of bytes played - does not necessarily translate directly to the position within the source audio file) Position in bytes Gets a instance indicating the format the hardware is using. Generic interface for all WaveProviders. Gets the WaveFormat of this WaveProvider. The wave format. Fill the specified buffer with wave data. The buffer to fill of wave data. Offset into buffer The number of bytes to read the number of bytes written to the buffer. Like IWaveProvider, but makes it much simpler to put together a 32 bit floating point mixing engine Gets the WaveFormat of this Sample Provider. The wave format. Fill the specified buffer with 32 bit floating point samples The buffer to fill with samples. Offset into buffer The number of samples to read the number of samples written to the buffer. Media Foundation Encoder class allows you to use Media Foundation to encode an IWaveProvider to any supported encoding format Queries the available bitrates for a given encoding output type, sample rate and number of channels Audio subtype - a value from the AudioSubtypes class The sample rate of the PCM to encode The number of channels of the PCM to encode An array of available bitrates in average bits per second Gets all the available media types for a particular Audio subtype - a value from the AudioSubtypes class An array of available media types that can be encoded with this subtype Helper function to simplify encoding Window Media Audio Should be supported on Vista and above (not tested) Input provider, must be PCM Output file path, should end with .wma Desired bitrate. Use GetEncodeBitrates to find the possibilities for your input type Helper function to simplify encoding to MP3 By default, will only be available on Windows 8 and above Input provider, must be PCM Output file path, should end with .mp3 Desired bitrate. Use GetEncodeBitrates to find the possibilities for your input type Helper function to simplify encoding to AAC By default, will only be available on Windows 7 and above Input provider, must be PCM Output file path, should end with .mp4 (or .aac on Windows 8) Desired bitrate. Use GetEncodeBitrates to find the possibilities for your input type Tries to find the encoding media type with the closest bitrate to that specified Audio subtype, a value from AudioSubtypes Your encoder input format (used to check sample rate and channel count) Your desired bitrate The closest media type, or null if none available Creates a new encoder that encodes to the specified output media type Desired output media type Encodes a file Output filename (container type is deduced from the filename) Input provider (should be PCM, some encoders will also allow IEEE float) Disposes this instance Disposes this instance Finalizer Playback State Stopped Playing Paused Stopped Event Args Initializes a new instance of StoppedEventArgs An exception to report (null if no exception) An exception. 
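A short sketch of the MediaFoundationEncoder helpers described above, encoding a PCM source to MP3. Availability depends on the installed Media Foundation codecs (MP3 encoding is only available by default on Windows 8 and above, as noted); the 192 kbps target bitrate is an assumption, and GetEncodeBitrates is used to show what the machine actually supports.

using System;
using NAudio.MediaFoundation;
using NAudio.Wave;

class EncodeExample
{
    // 'pcmSource' must be PCM (some encoders also accept IEEE float)
    public static void SaveAsMp3(IWaveProvider pcmSource, string outputPath)
    {
        MediaFoundationApi.Startup(); // make sure Media Foundation is started before encoding

        // Query the bitrates available for this subtype, sample rate and channel count
        int[] bitrates = MediaFoundationEncoder.GetEncodeBitrates(
            AudioSubtypes.MFAudioFormat_MP3,
            pcmSource.WaveFormat.SampleRate,
            pcmSource.WaveFormat.Channels);
        Console.WriteLine("Available bitrates: {0}", string.Join(", ", bitrates));

        MediaFoundationEncoder.EncodeToMp3(pcmSource, outputPath, 192000);
    }
}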
Will be null if the playback or record operation stopped due to the user requesting stop or reached the end of the input audio Support for playback using Wasapi Playback Stopped WASAPI Out shared mode, default WASAPI Out using default audio endpoint ShareMode - shared or exclusive Desired latency in milliseconds WASAPI Out using default audio endpoint ShareMode - shared or exclusive true if sync is done with event. false use sleep. Desired latency in milliseconds Creates a new WASAPI Output Device to use true if sync is done with event. false use sleep. Desired latency in milliseconds Gets the current position in bytes from the wave output device. (n.b. this is not the same thing as the position within your reader stream) Position in bytes Gets a instance indicating the format the hardware is using. Begin Playback Stop playback and flush buffers Stop playback without flushing buffers Initialize for playing the specified wave stream IWaveProvider to play Playback State Volume Retrieve the AudioStreamVolume object for this audio stream This returns the AudioStreamVolume object ONLY for shared audio streams. This is thrown when an exclusive audio stream is being used. Dispose WaveBuffer class use to store wave datas. Data can be manipulated with arrays (,,, ) that are pointing to the same memory buffer. Use the associated Count property based on the type of buffer to get the number of data in the buffer. Implicit casting is now supported to float[], byte[], int[], short[]. You must not use Length on returned arrays. n.b. FieldOffset is 8 now to allow it to work natively on 64 bit Number of Bytes Initializes a new instance of the class. The number of bytes. The size of the final buffer will be aligned on 4 Bytes (upper bound) Initializes a new instance of the class binded to a specific byte buffer. A byte buffer to bound the WaveBuffer to. Binds this WaveBuffer instance to a specific byte buffer. A byte buffer to bound the WaveBuffer to. Performs an implicit conversion from to . The wave buffer. The result of the conversion. Performs an implicit conversion from to . The wave buffer. The result of the conversion. Performs an implicit conversion from to . The wave buffer. The result of the conversion. Performs an implicit conversion from to . The wave buffer. The result of the conversion. Gets the byte buffer. The byte buffer. Gets the float buffer. The float buffer. Gets the short buffer. The short buffer. Gets the int buffer. The int buffer. Gets the max size in bytes of the byte buffer.. Maximum number of bytes in the buffer. Gets or sets the byte buffer count. The byte buffer count. Gets or sets the float buffer count. The float buffer count. Gets or sets the short buffer count. The short buffer count. Gets or sets the int buffer count. The int buffer count. Clears the associated buffer. Copy this WaveBuffer to a destination buffer up to ByteBufferCount bytes. Checks the validity of the count parameters. Name of the arg. The value. The size of value. This class writes WAV data to a .wav file on disk Creates a 16 bit Wave File from an ISampleProvider BEWARE: the source provider must not return data indefinitely The filename to write to The source sample provider Creates a Wave file by reading all the data from a WaveProvider BEWARE: the WaveProvider MUST return 0 from its Read method when it is finished, or the Wave File will grow indefinitely. 
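Illustrating the WaveFileWriter helpers above together with the SignalGenerator and the Take extension described earlier: because CreateWaveFile16 keeps reading until the source returns 0, the endless generator is limited with Take so the file cannot grow indefinitely. The 5 second duration, 500 Hz frequency and 0.2 gain are arbitrary values for the sketch.

using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

class ToneToWavExample
{
    public static void WriteTone(string outputPath)
    {
        // An endless 500 Hz sine; Take() limits it so the writer eventually sees end-of-stream
        var tone = new SignalGenerator(44100, 1)
        {
            Type = SignalGeneratorType.Sin,
            Frequency = 500,
            Gain = 0.2
        }.Take(TimeSpan.FromSeconds(5));

        WaveFileWriter.CreateWaveFile16(outputPath, tone);
    }
}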
The filename to use The source WaveProvider Writes to a stream by reading all the data from a WaveProvider BEWARE: the WaveProvider MUST return 0 from its Read method when it is finished, or the Wave File will grow indefinitely. The stream the method will output to The source WaveProvider WaveFileWriter that actually writes to a stream Stream to be written to Wave format to use Creates a new WaveFileWriter The filename to write to The Wave Format of the output data The wave file name or null if not applicable Number of bytes of audio in the data chunk Total time (calculated from Length and average bytes per second) WaveFormat of this wave file Returns false: Cannot read from a WaveFileWriter Returns true: Can write to a WaveFileWriter Returns false: Cannot seek within a WaveFileWriter Read is not supported for a WaveFileWriter Seek is not supported for a WaveFileWriter SetLength is not supported for WaveFileWriter Gets the Position in the WaveFile (i.e. number of bytes written so far) Appends bytes to the WaveFile (assumes they are already in the correct format) the buffer containing the wave data the offset from which to start writing the number of bytes to write Appends bytes to the WaveFile (assumes they are already in the correct format) the buffer containing the wave data the offset from which to start writing the number of bytes to write Writes a single sample to the Wave file the sample to write (assumed floating point with 1.0f as max value) Writes 32 bit floating point samples to the Wave file They will be converted to the appropriate bit depth depending on the WaveFormat of the WAV file The buffer containing the floating point samples The offset from which to start writing The number of floating point samples to write Writes 16 bit samples to the Wave file The buffer containing the 16 bit samples The offset from which to start writing The number of 16 bit samples to write Writes 16 bit samples to the Wave file The buffer containing the 16 bit samples The offset from which to start writing The number of 16 bit samples to write Ensures data is written to disk Also updates header, so that WAV file will be valid up to the point currently written Actually performs the close, making sure the header contains the correct data True if called from Dispose Updates the header with file size information Finaliser - should only be called if the user forgot to close this WaveFileWriter Alternative WaveOut class, making use of the Event callback Indicates playback has stopped automatically Gets or sets the desired latency in milliseconds Should be set before a call to Init Gets or sets the number of buffers used Should be set before a call to Init Gets or sets the device number Should be set before a call to Init This must be between -1 and DeviceCount - 1. -1 means stick to the default device even if the default device is changed Opens a WaveOut device Initialises the WaveOut device WaveProvider to play Start playing the audio from the WaveStream Pause the audio Resume playing after a pause from the same position Stop and reset the WaveOut device Gets the current position in bytes from the wave output device. (n.b. this is not the same thing as the position within your reader stream - it calls directly into waveOutGetPosition) Position in bytes Gets a WaveFormat instance indicating the format the hardware is using. Playback State Volume for this device 1.0 is full scale Closes this WaveOut device Closes the WaveOut device and disposes of buffers True if called from Dispose Finalizer.
Only called when user forgets to call Dispose Provides a buffered store of samples Read method will return queued samples or fill buffer with zeroes Now backed by a circular buffer Creates a new buffered WaveProvider WaveFormat If true, always read the amount of data requested, padding with zeroes if necessary By default is set to true Buffer length in bytes Buffer duration If true, when the buffer is full, start throwing away data if false, AddSamples will throw an exception when buffer is full The number of buffered bytes Buffered Duration Gets the WaveFormat Adds samples. Takes a copy of buffer, so that buffer can be reused if necessary Reads from this WaveProvider Will always return count bytes, since we will zero-fill the buffer if not enough available Discards all audio from the buffer Provides a WaveProvider that can apply effects in real time using a DMO. If the audio thread is an STA thread, create and use this provider on that same thread. If the audio thread is an MTA thread, any MTA thread can be used. Types of DMO effectors to use Parameters of the effect to be used Create a new DmoEffectWaveProvider Input Stream Stream Wave Format Reads data from input stream buffer offset into buffer Bytes required Number of bytes read Get Effector Parameters Dispose The Media Foundation Resampler Transform Creates the Media Foundation Resampler, allowing modification of sample rate, bit depth and channel count Source provider, must be PCM Output format, must also be PCM Creates a resampler with a specified target output sample rate Source provider Output sample rate Creates and configures the actual Resampler transform A newly created and configured resampler MFT Gets or sets the Resampler quality. n.b. set the quality before starting to resample. 1 is lowest quality (linear interpolation) and 60 is best quality Disposes this resampler WaveProvider that can mix together multiple 32 bit floating point input providers All inputs must have the same number of channels and the same sample rate n.b. Work in Progress - not tested yet Creates a new MixingWaveProvider32 Creates a new 32 bit MixingWaveProvider32 inputs - must all have the same format. Thrown if the input streams are not 32 bit floating point, or if they have different formats to each other Add a new input to the mixer The wave input to add Remove an input from the mixer waveProvider to remove The number of inputs to this mixer Reads bytes from this wave stream buffer to read into offset into buffer number of bytes required Number of bytes read. Thrown if an invalid number of bytes requested Actually performs the mixing Converts from mono to stereo, allowing freedom to route all, some, or none of the incoming signal to left or right channels Creates a new stereo waveprovider based on a mono input Mono 16 bit PCM input 1.0 to copy the mono stream to the left channel without adjusting volume 1.0 to copy the mono stream to the right channel without adjusting volume Output Wave Format Reads bytes from this WaveProvider Allows any number of inputs to be patched to outputs Uses could include swapping left and right channels, turning mono into stereo, feeding different input sources to different soundcard outputs etc Creates a multiplexing wave provider, allowing re-patching of input channels to different output channels. Number of outputs is equal to total number of channels in inputs Input wave providers.
Must all be of the same format, but can have any number of channels Creates a multiplexing wave provider, allowing re-patching of input channels to different output channels Input wave providers. Must all be of the same format, but can have any number of channels Desired number of output channels. (-1 means use total number of input channels) persistent temporary buffer to prevent creating work for garbage collector Reads data from this WaveProvider Buffer to be filled with sample data Offset to write to within buffer, usually 0 Number of bytes required Number of bytes read The WaveFormat of this WaveProvider Connects a specified input channel to an output channel Input Channel index (zero based). Must be less than InputChannelCount Output Channel index (zero based). Must be less than OutputChannelCount The number of input channels. Note that this is not the same as the number of input wave providers. If you pass in one stereo and one mono input provider, the number of input channels is three. The number of output channels, as specified in the constructor. Silence producing wave provider Useful for playing silence when doing a WASAPI Loopback Capture Creates a new silence producing wave provider Desired WaveFormat (should be PCM / IEEE float) Reads silence into the buffer WaveFormat of this silence producing wave provider Takes a stereo 16 bit input and turns it into mono, allowing you to select left or right channel only or mix them together Creates a new mono waveprovider based on a stereo input Stereo 16 bit PCM input 1.0 to mix the left channel of the stereo input fully into the mono output 1.0 to mix the right channel of the stereo input fully into the mono output Output Wave Format Reads bytes from this WaveProvider Helper class allowing us to modify the volume of a 16 bit stream without converting to IEEE float Constructs a new VolumeWaveProvider16 Source provider, must be 16 bit PCM Gets or sets volume. 1.0 is full scale, 0.0 is silence, anything over 1.0 will amplify but potentially clip WaveFormat of this WaveProvider Read bytes from this WaveProvider Buffer to read into Offset within buffer to read to Bytes desired Bytes read Converts 16 bit PCM to IEEE float, optionally adjusting volume along the way Creates a new Wave16toFloatProvider the source provider Reads bytes from this wave stream The destination buffer Offset into the destination buffer Number of bytes read Number of bytes read. Volume of this channel. 1.0 = full scale Converts IEEE float to 16 bit PCM, optionally clipping and adjusting volume along the way Creates a new WaveFloatTo16Provider the source provider Reads bytes from this wave stream The destination buffer Offset into the destination buffer Number of bytes read Number of bytes read. Volume of this channel. 1.0 = full scale Buffered WaveProvider taking source data from WaveIn Creates a new WaveInProvider n.b.
Should make sure the WaveFormat is set correctly on IWaveIn before calling The source of wave data Reads data from the WaveInProvider The WaveFormat Base class for creating a 16 bit wave provider Initializes a new instance of the WaveProvider16 class defaulting to 44.1kHz mono Initializes a new instance of the WaveProvider16 class with the specified sample rate and number of channels Allows you to specify the sample rate and channels for this WaveProvider (should be initialised before you pass it to a wave player) Implements the Read method of IWaveProvider by delegating to the abstract Read method taking a short array Method to override in derived classes Supply the requested number of samples into the buffer The Wave Format Base class for creating a 32 bit floating point wave provider Can also be used as a base class for an ISampleProvider that can be plugged straight into anything requiring an IWaveProvider Initializes a new instance of the WaveProvider32 class defaulting to 44.1kHz mono Initializes a new instance of the WaveProvider32 class with the specified sample rate and number of channels Allows you to specify the sample rate and channels for this WaveProvider (should be initialised before you pass it to a wave player) Implements the Read method of IWaveProvider by delegating to the abstract Read method taking a float array Method to override in derived classes Supply the requested number of samples into the buffer The Wave Format Utility class to intercept audio from an IWaveProvider and save it to disk Constructs a new WaveRecorder The location to write the WAV file to The Source Wave Provider Read simply returns what the source returns, but writes to disk along the way The WaveFormat Closes the WAV file A read-only stream of AIFF data based on an aiff file with an associated WaveFormat originally contributed to NAudio by Giawa Supports opening an AIFF file The AIFF format is of similar nastiness to the WAV format. This supports basic reading of uncompressed PCM AIFF files, with 8, 16, 24 and 32 bit PCM data. Creates an Aiff File Reader based on an input stream The input stream containing an AIFF file including header Ensures valid AIFF header and then finds data offset. The stream, positioned at the start of audio data The format found The position of the data chunk The length of the data chunk Additional chunks found Cleans up the resources associated with this AiffFileReader Number of Samples (if possible to calculate) Position in the AIFF file Reads bytes from the AIFF File AIFF Chunk Chunk Name Chunk Length Chunk start Creates a new AIFF Chunk AudioFileReader simplifies opening an audio file in NAudio Simply pass in the filename, and it will attempt to open the file and set up a conversion path that turns it into PCM IEEE float. ACM codecs will be used for conversion.
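As a rough sketch of how AudioFileReader slots into the WASAPI playback classes described earlier: the file path below is illustrative, and the defaults assumed are shared mode with the default audio endpoint:

using System;
using System.Threading;
using NAudio.Wave;

class Player
{
    static void Main()
    {
        // AudioFileReader opens the file and converts it to IEEE float;
        // WasapiOut plays it on the default endpoint in shared mode.
        using var reader = new AudioFileReader(@"C:\audio\example.mp3"); // illustrative path
        using var output = new WasapiOut();
        var finished = new ManualResetEvent(false);

        output.PlaybackStopped += (s, e) =>
        {
            // StoppedEventArgs.Exception is null when playback simply reached the end
            if (e.Exception != null) Console.WriteLine(e.Exception.Message);
            finished.Set();
        };

        reader.Volume = 0.8f;   // AudioFileReader exposes a volume property
        output.Init(reader);
        output.Play();
        finished.WaitOne();
    }
}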
It provides a volume property and implements both WaveStream and ISampleProvider, making it possibly the only stage in your audio pipeline necessary for simple playback scenarios Initializes a new instance of AudioFileReader The file to open Creates the reader stream, supporting all filetypes in the core NAudio library, and ensuring we are in PCM format File Name File Name WaveFormat of this stream Length of this stream (in bytes) Position of this stream (in bytes) Reads from this wave stream Audio buffer Offset into buffer Number of bytes required Number of bytes read Reads audio from this sample provider Sample buffer Offset into sample buffer Number of samples required Number of samples read Gets or Sets the Volume of this AudioFileReader. 1.0f is full volume Helper to convert source to dest bytes Helper to convert dest to source bytes Disposes this AudioFileReader True if called from Dispose Helper stream that lets us read from compressed audio files with large block alignment as though we could read any amount and reposition anywhere Creates a new BlockAlignReductionStream the input stream Block alignment of this stream Wave Format Length of this Stream Current position within stream Disposes this WaveStream Reads data from this stream Implementation of Com IStream Holds information on a cue: a labeled position within a Wave file Cue position in samples Label of the cue Creates a Cue based on a sample position and label Holds a list of cues The specs for reading and writing cues from the cue and list RIFF chunks are from http://www.sonicspot.com/guide/wavefiles.html and http://www.wotsit.org/ ------------------------------ The cues are stored like this: ------------------------------ struct CuePoint { Int32 dwIdentifier; Int32 dwPosition; Int32 fccChunk; Int32 dwChunkStart; Int32 dwBlockStart; Int32 dwSampleOffset; } struct CueChunk { Int32 chunkID; Int32 chunkSize; Int32 dwCuePoints; CuePoint[] points; } ------------------------------ Labels look like this: ------------------------------ struct ListHeader { Int32 listID; /* 'list' */ Int32 chunkSize; /* includes the Type ID below */ Int32 typeID; /* 'adtl' */ } struct LabelChunk { Int32 chunkID; Int32 chunkSize; Int32 dwIdentifier; Char[] dwText; /* Encoded with extended ASCII */ } LabelChunk; Creates an empty cue list Adds an item to the list Cue Gets sample positions for the embedded cues Array containing the cue positions Gets labels for the embedded cues Array containing the labels Creates a cue list from the cue RIFF chunk and the list RIFF chunk The data contained in the cue chunk The data contained in the list chunk Gets the cues as the concatenated cue and list RIFF chunks. 
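The CuePoint layout quoted above can be read straight off the body of a 'cue ' chunk with a BinaryReader. A hedged sketch: the helper name is made up, the chunk extraction is assumed to have happened elsewhere, and only dwSampleOffset is kept, matching the sample positions that CueList exposes:

using System.Collections.Generic;
using System.IO;

static class CueChunkParser
{
    // Parses the body of a 'cue ' chunk (after its chunkID and chunkSize fields)
    // following the CuePoint struct layout shown above.
    public static List<int> ReadCuePositions(byte[] cueChunkData)
    {
        var positions = new List<int>();
        using var reader = new BinaryReader(new MemoryStream(cueChunkData));
        int cuePointCount = reader.ReadInt32();   // dwCuePoints
        for (int i = 0; i < cuePointCount; i++)
        {
            reader.ReadInt32();                   // dwIdentifier
            reader.ReadInt32();                   // dwPosition
            reader.ReadInt32();                   // fccChunk
            reader.ReadInt32();                   // dwChunkStart
            reader.ReadInt32();                   // dwBlockStart
            positions.Add(reader.ReadInt32());    // dwSampleOffset
        }
        return positions;
    }
}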
RIFF chunks containing the cue data Number of cues Accesses the cue at the specified index Checks if the cue and list chunks exist and if so, creates a cue list A wave file reader supporting cue reading Loads a wavefile and supports reading cues Loads a wave from a stream and supports reading cues Cue List (can be null if cues not present) An interface for WaveStreams which can report notification of individual samples A sample has been detected Sample event arguments Left sample Right sample Constructor Class for reading any file that Media Foundation can play Will only work in Windows Vista and above Automatically converts to PCM If it is a video file with multiple audio streams, it will pick out the first audio stream Allows customisation of this reader class Sets up the default settings for MediaFoundationReader Allows us to request IEEE float output (n.b. no guarantee this will be accepted) If true, the reader object created in the constructor is used in Read Should only be set to true if you are working entirely on an STA thread, or entirely with MTA threads. If true, the reposition does not happen immediately, but waits until the next call to read to be processed. Default constructor Creates a new MediaFoundationReader based on the supplied file Filename (can also be a URL e.g. http:// mms:// file://) Creates a new MediaFoundationReader based on the supplied file Filename Advanced settings Initializes Creates the reader (overridable by subclasses) Reads from this wave stream Buffer to read into Offset in buffer Bytes required Number of bytes read; 0 indicates end of stream WaveFormat of this stream (n.b. this is after converting to PCM) The length of this stream in bytes (n.b. may not be accurate) Current position within this stream Cleans up after finishing with this reader true if called from Dispose WaveFormat has changed Class for reading from MP3 files The MP3 wave format (n.b. NOT the output format of this stream - see the WaveFormat property) Supports opening an MP3 file Supports opening an MP3 file MP3 File name Factory method to build a frame decompressor Opens MP3 from a stream rather than a file Will not dispose of this stream itself The incoming stream containing MP3 data Opens MP3 from a stream rather than a file Will not dispose of this stream itself The incoming stream containing MP3 data Factory method to build a frame decompressor Function that can create an MP3 Frame decompressor A WaveFormat object describing the MP3 file format An MP3 Frame decompressor Creates an ACM MP3 Frame decompressor. This is the default with NAudio A WaveFormat object describing the MP3 file format Gets the total length of this file in milliseconds. ID3v2 tag if present ID3v1 tag if present Reads the next mp3 frame Next mp3 frame, or null if EOF Reads the next mp3 frame Next mp3 frame, or null if EOF This is the length in bytes of data available to be read out from the Read method (i.e. the decompressed MP3 length) n.b. this may return 0 for files whose length is unknown Reads decompressed PCM data from our MP3 file. Xing header if present Disposes this WaveStream WaveStream that simply passes on data from its source stream (e.g.
a MemoryStream) Initialises a new instance of RawSourceWaveStream The source stream containing raw audio The waveformat of the audio in the source stream Initialises a new instance of RawSourceWaveStream The buffer containing raw audio Offset in the source buffer to read from Number of bytes to read in the buffer The waveformat of the audio in the source stream The WaveFormat of this stream The length in bytes of this stream (if supported) The current position in this stream Reads data from the stream Wave Stream for converting between sample rates WaveStream to resample using the DMO Resampler Input Stream Desired Output Format Stream Wave Format Stream length in bytes Stream position in bytes Reads data from input stream buffer offset into buffer Bytes required Number of bytes read Dispose True if disposing (not from finalizer) Holds information about a RIFF file chunk Creates a RiffChunk object The chunk identifier The chunk identifier converted to a string The chunk length The stream position this chunk is located at A simple compressor Create a new simple compressor stream Source stream Make-up Gain Threshold Ratio Attack time Release time Turns gain on or off Gets the WaveFormat of this stream Reads bytes from this stream Buffer to read into Offset in array to read into Number of bytes to read Number of bytes read MediaFoundationReader supporting reading from a stream Constructs a new media foundation reader from a stream Creates the reader WaveStream that converts 32 bit audio back down to 16 bit, clipping if necessary The method reuses the same buffer to prevent unnecessary allocations. Creates a new Wave32To16Stream the source stream Sets the volume for this stream. 1.0f is full scale Returns the stream length Gets or sets the current position in the stream Reads bytes from this wave stream Destination buffer Offset into destination buffer Number of bytes read. Conversion to 16 bit and clipping Clip indicator. Can be reset. Disposes this WaveStream Represents a Channel for the WaveMixerStream 32 bit output and 16 bit input Its output is always stereo The input stream can be panned Creates a new WaveChannel32 the source stream stream volume (1 is 0dB) pan control (-1 to 1) Creates a WaveChannel32 with default settings The source stream Gets the block alignment for this WaveStream Returns the stream length Gets or sets the current position in the stream Reads bytes from this wave stream The destination buffer Offset into the destination buffer Number of bytes read Number of bytes read. If true, Read always returns the number of bytes requested Volume of this channel. 1.0 = full scale Pan of this channel (from -1 to 1) Determines whether this channel has any data to play to allow optimisation to not read, but bump position forward Disposes this WaveStream Sample Raise the sample event (no check for null because it has already been done) This class supports the reading of WAV files, providing a repositionable WaveStream that returns the raw data contained in the WAV file Supports opening a WAV file The WAV file format is a real mess, but we will only support the basic WAV file format which actually covers the vast majority of WAV files out there. For more WAV file format information visit www.wotsit.org.
If you have a WAV file that can't be read by this class, email it to the NAudio project and we will probably fix this reader to support it Creates a Wave File Reader based on an input stream The input stream containing a WAV file including header Gets a list of the additional chunks found in this file Gets the data for the specified chunk Cleans up the resources associated with this WaveFileReader This is the length of audio data contained in this WAV file, in bytes (i.e. the byte length of the data chunk, not the length of the WAV file itself) Number of Sample Frames (if possible to calculate) This currently does not take into account number of channels Multiply by the number of channels if you want the total number of samples Position in the WAV data chunk. Reads bytes from the Wave File Attempts to read the next sample or group of samples as floating point normalised into the range -1.0f to 1.0f An array of samples, 1 for mono, 2 for stereo etc. Null indicates end of file reached Attempts to read a sample into a float. n.b. only applicable for uncompressed formats Will normalise the value read into the range -1.0f to 1.0f if it comes from a PCM encoding False if the end of the WAV data chunk was reached IWaveProvider that passes through an ACM Codec Create a new WaveFormat conversion stream Desired output format Source Provider Gets the WaveFormat of this stream Indicates that a reposition has taken place, and internal buffers should be reset Reads bytes from this stream Buffer to read into Offset in buffer to read into Number of bytes to read Number of bytes read Disposes this stream true if the user called this Disposes this resource Finalizer WaveStream that passes through an ACM Codec Create a new WaveFormat conversion stream Desired output format Source stream Creates a stream that can convert to PCM The source stream A PCM stream Gets or sets the current position in the stream Converts source bytes to destination bytes Converts destination bytes to source bytes Returns the stream length Gets the WaveFormat of this stream Buffer to read into Offset within buffer to write to Number of bytes to read Bytes read Disposes this stream true if the user called this A buffer of Wave samples creates a new wavebuffer WaveIn device to write to Buffer size in bytes Place this buffer back to record more audio Finalizer for this wave buffer Releases resources held by this WaveBuffer Releases resources held by this WaveBuffer Provides access to the actual record buffer (for reading only) Indicates whether the Done flag is set on this buffer Indicates whether the InQueue flag is set on this buffer Number of bytes recorded The buffer size in bytes WaveStream that can mix together multiple 32 bit input streams (Normally used with stereo input channels) All inputs must have the same number of channels Creates a new 32 bit WaveMixerStream Creates a new 32 bit WaveMixerStream An Array of WaveStreams - must all have the same format. WaveChannel32 is designed for this purpose. Automatically stop when all inputs have been read Thrown if the input streams are not 32 bit floating point, or if they have different formats to each other Add a new input to the mixer The wave input to add Remove a WaveStream from the mixer waveStream to remove The number of inputs to this mixer Automatically stop when all inputs have been read Reads bytes from this wave stream buffer to read into offset into buffer number of bytes required Number of bytes read.
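Putting the 32 bit mixing pieces above together, a sketch of mixing two WAV files down to one; the file paths, volumes and pans are illustrative, and both inputs are assumed to share a sample rate:

using NAudio.Wave;

class MixDown
{
    static void Main()
    {
        // WaveChannel32 converts each 16 bit source to 32 bit float and allows volume/pan.
        var left  = new WaveChannel32(new WaveFileReader("drums.wav"))  { Volume = 0.7f, Pan = -0.5f };
        var right = new WaveChannel32(new WaveFileReader("guitar.wav")) { Volume = 0.7f, Pan =  0.5f };

        // WaveMixerStream32 sums the 32 bit inputs; true = stop automatically when all inputs have been read.
        var mixer = new WaveMixerStream32(new WaveStream[] { left, right }, true);

        // Wave32To16Stream clips and converts back to 16 bit so the result can be written as PCM.
        using var mixed = new Wave32To16Stream(mixer);
        WaveFileWriter.CreateWaveFile("mix.wav", mixed);
    }
}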
Thrown if an invalid number of bytes requested Actually performs the mixing Length of this Wave Stream (in bytes) Position within this Wave Stream (in bytes) Disposes this WaveStream Simply shifts the input stream in time, optionally clipping its start and end. (n.b. may include looping in the future) Creates a new WaveOffsetStream the source stream the time at which we should start reading from the source stream amount to trim off the front of the source stream length of time to play from source stream Creates a WaveOffsetStream with default settings (no offset or pre-delay, and whole length of source stream) The source stream The length of time before which no audio will be played An offset into the source stream from which to start playing Length of time to read from the source stream Gets the block alignment for this WaveStream Returns the stream length Gets or sets the current position in the stream Reads bytes from this wave stream The destination buffer Offset into the destination buffer Number of bytes read Number of bytes read. Determines whether this channel has any data to play to allow optimisation to not read, but bump position forward Disposes this WaveStream A buffer of Wave samples for streaming to a Wave Output device creates a new wavebuffer WaveOut device to write to Buffer size in bytes Stream to provide more data Lock to protect WaveOut APIs from being called on more than one thread Finalizer for this wave buffer Releases resources held by this WaveBuffer Releases resources held by this WaveBuffer this is called by the WAVE callback and should be used to refill the buffer Whether the header's in queue flag is set The buffer size in bytes Base class for all WaveStream classes. Derives from Stream. Retrieves the WaveFormat for this stream We can read from this stream We can seek within this stream We can't write to this stream Flush does not need to do anything An alternative way of repositioning (see Seek). Sets the length of the WaveStream. Not Supported. Writes to the WaveStream. Not Supported. The block alignment for this wavestream.
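Block alignment matters whenever you reposition any WaveStream; a small sketch of a byte-aligned seek, together with the time-based members described next (the reader and offsets are illustrative):

using System;
using NAudio.Wave;

class SeekExample
{
    static void Main()
    {
        using var reader = new WaveFileReader("example.wav"); // any WaveStream works here

        // CurrentTime/TotalTime are derived from Position/Length and AverageBytesPerSecond.
        Console.WriteLine($"Length: {reader.TotalTime}");

        // Jump to 10 seconds in, rounding the byte offset down to a whole block.
        long target = (long)(reader.WaveFormat.AverageBytesPerSecond * 10.0);
        reader.Position = target - (target % reader.BlockAlign);

        // Skip moves relative to the current position, in whole seconds.
        reader.Skip(5);
        Console.WriteLine($"Now at: {reader.CurrentTime}");
    }
}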
Do not modify the Position to anything that is not a whole multiple of this value Moves forward or backwards the specified number of seconds in the stream Number of seconds to move, can be negative The current position in the stream in Time format Total length in real-time of the stream (may be an estimate for compressed files) Whether the WaveStream has non-zero sample data at the current position for the specified count Number of bytes to read MP3 Frame decompressor using the Windows Media MP3 Decoder DMO object Initializes a new instance of the DMO MP3 Frame decompressor Converted PCM WaveFormat Decompress a single frame of MP3 Alerts us that a reposition has occurred so the MP3 decoder needs to reset its state Dispose of this object and clean up resources http://tech.ebu.ch/docs/tech/tech3306-2009.pdf WaveFormat Data Chunk Position Data Chunk Length Riff Chunks Soundfont generator Gets the generator type Generator amount as an unsigned short Generator amount as a signed short Low byte amount High byte amount Instrument Sample Header Generator types Start address offset End address offset Start loop address offset End loop address offset Start address coarse offset Modulation LFO to pitch Vibrato LFO to pitch Modulation envelope to pitch Initial filter cutoff frequency Initial filter Q Modulation LFO to filter Cutoff frequency Modulation envelope to filter cutoff frequency End address coarse offset Modulation LFO to volume Unused Chorus effects send Reverb effects send Pan Unused Unused Unused Delay modulation LFO Frequency modulation LFO Delay vibrato LFO Frequency vibrato LFO Delay modulation envelope Attack modulation envelope Hold modulation envelope Decay modulation envelope Sustain modulation envelope Release modulation envelope Key number to modulation envelope hold Key number to modulation envelope decay Delay volume envelope Attack volume envelope Hold volume envelope Decay volume envelope Sustain volume envelope Release volume envelope Key number to volume envelope hold Key number to volume envelope decay Instrument Reserved Key range Velocity range Start loop address coarse offset Key number Velocity Initial attenuation Reserved End loop address coarse offset Coarse tune Fine tune Sample ID Sample modes Reserved Scale tuning Exclusive class Overriding root key Unused Unused A soundfont info chunk SoundFont Version WaveTable sound engine Bank name Data ROM Creation Date Author Target Product Copyright Comments Tools ROM Version SoundFont instrument instrument name Zones Instrument Builder Transform Types Linear Modulator Source Modulation data type Destination generator type Amount Source Modulation Amount Type Source Transform Type Controller Sources No Controller Note On Velocity Note On Key Number Poly Pressure Channel Pressure Pitch Wheel Pitch Wheel Sensitivity Source Types Linear Concave Convex Switch Modulator Type A SoundFont Preset Preset name Patch Number Bank number 0 - 127, GM percussion bank is 128 Zones Class to read the SoundFont file presets chunk The Presets contained in this chunk The instruments contained in this chunk The sample headers contained in this chunk just reads a chunk ID at the current position chunk ID reads a chunk at the current position creates a new riffchunk from current position checking that we're not at the end of this chunk first the new chunk useful for chunks that just contain a string chunk as string A SoundFont Sample Header The sample name Start offset End offset Start loop point End loop point Sample Rate Original pitch Pitch
correction Sample Link SoundFont Sample Link Type SoundFont sample modes No loop Loop Continuously Reserved no loop Loop and continue Sample Link Type Mono Sample Right Sample Left Sample Linked Sample ROM Mono Sample ROM Right Sample ROM Left Sample ROM Linked Sample SoundFont Version Structure Major Version Minor Version Builds a SoundFont version Reads a SoundFont Version structure Writes a SoundFont Version structure Gets the length of this structure Represents a SoundFont Loads a SoundFont from a file Filename of the SoundFont Loads a SoundFont from a stream stream The File Info Chunk The Presets The Instruments The Sample Headers The Sample Data base class for structures that can read themselves A SoundFont zone Modulators for this Zone Generators for this Zone Audio Subtype GUIDs http://msdn.microsoft.com/en-us/library/windows/desktop/aa372553%28v=vs.85%29.aspx Advanced Audio Coding (AAC). Not used Dolby AC-3 audio over Sony/Philips Digital Interface (S/PDIF). Encrypted audio data used with secure audio path. Digital Theater Systems (DTS) audio. Uncompressed IEEE floating-point audio. MPEG Audio Layer-3 (MP3). MPEG-1 audio payload. Windows Media Audio 9 Voice codec. Uncompressed PCM audio. Windows Media Audio 9 Professional codec over S/PDIF. Windows Media Audio 9 Lossless codec or Windows Media Audio 9.1 codec. Windows Media Audio 8 codec, Windows Media Audio 9 codec, or Windows Media Audio 9.1 codec. Windows Media Audio 9 Professional codec or Windows Media Audio 9.1 Professional codec. Dolby Digital (AC-3). MPEG-4 and AAC Audio Types http://msdn.microsoft.com/en-us/library/windows/desktop/dd317599(v=vs.85).aspx Reference : wmcodecdsp.h Dolby Audio Types http://msdn.microsoft.com/en-us/library/windows/desktop/dd317599(v=vs.85).aspx Reference : wmcodecdsp.h Dolby Audio Types http://msdn.microsoft.com/en-us/library/windows/desktop/dd317599(v=vs.85).aspx Reference : wmcodecdsp.h μ-law coding http://msdn.microsoft.com/en-us/library/windows/desktop/dd390971(v=vs.85).aspx Reference : Ksmedia.h Adaptive delta pulse code modulation (ADPCM) http://msdn.microsoft.com/en-us/library/windows/desktop/dd390971(v=vs.85).aspx Reference : Ksmedia.h Dolby Digital Plus formatted for HDMI output. http://msdn.microsoft.com/en-us/library/windows/hardware/ff538392(v=vs.85).aspx Reference : internet MSAudio1 - unknown meaning Reference : wmcodecdsp.h IMA ADPCM ACM Wrapper WMSP2 - unknown meaning Reference: wmsdkidl.h IMFActivate, defined in mfobjects.h Retrieves the value associated with a key. Retrieves the data type of the value associated with a key. Queries whether a stored attribute value equals a specified PROPVARIANT. Compares the attributes on this object with the attributes on another object. Retrieves a UINT32 value associated with a key. Retrieves a UINT64 value associated with a key. Retrieves a double value associated with a key. Retrieves a GUID value associated with a key. Retrieves the length of a string value associated with a key. Retrieves a wide-character string associated with a key. Retrieves a wide-character string associated with a key. This method allocates the memory for the string. Retrieves the length of a byte array associated with a key. Retrieves a byte array associated with a key. Retrieves a byte array associated with a key. This method allocates the memory for the array. Retrieves an interface pointer associated with a key. Associates an attribute value with a key. Removes a key/value pair from the object's attribute list. 
Removes all key/value pairs from the object's attribute list. Associates a UINT32 value with a key. Associates a UINT64 value with a key. Associates a double value with a key. Associates a GUID value with a key. Associates a wide-character string with a key. Associates a byte array with a key. Associates an IUnknown pointer with a key. Locks the attribute store so that no other thread can access it. Unlocks the attribute store. Retrieves the number of attributes that are set on this object. Retrieves an attribute at the specified index. Copies all of the attributes from this object into another attribute store. Creates the object associated with this activation object. Shuts down the created object. Detaches the created object from the activation object. Provides a generic way to store key/value pairs on an object. http://msdn.microsoft.com/en-gb/library/windows/desktop/ms704598%28v=vs.85%29.aspx Retrieves the value associated with a key. Retrieves the data type of the value associated with a key. Queries whether a stored attribute value equals a specified PROPVARIANT. Compares the attributes on this object with the attributes on another object. Retrieves a UINT32 value associated with a key. Retrieves a UINT64 value associated with a key. Retrieves a double value associated with a key. Retrieves a GUID value associated with a key. Retrieves the length of a string value associated with a key. Retrieves a wide-character string associated with a key. Retrieves a wide-character string associated with a key. This method allocates the memory for the string. Retrieves the length of a byte array associated with a key. Retrieves a byte array associated with a key. Retrieves a byte array associated with a key. This method allocates the memory for the array. Retrieves an interface pointer associated with a key. Associates an attribute value with a key. Removes a key/value pair from the object's attribute list. Removes all key/value pairs from the object's attribute list. Associates a UINT32 value with a key. Associates a UINT64 value with a key. Associates a double value with a key. Associates a GUID value with a key. Associates a wide-character string with a key. Associates a byte array with a key. Associates an IUnknown pointer with a key. Locks the attribute store so that no other thread can access it. Unlocks the attribute store. Retrieves the number of attributes that are set on this object. Retrieves an attribute at the specified index. Copies all of the attributes from this object into another attribute store. IMFByteStream http://msdn.microsoft.com/en-gb/library/windows/desktop/ms698720%28v=vs.85%29.aspx Retrieves the characteristics of the byte stream. virtual HRESULT STDMETHODCALLTYPE GetCapabilities(/*[out]*/ __RPC__out DWORD *pdwCapabilities) = 0; Retrieves the length of the stream. virtual HRESULT STDMETHODCALLTYPE GetLength(/*[out]*/ __RPC__out QWORD *pqwLength) = 0; Sets the length of the stream. virtual HRESULT STDMETHODCALLTYPE SetLength(/*[in]*/ QWORD qwLength) = 0; Retrieves the current read or write position in the stream. virtual HRESULT STDMETHODCALLTYPE GetCurrentPosition(/*[out]*/ __RPC__out QWORD *pqwPosition) = 0; Sets the current read or write position. virtual HRESULT STDMETHODCALLTYPE SetCurrentPosition(/*[in]*/ QWORD qwPosition) = 0; Queries whether the current position has reached the end of the stream. virtual HRESULT STDMETHODCALLTYPE IsEndOfStream(/*[out]*/ __RPC__out BOOL *pfEndOfStream) = 0; Reads data from the stream. 
virtual HRESULT STDMETHODCALLTYPE Read(/*[size_is][out]*/ __RPC__out_ecount_full(cb) BYTE *pb, /*[in]*/ ULONG cb, /*[out]*/ __RPC__out ULONG *pcbRead) = 0; Begins an asynchronous read operation from the stream. virtual /*[local]*/ HRESULT STDMETHODCALLTYPE BeginRead(/*[out]*/ _Out_writes_bytes_(cb) BYTE *pb, /*[in]*/ ULONG cb, /*[in]*/ IMFAsyncCallback *pCallback, /*[in]*/ IUnknown *punkState) = 0; Completes an asynchronous read operation. virtual /*[local]*/ HRESULT STDMETHODCALLTYPE EndRead(/*[in]*/ IMFAsyncResult *pResult, /*[out]*/ _Out_ ULONG *pcbRead) = 0; Writes data to the stream. virtual HRESULT STDMETHODCALLTYPE Write(/*[size_is][in]*/ __RPC__in_ecount_full(cb) const BYTE *pb, /*[in]*/ ULONG cb, /*[out]*/ __RPC__out ULONG *pcbWritten) = 0; Begins an asynchronous write operation to the stream. virtual /*[local]*/ HRESULT STDMETHODCALLTYPE BeginWrite(/*[in]*/ _In_reads_bytes_(cb) const BYTE *pb, /*[in]*/ ULONG cb, /*[in]*/ IMFAsyncCallback *pCallback, /*[in]*/ IUnknown *punkState) = 0; Completes an asynchronous write operation. virtual /*[local]*/ HRESULT STDMETHODCALLTYPE EndWrite(/*[in]*/ IMFAsyncResult *pResult, /*[out]*/ _Out_ ULONG *pcbWritten) = 0; Moves the current position in the stream by a specified offset. virtual HRESULT STDMETHODCALLTYPE Seek(/*[in]*/ MFBYTESTREAM_SEEK_ORIGIN SeekOrigin, /*[in]*/ LONGLONG llSeekOffset, /*[in]*/ DWORD dwSeekFlags, /*[out]*/ __RPC__out QWORD *pqwCurrentPosition) = 0; Clears any internal buffers used by the stream. virtual HRESULT STDMETHODCALLTYPE Flush( void) = 0; Closes the stream and releases any resources associated with the stream. virtual HRESULT STDMETHODCALLTYPE Close( void) = 0; Represents a generic collection of IUnknown pointers. Retrieves the number of objects in the collection. Retrieves an object in the collection. Adds an object to the collection. Removes an object from the collection. Removes an object from the collection. Removes all items from the collection. IMFMediaBuffer http://msdn.microsoft.com/en-gb/library/windows/desktop/ms696261%28v=vs.85%29.aspx Gives the caller access to the memory in the buffer. Unlocks a buffer that was previously locked. Retrieves the length of the valid data in the buffer. Sets the length of the valid data in the buffer. Retrieves the allocated size of the buffer. IMFMediaEvent - Represents an event generated by a Media Foundation object. Use this interface to get information about the event. http://msdn.microsoft.com/en-us/library/windows/desktop/ms702249%28v=vs.85%29.aspx Mfobjects.h Retrieves the value associated with a key. Retrieves the data type of the value associated with a key. Queries whether a stored attribute value equals a specified PROPVARIANT. Compares the attributes on this object with the attributes on another object. Retrieves a UINT32 value associated with a key. Retrieves a UINT64 value associated with a key. Retrieves a double value associated with a key. Retrieves a GUID value associated with a key. Retrieves the length of a string value associated with a key. Retrieves a wide-character string associated with a key. Retrieves a wide-character string associated with a key. This method allocates the memory for the string. Retrieves the length of a byte array associated with a key. Retrieves a byte array associated with a key. Retrieves a byte array associated with a key. This method allocates the memory for the array. Retrieves an interface pointer associated with a key. Associates an attribute value with a key. 
Removes a key/value pair from the object's attribute list. Removes all key/value pairs from the object's attribute list. Associates a UINT32 value with a key. Associates a UINT64 value with a key. Associates a double value with a key. Associates a GUID value with a key. Associates a wide-character string with a key. Associates a byte array with a key. Associates an IUnknown pointer with a key. Locks the attribute store so that no other thread can access it. Unlocks the attribute store. Retrieves the number of attributes that are set on this object. Retrieves an attribute at the specified index. Copies all of the attributes from this object into another attribute store. Retrieves the event type. virtual HRESULT STDMETHODCALLTYPE GetType( /* [out] */ __RPC__out MediaEventType *pmet) = 0; Retrieves the extended type of the event. virtual HRESULT STDMETHODCALLTYPE GetExtendedType( /* [out] */ __RPC__out GUID *pguidExtendedType) = 0; Retrieves an HRESULT that specifies the event status. virtual HRESULT STDMETHODCALLTYPE GetStatus( /* [out] */ __RPC__out HRESULT *phrStatus) = 0; Retrieves the value associated with the event, if any. virtual HRESULT STDMETHODCALLTYPE GetValue( /* [out] */ __RPC__out PROPVARIANT *pvValue) = 0; Represents a description of a media format. http://msdn.microsoft.com/en-us/library/windows/desktop/ms704850%28v=vs.85%29.aspx Retrieves the value associated with a key. Retrieves the data type of the value associated with a key. Queries whether a stored attribute value equals a specified PROPVARIANT. Compares the attributes on this object with the attributes on another object. Retrieves a UINT32 value associated with a key. Retrieves a UINT64 value associated with a key. Retrieves a double value associated with a key. Retrieves a GUID value associated with a key. Retrieves the length of a string value associated with a key. Retrieves a wide-character string associated with a key. Retrieves a wide-character string associated with a key. This method allocates the memory for the string. Retrieves the length of a byte array associated with a key. Retrieves a byte array associated with a key. Retrieves a byte array associated with a key. This method allocates the memory for the array. Retrieves an interface pointer associated with a key. Associates an attribute value with a key. Removes a key/value pair from the object's attribute list. Removes all key/value pairs from the object's attribute list. Associates a UINT32 value with a key. Associates a UINT64 value with a key. Associates a double value with a key. Associates a GUID value with a key. Associates a wide-character string with a key. Associates a byte array with a key. Associates an IUnknown pointer with a key. Locks the attribute store so that no other thread can access it. Unlocks the attribute store. Retrieves the number of attributes that are set on this object. Retrieves an attribute at the specified index. Copies all of the attributes from this object into another attribute store. Retrieves the major type of the format. Queries whether the media type is a compressed format. Compares two media types and determines whether they are identical. Retrieves an alternative representation of the media type. Frees memory that was allocated by the GetRepresentation method. Creates an instance of either the sink writer or the source reader. Creates an instance of the sink writer or source reader, given a URL. Creates an instance of the sink writer or source reader, given an IUnknown pointer. 
CLSID_MFReadWriteClassFactory http://msdn.microsoft.com/en-gb/library/windows/desktop/ms702192%28v=vs.85%29.aspx Retrieves the value associated with a key. Retrieves the data type of the value associated with a key. Queries whether a stored attribute value equals a specified PROPVARIANT. Compares the attributes on this object with the attributes on another object. Retrieves a UINT32 value associated with a key. Retrieves a UINT64 value associated with a key. Retrieves a double value associated with a key. Retrieves a GUID value associated with a key. Retrieves the length of a string value associated with a key. Retrieves a wide-character string associated with a key. Retrieves a wide-character string associated with a key. This method allocates the memory for the string. Retrieves the length of a byte array associated with a key. Retrieves a byte array associated with a key. Retrieves a byte array associated with a key. This method allocates the memory for the array. Retrieves an interface pointer associated with a key. Associates an attribute value with a key. Removes a key/value pair from the object's attribute list. Removes all key/value pairs from the object's attribute list. Associates a UINT32 value with a key. Associates a UINT64 value with a key. Associates a double value with a key. Associates a GUID value with a key. Associates a wide-character string with a key. Associates a byte array with a key. Associates an IUnknown pointer with a key. Locks the attribute store so that no other thread can access it. Unlocks the attribute store. Retrieves the number of attributes that are set on this object. Retrieves an attribute at the specified index. Copies all of the attributes from this object into another attribute store. Retrieves flags associated with the sample. Sets flags associated with the sample. Retrieves the presentation time of the sample. Sets the presentation time of the sample. Retrieves the duration of the sample. Sets the duration of the sample. Retrieves the number of buffers in the sample. Retrieves a buffer from the sample. Converts a sample with multiple buffers into a sample with a single buffer. Adds a buffer to the end of the list of buffers in the sample. Removes a buffer at a specified index from the sample. Removes all buffers from the sample. Retrieves the total length of the valid data in all of the buffers in the sample. Copies the sample data to a buffer. Implemented by the Microsoft Media Foundation sink writer object. Adds a stream to the sink writer. Sets the input format for a stream on the sink writer. Initializes the sink writer for writing. Delivers a sample to the sink writer. Indicates a gap in an input stream. Places a marker in the specified stream. Notifies the media sink that a stream has reached the end of a segment. Flushes one or more streams. (Finalize) Completes all writing operations on the sink writer. Queries the underlying media sink or encoder for an interface. Gets statistics about the performance of the sink writer. IMFSourceReader interface http://msdn.microsoft.com/en-us/library/windows/desktop/dd374655%28v=vs.85%29.aspx Queries whether a stream is selected. Selects or deselects one or more streams. Gets a format that is supported natively by the media source. Gets the current media type for a stream. Sets the media type for a stream. Seeks to a new position in the media source. Reads the next sample from the media source. Flushes one or more streams. Queries the underlying media source or decoder for an interface. 
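In NAudio these interop interfaces are normally driven through the managed wrappers described earlier, MediaFoundationReader on the read side and MediaFoundationEncoder on the write side. A hedged sketch of a file-to-AAC transcode, assuming the MediaFoundationApi.Startup/Shutdown helpers; the paths and bitrate are illustrative, and the AAC encoder is only available where Windows supplies the relevant MFT:

using NAudio.MediaFoundation;
using NAudio.Wave;

class Transcode
{
    static void Main()
    {
        MediaFoundationApi.Startup(); // initialise Media Foundation before using readers/encoders

        // MediaFoundationReader decodes anything Media Foundation can play and outputs PCM.
        using (var reader = new MediaFoundationReader(@"C:\audio\input.wma"))
        {
            // EncodeToAac picks the closest supported bitrate for the reader's format.
            MediaFoundationEncoder.EncodeToAac(reader, @"C:\audio\output.mp4", 128000);
        }

        MediaFoundationApi.Shutdown();
    }
}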
Gets an attribute from the underlying media source. Contains flags that indicate the status of the IMFSourceReader::ReadSample method http://msdn.microsoft.com/en-us/library/windows/desktop/dd375773(v=vs.85).aspx No Error An error occurred. If you receive this flag, do not make any further calls to IMFSourceReader methods. The source reader reached the end of the stream. One or more new streams were created The native format has changed for one or more streams. The native format is the format delivered by the media source before any decoders are inserted. The current media has type changed for one or more streams. To get the current media type, call the IMFSourceReader::GetCurrentMediaType method. There is a gap in the stream. This flag corresponds to an MEStreamTick event from the media source. All transforms inserted by the application have been removed for a particular stream. IMFTransform, defined in mftransform.h Retrieves the minimum and maximum number of input and output streams. virtual HRESULT STDMETHODCALLTYPE GetStreamLimits( /* [out] */ __RPC__out DWORD *pdwInputMinimum, /* [out] */ __RPC__out DWORD *pdwInputMaximum, /* [out] */ __RPC__out DWORD *pdwOutputMinimum, /* [out] */ __RPC__out DWORD *pdwOutputMaximum) = 0; Retrieves the current number of input and output streams on this MFT. virtual HRESULT STDMETHODCALLTYPE GetStreamCount( /* [out] */ __RPC__out DWORD *pcInputStreams, /* [out] */ __RPC__out DWORD *pcOutputStreams) = 0; Retrieves the stream identifiers for the input and output streams on this MFT. virtual HRESULT STDMETHODCALLTYPE GetStreamIDs( DWORD dwInputIDArraySize, /* [size_is][out] */ __RPC__out_ecount_full(dwInputIDArraySize) DWORD *pdwInputIDs, DWORD dwOutputIDArraySize, /* [size_is][out] */ __RPC__out_ecount_full(dwOutputIDArraySize) DWORD *pdwOutputIDs) = 0; Gets the buffer requirements and other information for an input stream on this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE GetInputStreamInfo( DWORD dwInputStreamID, /* [out] */ __RPC__out MFT_INPUT_STREAM_INFO *pStreamInfo) = 0; Gets the buffer requirements and other information for an output stream on this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE GetOutputStreamInfo( DWORD dwOutputStreamID, /* [out] */ __RPC__out MFT_OUTPUT_STREAM_INFO *pStreamInfo) = 0; Gets the global attribute store for this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE GetAttributes( /* [out] */ __RPC__deref_out_opt IMFAttributes **pAttributes) = 0; Retrieves the attribute store for an input stream on this MFT. virtual HRESULT STDMETHODCALLTYPE GetInputStreamAttributes( DWORD dwInputStreamID, /* [out] */ __RPC__deref_out_opt IMFAttributes **pAttributes) = 0; Retrieves the attribute store for an output stream on this MFT. virtual HRESULT STDMETHODCALLTYPE GetOutputStreamAttributes( DWORD dwOutputStreamID, /* [out] */ __RPC__deref_out_opt IMFAttributes **pAttributes) = 0; Removes an input stream from this MFT. virtual HRESULT STDMETHODCALLTYPE DeleteInputStream( DWORD dwStreamID) = 0; Adds one or more new input streams to this MFT. virtual HRESULT STDMETHODCALLTYPE AddInputStreams( DWORD cStreams, /* [in] */ __RPC__in DWORD *adwStreamIDs) = 0; Gets an available media type for an input stream on this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE GetInputAvailableType( DWORD dwInputStreamID, DWORD dwTypeIndex, /* [out] */ __RPC__deref_out_opt IMFMediaType **ppType) = 0; Retrieves an available media type for an output stream on this MFT. 
virtual HRESULT STDMETHODCALLTYPE GetOutputAvailableType( DWORD dwOutputStreamID, DWORD dwTypeIndex, /* [out] */ __RPC__deref_out_opt IMFMediaType **ppType) = 0; Sets, tests, or clears the media type for an input stream on this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE SetInputType( DWORD dwInputStreamID, /* [in] */ __RPC__in_opt IMFMediaType *pType, DWORD dwFlags) = 0; Sets, tests, or clears the media type for an output stream on this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE SetOutputType( DWORD dwOutputStreamID, /* [in] */ __RPC__in_opt IMFMediaType *pType, DWORD dwFlags) = 0; Gets the current media type for an input stream on this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE GetInputCurrentType( DWORD dwInputStreamID, /* [out] */ __RPC__deref_out_opt IMFMediaType **ppType) = 0; Gets the current media type for an output stream on this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE GetOutputCurrentType( DWORD dwOutputStreamID, /* [out] */ __RPC__deref_out_opt IMFMediaType **ppType) = 0; Queries whether an input stream on this Media Foundation transform (MFT) can accept more data. virtual HRESULT STDMETHODCALLTYPE GetInputStatus( DWORD dwInputStreamID, /* [out] */ __RPC__out DWORD *pdwFlags) = 0; Queries whether the Media Foundation transform (MFT) is ready to produce output data. virtual HRESULT STDMETHODCALLTYPE GetOutputStatus( /* [out] */ __RPC__out DWORD *pdwFlags) = 0; Sets the range of time stamps the client needs for output. virtual HRESULT STDMETHODCALLTYPE SetOutputBounds( LONGLONG hnsLowerBound, LONGLONG hnsUpperBound) = 0; Sends an event to an input stream on this Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE ProcessEvent( DWORD dwInputStreamID, /* [in] */ __RPC__in_opt IMFMediaEvent *pEvent) = 0; Sends a message to the Media Foundation transform (MFT). virtual HRESULT STDMETHODCALLTYPE ProcessMessage( MFT_MESSAGE_TYPE eMessage, ULONG_PTR ulParam) = 0; Delivers data to an input stream on this Media Foundation transform (MFT). virtual /* [local] */ HRESULT STDMETHODCALLTYPE ProcessInput( DWORD dwInputStreamID, IMFSample *pSample, DWORD dwFlags) = 0; Generates output from the current input data. virtual /* [local] */ HRESULT STDMETHODCALLTYPE ProcessOutput( DWORD dwFlags, DWORD cOutputBufferCount, /* [size_is][out][in] */ MFT_OUTPUT_DATA_BUFFER *pOutputSamples, /* [out] */ DWORD *pdwStatus) = 0; See mfobjects.h Unknown event type. Signals a serious error. Custom event type. A non-fatal error occurred during streaming. Session Unknown Raised after the IMFMediaSession::SetTopology method completes asynchronously Raised by the Media Session when the IMFMediaSession::ClearTopologies method completes asynchronously. Raised when the IMFMediaSession::Start method completes asynchronously. Raised when the IMFMediaSession::Pause method completes asynchronously. Raised when the IMFMediaSession::Stop method completes asynchronously. Raised when the IMFMediaSession::Close method completes asynchronously. Raised by the Media Session when it has finished playing the last presentation in the playback queue. Raised by the Media Session when the playback rate changes. Raised by the Media Session when it completes a scrubbing request. Raised by the Media Session when the session capabilities change. Raised by the Media Session when the status of a topology changes. Raised by the Media Session when a new presentation starts. Raised by a media source a new presentation is ready. 
License acquisition is about to begin. License acquisition is complete. Individualization is about to begin. Individualization is complete. Signals the progress of a content enabler object. A content enabler object's action is complete. Raised by a trusted output if an error occurs while enforcing the output policy. Contains status information about the enforcement of an output policy. A media source started to buffer data. A media source stopped buffering data. The network source started opening a URL. The network source finished opening a URL. Raised by a media source at the start of a reconnection attempt. Raised by a media source at the end of a reconnection attempt. Raised by the enhanced video renderer (EVR) when it receives a user event from the presenter. Raised by the Media Session when the format changes on a media sink. Source Unknown Raised when a media source starts without seeking. Raised by a media stream when the source starts without seeking. Raised when a media source seeks to a new position. Raised by a media stream after a call to IMFMediaSource::Start causes a seek in the stream. Raised by a media source when it starts a new stream. Raised by a media source when it restarts or seeks a stream that is already active. Raised by a media source when the IMFMediaSource::Stop method completes asynchronously. Raised by a media stream when the IMFMediaSource::Stop method completes asynchronously. Raised by a media source when the IMFMediaSource::Pause method completes asynchronously. Raised by a media stream when the IMFMediaSource::Pause method completes asynchronously. Raised by a media source when a presentation ends. Raised by a media stream when the stream ends. Raised when a media stream delivers a new sample. Signals that a media stream does not have data available at a specified time. Raised by a media stream when it starts or stops thinning the stream. Raised by a media stream when the media type of the stream changes. Raised by a media source when the playback rate changes. Raised by the sequencer source when a segment is completed and is followed by another segment. Raised by a media source when the source's characteristics change. Raised by a media source to request a new playback rate. Raised by a media source when it updates its metadata. Raised by the sequencer source when the IMFSequencerSource::UpdateTopology method completes asynchronously. Sink Unknown Raised by a stream sink when it completes the transition to the running state. Raised by a stream sink when it completes the transition to the stopped state. Raised by a stream sink when it completes the transition to the paused state. Raised by a stream sink when the rate has changed. Raised by a stream sink to request a new media sample from the pipeline. Raised by a stream sink after the IMFStreamSink::PlaceMarker method is called. Raised by a stream sink when the stream has received enough preroll data to begin rendering. Raised by a stream sink when it completes a scrubbing request. Raised by a stream sink when the sink's media type is no longer valid. Raised by the stream sinks of the EVR if the video device changes. Provides feedback about playback quality to the quality manager. Raised when a media sink becomes invalid. The audio session display name changed. The volume or mute state of the audio session changed The audio device was removed. The Windows audio server system was shut down. The grouping parameters changed for the audio session. The audio session icon changed. 
The default audio format for the audio device changed. The audio session was disconnected from a Windows Terminal Services session The audio session was preempted by an exclusive-mode connection. Trust Unknown The output policy for a stream changed. Content protection message The IMFOutputTrustAuthority::SetPolicy method completed. DRM License Backup Completed DRM License Backup Progress DRM License Restore Completed DRM License Restore Progress DRM License Acquisition Completed DRM Individualization Completed DRM Individualization Progress DRM Proximity Completed DRM License Store Cleaned DRM Revocation Download Completed Transform Unknown Sent by an asynchronous MFT to request a new input sample. Sent by an asynchronous MFT when new output data is available from the MFT. Sent by an asynchronous Media Foundation transform (MFT) when a drain operation is complete. Sent by an asynchronous MFT in response to an MFT_MESSAGE_COMMAND_MARKER message. Media Foundation attribute guids http://msdn.microsoft.com/en-us/library/windows/desktop/ms696989%28v=vs.85%29.aspx Specifies whether an MFT performs asynchronous processing. Enables the use of an asynchronous MFT. Contains flags for an MFT activation object. Specifies the category for an MFT. Contains the class identifier (CLSID) of an MFT. Contains the registered input types for a Media Foundation transform (MFT). Contains the registered output types for a Media Foundation transform (MFT). Contains the symbolic link for a hardware-based MFT. Contains the display name for a hardware-based MFT. Contains a pointer to the stream attributes of the connected stream on a hardware-based MFT. Specifies whether a hardware-based MFT is connected to another hardware-based MFT. Specifies the preferred output format for an encoder. Specifies whether an MFT is registered only in the application's process. Contains configuration properties for an encoder. Specifies whether a hardware device source uses the system time for time stamps. Contains an IMFFieldOfUseMFTUnlock pointer, which can be used to unlock the MFT. Contains the merit value of a hardware codec. Specifies whether a decoder is optimized for transcoding rather than for playback. Contains a pointer to the proxy object for the application's presentation descriptor. Contains a pointer to the presentation descriptor from the protected media path (PMP). Specifies the duration of a presentation, in 100-nanosecond units. Specifies the total size of the source file, in bytes. Specifies the audio encoding bit rate for the presentation, in bits per second. Specifies the video encoding bit rate for the presentation, in bits per second. Specifies the MIME type of the content. Specifies when a presentation was last modified. The identifier of the playlist element in the presentation. Contains the preferred RFC 1766 language of the media source. The time at which the presentation must begin, relative to the start of the media source. Specifies whether the audio streams in the presentation have a variable bit rate. Media type Major Type Media Type subtype Audio block alignment Audio average bytes per second Audio number of channels Audio samples per second Audio bits per sample Enables the source reader or sink writer to use hardware-based Media Foundation transforms (MFTs). Contains additional format data for a media type. Specifies for a media type whether each sample is independent of the other samples in the stream. Specifies for a media type whether the samples have a fixed size. 
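In NAudio these media type attributes surface as Guid fields on the MediaFoundationAttributes class and are applied to an IMFMediaType through the IMFAttributes setter methods. A small sketch follows, assuming the field names mirror the attribute names just listed (MF_MT_MAJOR_TYPE, MF_MT_SUBTYPE, MF_MT_AUDIO_SAMPLES_PER_SECOND and so on); in practice MediaFoundationApi.CreateMediaTypeFromWaveFormat builds the same thing for you.

    using NAudio.MediaFoundation;

    MediaFoundationApi.Startup();

    // Describe 44.1kHz, 16 bit, stereo PCM by hand
    IMFMediaType mediaType = MediaFoundationApi.CreateMediaType();
    mediaType.SetGUID(MediaFoundationAttributes.MF_MT_MAJOR_TYPE, MediaTypes.MFMediaType_Audio);
    mediaType.SetGUID(MediaFoundationAttributes.MF_MT_SUBTYPE, AudioSubtypes.MFAudioFormat_PCM);
    mediaType.SetUINT32(MediaFoundationAttributes.MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    mediaType.SetUINT32(MediaFoundationAttributes.MF_MT_AUDIO_NUM_CHANNELS, 2);
    mediaType.SetUINT32(MediaFoundationAttributes.MF_MT_AUDIO_BITS_PER_SAMPLE, 16);
    mediaType.SetUINT32(MediaFoundationAttributes.MF_MT_AUDIO_BLOCK_ALIGNMENT, 4);
    mediaType.SetUINT32(MediaFoundationAttributes.MF_MT_AUDIO_AVG_BYTES_PER_SECOND, 44100 * 4);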
Contains a DirectShow format GUID for a media type. Specifies the preferred legacy format structure to use when converting an audio media type. Specifies for a media type whether the media data is compressed. Approximate data rate of the video stream, in bits per second, for a video media type. Specifies the payload type of an Advanced Audio Coding (AAC) stream. 0 - The stream contains raw_data_block elements only 1 - Audio Data Transport Stream (ADTS). The stream contains an adts_sequence, as defined by MPEG-2. 2 - Audio Data Interchange Format (ADIF). The stream contains an adif_sequence, as defined by MPEG-2. 3 - The stream contains an MPEG-4 audio transport stream with a synchronization layer (LOAS) and a multiplex layer (LATM). Specifies the audio profile and level of an Advanced Audio Coding (AAC) stream, as defined by ISO/IEC 14496-3. Media Foundation Errors RANGES 14000 - 14999 = General Media Foundation errors 15000 - 15999 = ASF parsing errors 16000 - 16999 = Media Source errors 17000 - 17999 = MEDIAFOUNDATION Network Error Events 18000 - 18999 = MEDIAFOUNDATION WMContainer Error Events 19000 - 19999 = MEDIAFOUNDATION Media Sink Error Events 20000 - 20999 = Renderer errors 21000 - 21999 = Topology Errors 25000 - 25999 = Timeline Errors 26000 - 26999 = Unused 28000 - 28999 = Transform errors 29000 - 29999 = Content Protection errors 40000 - 40999 = Clock errors 41000 - 41999 = MF Quality Management Errors 42000 - 42999 = MF Transcode API Errors MessageId: MF_E_PLATFORM_NOT_INITIALIZED MessageText: Platform not initialized. Please call MFStartup().%0 MessageId: MF_E_BUFFERTOOSMALL MessageText: The buffer was too small to carry out the requested action.%0 MessageId: MF_E_INVALIDREQUEST MessageText: The request is invalid in the current state.%0 MessageId: MF_E_INVALIDSTREAMNUMBER MessageText: The stream number provided was invalid.%0 MessageId: MF_E_INVALIDMEDIATYPE MessageText: The data specified for the media type is invalid, inconsistent, or not supported by this object.%0 MessageId: MF_E_NOTACCEPTING MessageText: The callee is currently not accepting further input.%0 MessageId: MF_E_NOT_INITIALIZED MessageText: This object needs to be initialized before the requested operation can be carried out.%0 MessageId: MF_E_UNSUPPORTED_REPRESENTATION MessageText: The requested representation is not supported by this object.%0 MessageId: MF_E_NO_MORE_TYPES MessageText: An object ran out of media types to suggest therefore the requested chain of streaming objects cannot be completed.%0 MessageId: MF_E_UNSUPPORTED_SERVICE MessageText: The object does not support the specified service.%0 MessageId: MF_E_UNEXPECTED MessageText: An unexpected error has occurred in the operation requested.%0 MessageId: MF_E_INVALIDNAME MessageText: Invalid name.%0 MessageId: MF_E_INVALIDTYPE MessageText: Invalid type.%0 MessageId: MF_E_INVALID_FILE_FORMAT MessageText: The file does not conform to the relevant file format specification. 
MessageId: MF_E_INVALIDINDEX MessageText: Invalid index.%0 MessageId: MF_E_INVALID_TIMESTAMP MessageText: An invalid timestamp was given.%0 MessageId: MF_E_UNSUPPORTED_SCHEME MessageText: The scheme of the given URL is unsupported.%0 MessageId: MF_E_UNSUPPORTED_BYTESTREAM_TYPE MessageText: The byte stream type of the given URL is unsupported.%0 MessageId: MF_E_UNSUPPORTED_TIME_FORMAT MessageText: The given time format is unsupported.%0 MessageId: MF_E_NO_SAMPLE_TIMESTAMP MessageText: The Media Sample does not have a timestamp.%0 MessageId: MF_E_NO_SAMPLE_DURATION MessageText: The Media Sample does not have a duration.%0 MessageId: MF_E_INVALID_STREAM_DATA MessageText: The request failed because the data in the stream is corrupt.%0\n. MessageId: MF_E_RT_UNAVAILABLE MessageText: Real time services are not available.%0 MessageId: MF_E_UNSUPPORTED_RATE MessageText: The specified rate is not supported.%0 MessageId: MF_E_THINNING_UNSUPPORTED MessageText: This component does not support stream-thinning.%0 MessageId: MF_E_REVERSE_UNSUPPORTED MessageText: The call failed because no reverse playback rates are available.%0 MessageId: MF_E_UNSUPPORTED_RATE_TRANSITION MessageText: The requested rate transition cannot occur in the current state.%0 MessageId: MF_E_RATE_CHANGE_PREEMPTED MessageText: The requested rate change has been pre-empted and will not occur.%0 MessageId: MF_E_NOT_FOUND MessageText: The specified object or value does not exist.%0 MessageId: MF_E_NOT_AVAILABLE MessageText: The requested value is not available.%0 MessageId: MF_E_NO_CLOCK MessageText: The specified operation requires a clock and no clock is available.%0 MessageId: MF_S_MULTIPLE_BEGIN MessageText: This callback and state had already been passed in to this event generator earlier.%0 MessageId: MF_E_MULTIPLE_BEGIN MessageText: This callback has already been passed in to this event generator.%0 MessageId: MF_E_MULTIPLE_SUBSCRIBERS MessageText: Some component is already listening to events on this event generator.%0 MessageId: MF_E_TIMER_ORPHANED MessageText: This timer was orphaned before its callback time arrived.%0 MessageId: MF_E_STATE_TRANSITION_PENDING MessageText: A state transition is already pending.%0 MessageId: MF_E_UNSUPPORTED_STATE_TRANSITION MessageText: The requested state transition is unsupported.%0 MessageId: MF_E_UNRECOVERABLE_ERROR_OCCURRED MessageText: An unrecoverable error has occurred.%0 MessageId: MF_E_SAMPLE_HAS_TOO_MANY_BUFFERS MessageText: The provided sample has too many buffers.%0 MessageId: MF_E_SAMPLE_NOT_WRITABLE MessageText: The provided sample is not writable.%0 MessageId: MF_E_INVALID_KEY MessageText: The specified key is not valid. MessageId: MF_E_BAD_STARTUP_VERSION MessageText: You are calling MFStartup with the wrong MF_VERSION. Mismatched bits? 
MessageId: MF_E_UNSUPPORTED_CAPTION MessageText: The caption of the given URL is unsupported.%0 MessageId: MF_E_INVALID_POSITION MessageText: The operation on the current offset is not permitted.%0 MessageId: MF_E_ATTRIBUTENOTFOUND MessageText: The requested attribute was not found.%0 MessageId: MF_E_PROPERTY_TYPE_NOT_ALLOWED MessageText: The specified property type is not allowed in this context.%0 MessageId: MF_E_PROPERTY_TYPE_NOT_SUPPORTED MessageText: The specified property type is not supported.%0 MessageId: MF_E_PROPERTY_EMPTY MessageText: The specified property is empty.%0 MessageId: MF_E_PROPERTY_NOT_EMPTY MessageText: The specified property is not empty.%0 MessageId: MF_E_PROPERTY_VECTOR_NOT_ALLOWED MessageText: The vector property specified is not allowed in this context.%0 MessageId: MF_E_PROPERTY_VECTOR_REQUIRED MessageText: A vector property is required in this context.%0 MessageId: MF_E_OPERATION_CANCELLED MessageText: The operation is cancelled.%0 MessageId: MF_E_BYTESTREAM_NOT_SEEKABLE MessageText: The provided bytestream was expected to be seekable and it is not.%0 MessageId: MF_E_DISABLED_IN_SAFEMODE MessageText: The Media Foundation platform is disabled when the system is running in Safe Mode.%0 MessageId: MF_E_CANNOT_PARSE_BYTESTREAM MessageText: The Media Source could not parse the byte stream.%0 MessageId: MF_E_SOURCERESOLVER_MUTUALLY_EXCLUSIVE_FLAGS MessageText: Mutually exclusive flags have been specified to source resolver. This flag combination is invalid.%0 MessageId: MF_E_MEDIAPROC_WRONGSTATE MessageText: MediaProc is in the wrong state%0 MessageId: MF_E_RT_THROUGHPUT_NOT_AVAILABLE MessageText: Real time I/O service can not provide requested throughput.%0 MessageId: MF_E_RT_TOO_MANY_CLASSES MessageText: The workqueue cannot be registered with more classes.%0 MessageId: MF_E_RT_WOULDBLOCK MessageText: This operation cannot succeed because another thread owns this object.%0 MessageId: MF_E_NO_BITPUMP MessageText: Internal. Bitpump not found.%0 MessageId: MF_E_RT_OUTOFMEMORY MessageText: No more RT memory available.%0 MessageId: MF_E_RT_WORKQUEUE_CLASS_NOT_SPECIFIED MessageText: An MMCSS class has not been set for this work queue.%0 MessageId: MF_E_INSUFFICIENT_BUFFER MessageText: Insufficient memory for response.%0 MessageId: MF_E_CANNOT_CREATE_SINK MessageText: Activate failed to create mediasink. Call OutputNode::GetUINT32(MF_TOPONODE_MAJORTYPE) for more information. 
%0 MessageId: MF_E_BYTESTREAM_UNKNOWN_LENGTH MessageText: The length of the provided bytestream is unknown.%0 MessageId: MF_E_SESSION_PAUSEWHILESTOPPED MessageText: The media session cannot pause from a stopped state.%0 MessageId: MF_S_ACTIVATE_REPLACED MessageText: The activate could not be created in the remote process for some reason it was replaced with empty one.%0 MessageId: MF_E_FORMAT_CHANGE_NOT_SUPPORTED MessageText: The data specified for the media type is supported, but would require a format change, which is not supported by this object.%0 MessageId: MF_E_INVALID_WORKQUEUE MessageText: The operation failed because an invalid combination of workqueue ID and flags was specified.%0 MessageId: MF_E_DRM_UNSUPPORTED MessageText: No DRM support is available.%0 MessageId: MF_E_UNAUTHORIZED MessageText: This operation is not authorized.%0 MessageId: MF_E_OUT_OF_RANGE MessageText: The value is not in the specified or valid range.%0 MessageId: MF_E_INVALID_CODEC_MERIT MessageText: The registered codec merit is not valid.%0 MessageId: MF_E_HW_MFT_FAILED_START_STREAMING MessageText: Hardware MFT failed to start streaming due to lack of hardware resources.%0 MessageId: MF_S_ASF_PARSEINPROGRESS MessageText: Parsing is still in progress and is not yet complete.%0 MessageId: MF_E_ASF_PARSINGINCOMPLETE MessageText: Not enough data have been parsed to carry out the requested action.%0 MessageId: MF_E_ASF_MISSINGDATA MessageText: There is a gap in the ASF data provided.%0 MessageId: MF_E_ASF_INVALIDDATA MessageText: The data provided are not valid ASF.%0 MessageId: MF_E_ASF_OPAQUEPACKET MessageText: The packet is opaque, so the requested information cannot be returned.%0 MessageId: MF_E_ASF_NOINDEX MessageText: The requested operation failed since there is no appropriate ASF index.%0 MessageId: MF_E_ASF_OUTOFRANGE MessageText: The value supplied is out of range for this operation.%0 MessageId: MF_E_ASF_INDEXNOTLOADED MessageText: The index entry requested needs to be loaded before it can be available.%0 MessageId: MF_E_ASF_TOO_MANY_PAYLOADS MessageText: The packet has reached the maximum number of payloads.%0 MessageId: MF_E_ASF_UNSUPPORTED_STREAM_TYPE MessageText: Stream type is not supported.%0 MessageId: MF_E_ASF_DROPPED_PACKET MessageText: One or more ASF packets were dropped.%0 MessageId: MF_E_NO_EVENTS_AVAILABLE MessageText: There are no events available in the queue.%0 MessageId: MF_E_INVALID_STATE_TRANSITION MessageText: A media source cannot go from the stopped state to the paused state.%0 MessageId: MF_E_END_OF_STREAM MessageText: The media stream cannot process any more samples because there are no more samples in the stream.%0 MessageId: MF_E_SHUTDOWN MessageText: The request is invalid because Shutdown() has been called.%0 MessageId: MF_E_MP3_NOTFOUND MessageText: The MP3 object was not found.%0 MessageId: MF_E_MP3_OUTOFDATA MessageText: The MP3 parser ran out of data before finding the MP3 object.%0 MessageId: MF_E_MP3_NOTMP3 MessageText: The file is not really a MP3 file.%0 MessageId: MF_E_MP3_NOTSUPPORTED MessageText: The MP3 file is not supported.%0 MessageId: MF_E_NO_DURATION MessageText: The Media stream has no duration.%0 MessageId: MF_E_INVALID_FORMAT MessageText: The Media format is recognized but is invalid.%0 MessageId: MF_E_PROPERTY_NOT_FOUND MessageText: The property requested was not found.%0 MessageId: MF_E_PROPERTY_READ_ONLY MessageText: The property is read only.%0 MessageId: MF_E_PROPERTY_NOT_ALLOWED MessageText: The specified property is not allowed in this 
context.%0 MessageId: MF_E_MEDIA_SOURCE_NOT_STARTED MessageText: The media source is not started.%0 MessageId: MF_E_UNSUPPORTED_FORMAT MessageText: The Media format is recognized but not supported.%0 MessageId: MF_E_MP3_BAD_CRC MessageText: The MPEG frame has bad CRC.%0 MessageId: MF_E_NOT_PROTECTED MessageText: The file is not protected.%0 MessageId: MF_E_MEDIA_SOURCE_WRONGSTATE MessageText: The media source is in the wrong state%0 MessageId: MF_E_MEDIA_SOURCE_NO_STREAMS_SELECTED MessageText: No streams are selected in source presentation descriptor.%0 MessageId: MF_E_CANNOT_FIND_KEYFRAME_SAMPLE MessageText: No key frame sample was found.%0 MessageId: MF_E_NETWORK_RESOURCE_FAILURE MessageText: An attempt to acquire a network resource failed.%0 MessageId: MF_E_NET_WRITE MessageText: Error writing to the network.%0 MessageId: MF_E_NET_READ MessageText: Error reading from the network.%0 MessageId: MF_E_NET_REQUIRE_NETWORK MessageText: Internal. Entry cannot complete operation without network.%0 MessageId: MF_E_NET_REQUIRE_ASYNC MessageText: Internal. Async op is required.%0 MessageId: MF_E_NET_BWLEVEL_NOT_SUPPORTED MessageText: Internal. Bandwidth levels are not supported.%0 MessageId: MF_E_NET_STREAMGROUPS_NOT_SUPPORTED MessageText: Internal. Stream groups are not supported.%0 MessageId: MF_E_NET_MANUALSS_NOT_SUPPORTED MessageText: Manual stream selection is not supported.%0 MessageId: MF_E_NET_INVALID_PRESENTATION_DESCRIPTOR MessageText: Invalid presentation descriptor.%0 MessageId: MF_E_NET_CACHESTREAM_NOT_FOUND MessageText: Cannot find cache stream.%0 MessageId: MF_I_MANUAL_PROXY MessageText: The proxy setting is manual.%0 duplicate removed MessageId=17011 Severity=Informational Facility=MEDIAFOUNDATION SymbolicName=MF_E_INVALID_REQUEST Language=English The request is invalid in the current state.%0 . MessageId: MF_E_NET_REQUIRE_INPUT MessageText: Internal. Entry cannot complete operation without input.%0 MessageId: MF_E_NET_REDIRECT MessageText: The client redirected to another server.%0 MessageId: MF_E_NET_REDIRECT_TO_PROXY MessageText: The client is redirected to a proxy server.%0 MessageId: MF_E_NET_TOO_MANY_REDIRECTS MessageText: The client reached maximum redirection limit.%0 MessageId: MF_E_NET_TIMEOUT MessageText: The server, a computer set up to offer multimedia content to other computers, could not handle your request for multimedia content in a timely manner. Please try again later.%0 MessageId: MF_E_NET_CLIENT_CLOSE MessageText: The control socket is closed by the client.%0 MessageId: MF_E_NET_BAD_CONTROL_DATA MessageText: The server received invalid data from the client on the control connection.%0 MessageId: MF_E_NET_INCOMPATIBLE_SERVER MessageText: The server is not a compatible streaming media server.%0 MessageId: MF_E_NET_UNSAFE_URL MessageText: Url.%0 MessageId: MF_E_NET_CACHE_NO_DATA MessageText: Data is not available.%0 MessageId: MF_E_NET_EOL MessageText: End of line.%0 MessageId: MF_E_NET_BAD_REQUEST MessageText: The request could not be understood by the server.%0 MessageId: MF_E_NET_INTERNAL_SERVER_ERROR MessageText: The server encountered an unexpected condition which prevented it from fulfilling the request.%0 MessageId: MF_E_NET_SESSION_NOT_FOUND MessageText: Session not found.%0 MessageId: MF_E_NET_NOCONNECTION MessageText: There is no connection established with the Windows Media server. 
The operation failed.%0 MessageId: MF_E_NET_CONNECTION_FAILURE MessageText: The network connection has failed.%0 MessageId: MF_E_NET_INCOMPATIBLE_PUSHSERVER MessageText: The Server service that received the HTTP push request is not a compatible version of Windows Media Services (WMS). This error may indicate the push request was received by IIS instead of WMS. Ensure WMS is started and has the HTTP Server control protocol properly enabled and try again.%0 MessageId: MF_E_NET_SERVER_ACCESSDENIED MessageText: The Windows Media server is denying access. The username and/or password might be incorrect.%0 MessageId: MF_E_NET_PROXY_ACCESSDENIED MessageText: The proxy server is denying access. The username and/or password might be incorrect.%0 MessageId: MF_E_NET_CANNOTCONNECT MessageText: Unable to establish a connection to the server.%0 MessageId: MF_E_NET_INVALID_PUSH_TEMPLATE MessageText: The specified push template is invalid.%0 MessageId: MF_E_NET_INVALID_PUSH_PUBLISHING_POINT MessageText: The specified push publishing point is invalid.%0 MessageId: MF_E_NET_BUSY MessageText: The requested resource is in use.%0 MessageId: MF_E_NET_RESOURCE_GONE MessageText: The Publishing Point or file on the Windows Media Server is no longer available.%0 MessageId: MF_E_NET_ERROR_FROM_PROXY MessageText: The proxy experienced an error while attempting to contact the media server.%0 MessageId: MF_E_NET_PROXY_TIMEOUT MessageText: The proxy did not receive a timely response while attempting to contact the media server.%0 MessageId: MF_E_NET_SERVER_UNAVAILABLE MessageText: The server is currently unable to handle the request due to a temporary overloading or maintenance of the server.%0 MessageId: MF_E_NET_TOO_MUCH_DATA MessageText: The encoding process was unable to keep up with the amount of supplied data.%0 MessageId: MF_E_NET_SESSION_INVALID MessageText: Session not found.%0 MessageId: MF_E_OFFLINE_MODE MessageText: The requested URL is not available in offline mode.%0 MessageId: MF_E_NET_UDP_BLOCKED MessageText: A device in the network is blocking UDP traffic.%0 MessageId: MF_E_NET_UNSUPPORTED_CONFIGURATION MessageText: The specified configuration value is not supported.%0 MessageId: MF_E_NET_PROTOCOL_DISABLED MessageText: The networking protocol is disabled.%0 MessageId: MF_E_ALREADY_INITIALIZED MessageText: This object has already been initialized and cannot be re-initialized at this time.%0 MessageId: MF_E_BANDWIDTH_OVERRUN MessageText: The amount of data passed in exceeds the given bitrate and buffer window.%0 MessageId: MF_E_LATE_SAMPLE MessageText: The sample was passed in too late to be correctly processed.%0 MessageId: MF_E_FLUSH_NEEDED MessageText: The requested action cannot be carried out until the object is flushed and the queue is emptied.%0 MessageId: MF_E_INVALID_PROFILE MessageText: The profile is invalid.%0 MessageId: MF_E_INDEX_NOT_COMMITTED MessageText: The index that is being generated needs to be committed before the requested action can be carried out.%0 MessageId: MF_E_NO_INDEX MessageText: The index that is necessary for the requested action is not found.%0 MessageId: MF_E_CANNOT_INDEX_IN_PLACE MessageText: The requested index cannot be added in-place to the specified ASF content.%0 MessageId: MF_E_MISSING_ASF_LEAKYBUCKET MessageText: The ASF leaky bucket parameters must be specified in order to carry out this request.%0 MessageId: MF_E_INVALID_ASF_STREAMID MessageText: The stream id is invalid. 
The valid range for ASF stream id is from 1 to 127.%0 MessageId: MF_E_STREAMSINK_REMOVED MessageText: The requested Stream Sink has been removed and cannot be used.%0 MessageId: MF_E_STREAMSINKS_OUT_OF_SYNC MessageText: The various Stream Sinks in this Media Sink are too far out of sync for the requested action to take place.%0 MessageId: MF_E_STREAMSINKS_FIXED MessageText: Stream Sinks cannot be added to or removed from this Media Sink because its set of streams is fixed.%0 MessageId: MF_E_STREAMSINK_EXISTS MessageText: The given Stream Sink already exists.%0 MessageId: MF_E_SAMPLEALLOCATOR_CANCELED MessageText: Sample allocations have been canceled.%0 MessageId: MF_E_SAMPLEALLOCATOR_EMPTY MessageText: The sample allocator is currently empty, due to outstanding requests.%0 MessageId: MF_E_SINK_ALREADYSTOPPED MessageText: When we try to sopt a stream sink, it is already stopped %0 MessageId: MF_E_ASF_FILESINK_BITRATE_UNKNOWN MessageText: The ASF file sink could not reserve AVIO because the bitrate is unknown.%0 MessageId: MF_E_SINK_NO_STREAMS MessageText: No streams are selected in sink presentation descriptor.%0 MessageId: MF_S_SINK_NOT_FINALIZED MessageText: The sink has not been finalized before shut down. This may cause sink generate a corrupted content.%0 MessageId: MF_E_METADATA_TOO_LONG MessageText: A metadata item was too long to write to the output container.%0 MessageId: MF_E_SINK_NO_SAMPLES_PROCESSED MessageText: The operation failed because no samples were processed by the sink.%0 MessageId: MF_E_VIDEO_REN_NO_PROCAMP_HW MessageText: There is no available procamp hardware with which to perform color correction.%0 MessageId: MF_E_VIDEO_REN_NO_DEINTERLACE_HW MessageText: There is no available deinterlacing hardware with which to deinterlace the video stream.%0 MessageId: MF_E_VIDEO_REN_COPYPROT_FAILED MessageText: A video stream requires copy protection to be enabled, but there was a failure in attempting to enable copy protection.%0 MessageId: MF_E_VIDEO_REN_SURFACE_NOT_SHARED MessageText: A component is attempting to access a surface for sharing that is not shared.%0 MessageId: MF_E_VIDEO_DEVICE_LOCKED MessageText: A component is attempting to access a shared device that is already locked by another component.%0 MessageId: MF_E_NEW_VIDEO_DEVICE MessageText: The device is no longer available. The handle should be closed and a new one opened.%0 MessageId: MF_E_NO_VIDEO_SAMPLE_AVAILABLE MessageText: A video sample is not currently queued on a stream that is required for mixing.%0 MessageId: MF_E_NO_AUDIO_PLAYBACK_DEVICE MessageText: No audio playback device was found.%0 MessageId: MF_E_AUDIO_PLAYBACK_DEVICE_IN_USE MessageText: The requested audio playback device is currently in use.%0 MessageId: MF_E_AUDIO_PLAYBACK_DEVICE_INVALIDATED MessageText: The audio playback device is no longer present.%0 MessageId: MF_E_AUDIO_SERVICE_NOT_RUNNING MessageText: The audio service is not running.%0 MessageId: MF_E_TOPO_INVALID_OPTIONAL_NODE MessageText: The topology contains an invalid optional node. Possible reasons are incorrect number of outputs and inputs or optional node is at the beginning or end of a segment. %0 MessageId: MF_E_TOPO_CANNOT_FIND_DECRYPTOR MessageText: No suitable transform was found to decrypt the content. %0 MessageId: MF_E_TOPO_CODEC_NOT_FOUND MessageText: No suitable transform was found to encode or decode the content. 
%0 MessageId: MF_E_TOPO_CANNOT_CONNECT MessageText: Unable to find a way to connect nodes%0 MessageId: MF_E_TOPO_UNSUPPORTED MessageText: Unsupported operations in topoloader%0 MessageId: MF_E_TOPO_INVALID_TIME_ATTRIBUTES MessageText: The topology or its nodes contain incorrectly set time attributes%0 MessageId: MF_E_TOPO_LOOPS_IN_TOPOLOGY MessageText: The topology contains loops, which are unsupported in media foundation topologies%0 MessageId: MF_E_TOPO_MISSING_PRESENTATION_DESCRIPTOR MessageText: A source stream node in the topology does not have a presentation descriptor%0 MessageId: MF_E_TOPO_MISSING_STREAM_DESCRIPTOR MessageText: A source stream node in the topology does not have a stream descriptor%0 MessageId: MF_E_TOPO_STREAM_DESCRIPTOR_NOT_SELECTED MessageText: A stream descriptor was set on a source stream node but it was not selected on the presentation descriptor%0 MessageId: MF_E_TOPO_MISSING_SOURCE MessageText: A source stream node in the topology does not have a source%0 MessageId: MF_E_TOPO_SINK_ACTIVATES_UNSUPPORTED MessageText: The topology loader does not support sink activates on output nodes.%0 MessageId: MF_E_SEQUENCER_UNKNOWN_SEGMENT_ID MessageText: The sequencer cannot find a segment with the given ID.%0\n. MessageId: MF_S_SEQUENCER_CONTEXT_CANCELED MessageText: The context was canceled.%0\n. MessageId: MF_E_NO_SOURCE_IN_CACHE MessageText: Cannot find source in source cache.%0\n. MessageId: MF_S_SEQUENCER_SEGMENT_AT_END_OF_STREAM MessageText: Cannot update topology flags.%0\n. MessageId: MF_E_TRANSFORM_TYPE_NOT_SET MessageText: A valid type has not been set for this stream or a stream that it depends on.%0 MessageId: MF_E_TRANSFORM_STREAM_CHANGE MessageText: A stream change has occurred. Output cannot be produced until the streams have been renegotiated.%0 MessageId: MF_E_TRANSFORM_INPUT_REMAINING MessageText: The transform cannot take the requested action until all of the input data it currently holds is processed or flushed.%0 MessageId: MF_E_TRANSFORM_PROFILE_MISSING MessageText: The transform requires a profile but no profile was supplied or found.%0 MessageId: MF_E_TRANSFORM_PROFILE_INVALID_OR_CORRUPT MessageText: The transform requires a profile but the supplied profile was invalid or corrupt.%0 MessageId: MF_E_TRANSFORM_PROFILE_TRUNCATED MessageText: The transform requires a profile but the supplied profile ended unexpectedly while parsing.%0 MessageId: MF_E_TRANSFORM_PROPERTY_PID_NOT_RECOGNIZED MessageText: The property ID does not match any property supported by the transform.%0 MessageId: MF_E_TRANSFORM_PROPERTY_VARIANT_TYPE_WRONG MessageText: The variant does not have the type expected for this property ID.%0 MessageId: MF_E_TRANSFORM_PROPERTY_NOT_WRITEABLE MessageText: An attempt was made to set the value on a read-only property.%0 MessageId: MF_E_TRANSFORM_PROPERTY_ARRAY_VALUE_WRONG_NUM_DIM MessageText: The array property value has an unexpected number of dimensions.%0 MessageId: MF_E_TRANSFORM_PROPERTY_VALUE_SIZE_WRONG MessageText: The array or blob property value has an unexpected size.%0 MessageId: MF_E_TRANSFORM_PROPERTY_VALUE_OUT_OF_RANGE MessageText: The property value is out of range for this transform.%0 MessageId: MF_E_TRANSFORM_PROPERTY_VALUE_INCOMPATIBLE MessageText: The property value is incompatible with some other property or mediatype set on the transform.%0 MessageId: MF_E_TRANSFORM_NOT_POSSIBLE_FOR_CURRENT_OUTPUT_MEDIATYPE MessageText: The requested operation is not supported for the currently set output mediatype.%0 MessageId: 
MF_E_TRANSFORM_NOT_POSSIBLE_FOR_CURRENT_INPUT_MEDIATYPE MessageText: The requested operation is not supported for the currently set input mediatype.%0 MessageId: MF_E_TRANSFORM_NOT_POSSIBLE_FOR_CURRENT_MEDIATYPE_COMBINATION MessageText: The requested operation is not supported for the currently set combination of mediatypes.%0 MessageId: MF_E_TRANSFORM_CONFLICTS_WITH_OTHER_CURRENTLY_ENABLED_FEATURES MessageText: The requested feature is not supported in combination with some other currently enabled feature.%0 MessageId: MF_E_TRANSFORM_NEED_MORE_INPUT MessageText: The transform cannot produce output until it gets more input samples.%0 MessageId: MF_E_TRANSFORM_NOT_POSSIBLE_FOR_CURRENT_SPKR_CONFIG MessageText: The requested operation is not supported for the current speaker configuration.%0 MessageId: MF_E_TRANSFORM_CANNOT_CHANGE_MEDIATYPE_WHILE_PROCESSING MessageText: The transform cannot accept mediatype changes in the middle of processing.%0 MessageId: MF_S_TRANSFORM_DO_NOT_PROPAGATE_EVENT MessageText: The caller should not propagate this event to downstream components.%0 MessageId: MF_E_UNSUPPORTED_D3D_TYPE MessageText: The input type is not supported for D3D device.%0 MessageId: MF_E_TRANSFORM_ASYNC_LOCKED MessageText: The caller does not appear to support this transform's asynchronous capabilities.%0 MessageId: MF_E_TRANSFORM_CANNOT_INITIALIZE_ACM_DRIVER MessageText: An audio compression manager driver could not be initialized by the transform.%0 MessageId: MF_E_LICENSE_INCORRECT_RIGHTS MessageText: You are not allowed to open this file. Contact the content provider for further assistance.%0 MessageId: MF_E_LICENSE_OUTOFDATE MessageText: The license for this media file has expired. Get a new license or contact the content provider for further assistance.%0 MessageId: MF_E_LICENSE_REQUIRED MessageText: You need a license to perform the requested operation on this media file.%0 MessageId: MF_E_DRM_HARDWARE_INCONSISTENT MessageText: The licenses for your media files are corrupted. 
Contact Microsoft product support.%0 MessageId: MF_E_NO_CONTENT_PROTECTION_MANAGER MessageText: The APP needs to provide IMFContentProtectionManager callback to access the protected media file.%0 MessageId: MF_E_LICENSE_RESTORE_NO_RIGHTS MessageText: Client does not have rights to restore licenses.%0 MessageId: MF_E_BACKUP_RESTRICTED_LICENSE MessageText: Licenses are restricted and hence can not be backed up.%0 MessageId: MF_E_LICENSE_RESTORE_NEEDS_INDIVIDUALIZATION MessageText: License restore requires machine to be individualized.%0 MessageId: MF_S_PROTECTION_NOT_REQUIRED MessageText: Protection for stream is not required.%0 MessageId: MF_E_COMPONENT_REVOKED MessageText: Component is revoked.%0 MessageId: MF_E_TRUST_DISABLED MessageText: Trusted functionality is currently disabled on this component.%0 MessageId: MF_E_WMDRMOTA_NO_ACTION MessageText: No Action is set on WMDRM Output Trust Authority.%0 MessageId: MF_E_WMDRMOTA_ACTION_ALREADY_SET MessageText: Action is already set on WMDRM Output Trust Authority.%0 MessageId: MF_E_WMDRMOTA_DRM_HEADER_NOT_AVAILABLE MessageText: DRM Heaader is not available.%0 MessageId: MF_E_WMDRMOTA_DRM_ENCRYPTION_SCHEME_NOT_SUPPORTED MessageText: Current encryption scheme is not supported.%0 MessageId: MF_E_WMDRMOTA_ACTION_MISMATCH MessageText: Action does not match with current configuration.%0 MessageId: MF_E_WMDRMOTA_INVALID_POLICY MessageText: Invalid policy for WMDRM Output Trust Authority.%0 MessageId: MF_E_POLICY_UNSUPPORTED MessageText: The policies that the Input Trust Authority requires to be enforced are unsupported by the outputs.%0 MessageId: MF_E_OPL_NOT_SUPPORTED MessageText: The OPL that the license requires to be enforced are not supported by the Input Trust Authority.%0 MessageId: MF_E_TOPOLOGY_VERIFICATION_FAILED MessageText: The topology could not be successfully verified.%0 MessageId: MF_E_SIGNATURE_VERIFICATION_FAILED MessageText: Signature verification could not be completed successfully for this component.%0 MessageId: MF_E_DEBUGGING_NOT_ALLOWED MessageText: Running this process under a debugger while using protected content is not allowed.%0 MessageId: MF_E_CODE_EXPIRED MessageText: MF component has expired.%0 MessageId: MF_E_GRL_VERSION_TOO_LOW MessageText: The current GRL on the machine does not meet the minimum version requirements.%0 MessageId: MF_E_GRL_RENEWAL_NOT_FOUND MessageText: The current GRL on the machine does not contain any renewal entries for the specified revocation.%0 MessageId: MF_E_GRL_EXTENSIBLE_ENTRY_NOT_FOUND MessageText: The current GRL on the machine does not contain any extensible entries for the specified extension GUID.%0 MessageId: MF_E_KERNEL_UNTRUSTED MessageText: The kernel isn't secure for high security level content.%0 MessageId: MF_E_PEAUTH_UNTRUSTED MessageText: The response from protected environment driver isn't valid.%0 MessageId: MF_E_NON_PE_PROCESS MessageText: A non-PE process tried to talk to PEAuth.%0 MessageId: MF_E_REBOOT_REQUIRED MessageText: We need to reboot the machine.%0 MessageId: MF_S_WAIT_FOR_POLICY_SET MessageText: Protection for this stream is not guaranteed to be enforced until the MEPolicySet event is fired.%0 MessageId: MF_S_VIDEO_DISABLED_WITH_UNKNOWN_SOFTWARE_OUTPUT MessageText: This video stream is disabled because it is being sent to an unknown software output.%0 MessageId: MF_E_GRL_INVALID_FORMAT MessageText: The GRL file is not correctly formed, it may have been corrupted or overwritten.%0 MessageId: MF_E_GRL_UNRECOGNIZED_FORMAT MessageText: The GRL file is in a format 
newer than those recognized by this GRL Reader.%0 MessageId: MF_E_ALL_PROCESS_RESTART_REQUIRED MessageText: The GRL was reloaded and required all processes that can run protected media to restart.%0 MessageId: MF_E_PROCESS_RESTART_REQUIRED MessageText: The GRL was reloaded and the current process needs to restart.%0 MessageId: MF_E_USERMODE_UNTRUSTED MessageText: The user space is untrusted for protected content play.%0 MessageId: MF_E_PEAUTH_SESSION_NOT_STARTED MessageText: PEAuth communication session hasn't been started.%0 MessageId: MF_E_PEAUTH_PUBLICKEY_REVOKED MessageText: PEAuth's public key is revoked.%0 MessageId: MF_E_GRL_ABSENT MessageText: The GRL is absent.%0 MessageId: MF_S_PE_TRUSTED MessageText: The Protected Environment is trusted.%0 MessageId: MF_E_PE_UNTRUSTED MessageText: The Protected Environment is untrusted.%0 MessageId: MF_E_PEAUTH_NOT_STARTED MessageText: The Protected Environment Authorization service (PEAUTH) has not been started.%0 MessageId: MF_E_INCOMPATIBLE_SAMPLE_PROTECTION MessageText: The sample protection algorithms supported by components are not compatible.%0 MessageId: MF_E_PE_SESSIONS_MAXED MessageText: No more protected environment sessions can be supported.%0 MessageId: MF_E_HIGH_SECURITY_LEVEL_CONTENT_NOT_ALLOWED MessageText: WMDRM ITA does not allow protected content with high security level for this release.%0 MessageId: MF_E_TEST_SIGNED_COMPONENTS_NOT_ALLOWED MessageText: WMDRM ITA cannot allow the requested action for the content as one or more components is not properly signed.%0 MessageId: MF_E_ITA_UNSUPPORTED_ACTION MessageText: WMDRM ITA does not support the requested action.%0 MessageId: MF_E_ITA_ERROR_PARSING_SAP_PARAMETERS MessageText: WMDRM ITA encountered an error in parsing the Secure Audio Path parameters.%0 MessageId: MF_E_POLICY_MGR_ACTION_OUTOFBOUNDS MessageText: The Policy Manager action passed in is invalid.%0 MessageId: MF_E_BAD_OPL_STRUCTURE_FORMAT MessageText: The structure specifying Output Protection Level is not the correct format.%0 MessageId: MF_E_ITA_UNRECOGNIZED_ANALOG_VIDEO_PROTECTION_GUID MessageText: WMDRM ITA does not recognize the Explicite Analog Video Output Protection guid specified in the license.%0 MessageId: MF_E_NO_PMP_HOST MessageText: IMFPMPHost object not available.%0 MessageId: MF_E_ITA_OPL_DATA_NOT_INITIALIZED MessageText: WMDRM ITA could not initialize the Output Protection Level data.%0 MessageId: MF_E_ITA_UNRECOGNIZED_ANALOG_VIDEO_OUTPUT MessageText: WMDRM ITA does not recognize the Analog Video Output specified by the OTA.%0 MessageId: MF_E_ITA_UNRECOGNIZED_DIGITAL_VIDEO_OUTPUT MessageText: WMDRM ITA does not recognize the Digital Video Output specified by the OTA.%0 MessageId: MF_E_CLOCK_INVALID_CONTINUITY_KEY MessageText: The continuity key supplied is not currently valid.%0 MessageId: MF_E_CLOCK_NO_TIME_SOURCE MessageText: No Presentation Time Source has been specified.%0 MessageId: MF_E_CLOCK_STATE_ALREADY_SET MessageText: The clock is already in the requested state.%0 MessageId: MF_E_CLOCK_NOT_SIMPLE MessageText: The clock has too many advanced features to carry out the request.%0 MessageId: MF_S_CLOCK_STOPPED MessageText: Timer::SetTimer returns this success code if called happened while timer is stopped. 
Timer is not going to be dispatched until clock is running%0 MessageId: MF_E_NO_MORE_DROP_MODES MessageText: The component does not support any more drop modes.%0 MessageId: MF_E_NO_MORE_QUALITY_LEVELS MessageText: The component does not support any more quality levels.%0 MessageId: MF_E_DROPTIME_NOT_SUPPORTED MessageText: The component does not support drop time functionality.%0 MessageId: MF_E_QUALITYKNOB_WAIT_LONGER MessageText: Quality Manager needs to wait longer before bumping the Quality Level up.%0 MessageId: MF_E_QM_INVALIDSTATE MessageText: Quality Manager is in an invalid state. Quality Management is off at this moment.%0 MessageId: MF_E_TRANSCODE_NO_CONTAINERTYPE MessageText: No transcode output container type is specified.%0 MessageId: MF_E_TRANSCODE_PROFILE_NO_MATCHING_STREAMS MessageText: The profile does not have a media type configuration for any selected source streams.%0 MessageId: MF_E_TRANSCODE_NO_MATCHING_ENCODER MessageText: Cannot find an encoder MFT that accepts the user preferred output type.%0 MessageId: MF_E_ALLOCATOR_NOT_INITIALIZED MessageText: Memory allocator is not initialized.%0 MessageId: MF_E_ALLOCATOR_NOT_COMMITED MessageText: Memory allocator is not committed yet.%0 MessageId: MF_E_ALLOCATOR_ALREADY_COMMITED MessageText: Memory allocator has already been committed.%0 MessageId: MF_E_STREAM_ERROR MessageText: An error occurred in media stream.%0 MessageId: MF_E_INVALID_STREAM_STATE MessageText: Stream is not in a state to handle the request.%0 MessageId: MF_E_HW_STREAM_NOT_CONNECTED MessageText: Hardware stream is not connected yet.%0 Main interface for using Media Foundation with NAudio initializes MediaFoundation - only needs to be called once per process Enumerate the installed MediaFoundation transforms in the specified category A category from MediaFoundationTransformCategories uninitializes MediaFoundation Creates a Media type Creates a media type from a WaveFormat Creates a memory buffer of the specified size Memory buffer size in bytes The memory buffer Creates a sample object The sample object Creates a new attributes store Initial size The attributes store Creates a media foundation byte stream based on a stream object (usable with WinRT streams) The input stream A media foundation byte stream Creates a source reader based on a byte stream The byte stream A media foundation source reader Interop definitions for MediaFoundation thanks to Lucian Wischik for the initial work on many of these definitions (also various interfaces) n.b. the goal is to make as much of this internal as possible, and provide better .NET APIs using the MediaFoundationApi class instead Initializes Microsoft Media Foundation. Shuts down the Microsoft Media Foundation platform Creates an empty media type. Initializes a media type from a WAVEFORMATEX structure. Converts a Media Foundation audio media type to a WAVEFORMATEX structure. TODO: try making second parameter out WaveFormatExtraData Creates the source reader from a URL. Creates the source reader from a byte stream. Creates the sink writer from a URL or byte stream. Creates a Microsoft Media Foundation byte stream that wraps an IRandomAccessStream object. Gets a list of Microsoft Media Foundation transforms (MFTs) that match specified search criteria. Creates an empty media sample. Allocates system memory and creates a media buffer to manage it. Creates an empty attribute store. Gets a list of output formats from an audio encoder. 
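Putting the helper methods above together, a typical session with MediaFoundationApi looks roughly like the sketch below: start the platform once per process, create the objects you need, and shut down at exit. EnumerateTransforms is assumed to yield one activation object per registered MFT in the given category.

    using System;
    using System.Runtime.InteropServices;
    using NAudio.MediaFoundation;
    using NAudio.Wave;

    MediaFoundationApi.Startup();   // once per process

    // Wrap a PCM WaveFormat as an IMFMediaType
    var mediaType = MediaFoundationApi.CreateMediaTypeFromWaveFormat(new WaveFormat(44100, 16, 2));

    // Count the installed audio decoder MFTs
    int decoders = 0;
    foreach (var activate in MediaFoundationApi.EnumerateTransforms(MediaFoundationTransformCategories.AudioDecoder))
    {
        decoders++;
    }
    Console.WriteLine($"{decoders} audio decoder MFTs registered");

    Marshal.ReleaseComObject(mediaType);
    MediaFoundationApi.Shutdown();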
All streams First audio stream First video stream Media source Media Foundation SDK Version Media Foundation API Version Media Foundation Version An abstract base class for simplifying working with Media Foundation Transforms You need to override the method that actually creates and configures the transform The Source Provider The Output WaveFormat Constructs a new MediaFoundationTransform wrapper Will read one second at a time The source provider for input data to the transform The desired output format To be implemented by overriding classes. Create the transform object, set up its input and output types, and configure any custom properties in here An object implementing IMFTransform Disposes this MediaFoundation transform Disposes this Media Foundation Transform Destructor The output WaveFormat of this Media Foundation Transform Reads data out of the source, passing it through the transform Output buffer Offset within buffer to write to Desired byte count Number of bytes read Attempts to read from the transform Some useful info here: http://msdn.microsoft.com/en-gb/library/windows/desktop/aa965264%28v=vs.85%29.aspx#process_data Indicate that the source has been repositioned and completely drain out the transform's buffers Media Foundation Transform Categories MFT_CATEGORY_VIDEO_DECODER MFT_CATEGORY_VIDEO_ENCODER MFT_CATEGORY_VIDEO_EFFECT MFT_CATEGORY_MULTIPLEXER MFT_CATEGORY_DEMULTIPLEXER MFT_CATEGORY_AUDIO_DECODER MFT_CATEGORY_AUDIO_ENCODER MFT_CATEGORY_AUDIO_EFFECT MFT_CATEGORY_VIDEO_PROCESSOR MFT_CATEGORY_OTHER Media Type helper class, simplifying working with IMFMediaType (will probably change in the future, to inherit from an attributes class) Currently does not release the COM object, so you must do that yourself Wraps an existing IMFMediaType object The IMFMediaType object Creates and wraps a new IMFMediaType object Creates and wraps a new IMFMediaType object based on a WaveFormat WaveFormat Tries to get a UINT32 value, returning a default value if it doesn't exist Attribute key Default value Value or default if key doesn't exist The Sample Rate (valid for audio media types) The number of Channels (valid for audio media types) The number of bits per sample (n.b. not always valid for compressed audio types) The average bytes per second (valid for audio media types) The Media Subtype. For audio, is a value from the AudioSubtypes class The Major type, e.g. audio or video (from the MediaTypes class) Access to the actual IMFMediaType object Use to pass to MF APIs or Marshal.ReleaseComObject when you are finished with it Major Media Types http://msdn.microsoft.com/en-us/library/windows/desktop/aa367377%28v=vs.85%29.aspx Default Audio Video Protected Media Synchronized Accessible Media Interchange (SAMI) captions. Script stream Still image stream. HTML stream. Binary stream. A stream that contains data files. Contains information about an input stream on a Media Foundation transform (MFT) Maximum amount of time between an input sample and the corresponding output sample, in 100-nanosecond units. Bitwise OR of zero or more flags from the _MFT_INPUT_STREAM_INFO_FLAGS enumeration. The minimum size of each input buffer, in bytes. Maximum amount of input data, in bytes, that the MFT holds to perform lookahead. The memory alignment required for input buffers. If the MFT does not require a specific alignment, the value is zero. Defines messages for a Media Foundation transform (MFT). Requests the MFT to flush all stored data. Requests the MFT to drain any stored data. 
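The MediaType wrapper class described above keeps the common audio attributes one property away, but, as noted, it does not release the underlying COM object for you. A short usage sketch (property names such as SampleRate and MediaFoundationObject follow the descriptions above):

    using System;
    using System.Runtime.InteropServices;
    using NAudio.MediaFoundation;
    using NAudio.Wave;

    MediaFoundationApi.Startup();

    var mediaType = new MediaType(new WaveFormat(48000, 16, 2));
    Console.WriteLine($"{mediaType.SampleRate} Hz, {mediaType.ChannelCount} channels, {mediaType.BitsPerSample} bits per sample");

    // The wrapper does not release the COM object itself
    Marshal.ReleaseComObject(mediaType.MediaFoundationObject);
    MediaFoundationApi.Shutdown();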
Sets or clears the Direct3D Device Manager for DirectX Video Accereration (DXVA). Drop samples - requires Windows 7 Command Tick - requires Windows 8 Notifies the MFT that streaming is about to begin. Notifies the MFT that streaming is about to end. Notifies the MFT that an input stream has ended. Notifies the MFT that the first sample is about to be processed. Marks a point in the stream. This message applies only to asynchronous MFTs. Requires Windows 7 Contains information about an output buffer for a Media Foundation transform. Output stream identifier. Pointer to the IMFSample interface. Before calling ProcessOutput, set this member to zero. Before calling ProcessOutput, set this member to NULL. Contains information about an output stream on a Media Foundation transform (MFT). Bitwise OR of zero or more flags from the _MFT_OUTPUT_STREAM_INFO_FLAGS enumeration. Minimum size of each output buffer, in bytes. The memory alignment required for output buffers. Contains media type information for registering a Media Foundation transform (MFT). The major media type. The Media Subtype Contains statistics about the performance of the sink writer. The size of the structure, in bytes. The time stamp of the most recent sample given to the sink writer. The time stamp of the most recent sample to be encoded. The time stamp of the most recent sample given to the media sink. The time stamp of the most recent stream tick. The system time of the most recent sample request from the media sink. The number of samples received. The number of samples encoded. The number of samples given to the media sink. The number of stream ticks received. The amount of data, in bytes, currently waiting to be processed. The total amount of data, in bytes, that has been sent to the media sink. The number of pending sample requests. The average rate, in media samples per 100-nanoseconds, at which the application sent samples to the sink writer. The average rate, in media samples per 100-nanoseconds, at which the sink writer sent samples to the encoder The average rate, in media samples per 100-nanoseconds, at which the sink writer sent samples to the media sink. Contains flags for registering and enumeration Media Foundation transforms (MFTs). None The MFT performs synchronous data processing in software. The MFT performs asynchronous data processing in software. The MFT performs hardware-based data processing, using either the AVStream driver or a GPU-based proxy MFT. The MFT that must be unlocked by the application before use. For enumeration, include MFTs that were registered in the caller's process. The MFT is optimized for transcoding rather than playback. For enumeration, sort and filter the results. Bitwise OR of all the flags, excluding MFT_ENUM_FLAG_SORTANDFILTER. Indicates the status of an input stream on a Media Foundation transform (MFT). None The input stream can receive more data at this time. Describes an input stream on a Media Foundation transform (MFT). No flags set Each media sample (IMFSample interface) of input data must contain complete, unbroken units of data. Each media sample that the client provides as input must contain exactly one unit of data, as defined for the MFT_INPUT_STREAM_WHOLE_SAMPLES flag. All input samples must be the same size. MTF Input Stream Holds buffers The MFT does not hold input samples after the IMFTransform::ProcessInput method returns. This input stream can be removed by calling IMFTransform::DeleteInputStream. This input stream is optional. 
The MFT can perform in-place processing. Defines flags for the IMFTransform::ProcessOutput method. None The MFT can still generate output from this stream without receiving any more input. The format has changed on this output stream, or there is a new preferred format for this stream. The MFT has removed this output stream. There is no sample ready for this stream. Indicates whether a Media Foundation transform (MFT) can produce output data. None There is a sample available for at least one output stream. Describes an output stream on a Media Foundation transform (MFT). No flags set Each media sample (IMFSample interface) of output data from the MFT contains complete, unbroken units of data. Each output sample contains exactly one unit of data, as defined for the MFT_OUTPUT_STREAM_WHOLE_SAMPLES flag. All output samples are the same size. The MFT can discard the output data from this output stream, if requested by the client. This output stream is optional. The MFT provides the output samples for this stream, either by allocating them internally or by operating directly on the input samples. The MFT can either provide output samples for this stream or it can use samples that the client allocates. The MFT does not require the client to process the output for this stream. The MFT might remove this output stream during streaming. Defines flags for processing output samples in a Media Foundation transform (MFT). None Do not produce output for streams in which the pSample member of the MFT_OUTPUT_DATA_BUFFER structure is NULL. Regenerates the last output sample. Process Output Status flags None The Media Foundation transform (MFT) has created one or more new output streams. Defines flags for setting or testing the media type on a Media Foundation transform (MFT). None Test the proposed media type, but do not set it. Represents a MIDI Channel AfterTouch Event. Creates a new ChannelAfterTouchEvent from raw MIDI data A binary reader Creates a new Channel After-Touch Event Absolute time Channel After-touch pressure Calls base class export first, then exports the data specific to this event MidiEvent.Export The aftertouch pressure value Represents a MIDI control change event Reads a control change event from a MIDI stream Binary reader on the MIDI stream Creates a control change event Time MIDI Channel Number The MIDI Controller Controller value Describes this control change event A string describing this event Calls base class export first, then exports the data specific to this event MidiEvent.Export The controller number The controller value Represents a MIDI key signature event Reads a new key signature event from a MIDI stream The MIDI stream the data length Creates a new Key signature event with the specified data Creates a deep clone of this MIDI event. Number of sharps or flats Major or Minor key Describes this event String describing the event Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI meta event Gets the type of this meta event Empty constructor Custom constructor for use by derived types, who will manage the data themselves Meta event type Meta data length Absolute time Creates a deep clone of this MIDI event. 
Reads a meta-event from a stream A binary reader based on the stream of MIDI data A new MetaEvent object Describes this meta event MIDI MetaEvent Type Track sequence number Text event Copyright Sequence track name Track instrument name Lyric Marker Cue point Program (patch) name Device (port) name MIDI Channel (not official?) MIDI Port (not official?) End track Set tempo SMPTE offset Time signature Key signature Sequencer specific MIDI command codes Note Off Note On Key After-touch Control change Patch change Channel after-touch Pitch wheel change Sysex message Eox (comes at end of a sysex message) Timing clock (used when synchronization is required) Start sequence Continue sequence Stop sequence Auto-Sensing Meta-event MidiController enumeration http://www.midi.org/techspecs/midimessages.php#3 Bank Select (MSB) Modulation (MSB) Breath Controller Foot controller (MSB) Main volume Pan Expression Bank Select LSB Sustain Portamento On/Off Sostenuto On/Off Soft Pedal On/Off Legato Footswitch Reset all controllers All notes off Represents an individual MIDI event The MIDI command code Creates a MidiEvent from a raw message received using the MME MIDI In APIs The short MIDI message A new MIDI Event Constructs a MidiEvent from a BinaryStream The binary stream of MIDI data The previous MIDI event (pass null for first event) A new MidiEvent Converts this MIDI event to a short message (32 bit integer) that can be sent by the Windows MIDI out short message APIs Cannot be implemented for all MIDI messages A short message Default constructor Creates a MIDI event with specified parameters Absolute time of this event MIDI channel number MIDI command code Creates a deep clone of this MIDI event. The MIDI Channel Number for this event (1-16) The Delta time for this event The absolute time for this event The command code for this event Whether this is a note off event Whether this is a note on event Determines if this is an end track event Displays a summary of the MIDI event A string containing a brief description of this MIDI event Utility function that can read a variable length integer from a binary stream The binary stream The integer read Writes a variable length integer to a binary stream Binary stream The value to write Exports this MIDI event's data Overriden in derived classes, but they should call this version Absolute time used to calculate delta. Is updated ready for the next delta calculation Stream to write to A helper class to manage collection of MIDI events It has the ability to organise them in tracks Creates a new Midi Event collection Initial file type Delta Ticks Per Quarter Note The number of tracks The absolute time that should be considered as time zero Not directly used here, but useful for timeshifting applications The number of ticks per quarter note Gets events on a specified track Track number The list of events Gets events on a specific track Track number The list of events Adds a new track The new track event list Adds a new track Initial events to add to the new track The new track event list Removes a track Track number to remove Clears all events The MIDI file type Adds an event to the appropriate track depending on file type The event to be added The original (or desired) track number When adding events in type 0 mode, the originalTrack parameter is ignored. If in type 1 mode, it will use the original track number to store the new events. 
If the original track was 0 and this is a channel based event, it will create new tracks if necessary and put it on the track corresponding to its channel number Sorts, removes empty tracks and adds end track markers Gets an enumerator for the lists of track events Gets an enumerator for the lists of track events Utility class for comparing MidiEvent objects Compares two MidiEvents Sorts by time, with EndTrack always sorted to the end Class able to read a MIDI file Opens a MIDI file for reading Name of MIDI file MIDI File format Opens a MIDI file for reading Name of MIDI file If true will error on non-paired note events Opens a MIDI file stream for reading The input stream containing a MIDI file If true will error on non-paired note events The collection of events in this MIDI file Number of tracks in this MIDI file Delta Ticks Per Quarter Note Describes the MIDI file A string describing the MIDI file and its events Exports a MIDI file Filename to export to Events to export Represents a MIDI in device Called when a MIDI message is received An invalid MIDI message Gets the number of MIDI input devices available in the system Opens a specified MIDI in device The device number Closes this MIDI in device Closes this MIDI in device Start the MIDI in device Stop the MIDI in device Reset the MIDI in device Gets the MIDI in device info Closes the MIDI out device True if called from Dispose Cleanup MIDI In Device Capabilities wMid wPid vDriverVersion Product Name Support - Reserved Gets the manufacturer of this device Gets the product identifier (manufacturer specific) Gets the product name MIDI In Message Information Create a new MIDI In Message EventArgs The Raw message received from the MIDI In API The raw message interpreted as a MidiEvent The timestamp in milliseconds for this message MIM_OPEN MIM_CLOSE MIM_DATA MIM_LONGDATA MIM_ERROR MIM_LONGERROR MIM_MOREDATA MOM_OPEN MOM_CLOSE MOM_DONE Represents a MIDI message Creates a new MIDI message Status Data parameter 1 Data parameter 2 Creates a new MIDI message from a raw message A packed MIDI message from an MMIO function Creates a Note On message Note number (0 to 127) Volume (0 to 127) MIDI channel (1 to 16) A new MidiMessage object Creates a Note Off message Note number Volume MIDI channel (1-16) A new MidiMessage object Creates a patch change message The patch number The MIDI channel number (1-16) A new MidiMessageObject Creates a Control Change message The controller number to change The value to set the controller to The MIDI channel number (1-16) A new MidiMessageObject Returns the raw MIDI message data Represents a MIDI out device Gets the number of MIDI devices available in the system Gets the MIDI Out device info Opens a specified MIDI out device The device number Closes this MIDI out device Closes this MIDI out device Gets or sets the volume for this MIDI out device Resets the MIDI out device Sends a MIDI out message Message Parameter 1 Parameter 2 Sends a MIDI message to the MIDI out device The message to send Closes the MIDI out device True if called from Dispose Send a long message, for example sysex. The bytes to send. 
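The MidiOut and MidiMessage members just described are enough to list the output devices and play a note. A short sketch (it assumes at least one MIDI output device is present):

    using System;
    using System.Threading;
    using NAudio.Midi;

    for (int device = 0; device < MidiOut.NumberOfDevices; device++)
    {
        Console.WriteLine($"{device}: {MidiOut.DeviceInfo(device).ProductName}");
    }

    using (var midiOut = new MidiOut(0))
    {
        // Middle C (note 60), velocity 100, on channel 1
        midiOut.Send(MidiMessage.StartNote(60, 100, 1).RawData);
        Thread.Sleep(500);
        midiOut.Send(MidiMessage.StopNote(60, 0, 1).RawData);
    }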
Cleanup class representing the capabilities of a MIDI out device MIDIOUTCAPS: http://msdn.microsoft.com/en-us/library/dd798467%28VS.85%29.aspx MIDICAPS_VOLUME separate left-right volume control MIDICAPS_LRVOLUME MIDICAPS_CACHE MIDICAPS_STREAM driver supports midiStreamOut directly Gets the manufacturer of this device Gets the product identifier (manufacturer specific) Gets the product name Returns the number of supported voices Gets the polyphony of the device Returns true if the device supports all channels Queries whether a particular channel is supported Channel number to test True if the channel is supported Returns true if the device supports patch caching Returns true if the device supports separate left and right volume Returns true if the device supports MIDI stream out Returns true if the device supports volume control Returns the type of technology used by this MIDI out device Represents the different types of technology used by a MIDI out device from mmsystem.h The device is a MIDI port The device is a MIDI synth The device is a square wave synth The device is an FM synth The device is a MIDI mapper The device is a WaveTable synth The device is a software synth Represents a note MIDI event Reads a NoteEvent from a stream of MIDI data Binary Reader for the stream Creates a MIDI Note Event with specified parameters Absolute time of this event MIDI channel number MIDI command code MIDI Note Number MIDI Note Velocity The MIDI note number The note velocity The note name Describes the Note Event Note event as a string Represents a MIDI note on event Reads a new Note On event from a stream of MIDI data Binary reader on the MIDI data stream Creates a NoteOn event with specified parameters Absolute time of this event MIDI channel number MIDI note number MIDI note velocity MIDI note duration Creates a deep clone of this MIDI event. The associated Note off event Get or set the Note Number, updating the off event at the same time Get or set the channel, updating the off event at the same time The duration of this note There must be a note off event Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI patch change event Gets the default MIDI instrument names Reads a new patch change event from a MIDI stream Binary reader for the MIDI stream Creates a new patch change event Time of the event Channel number Patch number The Patch Number Describes this patch change event String describing the patch change event Gets as a short message for sending with the midiOutShortMsg API short message Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI pitch wheel change event Reads a pitch wheel change event from a MIDI stream The MIDI stream to read from Creates a new pitch wheel change event Absolute event time Channel Pitch wheel value Describes this pitch wheel change event String describing this pitch wheel change event Pitch Wheel Value 0 is minimum, 0x2000 (8192) is default, 0x3FFF (16383) is maximum Gets a short message Integer to sent as short message Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI meta event with raw data Raw data contained in the meta event Creates a meta event with raw data Creates a deep clone of this MIDI event. 
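The MidiOut and MidiOutCapabilities members above can be combined into a short playback sketch: enumerate the output devices, open the first one, and send a note on/note off pair built with MidiMessage. The device index, note number and velocity are arbitrary values chosen for illustration.

    using System;
    using System.Threading;
    using NAudio.Midi;

    class MidiOutExample
    {
        static void Main()
        {
            for (int device = 0; device < MidiOut.NumberOfDevices; device++)
            {
                Console.WriteLine($"{device}: {MidiOut.DeviceInfo(device).ProductName}");
            }
            using (var midiOut = new MidiOut(0)) // first MIDI out device
            {
                // middle C, velocity 100, channel 1, held for half a second
                midiOut.Send(MidiMessage.StartNote(60, 100, 1).RawData);
                Thread.Sleep(500);
                midiOut.Send(MidiMessage.StopNote(60, 0, 1).RawData);
            }
        }
    }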
Describes this meta event Represents a Sequencer Specific event Reads a new sequencer specific event from a MIDI stream The MIDI stream The data length Creates a new Sequencer Specific event The sequencer specific data Absolute time of this event Creates a deep clone of this MIDI event. The contents of this sequencer specific event Describes this sequencer specific event A string describing this event Calls base class export first, then exports the data specific to this event MidiEvent.Export SMPTE Offset Event Creates a new SMPTE offset event Reads a new SMPTE offset event from a MIDI stream The MIDI stream The data length Creates a deep clone of this MIDI event. Hours Minutes Seconds Frames SubFrames Describes this SMPTE offset event A string describing this event Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI sysex message Reads a sysex message from a MIDI stream Stream of MIDI data a new sysex message Creates a deep clone of this MIDI event. Describes this sysex message A string describing the sysex message Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI tempo event Reads a new tempo event from a MIDI stream The MIDI stream the data length Creates a new tempo event with specified settings Microseconds per quarter note Absolute time Creates a deep clone of this MIDI event. Describes this tempo event String describing the tempo event Microseconds per quarter note Tempo Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI text event Reads a new text event from a MIDI stream The MIDI stream The data length Creates a new TextEvent The text in this type MetaEvent type (must be one that is associated with text data) Absolute time of this event Creates a deep clone of this MIDI event. The contents of this text event The raw contents of this text event Describes this MIDI text event A string describing this event Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI time signature event Reads a new time signature event from a MIDI stream The MIDI stream The data length Creates a new TimeSignatureEvent Time at which to create this event Numerator Denominator Ticks in Metronome Click No of 32nd Notes in Quarter Click Creates a deep clone of this MIDI event. Numerator (number of beats in a bar) Denominator (Beat unit), 1 means 2, 2 means 4 (crotchet), 3 means 8 (quaver), 4 means 16 and 5 means 32 Ticks in a metronome click Number of 32nd notes in a quarter note The time signature Describes this time signature event A string describing this event Calls base class export first, then exports the data specific to this event MidiEvent.Export Represents a MIDI track sequence number event Creates a new track sequence number event Reads a new track sequence number event from a MIDI stream The MIDI stream the data length Creates a deep clone of this MIDI event. 
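Putting the event types above together, the sketch below builds a minimal type 1 file containing a tempo event, a 4/4 time signature and a single one-beat note, then writes it out with MidiEventCollection and MidiFile.Export. The output filename is a placeholder, and the constructor argument orderings follow the parameter descriptions given here, so they should be checked against the library.

    using NAudio.Midi;

    class MidiWriteExample
    {
        static void Main()
        {
            const int ticksPerQuarterNote = 120;
            var events = new MidiEventCollection(1, ticksPerQuarterNote); // type 1 file

            // 120 bpm = 500,000 microseconds per quarter note; denominator 2 means a quarter-note beat unit
            events.AddEvent(new TempoEvent(500000, 0), 0);
            events.AddEvent(new TimeSignatureEvent(0, 4, 2, 24, 8), 0);

            // a one-beat middle C on channel 1; the paired note off comes from OffEvent
            var noteOn = new NoteOnEvent(0, 1, 60, 100, ticksPerQuarterNote);
            events.AddEvent(noteOn, 1);
            events.AddEvent(noteOn.OffEvent, 1);

            events.PrepareForExport(); // sorts tracks and adds end track markers
            MidiFile.Export("example-out.mid", events);
        }
    }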
Describes this event String describing the event Calls base class export first, then exports the data specific to this event MidiEvent.Export Boolean mixer control Gets the details for this control memory pointer The current value of the control Custom Mixer control Get the data for this custom control pointer to memory to receive data List text mixer control Get the details for this control Memory location to read to Represents a Windows mixer device The number of mixer devices available Connects to the specified mixer The index of the mixer to use. This should be between zero and NumberOfDevices - 1 The number of destinations this mixer supports The name of this mixer device The manufacturer code for this mixer device The product identifier code for this mixer device Retrieve the specified MixerDestination object The ID of the destination to use. Should be between 0 and DestinationCount - 1 A way to enumerate the destinations A way to enumerate all available devices Represents a mixer control Mixer Handle Number of Channels Mixer Handle Type Gets all the mixer controls Mixer Handle Mixer Line Mixer Handle Type Gets a specified Mixer Control Mixer Handle Line ID Control ID Number of Channels Flags to use (indicates the meaning of mixerHandle) Gets the control details Gets the control details Mixer control name Mixer control type Returns true if this is a boolean control Control type Is this a boolean control Determines whether a specified mixer control type is a list text control True if this is a list text control True if this is a signed control True if this is an unsigned control True if this is a custom control String representation for debug purposes Mixer control types Custom Boolean meter Signed meter Peak meter Unsigned meter Boolean On Off Mute Mono Loudness Stereo Enhance Button Decibels Signed Unsigned Percent Slider Pan Q-sound pan Fader Volume Bass Treble Equaliser Single Select Mux Multiple select Mixer Micro time Milli time Mixer Interop Flags MIXER_OBJECTF_HANDLE = 0x80000000; MIXER_OBJECTF_MIXER = 0x00000000; MIXER_OBJECTF_HMIXER MIXER_OBJECTF_WAVEOUT MIXER_OBJECTF_HWAVEOUT MIXER_OBJECTF_WAVEIN MIXER_OBJECTF_HWAVEIN MIXER_OBJECTF_MIDIOUT MIXER_OBJECTF_HMIDIOUT MIXER_OBJECTF_MIDIIN MIXER_OBJECTF_HMIDIIN MIXER_OBJECTF_AUX MIXER_GETCONTROLDETAILSF_VALUE = 0x00000000; MIXER_SETCONTROLDETAILSF_VALUE = 0x00000000; MIXER_GETCONTROLDETAILSF_LISTTEXT = 0x00000001; MIXER_SETCONTROLDETAILSF_LISTTEXT = 0x00000001; MIXER_GETCONTROLDETAILSF_QUERYMASK = 0x0000000F; MIXER_SETCONTROLDETAILSF_QUERYMASK = 0x0000000F; MIXER_GETLINECONTROLSF_QUERYMASK = 0x0000000F; MIXER_GETLINECONTROLSF_ALL = 0x00000000; MIXER_GETLINECONTROLSF_ONEBYID = 0x00000001; MIXER_GETLINECONTROLSF_ONEBYTYPE = 0x00000002; MIXER_GETLINEINFOF_DESTINATION = 0x00000000; MIXER_GETLINEINFOF_SOURCE = 0x00000001; MIXER_GETLINEINFOF_LINEID = 0x00000002; MIXER_GETLINEINFOF_COMPONENTTYPE = 0x00000003; MIXER_GETLINEINFOF_TARGETTYPE = 0x00000004; MIXER_GETLINEINFOF_QUERYMASK = 0x0000000F; Mixer Line Flags Audio line is active. An active line indicates that a signal is probably passing through the line. Audio line is disconnected. A disconnected line's associated controls can still be modified, but the changes have no effect until the line is connected. Audio line is an audio source line associated with a single audio destination line. If this flag is not set, this line is an audio destination line associated with zero or more audio source lines. 
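As a sketch of how the Mixer and MixerControl members above fit together, the loop below enumerates every mixer device, its destination lines, and the controls on each line. Property names such as Mixers, NumberOfDevices, DestinationCount, Destinations, Controls, Name and ControlType are inferred from the descriptions above and should be confirmed against the NAudio.Mixer namespace.

    using System;
    using NAudio.Mixer;

    class MixerDumpExample
    {
        static void Main()
        {
            Console.WriteLine($"Mixer devices: {Mixer.NumberOfDevices}");
            foreach (var mixer in Mixer.Mixers) // all available devices
            {
                Console.WriteLine($"{mixer.Name}: {mixer.DestinationCount} destination(s)");
                foreach (MixerLine destination in mixer.Destinations)
                {
                    Console.WriteLine($"  {destination.Name}");
                    foreach (MixerControl control in destination.Controls)
                    {
                        Console.WriteLine($"    {control.Name} ({control.ControlType})");
                    }
                }
            }
        }
    }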
BOUNDS structure dwMinimum / lMinimum / reserved 0 dwMaximum / lMaximum / reserved 1 reserved 2 reserved 3 reserved 4 reserved 5 METRICS structure cSteps / reserved[0] cbCustomData / reserved[1], number of bytes for control details reserved 2 reserved 3 reserved 4 reserved 5 MIXERCONTROL struct http://msdn.microsoft.com/en-us/library/dd757293%28VS.85%29.aspx Represents a mixer line (source or destination) Creates a new mixer destination Mixer Handle Destination Index Mixer Handle Type Creates a new Mixer Source For a Specified Source Mixer Handle Destination Index Source Index Flag indicating the meaning of mixerHandle Creates a new Mixer Source Wave In Device Mixer Line Name Mixer Line short name The line ID Component Type Mixer destination type description Number of channels Number of sources Number of controls Is this destination active Is this destination disconnected Is this destination a source Gets the specified source Enumerator for the controls on this Mixer Line Enumerator for the sources on this Mixer Line The name of the target output device Describes this Mixer Line (for diagnostic purposes) Mixer Line Component type enumeration Audio line is a destination that cannot be defined by one of the standard component types. A mixer device is required to use this component type for line component types that have not been defined by Microsoft Corporation. MIXERLINE_COMPONENTTYPE_DST_UNDEFINED Audio line is a digital destination (for example, digital input to a DAT or CD audio device). MIXERLINE_COMPONENTTYPE_DST_DIGITAL Audio line is a line level destination (for example, line level input from a CD audio device) that will be the final recording source for the analog-to-digital converter (ADC). Because most audio cards for personal computers provide some sort of gain for the recording audio source line, the mixer device will use the MIXERLINE_COMPONENTTYPE_DST_WAVEIN type. MIXERLINE_COMPONENTTYPE_DST_LINE Audio line is a destination used for a monitor. MIXERLINE_COMPONENTTYPE_DST_MONITOR Audio line is an adjustable (gain and/or attenuation) destination intended to drive speakers. This is the typical component type for the audio output of audio cards for personal computers. MIXERLINE_COMPONENTTYPE_DST_SPEAKERS Audio line is an adjustable (gain and/or attenuation) destination intended to drive headphones. Most audio cards use the same audio destination line for speakers and headphones, in which case the mixer device simply uses the MIXERLINE_COMPONENTTYPE_DST_SPEAKERS type. MIXERLINE_COMPONENTTYPE_DST_HEADPHONES Audio line is a destination that will be routed to a telephone line. MIXERLINE_COMPONENTTYPE_DST_TELEPHONE Audio line is a destination that will be the final recording source for the waveform-audio input (ADC). This line typically provides some sort of gain or attenuation. This is the typical component type for the recording line of most audio cards for personal computers. MIXERLINE_COMPONENTTYPE_DST_WAVEIN Audio line is a destination that will be the final recording source for voice input. This component type is exactly like MIXERLINE_COMPONENTTYPE_DST_WAVEIN but is intended specifically for settings used during voice recording/recognition. Support for this line is optional for a mixer device. Many mixer devices provide only MIXERLINE_COMPONENTTYPE_DST_WAVEIN. MIXERLINE_COMPONENTTYPE_DST_VOICEIN Audio line is a source that cannot be defined by one of the standard component types. 
A mixer device is required to use this component type for line component types that have not been defined by Microsoft Corporation. MIXERLINE_COMPONENTTYPE_SRC_UNDEFINED Audio line is a digital source (for example, digital output from a DAT or audio CD). MIXERLINE_COMPONENTTYPE_SRC_DIGITAL Audio line is a line-level source (for example, line-level input from an external stereo) that can be used as an optional recording source. Because most audio cards for personal computers provide some sort of gain for the recording source line, the mixer device will use the MIXERLINE_COMPONENTTYPE_SRC_AUXILIARY type. MIXERLINE_COMPONENTTYPE_SRC_LINE Audio line is a microphone recording source. Most audio cards for personal computers provide at least two types of recording sources: an auxiliary audio line and microphone input. A microphone audio line typically provides some sort of gain. Audio cards that use a single input for use with a microphone or auxiliary audio line should use the MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE component type. MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE Audio line is a source originating from the output of an internal synthesizer. Most audio cards for personal computers provide some sort of MIDI synthesizer (for example, an Adlib®-compatible or OPL/3 FM synthesizer). MIXERLINE_COMPONENTTYPE_SRC_SYNTHESIZER Audio line is a source originating from the output of an internal audio CD. This component type is provided for audio cards that provide an audio source line intended to be connected to an audio CD (or CD-ROM playing an audio CD). MIXERLINE_COMPONENTTYPE_SRC_COMPACTDISC Audio line is a source originating from an incoming telephone line. MIXERLINE_COMPONENTTYPE_SRC_TELEPHONE Audio line is a source originating from personal computer speaker. Several audio cards for personal computers provide the ability to mix what would typically be played on the internal speaker with the output of an audio card. Some audio cards support the ability to use this output as a recording source. MIXERLINE_COMPONENTTYPE_SRC_PCSPEAKER Audio line is a source originating from the waveform-audio output digital-to-analog converter (DAC). Most audio cards for personal computers provide this component type as a source to the MIXERLINE_COMPONENTTYPE_DST_SPEAKERS destination. Some cards also allow this source to be routed to the MIXERLINE_COMPONENTTYPE_DST_WAVEIN destination. MIXERLINE_COMPONENTTYPE_SRC_WAVEOUT Audio line is a source originating from the auxiliary audio line. This line type is intended as a source with gain or attenuation that can be routed to the MIXERLINE_COMPONENTTYPE_DST_SPEAKERS destination and/or recorded from the MIXERLINE_COMPONENTTYPE_DST_WAVEIN destination. MIXERLINE_COMPONENTTYPE_SRC_AUXILIARY Audio line is an analog source (for example, analog output from a video-cassette tape). 
MIXERLINE_COMPONENTTYPE_SRC_ANALOG Represents a signed mixer control Gets details for this control The value of the control Minimum value for this control Maximum value for this control Value of the control represented as a percentage String Representation for debugging purposes Represents an unsigned mixer control Gets the details for this control The control value The control's minimum value The control's maximum value Value of the control represented as a percentage String Representation for debugging purposes Helper methods for working with audio buffers Ensures the buffer is big enough Ensures the buffer is big enough these will become extension methods once we move to .NET 3.5 Checks if the buffer passed in is entirely full of nulls Converts to a string containing the buffer described in hex Decodes the buffer using the specified encoding, stopping at the first null Concatenates the given arrays into a single array. The arrays to concatenate The concatenated resulting array. An encoding for use with file types that have one byte per character The one and only instance of this class Chunk Identifier helpers Chunk identifier to Int32 (replaces mmioStringToFOURCC) four character chunk identifier Chunk identifier as int 32 A very basic circular buffer implementation Create a new circular buffer Max buffer size in bytes Write data to the buffer Data to write Offset into data Number of bytes to write number of bytes written Read from the buffer Buffer to read into Offset into read buffer Bytes to read Number of bytes actually read Maximum length of this circular buffer Number of bytes currently stored in the circular buffer Resets the buffer Advances the buffer, discarding bytes Bytes to advance A utility class for conversions linear to dB conversion linear value decibel value dB to linear conversion decibel value linear value Allows us to add descriptions to interop members The description Field description String representation Helper to get descriptions Describes the Guid by looking for a FieldDescription attribute on the specified class HResult S_OK S_FALSE E_INVALIDARG (from winerror.h) MAKE_HRESULT macro Helper to deal with the fact that in Win Store apps, the HResult property name has changed COM Exception The HResult Methods for converting between IEEE 80-bit extended double precision and standard C# double precision. Converts a C# double precision number to an 80-bit IEEE extended double precision number (occupying 10 bytes). The double precision number to convert to IEEE extended. An array of 10 bytes containing the IEEE extended number. Converts an IEEE 80-bit extended precision number to a C# double precision number. The 80-bit IEEE extended number (as an array of 10 bytes). A C# double precision number that is a close representation of the IEEE extended number. 
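The component type enumeration and the unsigned control type above are typically used together to locate a particular line, for example a microphone recording source, and read its volume as a percentage. The sketch below assumes the property and enum member names (Sources, ComponentType, SourceMicrophone, Percent) match the descriptions given here; the Decibels helper is shown at the end converting a linear value to decibels.

    using System;
    using NAudio.Mixer;
    using NAudio.Utils;

    class MicVolumeExample
    {
        static void Main()
        {
            var mixer = new Mixer(0); // first mixer device
            foreach (MixerLine destination in mixer.Destinations)
            {
                foreach (MixerLine source in destination.Sources)
                {
                    if (source.ComponentType != MixerLineComponentType.SourceMicrophone) continue;
                    foreach (MixerControl control in source.Controls)
                    {
                        if (control is UnsignedMixerControl volume)
                        {
                            Console.WriteLine($"{source.Name} volume: {volume.Percent:F0}%");
                        }
                    }
                }
            }
            // linear 0.5 is roughly -6 dB
            Console.WriteLine(Decibels.LinearToDecibels(0.5));
        }
    }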
Pass-through stream that ignores Dispose Useful for dealing with MemoryStreams that you want to re-use The source stream all other methods fall through to If true the Dispose will be ignored, if false, will pass through to the SourceStream Set to true by default Creates a new IgnoreDisposeStream The source stream Can Read Can Seek Can write to the underlying stream Flushes the underlying stream Gets the length of the underlying stream Gets or sets the position of the underlying stream Reads from the underlying stream Seeks on the underlying stream Sets the length of the underlying stream Writes to the underlying stream Dispose - by default (IgnoreDispose = true) will do nothing, leaving the underlying stream undisposed Support for Marshal Methods in both UWP and .NET 3.5 SizeOf a structure Offset of a field in a structure Pointer to Structure In-place and stable implementation of MergeSort MergeSort a list of comparable items MergeSort a list General purpose native methods for internal NAudio use WavePosition extension methods Get Position as timespan Manufacturer codes from mmreg.h Microsoft Corporation Creative Labs, Inc Media Vision, Inc. Fujitsu Corp. Artisoft, Inc. Turtle Beach, Inc. IBM Corporation Vocaltec LTD. Roland DSP Solutions, Inc. NEC ATI Wang Laboratories, Inc Tandy Corporation Voyetra Antex Electronics Corporation ICL Personal Systems Intel Corporation Advanced Gravis Video Associates Labs, Inc. InterActive Inc Yamaha Corporation of America Everex Systems, Inc Echo Speech Corporation Sierra Semiconductor Corp Computer Aided Technologies APPS Software International DSP Group, Inc microEngineering Labs Computer Friends, Inc. ESS Technology Audio, Inc. Motorola, Inc. Canopus, co., Ltd. Seiko Epson Corporation Truevision Aztech Labs, Inc. Videologic SCALACS Korg Inc. Audio Processing Technology Integrated Circuit Systems, Inc. Iterated Systems, Inc. Metheus Logitech, Inc. Winnov, Inc. NCR Corporation EXAN AST Research Inc. Willow Pond Corporation Sonic Foundry Vitec Multimedia MOSCOM Corporation Silicon Soft, Inc. Supermac Audio Processing Technology Speech Compression Ahead, Inc. Dolby Laboratories OKI AuraVision Corporation Ing C. Olivetti & C., S.p.A. I/O Magic Corporation Matsushita Electric Industrial Co., LTD. Control Resources Limited Xebec Multimedia Solutions Limited New Media Corporation Natural MicroSystems Lyrrus Inc. Compusic OPTi Computers Inc. Adlib Accessories Inc. Compaq Computer Corp. Dialogic Corporation InSoft, Inc. M.P. Technologies, Inc. Weitek Lernout & Hauspie Quanta Computer Inc. Apple Computer, Inc. Digital Equipment Corporation Mark of the Unicorn Workbit Corporation Ositech Communications Inc. miro Computer Products AG Cirrus Logic ISOLUTION B.V. Horizons Technology, Inc Computer Concepts Ltd Voice Technologies Group, Inc. Radius Rockwell International Co. XYZ for testing Opcode Systems Voxware Inc Northern Telecom Limited APICOM Grande Software ADDX Wildcat Canyon Software Rhetorex Inc Brooktree Corporation ENSONIQ Corporation FAST Multimedia AG NVidia Corporation OKSORI Co., Ltd. DiAcoustics, Inc. Gulbransen, Inc. Kay Elemetrics, Inc. Crystal Semiconductor Corporation Splash Studios Quarterdeck Corporation TDK Corporation Digital Audio Labs, Inc. Seer Systems, Inc. PictureTel Corporation AT&T Microelectronics Osprey Technologies, Inc. Mediatrix Peripherals SounDesignS M.C.S. Ltd. A.L. Digital Ltd. Spectrum Signal Processing, Inc. Electronic Courseware Systems, Inc. AMD Core Dynamics CANAM Computers Softsound, Ltd. Norris Communications, Inc. 
Danka Data Devices EuPhonics Precept Software, Inc. Crystal Net Corporation Chromatic Research, Inc Voice Information Systems, Inc Vienna Systems Connectix Corporation Gadget Labs LLC Frontier Design Group LLC Viona Development GmbH Casio Computer Co., LTD Diamond Multimedia S3 Fraunhofer Summary description for MmException. Creates a new MmException The result returned by the Windows API call The name of the Windows API that failed Helper function to automatically raise an exception on failure The result of the API call The API function name Returns the Windows API result Windows multimedia error codes from mmsystem.h. no error, MMSYSERR_NOERROR unspecified error, MMSYSERR_ERROR device ID out of range, MMSYSERR_BADDEVICEID driver failed enable, MMSYSERR_NOTENABLED device already allocated, MMSYSERR_ALLOCATED device handle is invalid, MMSYSERR_INVALHANDLE no device driver present, MMSYSERR_NODRIVER memory allocation error, MMSYSERR_NOMEM function isn't supported, MMSYSERR_NOTSUPPORTED error value out of range, MMSYSERR_BADERRNUM invalid flag passed, MMSYSERR_INVALFLAG invalid parameter passed, MMSYSERR_INVALPARAM handle being used simultaneously on another thread (eg callback),MMSYSERR_HANDLEBUSY specified alias not found, MMSYSERR_INVALIDALIAS bad registry database, MMSYSERR_BADDB registry key not found, MMSYSERR_KEYNOTFOUND registry read error, MMSYSERR_READERROR registry write error, MMSYSERR_WRITEERROR registry delete error, MMSYSERR_DELETEERROR registry value not found, MMSYSERR_VALNOTFOUND driver does not call DriverCallback, MMSYSERR_NODRIVERCB more data to be returned, MMSYSERR_MOREDATA unsupported wave format, WAVERR_BADFORMAT still something playing, WAVERR_STILLPLAYING header not prepared, WAVERR_UNPREPARED device is synchronous, WAVERR_SYNC Conversion not possible (ACMERR_NOTPOSSIBLE) Busy (ACMERR_BUSY) Header Unprepared (ACMERR_UNPREPARED) Cancelled (ACMERR_CANCELED) invalid line (MIXERR_INVALLINE) invalid control (MIXERR_INVALCONTROL) invalid value (MIXERR_INVALVALUE)
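A typical use of MmException and MmResult is the Try helper described above, which wraps the return value of a Windows multimedia API call and throws when it is anything other than success. The sketch below forces a failure with a hard-coded result code purely for illustration; enum member names such as NoError and BadDeviceId are inferred from the mmsystem.h names listed above and should be checked against the library.

    using System;
    using NAudio;

    class MmResultExample
    {
        static void Main()
        {
            try
            {
                // Try is a no-op for MmResult.NoError and throws an MmException
                // naming the failed API for any other result code
                MmException.Try(MmResult.BadDeviceId, "waveOutOpen");
            }
            catch (MmException e)
            {
                Console.WriteLine($"waveOutOpen failed: {e.Result} - {e.Message}");
            }
        }
    }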