author    henrika@chromium.org <henrika@chromium.org@0039d316-1c4b-4281-b951-d872f2087c98>  2012-12-14 16:08:35 +0000
committer henrika@chromium.org <henrika@chromium.org@0039d316-1c4b-4281-b951-d872f2087c98>  2012-12-14 16:08:35 +0000
commit    f47d99eb44515152996a4386d728d538705769cf (patch)
tree      f74dfd15a65ce03378b17bc172954353ac3c20b3 /media
parent    77e939c5e65a3976ac26eb465cf13c3aa60ecc15 (diff)
Adds support of rendering a local media stream for audio using HTML5 audio tag.
Overview:
=========
This patch ensures that a user can add a local media stream to an audio/video element and render the captured audio locally (in loopback).
Details:
========
Our current architecture is fairly convoluted, but I have done my best to add the new code in a structured way. I have a crbug issue assigned to myself to refactor this area, since we really must simplify it and make it less complex to work with.
One more client now implements webkit_media::MediaStreamAudioRenderer; this client is called WebRtcLocalAudioRenderer (WLAR).
The WLAR is created by the WebMediaPlayer when a local media stream is generated, which ensures that the controls for the media element become visible.
The main action takes place in WebRtcLocalAudioRenderer::Start() where I have gathered all the main stuff. This method is the best starting point for understanding the new data flow.
A reference to an existing WebRtcAudioCapturer (WAC) (owned by the WebRTC ADM) is given to the WLAR at construction. Calling Start() =>
- WLAR connects itself to the WAC using the WAC pointer from construction
- render audio parameters are copied from the capture side (since the output side does resampling etc.)
- creates and initializes a new AudioOutputDevice (AOD)
- starts the capturer and the new AOD
Media flow:
-----------
Data is recorded and fed to the WAC which knows that it is in "loopback mode". The WAC then stores recorded data in a FIFO. The WLAR consumes audio from the FIFO when the AOD needs data to render. The WLAR reads data from the FIFO using a callback.
Testing procedure:
==================
Main testing was done using a new WebRTC demo at https://www.corp.google.com/~henrika/webrtc/gum4.html.
I also tried all other demos at https://webrtc-demos.appspot.com/ and the http://apprtc.appspot.com demo.
For all cases, debug filters were used to track things like calling sequences etc.
BUG=157142
Review URL: https://codereview.chromium.org/11450029
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@173164 0039d316-1c4b-4281-b951-d872f2087c98
Diffstat (limited to 'media')
-rw-r--r--  media/audio/audio_output_device.cc | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/media/audio/audio_output_device.cc b/media/audio/audio_output_device.cc
index 567c2f6..cf4ebff 100644
--- a/media/audio/audio_output_device.cc
+++ b/media/audio/audio_output_device.cc
@@ -179,7 +179,7 @@ void AudioOutputDevice::ShutDownOnIOThread() {

 void AudioOutputDevice::SetVolumeOnIOThread(double volume) {
   DCHECK(message_loop()->BelongsToCurrentThread());
-  if (state_ >= PAUSED)
+  if (state_ >= CREATING_STREAM)
     ipc_->SetVolume(stream_id_, volume);
 }