author     ygorshenin@chromium.org <ygorshenin@chromium.org@0039d316-1c4b-4281-b951-d872f2087c98>  2013-03-27 08:39:18 +0000
committer  ygorshenin@chromium.org <ygorshenin@chromium.org@0039d316-1c4b-4281-b951-d872f2087c98>  2013-03-27 08:39:18 +0000
commit     d707fdef095589dda221af41f522bd1a423ed294 (patch)
tree       9d219b2aae6c98f0edae83c486f97c7a686dbc32
parent     027935380c71208b1d16f0094f5a65470f50f676 (diff)
Revert 190807 "Fix a Chrome renderer hang which occurs when a We..."
> Fix a Chrome renderer hang which occurs when a WebMediaPlayerImpl instance is shutting down, caused by
> a deadlock between the media pipeline thread and the renderer main thread.
>
> The media pipeline thread waits on a task (RendererGpuVideoDecoderFactories::AsyncCreateSharedMemory) to be
> executed on the main thread, while the main thread waits on the media pipeline thread to exit.
>
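For context, each blocking factory call posts a task to the main thread and then parks on a WaitableEvent; a condensed sketch (taken from the CreateSharedMemory() hunk in the diff below, with unrelated details elided) shows where the media pipeline thread gets stuck:

```cpp
// Condensed from content/renderer/media/renderer_gpu_video_decoder_factories.cc.
// If the renderer main thread is itself blocked waiting for the media pipeline
// thread to exit, the posted task never runs and Wait() never returns.
base::SharedMemory* RendererGpuVideoDecoderFactories::CreateSharedMemory(
    size_t size) {
  base::SharedMemory* shm = NULL;
  base::WaitableEvent waiter(false, false);
  ChildThread::current()->message_loop()->PostTask(FROM_HERE, base::Bind(
      &RendererGpuVideoDecoderFactories::AsyncCreateSharedMemory, this,
      size, &shm, &waiter));
  waiter.Wait();  // Media pipeline thread parks here.
  return shm;
}
```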
> The proposed fix is to introduce the following functions on the media::GpuVideoDecoder::Factories interface:
> 1. Abort: called during shutdown.
> 2. IsAborted: called to check whether shutdown is in progress.
>
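The two additions to the media::GpuVideoDecoder::Factories interface (visible as removed lines in the media/filters/gpu_video_decoder.h hunk below) were declared as:

```cpp
// From media/filters/gpu_video_decoder.h in the change being reverted.
// Abort any outstanding factory operations and error any future
// attempts at factory operations.
virtual void Abort() = 0;

// Returns true if Abort has been called.
virtual bool IsAborted() = 0;
```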
> The Abort function is called when the WebMediaPlayerImpl is shutting down. The video decoder factory sets an
> event in this context. All tasks which complete asynchronously check for this event along with the event
> which indicates that the async task completed. This ensures that they don't wait indefinitely during shutdown.
> The asynchronous tasks in the RendererGpuVideoDecoderFactories now use a common event to signal completion
> and return their results in member variables. This prevents a race between RendererGpuVideoDecoderFactories::Abort
> being called and these tasks attempting to copy data to the caller's stack, which may have unwound by that point.
>
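Concretely, each blocking wrapper waited on both events with WaitableEvent::WaitMany() and bailed out early if the abort event fired first; a condensed example, taken from the CreateTextures() lines this revert removes:

```cpp
// Condensed from the removed lines of
// content/renderer/media/renderer_gpu_video_decoder_factories.cc.
bool RendererGpuVideoDecoderFactories::CreateTextures(
    int32 count, const gfx::Size& size,
    std::vector<uint32>* texture_ids, uint32 texture_target) {
  message_loop_->PostTask(FROM_HERE, base::Bind(
      &RendererGpuVideoDecoderFactories::AsyncCreateTextures, this,
      count, size, texture_target));
  // Wake up on either completion or Abort(); index 0 is |aborted_waiter_|.
  base::WaitableEvent* objects[] = {&aborted_waiter_, &async_waiter_};
  if (base::WaitableEvent::WaitMany(objects, arraysize(objects)) == 0)
    return false;
  // The async task left its result in a member, not on the caller's stack.
  texture_ids->swap(created_textures_);
  return true;
}
```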
> While debugging this problem, I also found that there are scenarios where GpuVideoDecoder::Reset (GVD::Reset) never
> completes, hanging the renderer. This happens when the GVD is in the kDrainingDecoder state, i.e. waiting for a
> Flush to complete: the VideoRendererBase (VRB) is waiting for the GVD::Reset operation to complete, and the
> VideoDecodeAccelerator (VDA) is waiting for output picture buffers which the VRB is holding on to. The proposed
> fix was to flush out the ready frames in the VRB::Flush call, which unblocks the VDA.
>
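The corresponding VideoRendererBase change (also undone by this revert) released the ready frames as soon as Flush() started, rather than asserting they were already gone in AttemptFlush_Locked(); the relevant removed lines look roughly like this:

```cpp
// Tail of VideoRendererBase::Flush() as introduced by the change being
// reverted (media/filters/video_renderer_base.cc).
  flush_cb_ = callback;
  state_ = kFlushingDecoder;
  // This is necessary if the decoder has already seen an end of stream and
  // needs to drain it before flushing it. Dropping the ready frames returns
  // their picture buffers to the VDA, unblocking the drain.
  ready_frames_.clear();
  received_end_of_stream_ = false;
  video_frame_stream_.Reset(base::Bind(
      &VideoRendererBase::OnVideoFrameStreamResetDone, weak_this_));
```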
> I changed ReadPixelsCB to take a const SkBitmap& instead of a void* to address the refcounting issues with the
> SkBitmap.
>
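That signature change shows up in media/base/video_frame.h below; the SkBitmap-based callback and the pixel-ref retention in the factories (both removed by this revert) looked roughly like:

```cpp
// Removed by this revert: the callback carried an SkBitmap so the destination
// pixels could be kept alive by reference counting.
typedef base::Callback<void(const SkBitmap&)> ReadPixelsCB;

// In RendererGpuVideoDecoderFactories::ReadPixels(): share the SkPixelRef so
// the pixels stay valid until AsyncReadPixels() finishes, then drop it.
read_pixels_bitmap_.setPixelRef(pixels.pixelRef());
// ... post AsyncReadPixels() to |message_loop_| and wait ...
read_pixels_bitmap_.setPixelRef(NULL);
```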
> While running my test case against tip of tree, two DCHECKs in VideoFrameStream::OnDecoderStopped fired. The first
> one covers the case where the decoder is stopped while there is a pending read callback; the second one covers the
> same scenario with a pending reset. These fire when the media pipeline is destroyed on the main renderer thread
> while these operations are pending. I added code in the GpuVideoDecoder::Stop function to fire these callbacks,
> which seems like the right thing to do.
>
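The added code in GpuVideoDecoder::Stop() (removed again by this revert, see the media/filters/gpu_video_decoder.cc hunk below) fired both pending callbacks before signalling completion:

```cpp
// Condensed from media/filters/gpu_video_decoder.cc in the change being reverted.
void GpuVideoDecoder::Stop(const base::Closure& closure) {
  DCHECK(gvd_loop_proxy_->BelongsToCurrentThread());
  if (vda_.get())
    DestroyVDA();
  // Satisfy a still-pending read/reset so VideoFrameStream's DCHECKs do not
  // fire when the pipeline is torn down mid-operation.
  if (!pending_read_cb_.is_null())
    EnqueueFrameAndTriggerFrameDelivery(VideoFrame::CreateEmptyFrame());
  if (!pending_reset_cb_.is_null())
    base::ResetAndReturn(&pending_reset_cb_).Run();
  BindToCurrentLoop(closure).Run();
}
```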
> The other change, in the GpuVideoDecoder::NotifyResetDone function, was to remove the check for a NULL vda, as
> there are scenarios in which the decoder could be destroyed before this callback runs. We need to fire the read
> and reset callbacks anyway.
>
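For reference, the guard that the original change dropped, and that this revert restores in GpuVideoDecoder::NotifyResetDone(), is simply:

```cpp
// Restored early-return in media/filters/gpu_video_decoder.cc: once the VDA
// is gone the callback bails out instead of completing the pending read and
// reset callbacks.
if (!vda_.get())
  return;

DCHECK(ready_video_frames_.empty());
```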
> BUG=179551
> TEST=The rapid_video_change_test.html file added to media\test\data should, when loaded in Chrome, hang without
> this patch. Needs some more work to get something similar running in the Chrome OS autotest suite.
> Review URL: https://codereview.chromium.org/12634011
TBR=ananta@chromium.org
Review URL: https://codereview.chromium.org/12942011
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@190863 0039d316-1c4b-4281-b951-d872f2087c98
-rw-r--r--  content/renderer/media/renderer_gpu_video_decoder_factories.cc | 144
-rw-r--r--  content/renderer/media/renderer_gpu_video_decoder_factories.h | 54
-rw-r--r--  media/base/video_frame.cc | 3
-rw-r--r--  media/base/video_frame.h | 11
-rw-r--r--  media/filters/gpu_video_decoder.cc | 16
-rw-r--r--  media/filters/gpu_video_decoder.h | 13
-rw-r--r--  media/filters/skcanvas_video_renderer.cc | 2
-rw-r--r--  media/filters/video_renderer_base.cc | 9
-rw-r--r--  media/test/data/rapid_video_change_test.html | 47
-rw-r--r--  webkit/media/webmediaplayer_impl.cc | 6
-rw-r--r--  webkit/media/webmediaplayer_impl.h | 3
11 files changed, 86 insertions, 222 deletions
diff --git a/content/renderer/media/renderer_gpu_video_decoder_factories.cc b/content/renderer/media/renderer_gpu_video_decoder_factories.cc
index 1f0e85a..cdd5804 100644
--- a/content/renderer/media/renderer_gpu_video_decoder_factories.cc
+++ b/content/renderer/media/renderer_gpu_video_decoder_factories.cc
@@ -8,12 +8,12 @@
 #include <GLES2/gl2ext.h>

 #include "base/bind.h"
+#include "base/synchronization/waitable_event.h"
 #include "content/common/child_thread.h"
 #include "content/common/gpu/client/gpu_channel_host.h"
 #include "content/common/gpu/client/webgraphicscontext3d_command_buffer_impl.h"
 #include "gpu/command_buffer/client/gles2_implementation.h"
 #include "gpu/ipc/command_buffer_proxy.h"
-#include "third_party/skia/include/core/SkPixelRef.h"

 namespace content {

@@ -23,14 +23,13 @@ RendererGpuVideoDecoderFactories::RendererGpuVideoDecoderFactories(
     const scoped_refptr<base::MessageLoopProxy>& message_loop,
     WebGraphicsContext3DCommandBufferImpl* context)
     : message_loop_(message_loop),
-      gpu_channel_host_(gpu_channel_host),
-      aborted_waiter_(true, false),
-      async_waiter_(false, false) {
+      gpu_channel_host_(gpu_channel_host) {
   if (message_loop_->BelongsToCurrentThread()) {
-    AsyncGetContext(context);
+    AsyncGetContext(context, NULL);
     return;
   }
   // Threaded compositor requires us to wait for the context to be acquired.
+  base::WaitableEvent waiter(false, false);
   message_loop_->PostTask(FROM_HERE, base::Bind(
       &RendererGpuVideoDecoderFactories::AsyncGetContext,
       // Unretained to avoid ref/deref'ing |*this|, which is not yet stored in a
@@ -41,12 +40,14 @@ RendererGpuVideoDecoderFactories::RendererGpuVideoDecoderFactories(
       // thread, and only as the result of a PostTask from the render thread
       // which can only happen after this function returns, so our PostTask will
       // run first.
-      context));
-  async_waiter_.Wait();
+      context,
+      &waiter));
+  waiter.Wait();
 }

 void RendererGpuVideoDecoderFactories::AsyncGetContext(
-    WebGraphicsContext3DCommandBufferImpl* context) {
+    WebGraphicsContext3DCommandBufferImpl* context,
+    base::WaitableEvent* waiter) {
   context_ = context->AsWeakPtr();
   if (context_) {
     if (context_->makeContextCurrent()) {
@@ -55,7 +56,8 @@ void RendererGpuVideoDecoderFactories::AsyncGetContext(
       context_->insertEventMarkerEXT("GpuVDAContext3D");
     }
   }
-  async_waiter_.Signal();
+  if (waiter)
+    waiter->Signal();
 }

 media::VideoDecodeAccelerator*
@@ -63,36 +65,29 @@ RendererGpuVideoDecoderFactories::CreateVideoDecodeAccelerator(
     media::VideoCodecProfile profile,
     media::VideoDecodeAccelerator::Client* client) {
   DCHECK(!message_loop_->BelongsToCurrentThread());
-  // The VDA is returned in the vda_ member variable by the
-  // AsyncCreateVideoDecodeAccelerator() function.
+  media::VideoDecodeAccelerator* vda = NULL;
+  base::WaitableEvent waiter(false, false);
   message_loop_->PostTask(FROM_HERE, base::Bind(
       &RendererGpuVideoDecoderFactories::AsyncCreateVideoDecodeAccelerator,
-      this, profile, client));
-
-  base::WaitableEvent* objects[] = {&aborted_waiter_, &async_waiter_};
-  if (base::WaitableEvent::WaitMany(objects, arraysize(objects)) == 0) {
-    // If we are aborting and the VDA is created by the
-    // AsyncCreateVideoDecodeAccelerator() function later we need to ensure
-    // that it is destroyed on the same thread.
-    message_loop_->PostTask(FROM_HERE, base::Bind(
-        &RendererGpuVideoDecoderFactories::AsyncDestroyVideoDecodeAccelerator,
-        this));
-    return NULL;
-  }
-  return vda_.release();
+      this, profile, client, &vda, &waiter));
+  waiter.Wait();
+  return vda;
 }

 void RendererGpuVideoDecoderFactories::AsyncCreateVideoDecodeAccelerator(
     media::VideoCodecProfile profile,
-    media::VideoDecodeAccelerator::Client* client) {
+    media::VideoDecodeAccelerator::Client* client,
+    media::VideoDecodeAccelerator** vda,
+    base::WaitableEvent* waiter) {
   DCHECK(message_loop_->BelongsToCurrentThread());
-
   if (context_ && context_->GetCommandBufferProxy()) {
-    vda_.reset(gpu_channel_host_->CreateVideoDecoder(
+    *vda = gpu_channel_host_->CreateVideoDecoder(
         context_->GetCommandBufferProxy()->GetRouteID(),
-        profile, client));
+        profile, client);
+  } else {
+    *vda = NULL;
   }
-  async_waiter_.Signal();
+  waiter->Signal();
 }

 bool RendererGpuVideoDecoderFactories::CreateTextures(
@@ -100,32 +95,31 @@ bool RendererGpuVideoDecoderFactories::CreateTextures(
     std::vector<uint32>* texture_ids,
     uint32 texture_target) {
   DCHECK(!message_loop_->BelongsToCurrentThread());
+  bool success = false;
+  base::WaitableEvent waiter(false, false);
   message_loop_->PostTask(FROM_HERE, base::Bind(
       &RendererGpuVideoDecoderFactories::AsyncCreateTextures, this,
-      count, size, texture_target));
-
-  base::WaitableEvent* objects[] = {&aborted_waiter_, &async_waiter_};
-  if (base::WaitableEvent::WaitMany(objects, arraysize(objects)) == 0)
-    return false;
-  texture_ids->swap(created_textures_);
-  return true;
+      count, size, texture_ids, texture_target, &success, &waiter));
+  waiter.Wait();
+  return success;
 }

 void RendererGpuVideoDecoderFactories::AsyncCreateTextures(
-    int32 count, const gfx::Size& size, uint32 texture_target) {
+    int32 count, const gfx::Size& size, std::vector<uint32>* texture_ids,
+    uint32 texture_target, bool* success, base::WaitableEvent* waiter) {
   DCHECK(message_loop_->BelongsToCurrentThread());
   DCHECK(texture_target);
-
   if (!context_) {
-    async_waiter_.Signal();
+    *success = false;
+    waiter->Signal();
     return;
   }
   gpu::gles2::GLES2Implementation* gles2 = context_->GetImplementation();
-  created_textures_.resize(count);
-  gles2->GenTextures(count, &created_textures_[0]);
+  texture_ids->resize(count);
+  gles2->GenTextures(count, &texture_ids->at(0));
   for (int i = 0; i < count; ++i) {
     gles2->ActiveTexture(GL_TEXTURE0);
-    uint32 texture_id = created_textures_[i];
+    uint32 texture_id = texture_ids->at(i);
     gles2->BindTexture(texture_target, texture_id);
     gles2->TexParameteri(texture_target, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     gles2->TexParameteri(texture_target, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
@@ -141,7 +135,8 @@ void RendererGpuVideoDecoderFactories::AsyncCreateTextures(
   // reused, this should not be unacceptably expensive.
   gles2->Flush();
   DCHECK_EQ(gles2->GetError(), static_cast<GLenum>(GL_NO_ERROR));
-  async_waiter_.Signal();
+  *success = true;
+  waiter->Signal();
 }

 void RendererGpuVideoDecoderFactories::DeleteTexture(uint32 texture_id) {
@@ -154,7 +149,6 @@ void RendererGpuVideoDecoderFactories::AsyncDeleteTexture(uint32 texture_id) {
   DCHECK(message_loop_->BelongsToCurrentThread());
   if (!context_)
     return;
-
   gpu::gles2::GLES2Implementation* gles2 = context_->GetImplementation();
   gles2->DeleteTextures(1, &texture_id);
   DCHECK_EQ(gles2->GetError(), static_cast<GLenum>(GL_NO_ERROR));
@@ -162,33 +156,24 @@ void RendererGpuVideoDecoderFactories::AsyncDeleteTexture(uint32 texture_id) {

 void RendererGpuVideoDecoderFactories::ReadPixels(
     uint32 texture_id, uint32 texture_target, const gfx::Size& size,
-    const SkBitmap& pixels) {
-  // SkBitmaps use the SkPixelRef object to refcount the underlying pixels.
-  // Multiple SkBitmaps can share a SkPixelRef instance. We use this to
-  // ensure that the underlying pixels in the SkBitmap passed in remain valid
-  // until the AsyncReadPixels() call completes.
-  read_pixels_bitmap_.setPixelRef(pixels.pixelRef());
-
+    void* pixels) {
+  base::WaitableEvent waiter(false, false);
   if (!message_loop_->BelongsToCurrentThread()) {
     message_loop_->PostTask(FROM_HERE, base::Bind(
         &RendererGpuVideoDecoderFactories::AsyncReadPixels, this,
-        texture_id, texture_target, size));
-    base::WaitableEvent* objects[] = {&aborted_waiter_, &async_waiter_};
-    if (base::WaitableEvent::WaitMany(objects, arraysize(objects)) == 0)
-      return;
+        texture_id, texture_target, size, pixels, &waiter));
+    waiter.Wait();
   } else {
-    AsyncReadPixels(texture_id, texture_target, size);
+    AsyncReadPixels(texture_id, texture_target, size, pixels, &waiter);
   }
-  read_pixels_bitmap_.setPixelRef(NULL);
 }

 void RendererGpuVideoDecoderFactories::AsyncReadPixels(
-    uint32 texture_id, uint32 texture_target, const gfx::Size& size) {
+    uint32 texture_id, uint32 texture_target, const gfx::Size& size,
+    void* pixels, base::WaitableEvent* waiter) {
   DCHECK(message_loop_->BelongsToCurrentThread());
-  if (!context_) {
-    async_waiter_.Signal();
+  if (!context_)
     return;
-  }

 gpu::gles2::GLES2Implementation* gles2 = context_->GetImplementation();

@@ -209,33 +194,30 @@ void RendererGpuVideoDecoderFactories::AsyncReadPixels(
                             texture_target, tmp_texture, 0);
   gles2->PixelStorei(GL_PACK_ALIGNMENT, 4);
   gles2->ReadPixels(0, 0, size.width(), size.height(), GL_BGRA_EXT,
-                    GL_UNSIGNED_BYTE, read_pixels_bitmap_.pixelRef()->pixels());
+                    GL_UNSIGNED_BYTE, pixels);
   gles2->DeleteFramebuffers(1, &fb);
   gles2->DeleteTextures(1, &tmp_texture);
   DCHECK_EQ(gles2->GetError(), static_cast<GLenum>(GL_NO_ERROR));
-  async_waiter_.Signal();
+  waiter->Signal();
 }

 base::SharedMemory* RendererGpuVideoDecoderFactories::CreateSharedMemory(
     size_t size) {
   DCHECK_NE(MessageLoop::current(), ChildThread::current()->message_loop());
+  base::SharedMemory* shm = NULL;
+  base::WaitableEvent waiter(false, false);
   ChildThread::current()->message_loop()->PostTask(FROM_HERE, base::Bind(
       &RendererGpuVideoDecoderFactories::AsyncCreateSharedMemory, this,
-      size));
-
-  base::WaitableEvent* objects[] = {&aborted_waiter_, &async_waiter_};
-  if (base::WaitableEvent::WaitMany(objects, arraysize(objects)) == 0)
-    return NULL;
-  return shared_memory_segment_.release();
+      size, &shm, &waiter));
+  waiter.Wait();
+  return shm;
 }

 void RendererGpuVideoDecoderFactories::AsyncCreateSharedMemory(
-    size_t size) {
+    size_t size, base::SharedMemory** shm, base::WaitableEvent* waiter) {
   DCHECK_EQ(MessageLoop::current(), ChildThread::current()->message_loop());
-
-  shared_memory_segment_.reset(
-      ChildThread::current()->AllocateSharedMemory(size));
-  async_waiter_.Signal();
+  *shm = ChildThread::current()->AllocateSharedMemory(size);
+  waiter->Signal();
 }

 scoped_refptr<base::MessageLoopProxy>
@@ -243,18 +225,4 @@ RendererGpuVideoDecoderFactories::GetMessageLoop() {
   return message_loop_;
 }

-void RendererGpuVideoDecoderFactories::Abort() {
-  aborted_waiter_.Signal();
-}
-
-bool RendererGpuVideoDecoderFactories::IsAborted() {
-  return aborted_waiter_.IsSignaled();
-}
-
-void RendererGpuVideoDecoderFactories::AsyncDestroyVideoDecodeAccelerator() {
-  // OK to release because Destroy() will delete the VDA instance.
-  if (vda_)
-    vda_.release()->Destroy();
-}
-
 }  // namespace content
diff --git a/content/renderer/media/renderer_gpu_video_decoder_factories.h b/content/renderer/media/renderer_gpu_video_decoder_factories.h
index de5f19f..5512562 100644
--- a/content/renderer/media/renderer_gpu_video_decoder_factories.h
+++ b/content/renderer/media/renderer_gpu_video_decoder_factories.h
@@ -5,15 +5,11 @@
 #ifndef CONTENT_RENDERER_MEDIA_RENDERER_GPU_VIDEO_DECODER_FACTORIES_H_
 #define CONTENT_RENDERER_MEDIA_RENDERER_GPU_VIDEO_DECODER_FACTORIES_H_

-#include <vector>
-
 #include "base/basictypes.h"
 #include "base/memory/ref_counted.h"
 #include "base/memory/weak_ptr.h"
-#include "base/synchronization/waitable_event.h"
 #include "content/common/content_export.h"
 #include "media/filters/gpu_video_decoder.h"
-#include "third_party/skia/include/core/SkBitmap.h"
 #include "ui/gfx/size.h"

 namespace base {
@@ -52,14 +48,10 @@ class CONTENT_EXPORT RendererGpuVideoDecoderFactories
       std::vector<uint32>* texture_ids,
       uint32 texture_target) OVERRIDE;
   virtual void DeleteTexture(uint32 texture_id) OVERRIDE;
-  virtual void ReadPixels(uint32 texture_id,
-                          uint32 texture_target,
-                          const gfx::Size& size,
-                          const SkBitmap& pixels) OVERRIDE;
+  virtual void ReadPixels(uint32 texture_id, uint32 texture_target,
+                          const gfx::Size& size, void* pixels) OVERRIDE;
   virtual base::SharedMemory* CreateSharedMemory(size_t size) OVERRIDE;
   virtual scoped_refptr<base::MessageLoopProxy> GetMessageLoop() OVERRIDE;
-  virtual void Abort() OVERRIDE;
-  virtual bool IsAborted() OVERRIDE;

 protected:
   friend class base::RefCountedThreadSafe<RendererGpuVideoDecoderFactories>;
@@ -68,50 +60,32 @@ class CONTENT_EXPORT RendererGpuVideoDecoderFactories
 private:
   // Helper for the constructor to acquire the ContentGLContext on the
   // compositor thread (when it is enabled).
-  void AsyncGetContext(WebGraphicsContext3DCommandBufferImpl* context);
+  void AsyncGetContext(WebGraphicsContext3DCommandBufferImpl* context,
+                       base::WaitableEvent* waiter);

   // Async versions of the public methods. They use output parameters instead
   // of return values and each takes a WaitableEvent* param to signal completion
   // (except for DeleteTexture, which is fire-and-forget).
   // AsyncCreateSharedMemory runs on the renderer thread and the rest run on
   // |message_loop_|.
-  // The AsyncCreateVideoDecodeAccelerator returns its output in the vda_
-  // member.
   void AsyncCreateVideoDecodeAccelerator(
       media::VideoCodecProfile profile,
-      media::VideoDecodeAccelerator::Client* client);
-  void AsyncCreateTextures(int32 count, const gfx::Size& size,
-                           uint32 texture_target);
+      media::VideoDecodeAccelerator::Client* client,
+      media::VideoDecodeAccelerator** vda,
+      base::WaitableEvent* waiter);
+  void AsyncCreateTextures(
+      int32 count, const gfx::Size& size, std::vector<uint32>* texture_ids,
+      uint32 texture_target, bool* success, base::WaitableEvent* waiter);
   void AsyncDeleteTexture(uint32 texture_id);
   void AsyncReadPixels(uint32 texture_id, uint32 texture_target,
-                       const gfx::Size& size);
-  void AsyncCreateSharedMemory(size_t size);
-  void AsyncDestroyVideoDecodeAccelerator();
+                       const gfx::Size& size,
+                       void* pixels, base::WaitableEvent* waiter);
+  void AsyncCreateSharedMemory(
+      size_t size, base::SharedMemory** shm, base::WaitableEvent* waiter);

   scoped_refptr<base::MessageLoopProxy> message_loop_;
   scoped_refptr<GpuChannelHost> gpu_channel_host_;
   base::WeakPtr<WebGraphicsContext3DCommandBufferImpl> context_;
-
-  // This event is signaled if we have been asked to Abort().
-  base::WaitableEvent aborted_waiter_;
-
-  // This event is signaled by asynchronous tasks to indicate their completion.
-  // e.g. AsyncCreateVideoDecodeAccelerator()/AsyncCreateTextures() etc.
-  base::WaitableEvent async_waiter_;
-
-  // The vda returned by the CreateVideoAcclelerator function.
-  scoped_ptr<media::VideoDecodeAccelerator> vda_;
-
-  // Shared memory segment which is returned by the CreateSharedMemory()
-  // function.
-  scoped_ptr<base::SharedMemory> shared_memory_segment_;
-
-  // Bitmap returned by ReadPixels().
-  SkBitmap read_pixels_bitmap_;
-
-  // Textures returned by the CreateTexture() function.
-  std::vector<uint32> created_textures_;
-
   DISALLOW_IMPLICIT_CONSTRUCTORS(RendererGpuVideoDecoderFactories);
 };

diff --git a/media/base/video_frame.cc b/media/base/video_frame.cc
index 62e81ca..70b77fb 100644
--- a/media/base/video_frame.cc
+++ b/media/base/video_frame.cc
@@ -13,7 +13,6 @@
 #include "base/string_piece.h"
 #include "media/base/limits.h"
 #include "media/base/video_util.h"
-#include "third_party/skia/include/core/SkBitmap.h"

 namespace media {

@@ -80,7 +79,7 @@ scoped_refptr<VideoFrame> VideoFrame::WrapNativeTexture(
   return frame;
 }

-void VideoFrame::ReadPixelsFromNativeTexture(const SkBitmap& pixels) {
+void VideoFrame::ReadPixelsFromNativeTexture(void* pixels) {
   DCHECK_EQ(format_, NATIVE_TEXTURE);
   if (!read_pixels_cb_.is_null())
     read_pixels_cb_.Run(pixels);
diff --git a/media/base/video_frame.h b/media/base/video_frame.h
index 9a6f0a6..5307df9 100644
--- a/media/base/video_frame.h
+++ b/media/base/video_frame.h
@@ -11,8 +11,6 @@
 #include "ui/gfx/rect.h"
 #include "ui/gfx/size.h"

-class SkBitmap;
-
 namespace media {

 class MEDIA_EXPORT VideoFrame : public base::RefCountedThreadSafe<VideoFrame> {
@@ -71,8 +69,8 @@ class MEDIA_EXPORT VideoFrame : public base::RefCountedThreadSafe<VideoFrame> {
       const gfx::Size& natural_size);

   // CB to write pixels from the texture backing this frame into the
-  // |const SkBitmap&| parameter.
-  typedef base::Callback<void(const SkBitmap&)> ReadPixelsCB;
+  // |void*| parameter.
+  typedef base::Callback<void(void*)> ReadPixelsCB;

   // Wraps a native texture of the given parameters with a VideoFrame. When the
   // frame is destroyed |no_longer_needed_cb.Run()| will be called.
@@ -81,7 +79,6 @@ class MEDIA_EXPORT VideoFrame : public base::RefCountedThreadSafe<VideoFrame> {
   // any) is applied.
   // |natural_size| is the width and height of the frame when the frame's aspect
   // ratio is applied to |visible_rect|.
-
   // |read_pixels_cb| may be used to do (slow!) readbacks from the
   // texture to main memory.
   static scoped_refptr<VideoFrame> WrapNativeTexture(
@@ -95,9 +92,9 @@ class MEDIA_EXPORT VideoFrame : public base::RefCountedThreadSafe<VideoFrame> {
       const base::Closure& no_longer_needed_cb);

   // Read pixels from the native texture backing |*this| and write
-  // them to |pixels| as BGRA. |pixels| must point to a buffer at
+  // them to |*pixels| as BGRA. |pixels| must point to a buffer at
   // least as large as 4*visible_rect().width()*visible_rect().height().
-  void ReadPixelsFromNativeTexture(const SkBitmap& pixels);
+  void ReadPixelsFromNativeTexture(void* pixels);

   // Wraps external YUV data of the given parameters with a VideoFrame.
   // The returned VideoFrame does not own the data passed in. When the frame
diff --git a/media/filters/gpu_video_decoder.cc b/media/filters/gpu_video_decoder.cc
index 7199415..cbad87d 100644
--- a/media/filters/gpu_video_decoder.cc
+++ b/media/filters/gpu_video_decoder.cc
@@ -69,7 +69,7 @@ GpuVideoDecoder::GpuVideoDecoder(
 void GpuVideoDecoder::Reset(const base::Closure& closure)  {
   DCHECK(gvd_loop_proxy_->BelongsToCurrentThread());

-  if (state_ == kDrainingDecoder && !factories_->IsAborted()) {
+  if (state_ == kDrainingDecoder) {
     gvd_loop_proxy_->PostTask(FROM_HERE, base::Bind(
         &GpuVideoDecoder::Reset, this, closure));
     // NOTE: if we're deferring Reset() until a Flush() completes, return
@@ -103,10 +103,6 @@ void GpuVideoDecoder::Stop(const base::Closure& closure) {
   DCHECK(gvd_loop_proxy_->BelongsToCurrentThread());
   if (vda_.get())
     DestroyVDA();
-  if (!pending_read_cb_.is_null())
-    EnqueueFrameAndTriggerFrameDelivery(VideoFrame::CreateEmptyFrame());
-  if (!pending_reset_cb_.is_null())
-    base::ResetAndReturn(&pending_reset_cb_).Run();
   BindToCurrentLoop(closure).Run();
 }

@@ -287,9 +283,6 @@ void GpuVideoDecoder::RequestBufferDecode(

   size_t size = buffer->GetDataSize();
   SHMBuffer* shm_buffer = GetSHM(size);
-  if (!shm_buffer)
-    return;
-
   memcpy(shm_buffer->shm->memory(), buffer->GetData(), size);
   BitstreamBuffer bitstream_buffer(
       next_bitstream_buffer_id_, shm_buffer->shm->handle(), size);
@@ -486,9 +479,7 @@ GpuVideoDecoder::SHMBuffer* GpuVideoDecoder::GetSHM(size_t min_size) {
       available_shm_segments_.back()->size < min_size) {
     size_t size_to_allocate = std::max(min_size, kSharedMemorySegmentBytes);
     base::SharedMemory* shm = factories_->CreateSharedMemory(size_to_allocate);
-    // CreateSharedMemory() can return NULL during Shutdown.
-    if (!shm)
-      return NULL;
+    DCHECK(shm);
     return new SHMBuffer(shm, size_to_allocate);
   }
   SHMBuffer* ret = available_shm_segments_.back();
@@ -577,6 +568,9 @@ void GpuVideoDecoder::NotifyResetDone() {
     return;
   }

+  if (!vda_.get())
+    return;
+
   DCHECK(ready_video_frames_.empty());

   // This needs to happen after the Reset() on vda_ is done to ensure pictures
diff --git a/media/filters/gpu_video_decoder.h b/media/filters/gpu_video_decoder.h
index d92c045..61c8b56 100644
--- a/media/filters/gpu_video_decoder.h
+++ b/media/filters/gpu_video_decoder.h
@@ -22,8 +22,6 @@ class MessageLoopProxy;
 class SharedMemory;
 }

-class SkBitmap;
-
 namespace media {

 class DecoderBuffer;
@@ -49,9 +47,9 @@ class MEDIA_EXPORT GpuVideoDecoder
                                 uint32 texture_target) = 0;
     virtual void DeleteTexture(uint32 texture_id) = 0;

-    // Read pixels from a native texture and store into |pixels| as RGBA.
+    // Read pixels from a native texture and store into |*pixels| as RGBA.
     virtual void ReadPixels(uint32 texture_id, uint32 texture_target,
-                            const gfx::Size& size, const SkBitmap& pixels) = 0;
+                            const gfx::Size& size, void* pixels) = 0;

     // Allocate & return a shared memory segment. Caller is responsible for
     // Close()ing the returned pointer.
@@ -60,13 +58,6 @@ class MEDIA_EXPORT GpuVideoDecoder
     // Returns the message loop the VideoDecodeAccelerator runs on.
     virtual scoped_refptr<base::MessageLoopProxy> GetMessageLoop() = 0;

-    // Abort any outstanding factory operations and error any future
-    // attempts at factory operations
-    virtual void Abort() = 0;
-
-    // Returns true if Abort has been called.
-    virtual bool IsAborted() = 0;
-
    protected:
     friend class base::RefCountedThreadSafe<Factories>;
     virtual ~Factories();
diff --git a/media/filters/skcanvas_video_renderer.cc b/media/filters/skcanvas_video_renderer.cc
index b478a73..53b5b2d 100644
--- a/media/filters/skcanvas_video_renderer.cc
+++ b/media/filters/skcanvas_video_renderer.cc
@@ -227,7 +227,7 @@ static void ConvertVideoFrameToBitmap(
         yuv_type);
   } else {
     DCHECK_EQ(video_frame->format(), media::VideoFrame::NATIVE_TEXTURE);
-    video_frame->ReadPixelsFromNativeTexture(*bitmap);
+    video_frame->ReadPixelsFromNativeTexture(bitmap->getPixels());
   }
   bitmap->notifyPixelsChanged();
   bitmap->unlockPixels();
diff --git a/media/filters/video_renderer_base.cc b/media/filters/video_renderer_base.cc
index 78f27b3..4a5af4d 100644
--- a/media/filters/video_renderer_base.cc
+++ b/media/filters/video_renderer_base.cc
@@ -72,10 +72,6 @@ void VideoRendererBase::Flush(const base::Closure& callback) {
   flush_cb_ = callback;
   state_ = kFlushingDecoder;

-  // This is necessary if the decoder has already seen an end of stream and
-  // needs to drain it before flushing it.
-  ready_frames_.clear();
-  received_end_of_stream_ = false;
   video_frame_stream_.Reset(base::Bind(
       &VideoRendererBase::OnVideoFrameStreamResetDone, weak_this_));
 }
@@ -492,8 +488,9 @@ void VideoRendererBase::OnVideoFrameStreamResetDone() {
 void VideoRendererBase::AttemptFlush_Locked() {
   lock_.AssertAcquired();
   DCHECK_EQ(kFlushing, state_);
-  DCHECK(ready_frames_.empty());
-  DCHECK(!received_end_of_stream_);
+
+  ready_frames_.clear();
+  received_end_of_stream_ = false;

   if (pending_read_)
     return;
diff --git a/media/test/data/rapid_video_change_test.html b/media/test/data/rapid_video_change_test.html
deleted file mode 100644
index 81a853b..0000000
--- a/media/test/data/rapid_video_change_test.html
+++ /dev/null
@@ -1,47 +0,0 @@
-<!DOCTYPE html>
-<html>
-  <head>
-    <title>HTML 5 Change H.264 Video Test</title>
-    <script>
-      var changeVideoInterval;
-      var changeVideoCounter = 0;
-
-      function changeVideo() {
-        try {
-          if (changeVideoCounter == 40) {
-            alert('40 video changes done. Test over');
-            window.clearInterval(changeVideoInterval);
-            return;
-          }
-          var video = document.getElementById('video');
-          video.pause();
-          video.src = 'bear-1280x720.mp4?counter=' +
-                      changeVideoCounter.toString();
-          ++changeVideoCounter;
-          video.play();
-          video.currentTime = 1;
-        }
-
-        catch (e) {
-        }
-      }
-
-      function onLoad() {
-        var video = document.getElementById('video');
-        video.play();
-        video.currentTime = 1;
-        changeVideoInterval = setInterval(changeVideo, 200);
-      }
-    </script>
-  </head>
-
-  <body onload='onLoad();'> <b> This test tests the case where in H.264 H/W
-  decoded videos are added and removed a number of times from the page,
-  while they are playing. <br> This should not cause the browser to hang.
-  <div id='videoDiv'>
-    <video id='video' width=320 height=240 src='bear-1280x720.mp4'
-     controls='controls'>
-    </video>
-  </div>
-  </body>
-</html>
diff --git a/webkit/media/webmediaplayer_impl.cc b/webkit/media/webmediaplayer_impl.cc
index 4008ffc..0b76985 100644
--- a/webkit/media/webmediaplayer_impl.cc
+++ b/webkit/media/webmediaplayer_impl.cc
@@ -188,7 +188,6 @@ WebMediaPlayerImpl::WebMediaPlayerImpl(
         new media::GpuVideoDecoder(
             media_thread_.message_loop_proxy(),
             params.gpu_factories()));
-    gpu_factories_ = params.gpu_factories();
   }

   // Create default video renderer.
@@ -1157,11 +1156,6 @@ void WebMediaPlayerImpl::Destroy() {
   if (chunk_demuxer_)
     chunk_demuxer_->Shutdown();

-  if (gpu_factories_) {
-    gpu_factories_->Abort();
-    gpu_factories_ = NULL;
-  }
-
   // Make sure to kill the pipeline so there's no more media threads running.
   // Note: stopping the pipeline might block for a long time.
   base::WaitableEvent waiter(false, false);
diff --git a/webkit/media/webmediaplayer_impl.h b/webkit/media/webmediaplayer_impl.h
index 68e6f3c..3e1799a 100644
--- a/webkit/media/webmediaplayer_impl.h
+++ b/webkit/media/webmediaplayer_impl.h
@@ -31,7 +31,6 @@
 #include "media/base/audio_renderer_sink.h"
 #include "media/base/decryptor.h"
 #include "media/base/pipeline.h"
-#include "media/filters/gpu_video_decoder.h"
 #include "media/filters/skcanvas_video_renderer.h"
 #include "skia/ext/platform_canvas.h"
 #include "third_party/WebKit/Source/Platform/chromium/public/WebGraphicsContext3D.h"
@@ -360,8 +359,6 @@ class WebMediaPlayerImpl
   // not NULL while the compositor is actively using this webmediaplayer.
   cc::VideoFrameProvider::Client* video_frame_provider_client_;

-  scoped_refptr<media::GpuVideoDecoder::Factories> gpu_factories_;
-
   DISALLOW_COPY_AND_ASSIGN(WebMediaPlayerImpl);
 };