
The problem of drawing frame images on the canvas #6389

Open

yzydeveloper opened this issue May 1, 2024 · 7 comments

Comments

@yzydeveloper

Is your feature request related to a problem? Please describe.

None

Describe the solution you'd like

hls.js uses MediaSource together with a video element to display video, but is there a way to obtain the YUV data for each frame? The goal is to then play the video using Canvas and an AudioContext.

Additional context

No response

yzydeveloper added the Feature proposal and Needs Triage labels on May 1, 2024
@yzydeveloper (Author)

@mangui

robwalch added the Wontdo label and removed the Needs Triage label on May 3, 2024
@robwalch (Collaborator) commented May 3, 2024

HLS.js does not provide methods for interacting with HTMLMediaElement that are already available as part of the Web API.

@yzydeveloper (Author)

> HLS.js does not provide methods for interacting with HTMLMediaElement that are already available as part of the Web API.

Is there any other way for hls.js to play on browsers that do not support MSE, for example using canvas and an AudioContext?

@robwalch (Collaborator) commented May 5, 2024

> Is there any other way for hls.js to play on browsers that do not support MSE?

HLS.js only uses MSE.

@kedanielwu

Drawing a VideoFrame via canvas plus audio frames (PCM data) via WebAudio is not directly related to hls.js (or any other MSE-based streaming library), and it is achievable in various ways depending on your actual needs. The only catch is that you only get YUV data once the frame is rendered, which makes this suitable for post-processing.

  1. The WebCodecs API provides a way to directly get the currently rendered VideoFrame from a video element (const frame = new VideoFrame(videoElement)); you can then use VideoFrame.copyTo with VideoFrame.format and VideoFrame.allocationSize to get YUV data for most content (normally 8-bit 4:2:0 content should be fine). See the first sketch after this list.

  2. Building on (1), WebGPU also allows you to import a VideoFrame directly as a texture; you can then use a simple matrix to convert RGB back to YUV for custom processing in a shader, or render the texture straight into a canvas. For a normal 2D canvas, you should also be able to draw the frame (or an ImageBitmap) directly. See the second sketch below.

  3. For audio data, a ScriptProcessorNode or an AudioWorklet from the Web Audio API should be enough, and the data provider can be the HTMLVideoElement itself. See the final sketch below.
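A minimal sketch of (1), assuming a browser with WebCodecs support and a playing HTMLVideoElement; the grabYUV name is just for illustration:

```js
// Grab the currently rendered frame from a <video> and copy out its raw
// pixel data (I420, i.e. 8-bit 4:2:0 YUV, for most common content).
async function grabYUV(video) {
  const frame = new VideoFrame(video); // snapshot of the currently rendered frame
  try {
    const data = new Uint8Array(frame.allocationSize());
    const layout = await frame.copyTo(data); // one PlaneLayout per plane (Y, U, V for I420)
    return {
      format: frame.format, // e.g. 'I420'; check this before assuming YUV
      width: frame.codedWidth,
      height: frame.codedHeight,
      data,
      layout,
    };
  } finally {
    frame.close(); // frames hold decoder/GPU resources, always release them
  }
}
```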
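And the plain 2D-canvas path from (2), paced with requestVideoFrameCallback so you draw once per decoded frame (where that callback is unavailable, requestAnimationFrame works too, just less precisely):

```js
// Paint each decoded video frame onto a 2D canvas.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function paint() {
  // An HTMLVideoElement is a valid CanvasImageSource, so no ImageBitmap is required.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  video.requestVideoFrameCallback(paint);
}
video.requestVideoFrameCallback(paint);
```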
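For (3), a sketch of tapping PCM with an AudioWorklet; the 'pcm-tap' processor name and module filename are made up for this example:

```js
// Main thread (inside an async function): route the element's audio
// through a worklet node that forwards raw PCM back via a MessagePort.
const audioCtx = new AudioContext();
await audioCtx.audioWorklet.addModule('pcm-tap.js'); // module shown below
const source = audioCtx.createMediaElementSource(video);
const tap = new AudioWorkletNode(audioCtx, 'pcm-tap');
tap.port.onmessage = (e) => {
  // e.data is a Float32Array: channel-0 samples for one 128-frame render quantum
};
source.connect(tap).connect(audioCtx.destination);
```

```js
// pcm-tap.js: pass audio through unchanged while posting channel 0 upstream.
class PcmTap extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let ch = 0; ch < input.length; ch++) {
      output[ch].set(input[ch]); // passthrough so playback continues
    }
    if (input[0]) {
      this.port.postMessage(input[0].slice()); // copy, the buffers are reused
    }
    return true; // keep the processor alive
  }
}
registerProcessor('pcm-tap', PcmTap);
```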

@yzydeveloper (Author)

Can I obtain a buffer for processing during hls.js decoding?

@kedanielwu

kedanielwu commented May 6, 2024

> Can I obtain a buffer for processing during hls.js decoding?

First of all, hls.js and other similar libraries do not provide "decoding" functionality; video decoding is not something directly exposed to the JS context through any normal approach. MSE, on the other hand, is also not a standalone "decoder": you can think of it as a source provider that lets web developers customize the way media data is "streamed" to the browser.

From your previous description I think you are on the wrong track. If the device/OS does not support MSE, it likely does not support any of the newer APIs for decoding/rendering either, so relying on native HLS support is probably your only choice.

And if you really just want to control the decoding/rendering process:

simple solution: no

complex solution: yes, you can build a custom MSE and a custom HTMLVideoElement using WebCodecs or even WASM, as long as you follow the MSE spec, then modify hls.js to use your custom modules. In that case hls.js will push remuxed fMP4 segments to your MSE interface, and you can do your work after that (e.g. use WebCodecs to decode, output YUV data, and manage your own frame buffer before sending frames to the canvas). A rough sketch of the decode step follows.
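To make that concrete, a rough sketch of just the WebCodecs decode step, assuming you have already demuxed the fMP4 segments (e.g. with mp4box.js) into a codec string, an avcC description, and encoded samples; avcC and sample are placeholders for what your demuxer provides:

```js
// Decode demuxed samples with WebCodecs and hand the frames to a canvas.
const ctx = document.querySelector('canvas').getContext('2d');

const decoder = new VideoDecoder({
  output: (frame) => {
    ctx.drawImage(frame, 0, 0); // or copy YUV out with frame.copyTo(...)
    frame.close();
  },
  error: (e) => console.error('decode error:', e),
});

decoder.configure({
  codec: 'avc1.64001f', // must match the stream's codec string
  codedWidth: 1280,
  codedHeight: 720,
  description: avcC,    // avcC box payload; required for AVC in fMP4 ("avcc" bitstream format)
});

// For each demuxed sample:
decoder.decode(new EncodedVideoChunk({
  type: sample.isKeyframe ? 'key' : 'delta',
  timestamp: sample.pts, // microseconds
  data: sample.data,     // encoded bytes for one access unit
}));
```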
