I am using the getDisplayMedia API to record the current tab, and now I want to capture the user's mouse positions at the same rate the screen is being captured, so that each frame can have one object of x and y coordinates.
const constraints = {
  audio: false,
  video: {
    frameRate: { min: 60, max: 90 },   // requested range; the browser may deliver less
    width: { max: 1920 },
    height: { max: 1080 }
  }
};

navigator.mediaDevices.getDisplayMedia(constraints)
  .then(stream => {
    const mediaStream = stream; // roughly 30 frames per second in practice
    const mediaRecorder = new MediaRecorder(mediaStream, {
      mimeType: 'video/webm',
      videoBitsPerSecond: 3000000
    });
    mediaRecorder.start();
    // ... rest of the recording setup
  })
  .catch(console.error);
In short: I record the screen and send the recorded video to the backend, where I decode the frames. For each frame there should be the mouse x and y coordinates, just like in the real session; then I would stitch the frames back together into a video, because I want to do some editing on the recording.
I don't want to draw the cursor onto the video in the frontend JS; instead I want to save the mouse coordinates and the recorded video separately and send both to the backend.
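Roughly what I have in mind for the separate coordinate capture (just a sketch; the /upload endpoint and the form field names below are placeholders I made up):

// Collect timestamped mouse positions alongside the recording
const mousePositions = [];
document.addEventListener('mousemove', (e) => {
  mousePositions.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});

// When the recording stops, send the video blob and the coordinates together
const chunks = [];
mediaRecorder.ondataavailable = (e) => chunks.push(e.data);
mediaRecorder.onstop = () => {
  const videoBlob = new Blob(chunks, { type: 'video/webm' });
  const formData = new FormData();
  formData.append('video', videoBlob, 'recording.webm');          // placeholder field name
  formData.append('coordinates', JSON.stringify(mousePositions)); // placeholder field name
  fetch('/upload', { method: 'POST', body: formData });           // placeholder endpoint
};

The idea is that the backend could then match each decoded frame to the closest timestamp, but I still need the frontend sampling to line up with the frame rate.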
I tried using requestAnimationFrame, but it does not run at the same rate as the video frames: in my test the recorded video had about 570 frames while the array only contained 194 items.
const testf = [];

function tests() {
  testf.push('test');                  // one entry per animation-frame callback
  window.requestAnimationFrame(tests);
}

window.requestAnimationFrame(tests);
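From what I understand, requestAnimationFrame fires at the display refresh rate and can be throttled by the browser, so its call count has nothing to do with how many frames MediaRecorder actually encodes. One thing I'm considering instead (a rough sketch, assuming the track's getSettings().frameRate is close to what ends up in the recording) is sampling on a timer:

// Sketch: sample the latest mouse position once per (approximate) video frame
const track = mediaStream.getVideoTracks()[0];
const fps = track.getSettings().frameRate || 30;  // fall back if frameRate is not reported

let lastPos = { x: 0, y: 0 };
document.addEventListener('mousemove', (e) => {
  lastPos = { x: e.clientX, y: e.clientY };
});

const samples = [];
const samplerId = setInterval(() => {
  samples.push({ ...lastPos, t: performance.now() });
}, 1000 / fps);

// clearInterval(samplerId) when mediaRecorder stops

But I'm not sure the sample count would ever match the encoded frame count exactly, which is why I'd rather match frames to timestamps on the backend.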
Thank you so much for reading, any advice would be greatly appreciated :)
from: How to push items inside an array at the rate at which the screen is being captured