I have a requirement to build something like a streaming program: a video, divided into multiple chunks, is received by my program at a regular interval, say every 2 s. Each video (which also contains audio) is 10 s long. Whenever I receive a chunk, I want to add it to a global or shared container, keep reading from that container, and play the video in Python continuously so the user never experiences any lag.
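For context, this is a rough sketch of the producer/consumer shape I have in mind. It is only an illustration, not working media code: the chunk arrival is simulated with time.sleep, and the producer/consumer names and the "chunk-N" items are placeholders for the real decoded frames.

import threading
import queue
import time

media_queue = queue.Queue()          # shared container between threads

def producer():
    # Receives chunks as they arrive and pushes them onto the queue.
    for chunk_id in range(5):        # stand-in for "a chunk every 2 s"
        time.sleep(2)
        media_queue.put(f"chunk-{chunk_id}")   # would be decoded frames in practice
    media_queue.put(None)            # sentinel: no more chunks

def consumer():
    # Plays whatever is in the queue, blocking until the next chunk arrives.
    while True:
        item = media_queue.get()     # blocks, so playback never busy-waits
        if item is None:
            break
        print("playing", item)       # real code would render frames/audio here

threading.Thread(target=producer, daemon=True).start()
consumer()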
The program is not built yet and I am still figuring out the pieces. In the code below I have tried something similar by going through multiple answers and blog posts: for a single video, I loop over all the video/audio frames and store them in a container. However, this approach using ffpyplayer does not work, and I cannot find a way to step through audio frames one by one the way I can with video frames.
How can I achieve this? In the program below, how can I store both the video and the audio frames first and then iterate over them to play/display them?
from ffpyplayer.player import MediaPlayer
from ffpyplayer.pic import Image
import cv2

video_path = "output_1.mp4"
player = MediaPlayer(video_path)

# Initialize a list to store frames and audio frames
media_queue = []  # A temporary queue to store frames and audio frames
frame_index = 0   # To keep track of the frame index

cap = cv2.VideoCapture(video_path)

while True:
    ret, frame = cap.read()
    audio_frame, val_audio = player.get_frame()  # Get the next audio frame

    if ret != 'eof' and frame is not None:
        img = frame
        media_queue.append(('video', img))

    if val_audio != 'eof' and audio_frame is not None:
        audio = audio_frame[0]
        media_queue.append(('audio', audio))

    if ret == 'eof' and val_audio == 'eof':
        break

    frame_index += 1

print(f"Total frames and audio frames processed: {frame_index}")

# Play Frames and Audio from Queue
for media_type, media_data in media_queue:
    if media_type == 'video':
        img_data = Image(media_data[0], media_data[1])
        cv2.imshow("Frame", img_data.image)  # Display the frame
    elif media_type == 'audio':
        player.set_volume(1.0)
        player.set_audio_frame(media_data)

    if cv2.waitKey(25) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
I know the above program is wrong. Is there any way to stream both video and audio by storing them in some container first and then looping through it? Even an algorithm and an explanation of what I should look at would help; I can try to make it work from there.
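For reference, this is the kind of store-then-play loop I am aiming for, sketched with PyAV (pip install av) instead of ffpyplayer, since PyAV lets me decode audio and video frames separately. I have not verified this is the right approach, and the audio playback step is still missing (the audio ndarrays would have to go to something like sounddevice/pyaudio); all names besides the PyAV/cv2 calls are just my own.

import av
import cv2

video_path = "output_1.mp4"   # same file as above
media_queue = []              # shared container: ('video', ndarray) / ('audio', ndarray)

with av.open(video_path) as container:
    video_stream = container.streams.video[0]
    audio_stream = container.streams.audio[0]
    fps = float(video_stream.average_rate or 25)  # used for the display delay later

    # Demux both streams; decoded frames arrive roughly in presentation order.
    for packet in container.demux(video_stream, audio_stream):
        for frame in packet.decode():
            if isinstance(frame, av.VideoFrame):
                # BGR ndarray so cv2.imshow can display it directly
                media_queue.append(('video', frame.to_ndarray(format='bgr24')))
            elif isinstance(frame, av.AudioFrame):
                # raw PCM samples as an ndarray
                media_queue.append(('audio', frame.to_ndarray()))

print(f"Stored {len(media_queue)} video/audio frames")

# Playback loop: display the video frames; the audio frames would be handed
# to an audio output library here instead of ffpyplayer.
delay_ms = int(1000 / fps)
for media_type, data in media_queue:
    if media_type == 'video':
        cv2.imshow("Frame", data)
        if cv2.waitKey(delay_ms) & 0xFF == ord("q"):
            break
cv2.destroyAllWindows()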