I'm trying to capture a single image from an H.264 video stream on my Raspberry Pi. The stream is produced by raspivid and sent over a websocket. I'm trying to follow this post, but I cannot display a correct image with imshow(). I also tried .reshape(), but got ValueError: cannot reshape array of size 3607 into shape (480,640,3)
On the client side, I successfully connect to the video stream and receive incoming bytes. I guessed the first payload could be decoded to an image, so I wrote the following code.
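For reference, here is the arithmetic behind that ValueError (a stdlib-only sketch, using only the sizes quoted in the error message):

```python
# Sizes taken from the ValueError above; nothing else is assumed.
width, height, channels = 640, 480, 3

raw_frame_bytes = width * height * channels  # bytes one uncompressed BGR frame would need
payload_bytes = 3607                         # size of the array frombuffer() produced

print(raw_frame_bytes)                   # 921600
print(payload_bytes < raw_frame_bytes)   # True: the payload is far too small to be a raw frame
```

So the received bytes cannot be a raw pixel buffer of that shape; they must be compressed data.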
import asyncio
import json

import cv2
import numpy
import websockets

async def get_image_from_h264_streaming():
    uri = "ws://127.0.0.1:8080"
    async with websockets.connect(uri) as websocket:
        # First message: stream metadata as JSON
        frame = json.loads(await websocket.recv())
        print(frame)
        width, height = frame["width"], frame["height"]

        # Second message: binary frame data
        response = await websocket.recv()
        print(response)

        # Transform the bytes read into a numpy array
        in_frame = (
            numpy
            .frombuffer(response, numpy.uint8)
            # .reshape([height, width, 3])
        )

        # Display the frame
        cv2.imshow('in_frame', in_frame)
        cv2.waitKey(0)

asyncio.get_event_loop().run_until_complete(get_image_from_h264_streaming())
print(frame) shows
{'action': 'init', 'width': 640, 'height': 480}
print(response) shows
b"\x00\x00\x00\x01'B\x80(\x95\xa0(\x0fh\x0..............xfc\x9f\xff\xf9?\xff\xf2\x7f\xff\xe4\x80"
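For context, those leading bytes can be parsed as an H.264 Annex B start code followed by a NAL unit header (a stdlib-only sketch; the 0x27 byte after the start code works out to NAL unit type 7, i.e. an SPS, so this is a compressed bitstream rather than pixel data):

```python
ANNEX_B_START = b"\x00\x00\x00\x01"

def nal_unit_type(payload: bytes) -> int:
    """Return the NAL unit type of the first NAL in an Annex B payload."""
    if not payload.startswith(ANNEX_B_START):
        raise ValueError("payload does not start with an Annex B start code")
    # The low 5 bits of the header byte give the NAL type
    # (7 = SPS, 8 = PPS, 5 = IDR slice).
    return payload[len(ANNEX_B_START)] & 0x1F

sample = b"\x00\x00\x00\x01'B\x80("  # first bytes of the dump above; "'" is 0x27
print(nal_unit_type(sample))          # 7
```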
Any suggestions?
from Capture first image from h.264 video streaming using websocket - Python