I am using Python with OpenCV 3.4.
I have a system composed of 2 cameras that I want to use to track an object and get its trajectory, then its speed.
I am currently able to calibrate each of my cameras, both intrinsically and extrinsically. I can track my object through the video and get its 2D coordinates in each image plane.
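For context, the parameters returned by trajectory_utils.get_extrinsic_parameters (used below) come from a standard calibration step. A minimal sketch of what that step looks like, assuming object_points/image_points are the usual 3D/2D correspondences from several views of a calibration target, and world_points/world_image_points are reference points with known coordinates in a common world frame (all names here are placeholders, not part of my actual code):

import cv2

def calibrate_one_camera(object_points, image_points, image_size,
                         world_points, world_image_points):
    # Intrinsics and distortion from several views of a known target
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    # Extrinsics: the camera pose w.r.t. the common world frame,
    # estimated from one view of the known reference points
    _, rotation_vector, translation_vector = cv2.solvePnP(
        world_points, world_image_points, camera_matrix, dist_coeffs)
    return camera_matrix, dist_coeffs, rotation_vector, translation_vector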
My problem now is that I would like to project the points from both 2D image planes into 3D points. I've tried functions such as cv2.triangulatePoints, but it doesn't seem to work properly. Here is my current function to get the 3D coordinates; it returns coordinates that seem a little bit off compared to the actual ones:
import cv2

def get_3d_coord(left_two_d_coords, right_two_d_coords):
    # Shape each 2D track as an (N, 1, 2) array
    pt1 = left_two_d_coords.reshape((len(left_two_d_coords), 1, 2))
    pt2 = right_two_d_coords.reshape((len(right_two_d_coords), 1, 2))

    # Load the calibration results of each camera
    extrinsic_left_camera_matrix, left_distortion_coeffs, \
        extrinsic_left_rotation_vector, extrinsic_left_translation_vector = \
        trajectory_utils.get_extrinsic_parameters(1)
    extrinsic_right_camera_matrix, right_distortion_coeffs, \
        extrinsic_right_rotation_vector, extrinsic_right_translation_vector = \
        trajectory_utils.get_extrinsic_parameters(2)

    # Keep only the points seen in both views; returns arrays of the same size
    (pt1, pt2) = correspondingPoints(pt1, pt2)

    # 3x4 projection matrix of each camera
    projection1 = computeProjMat(extrinsic_left_camera_matrix,
                                 extrinsic_left_rotation_vector,
                                 extrinsic_left_translation_vector)
    projection2 = computeProjMat(extrinsic_right_camera_matrix,
                                 extrinsic_right_rotation_vector,
                                 extrinsic_right_translation_vector)

    # Triangulate: out is a 4xN array of homogeneous coordinates
    out = cv2.triangulatePoints(projection1, projection2, pt1, pt2)

    # Convert homogeneous coordinates to Euclidean: divide x, y, z by w
    point3D = []
    for idx in range(out.shape[1]):
        w = out[3][idx]
        point3D.append([out[0][idx] / w, out[1][idx] / w, out[2][idx] / w])
    return point3D
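correspondingPoints and computeProjMat are my own helpers. For reference, computeProjMat is meant to build the standard 3x4 projection matrix P = K [R|t]; a minimal sketch of such a helper, assuming the rotation is stored as a Rodrigues vector as returned by cv2.solvePnP:

import cv2
import numpy as np

def computeProjMat(camera_matrix, rotation_vector, translation_vector):
    # Rodrigues vector -> 3x3 rotation matrix
    rotation_matrix, _ = cv2.Rodrigues(rotation_vector)
    # Stack [R|t] into a 3x4 pose matrix, then apply the intrinsics
    pose = np.hstack((rotation_matrix, translation_vector.reshape(3, 1)))
    return camera_matrix @ pose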