Monday, 22 February 2021

3D Reconstruction from only two 2D images w/ iPhone 7 camera

The Main Problem:

Reconstruct an object from only two images (or at least the part of it that can be perceived from those two images).

About Camera:

I'm using an iPhone 7 camera and taking my own pictures, which means I can calibrate the camera. The focal length (4mm) and sensor width (3.99mm) are available here: https://www.anandtech.com/show/10685/the-iphone-7-and-iphone-7-plus-review/6. From these known values I can compute the focal length in pixels, and c_x and c_y from the image width and height, but I'm not sure this calibration is correct.
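For reference, the focal length in pixels can be derived from those specs as f_px = f_mm * image_width_px / sensor_width_mm. A minimal sketch (the 3024x4032 image size is an assumption for illustration, not a value from my setup):

```python
# Hypothetical sketch: deriving an initial intrinsic matrix from the
# published iPhone 7 specs (4 mm focal length, 3.99 mm sensor width).
# The image dimensions below are assumed, not measured.
import numpy as np

focal_mm = 4.0             # focal length from the spec sheet
sensor_width_mm = 3.99     # sensor width from the spec sheet
img_w, img_h = 3024, 4032  # assumed 12 MP portrait image size

fx = focal_mm * img_w / sensor_width_mm  # focal length in pixels
fy = fx                                  # assume square pixels
cx, cy = img_w / 2.0, img_h / 2.0        # assume principal point at image center

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```

This only gives a starting estimate; a chessboard calibration (as done below) should refine it.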

My current approach:

I'm following a very similar approach as the one used in this post: 3D reconstruction from 2 images with baseline and single camera calibration

Algorithm:

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03)

feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7, blockSize=7)

lk_params = dict(winSize=(15, 15), maxLevel=20, criteria=criteria)

  • Corner detection on the left image using corners1 = cv2.goodFeaturesToTrack(gray, mask=None, **feature_params)
  • Refine corners found using corners1= cv2.cornerSubPix(gray,corners1,(11,11),(-1,-1),criteria)
  • Find corners on the right image using optical flow corners_r, st, err = cv2.calcOpticalFlowPyrLK(gray, r_gray, corners1, None, **lk_params)
  • Keep only the good points: good_left = corners1[st==1] and good_right = corners_r[st==1] (the feature-matched images)
  • Find the fundamental matrix from the selected points F, mask = cv2.findFundamentalMat(good_left,good_right,cv2.FM_RANSAC)
  • Calculate H1, H2 for rectification of the images using _, H1, H2 = cv2.stereoRectifyUncalibrated(good_left, good_right, F, (right.shape[1], right.shape[0]), threshold=5.0). Note that imgSize is expected as (width, height), so right.shape[:2], which is (height, width), has to be swapped.
  • Rectify the images: new_left = cv2.warpPerspective(img5, H1, (img5.shape[1], img5.shape[0]), flags=cv2.INTER_LINEAR) and new_right = cv2.warpPerspective(img3, H2, (img3.shape[1], img3.shape[0]), flags=cv2.INTER_LINEAR). Here dsize is also (width, height), and the original cv2.INTER_LINEAR or cv2.INTER_NEAREST always evaluates to cv2.INTER_LINEAR in Python, so pick one flag and pass it via flags=. (Images depicted below, Fig. 1)
  • Calculate a disparity map using SGBM (StereoSGBM) in OpenCV.
  • Reconstruct the 3D object using cv2.reprojectImageTo3D

Problems:

  • Currently I'm not undistorting my images, because undistorting with my camera matrix array([[3252.03085, 0., 1540.93581], [0., 3267.46422, 1917.92736], [0., 0., 1.]]) and dist_coeffs array([[0.16860882, -1.25881837, -0.01130431, -0.01046869, 2.09480747]]) visibly distorts them. I calculated these values using 46 images of a chessboard pattern taken at different angles and perspectives.
  • Using warpPerspective with H1 and H2 on the corresponding images shears them quite strongly. Look at the sheared images here. (Fig. 1)

Questions:

  • Following the steps here: https://docs.opencv.org/3.1.0/dc/dbb/tutorial_py_calibration.html, I obtained a total reprojection error of 0.457120388 for camera calibration (not bad, I think). Is this a good error?
  • Is the shearing in my images how the result should actually look? I suspect the extra shearing comes from the optical-flow feature matching, where a few features may not truly match. What are good ways of filtering the matches further to be more precise? I've heard that Zhang's algorithm can also correct this shearing, but I'm not sure how I would apply it. This needs to work for other images as well, so ideally I want an approach that is robust, not one tuned to just these two images.

Sorry if this wasn't precise enough; please let me know if you need more information. I've been searching for answers for a while, and I'm only asking because I have nowhere else to turn so far.

Any help is much appreciated in advance. Thank you guys!


