I'm trying to make an app that processes a set of frames, stored as JPGs inside the app, using the Google Mobile Vision API.
The pipeline is simple.
1) I create the detector with some options:
// All landmarks and classifications; tracking disabled since the frames are independent JPGs.
_options = @{
    GMVDetectorFaceLandmarkType : @(GMVDetectorFaceLandmarkAll),
    GMVDetectorFaceClassificationType : @(GMVDetectorFaceClassificationAll),
    GMVDetectorFaceTrackingEnabled : @(NO)
};
_faceDetector = [GMVDetector detectorOfType:GMVDetectorTypeFace options:_options];
2) I read a frame with this method:
UIImage *image = [UIImage imageWithContentsOfFile:imFile];
The path contained in imFile is correct; I can see the image representation.
3) Finally, I process the frame:
NSArray<GMVFaceFeature *> *faces = [_faceDetector featuresInImage:image options:nil];
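Putting the three steps together, the per-frame loop looks essentially like this (a sketch; framePaths is a hypothetical NSArray<NSString *> of the JPG paths, and the detector is created once before the loop):

for (NSString *imFile in framePaths) {
    // Step 2: load the frame from disk.
    UIImage *image = [UIImage imageWithContentsOfFile:imFile];
    // Step 3: run face detection on the still image.
    NSArray<GMVFaceFeature *> *faces = [_faceDetector featuresInImage:image options:nil];
    // ... consume the detected faces ...
}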
With this code I can process some frames, but when analyzing a lot of them the app's memory usage keeps increasing until the app is killed automatically.
I've tried to track the memory growth, and as far as I can tell it comes from the last step, inside [_faceDetector featuresInImage:...].
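One thing I'm not sure about is autorelease behavior: imageWithContentsOfFile: and featuresInImage:options: both return autoreleased objects, so in a long loop they may only be freed when the enclosing autorelease pool drains. Here is a sketch of the same loop with a per-iteration pool (again using the hypothetical framePaths array), which should separate deferred autorelease from a genuine leak:

for (NSString *imFile in framePaths) {
    @autoreleasepool {
        // A pool per iteration drains the autoreleased UIImage, the
        // feature array, and anything the detector autoreleases
        // internally after each frame, instead of letting them pile up.
        UIImage *image = [UIImage imageWithContentsOfFile:imFile];
        NSArray<GMVFaceFeature *> *faces = [_faceDetector featuresInImage:image options:nil];
        // ... consume the detected faces ...
    }
}

If memory still grows with per-iteration pools, that would point at a leak inside the detector itself rather than deferred autorelease.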
Is there something I am doing wrong, or is there a memory leak inside the detector? I've tried to find a known issue from Google but couldn't manage to find one.