I am trying to load a tflite model and run it on an image.
My tflite model has the dimensions you see in the image. 
Right now, I am receiving:
Cannot copy to a TensorFlowLite tensor (input_1) with 49152 bytes from a Java Buffer with 175584 bytes.
I can't understand how to work with the input and output tensor sizes. Right now I am initializing the buffers using the input image size, and the output image size will be the input * 4.
At which point do I have to "add" the 1 * 64 * 64 * 3 dimensions, since I need to handle every input image size?
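For what it's worth, the two byte counts in the error message line up exactly with a FLOAT32 64 * 64 * 3 model input versus an unresized 124 * 118 * 3 image. A quick sketch of the arithmetic (assuming 4 bytes per FLOAT32 value):

```kotlin
fun main() {
    val bytesPerFloat = 4

    // Bytes expected by the model's input tensor: 1 * 64 * 64 * 3 float32 values
    val modelInputBytes = 1 * 64 * 64 * 3 * bytesPerFloat   // 49152

    // Bytes in a buffer built from the raw 124 * 118 * 3 image
    val imageBytes = 124 * 118 * 3 * bytesPerFloat          // 175584

    println("model expects $modelInputBytes bytes, buffer holds $imageBytes bytes")
}
```

So the numbers suggest the image reaching the interpreter was never resized down to 64 * 64.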
try {
    tflitemodel = loadModelFile()
    tflite = Interpreter(tflitemodel, options)
} catch (e: IOException) {
    Log.e(TAG, "Fail to load model", e)
}

val imageTensorIndex = 0
val imageShape: IntArray = tflite.getInputTensor(imageTensorIndex).shape()
val imageDataType: DataType = tflite.getInputTensor(imageTensorIndex).dataType()

// Build a TensorImage object
var inputImageBuffer = TensorImage(imageDataType)

// Load the Bitmap
inputImageBuffer.load(bitmap)

// Preprocess the image
val imgprocessor = ImageProcessor.Builder()
    .add(ResizeOp(inputImageBuffer.height,
                  inputImageBuffer.width,
                  ResizeOp.ResizeMethod.NEAREST_NEIGHBOR))
    //.add(NormalizeOp(127.5f, 127.5f))
    //.add(QuantizeOp(128.0f, 1 / 128.0f))
    .build()

// Process the image
val processedImage = imgprocessor.process(inputImageBuffer)

// Access the buffer (ByteBuffer) of the processedImage
val imageBuffer = processedImage.buffer
val imageTensorBuffer = processedImage.tensorBuffer

// Output result
val outputImageBuffer = TensorBuffer.createFixedSize(
    intArrayOf(inputImageBuffer.height * 4,
               inputImageBuffer.width * 4),
    DataType.FLOAT32)

// Normalize the tensor given the mean and the standard deviation
val tensorProcessor = TensorProcessor.Builder()
    .add(NormalizeOp(127.5f, 127.5f))
    .add(CastOp(DataType.FLOAT32))
    .build()

val processedOutputTensor = tensorProcessor.process(outputImageBuffer)

tflite.run(imageTensorBuffer.buffer, processedOutputTensor.buffer)
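One likely cause in the block above: the ResizeOp is given inputImageBuffer.height and inputImageBuffer.width, i.e. the image's own size, so the image is never scaled to the 64 * 64 the model input expects. A hedged sketch of how the resize target and the output allocation could instead be read from the interpreter itself (assuming the TensorFlow Lite Support library, NHWC shapes of the form [1, height, width, channels], and that tflite and bitmap are already set up as above):

```kotlin
import org.tensorflow.lite.DataType
import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.support.image.ops.ResizeOp
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer

// Shapes reported by the model, e.g. [1, 64, 64, 3] in, [1, 256, 256, 3] out
val inputShape = tflite.getInputTensor(0).shape()
val outputShape = tflite.getOutputTensor(0).shape()

// Resize every input image to the model's own height/width, not the image's
val processor = ImageProcessor.Builder()
    .add(ResizeOp(inputShape[1], inputShape[2],
                  ResizeOp.ResizeMethod.NEAREST_NEIGHBOR))
    .build()

val tensorImage = TensorImage(tflite.getInputTensor(0).dataType())
tensorImage.load(bitmap)
val resized = processor.process(tensorImage)

// Allocate the output buffer from the model's reported output shape,
// rather than from the original image's dimensions
val outputBuffer = TensorBuffer.createFixedSize(outputShape, DataType.FLOAT32)

tflite.run(resized.buffer, outputBuffer.buffer)
```

This way both buffers are sized from what the model reports, so the byte counts should match by construction.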
I tried casting the output tensor to both FLOAT32 and UINT8.
UPDATE
I also tried this:
try {
    tflitemodel = loadModelFile()
    tflite = Interpreter(tflitemodel, options)
} catch (e: IOException) {
    Log.e(TAG, "Fail to load model", e)
}

val imageTensorIndex = 0
val imageDataType: DataType = tflite.getInputTensor(imageTensorIndex).dataType()

val imgprocessor = ImageProcessor.Builder()
    .add(ResizeOp(64, 64, ResizeOp.ResizeMethod.NEAREST_NEIGHBOR))
    // .add(NormalizeOp(0.0f, 255.0f))
    .add(CastOp(DataType.FLOAT32))
    .build()

val inpIm = TensorImage(imageDataType)
inpIm.load(bitmap)
val processedImage = imgprocessor.process(inpIm)

val output = TensorBuffer.createFixedSize(
    intArrayOf(124 * 4,
               118 * 4,
               3,
               1),
    DataType.FLOAT32)

val tensorProcessor = TensorProcessor.Builder()
    .add(NormalizeOp(0.0f, 255.0f))
    .add(CastOp(DataType.FLOAT32))
    .build()

val processedOutputTensor = tensorProcessor.process(output)

tflite.run(processedImage.buffer, processedOutputTensor.buffer)
which runs, but produces an incomplete image.
Note that the image I am currently using as input has dimensions 124 * 118 * 3.
The output image will have dimensions (124 * 4) * (118 * 4) * 3.
The model needs 64 * 64 * 3 as its input layer.
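If the model's input layer is fixed at 64 * 64 * 3 and it upscales by 4, then its output is fixed at 256 * 256 * 3 regardless of the original bitmap's size, so an output buffer allocated as (124 * 4) * (118 * 4) * 3 cannot match it. A quick arithmetic sketch (assuming FLOAT32, 4 bytes per value):

```kotlin
fun main() {
    val bytesPerFloat = 4

    // What a 4x model with a 64 * 64 * 3 input actually produces
    val modelOutputBytes = 1 * (64 * 4) * (64 * 4) * 3 * bytesPerFloat  // 786432

    // What the snippet above allocates: (124 * 4) * (118 * 4) * 3
    val allocatedBytes = (124 * 4) * (118 * 4) * 3 * bytesPerFloat      // 2809344

    println("model output: $modelOutputBytes bytes, allocated: $allocatedBytes bytes")
}
```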
