I've developed an Android app where you can add images to the screen and pan/rotate/scale them. Technically, I have a fixed-aspect parent layout that contains these images (ImageViews), whose width and height are set to match_parent. I can then save their positions on the server and load them later on the same or another device. Basically, I simply save the Matrix values. However, translation x/y are absolute values (in pixels) and screen sizes differ, so I store them as percentage values instead (translation x is divided by the parent width and translation y by the parent height). When loading images, I just multiply translation x/y by the parent layout's width/height, and everything looks the same even on a different device.
Here's how I do it:
float[] values = new float[9];
parentLayout.matrix.getValues(values); // row-major 3x3 matrix values
values[Matrix.MTRANS_X] /= parentLayout.getWidth();  // store tx as a fraction of parent width
values[Matrix.MTRANS_Y] /= parentLayout.getHeight(); // store ty as a fraction of parent height
Then, in the loading function, I simply multiply these values back.
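To make the save/load round trip concrete, here's a minimal plain-Java sketch (no Android classes; the 9-element layout and the indices 2/5 mirror Matrix.getValues() and Matrix.MTRANS_X/MTRANS_Y, and all sizes are example numbers, not from the app):

```java
public class TranslationRoundTrip {
    // Indices of the translation entries in android.graphics.Matrix's
    // 9-element row-major values array (Matrix.MTRANS_X / Matrix.MTRANS_Y)
    static final int MTRANS_X = 2, MTRANS_Y = 5;

    // Pixels -> fraction of parent size, done before saving to the server
    static float[] normalize(float[] values, float w, float h) {
        float[] out = values.clone();
        out[MTRANS_X] /= w;
        out[MTRANS_Y] /= h;
        return out;
    }

    // Fraction -> pixels for the parent layout we're loading into
    static float[] denormalize(float[] values, float w, float h) {
        float[] out = values.clone();
        out[MTRANS_X] *= w;
        out[MTRANS_Y] *= h;
        return out;
    }

    public static void main(String[] args) {
        // Row-major 3x3 matrix: scale 0.5, translated by (360, 240) px,
        // saved on a 720x480 parent
        float[] saved = normalize(
                new float[]{0.5f, 0f, 360f, 0f, 0.5f, 240f, 0f, 0f, 1f}, 720f, 480f);

        // Loaded on another device whose parent layout is 1080x720
        float[] loaded = denormalize(saved, 1080f, 720f);
        System.out.println(loaded[MTRANS_X] + ", " + loaded[MTRANS_Y]); // 540.0, 360.0
    }
}
```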
Image manipulation is done using postTranslate/postScale/postRotate with an anchor point (for a 2-finger gesture it's the midpoint between the fingers).
Now, I want to port this app to iOS and use a similar technique to get correct image positions on iOS devices as well. CGAffineTransform seems very similar to the Matrix class. The order of the fields is different, but they seem to work the same way.
Let's say this is a sample values array from the Android app:
let a: [CGFloat] = [0.35355338, -0.35355338, 0.44107446, 0.35355338, 0.35355338, 0.058058262]
I don't save the last 3 fields, because they're always 0, 0 and 1, and on iOS they can't even be set (the documentation also states they're [0, 0, 1]). So this is what the "conversion" looks like:
let tx = a[2] * width   // multiply percentage translation x by the parent width
let ty = a[5] * height  // multiply percentage translation y by the parent height
let transform = CGAffineTransform(a: a[0], b: a[3], c: a[1], d: a[4], tx: tx, ty: ty)
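The reordering in that call works because the two representations are transposes of each other: Matrix stores a row-major 3x3 that multiplies column vectors, while CGAffineTransform multiplies row vectors, so its (a, b, c, d, tx, ty) is the transpose of Android's top 2x3 block. A plain-Java sketch of the mapping (toCGAffineTransform is a hypothetical helper of mine, and the parent size is just an example):

```java
public class FieldMapping {
    // Returns (a, b, c, d, tx, ty) built from the first 6 Android Matrix
    // values, with the percentage translations scaled to pixels
    static double[] toCGAffineTransform(double[] v, double width, double height) {
        return new double[]{v[0], v[3], v[1], v[4], v[2] * width, v[5] * height};
    }

    public static void main(String[] args) {
        // The sample values array from the question
        double[] a = {0.35355338, -0.35355338, 0.44107446,
                      0.35355338,  0.35355338, 0.058058262};
        // Example parent size; in the app this is the iOS parent view's size
        double[] cg = toCGAffineTransform(a, 1000, 1000);
        System.out.println(java.util.Arrays.toString(cg));
    }
}
```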
However, this only works for very simple cases. In other cases the image is usually misplaced or even completely messed up.
I've noticed that, by default, the anchor point on Android is at [0, 0], but on iOS it's the image's center.
For example:
float midX = parentLayout.getWidth() / 2.0f;
float midY = parentLayout.getHeight() / 2.0f;
parentLayout.matrix.postScale(0.5f, 0.5f, midX, midY);
Gives the following matrix:
0.5, 0.0, 360.0,
0.0, 0.5, 240.0,
0.0, 0.0, 1.0
[360.0, 240.0] is simply a vector from the top-left corner.
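That vector falls out of the usual pivot decomposition: postScale(s, s, px, py) applies T(pivot) · S(s) · T(-pivot). A sketch with java.awt.geom.AffineTransform from the plain JDK standing in for android.graphics.Matrix, assuming a 1440x960 parent so that midX/midY match the logged numbers above:

```java
import java.awt.geom.AffineTransform;

public class PivotScale {
    // Scale about a pivot, decomposed as T(p) * S(s) * T(-p) -- the same
    // matrix Android's postScale(s, s, px, py) multiplies in
    static AffineTransform pivotScale(double s, double px, double py) {
        AffineTransform m = new AffineTransform();
        m.translate(px, py);   // move the pivot back into place
        m.scale(s, s);         // scale about the origin
        m.translate(-px, -py); // move the pivot to the origin
        return m;
    }

    public static void main(String[] args) {
        // midX = 720, midY = 480 (assumed 1440x960 parent)
        AffineTransform m = pivotScale(0.5, 720, 480);
        System.out.println(m.getScaleX() + ", "
                + m.getTranslateX() + ", " + m.getTranslateY());
        // 0.5, 360.0, 240.0 -- the matrix from the Android log
    }
}
```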
However, on iOS I don't have to provide the midpoint, because the transform is already applied around the center:
let transform = CGAffineTransform(scaleX: 0.5, y: 0.5)
and it gives me:
CGAffineTransform(a: 0.5, b: 0.0, c: 0.0, d: 0.5, tx: 0.0, ty: 0.0)
I've tried setting a different anchor point on iOS:
parentView.layer.anchorPoint = CGPoint(x: 0, y: 0)
However, I cannot get the right results. I've tried translating by -width * 0.5 and -height * 0.5, scaling, and translating back, but I don't get the same result as on Android.
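For reference, here's that "translate, scale, translate back" idea written out as a conjugation, again with java.awt.geom.AffineTransform standing in for both matrix types. For the pure-scale example above it does reproduce the center-anchored iOS matrix, which is why I expected this approach to generalize:

```java
import java.awt.geom.AffineTransform;

public class AnchorConversion {
    // The relation I expected to hold: with the iOS anchorPoint at the view
    // center c, the on-screen result should match Android's
    // top-left-anchored matrix when M_ios = T(-c) * M_android * T(c)
    static AffineTransform toCenterAnchored(AffineTransform android, double cx, double cy) {
        AffineTransform ios = AffineTransform.getTranslateInstance(-cx, -cy);
        ios.concatenate(android); // T(-c) * M_android
        ios.translate(cx, cy);    // ... * T(c)
        return ios;
    }

    public static void main(String[] args) {
        // The Android example above: scale 0.5 about the parent center (720, 480),
        // i.e. [0.5, 0, 360; 0, 0.5, 240]
        AffineTransform android = new AffineTransform(0.5, 0, 0, 0.5, 360, 240);
        AffineTransform ios = toCenterAnchored(android, 720, 480);
        // ios is a pure 0.5 scale with zero translation, i.e. the
        // center-anchored CGAffineTransform(scaleX: 0.5, y: 0.5)
        System.out.println(ios.getScaleX() + ", "
                + ios.getTranslateX() + ", " + ios.getTranslateY());
    }
}
```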
In short: how can I convert a Matrix from Android into a CGAffineTransform on iOS to achieve the same look?
I'm also attaching as-simple-as-possible demo projects. The objective is to copy the output array from Android into the "a" array in the iOS project and modify the CGAffineTransform calculation (and/or the parentView setup) in such a way that the image looks the same on both platforms, no matter what the Android log output is.
Android: https://github.com/piotrros/MatrixAndroid
iOS: https://github.com/piotrros/MatrixIOS
from Converting Matrix to CGAffineTransform