iOSMotion questions

Working on my first iOS project for which I need to extract device orientation. Some questions:

  1. It appears that iOSMotion implements the arbitrary-X reference frame: CMAttitudeReferenceFrame.xArbitraryZVertical. Is it possible to use other reference frames, preferably CMAttitudeReferenceFrame.xTrueNorthZVertical? This would be essential for compass-style apps.

  2. Is it possible to access device orientation as a rotation matrix or quaternion (which iOS can provide directly)? These are less susceptible to gimbal lock than the Euler angles (pitch, roll, and yaw).

  3. Is it possible to access the magnetometerData.magneticField info in Xojo?

Thanks in advance for any help!

Okay, I’ve figured out how to back out the iOS rotation matrix from the roll, pitch, and yaw. For others who might be interested, it is:

r11 = cos(roll)*cos(yaw) - sin(roll)*sin(pitch)*sin(yaw)
r12 = cos(yaw)*sin(roll)*sin(pitch) + cos(roll)*sin(yaw)
r13 = -sin(roll)*cos(pitch)
r21 = -cos(pitch)*sin(yaw)
r22 = cos(pitch)*cos(yaw)
r23 = sin(pitch)
r31 = cos(roll)*sin(pitch)*sin(yaw) + cos(yaw)*sin(roll)
r32 = sin(yaw)*sin(roll) - cos(roll)*cos(yaw)*sin(pitch)
r33 = cos(roll)*cos(pitch)
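
Here is a rough, untested Xojo sketch of the same thing as code. The zero-based 2-D array layout is just my own choice, and roll, pitch, and yaw are assumed to already be in radians:

[code]dim roll, pitch, yaw as Double // attitude angles in radians from the motion data
dim r(2, 2) as Double // r(0, 0) = r11, r(0, 1) = r12, ... r(2, 2) = r33
r(0, 0) = Cos(roll)*Cos(yaw) - Sin(roll)*Sin(pitch)*Sin(yaw)
r(0, 1) = Cos(yaw)*Sin(roll)*Sin(pitch) + Cos(roll)*Sin(yaw)
r(0, 2) = -Sin(roll)*Cos(pitch)
r(1, 0) = -Cos(pitch)*Sin(yaw)
r(1, 1) = Cos(pitch)*Cos(yaw)
r(1, 2) = Sin(pitch)
r(2, 0) = Cos(roll)*Sin(pitch)*Sin(yaw) + Cos(yaw)*Sin(roll)
r(2, 1) = Sin(yaw)*Sin(roll) - Cos(roll)*Cos(yaw)*Sin(pitch)
r(2, 2) = Cos(roll)*Cos(pitch)[/code]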

Still really hoping that someone can tell me how to set the reference frame to CMAttitudeReferenceFrame.xTrueNorthZVertical. This is pretty essential for many types of apps that need to use device orientation in a real environment (navigation, using sensors to collect data, etc.).

Hi Richard,
Here is an untested approach that should be able to change the reference frame to what you need. I haven’t tried it myself, but it’s straightforward to implement with two declares:

[code]// get a pointer to the underlying motion manager object
dim m as iOSMotion = iOSMotion.GetObject
dim motionObj as ptr = m.Handle

declare sub startDeviceMotionUpdatesUsingReferenceFrame_ lib "CoreMotion.framework" selector "startDeviceMotionUpdatesUsingReferenceFrame:" (obj_id as ptr, referenceFrame as Integer)
declare sub stopDeviceMotionUpdates_ lib "CoreMotion.framework" selector "stopDeviceMotionUpdates" (obj_id as ptr)

const CMAttitudeReferenceFrame_XTrueNorthZVertical as Integer = 8 // found using a Swift playground

// start the updates using the custom reference frame
startDeviceMotionUpdatesUsingReferenceFrame_(motionObj, CMAttitudeReferenceFrame_XTrueNorthZVertical)

// must call the custom stop function and not use Xojo's Enabled property; bad things will
// likely happen since we don't know what framework calls are made behind the scenes
// this can be in a separate method (you need to move the declare in that case too)
stopDeviceMotionUpdates_(motionObj)[/code]

Please let me know if this does the trick for you.
Jason

It works! Awesome, Jason, thanks so much. For what it’s worth, I develop free apps for academics so there’s a whole user community out there who will thank you as well. May I acknowledge your contribution?

What a great forum :slight_smile:

Rick

[quote=309225:@Richard Allmendinger]It works! Awesome, Jason, thanks so much. For what it’s worth, I develop free apps for academics so there’s a whole user community out there who will thank you as well. May I acknowledge your contribution?

What a great forum :slight_smile:

Rick[/quote]
Great, I’m glad that solved it! Of course you may acknowledge me.

Hey Richard,

How do I read the rotation matrix? I plotted the values on my screen, but I have a hard time seeing what is going on.

What I want to do is hold my iPhone in front of me and, by rotating the device, control several things.
In my iPhone and iPad views I only have portrait mode (home button) enabled. What I want to detect is whether the phone is in portrait or landscape orientation, and all the angles (in radians) in between. On my screen, I have a background image that I want to stay level.
The rotation should be detected whether I have the phone lying flat on my desk or am holding it upright. Is that at all possible? Have you figured that out? Or is the matrix not what I am looking for?

Thanks in advance :slight_smile:

With the rotation matrix, you can determine the position of any vector relative to the default coordinate system. That is, the rotation matrix performs a linear transformation between two coordinate systems. The direction cosines of a vector in the old coordinate system, multiplied by the components of the transformation matrix, give you the direction cosines of the same vector in the new coordinate system.
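
In code, that transformation is just three dot products. A minimal Xojo sketch (untested; the variable names are arbitrary):

[code]dim v(2) as Double // direction cosines of a vector in the old coordinate system
dim r(2, 2) as Double // the 3x3 rotation matrix (r11..r33 from the earlier post)
dim w(2) as Double // direction cosines of the same vector in the new coordinate system
for i as Integer = 0 to 2
  w(i) = r(i, 0)*v(0) + r(i, 1)*v(1) + r(i, 2)*v(2)
next[/code]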

I need a true north reference frame because I’m working on a geological app where I use the orientation of the phone itself to measure things. Thus, the vector perpendicular to the phone (the Z axis in Apple’s coordinate system; Y is the long axis of the phone and X the short axis, both parallel to the phone’s face) gives the orientation of the plane of the phone (i.e., the face) relative to the default coordinate system. In Xojo’s implementation of Core Motion, the initial reference frame has -Z downward (parallel to gravity) and X and Y determined by the orientation of the phone when iOSMotion was first enabled. A user holding the phone in portrait mode would want to check that a vector parallel to the Y axis is approximately vertical, the X axis horizontal, and +Z pointing towards the user.

If all you want is to determine whether the phone is held in portrait or landscape mode, I would think that using GravityAccelerationX (and Y and Z) would be sufficient. In landscape you would see the largest component of the gravity acceleration parallel to X, and in portrait mode parallel to Y. You would only need the rotation matrix if you wanted to determine the exact number of degrees that the phone has been rotated. If you want to get the rotation of a line parallel to the Y axis (i.e., the long axis of the phone) relative to its initial reference frame, you would multiply [0, 1, 0] by the transpose of the rotation matrix that I listed in my previous post. For the face of the phone you would multiply the unit vector perpendicular to the face, [0, 0, 1].
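
As a rough, untested sketch of the gravity approach (GravityAccelerationX and GravityAccelerationY are the property names mentioned above; the sign conventions are worth verifying on a device):

[code]dim gx, gy as Double // gravity components (e.g. GravityAccelerationX / GravityAccelerationY)

if Abs(gy) > Abs(gx) then
  // most of gravity lies along the long (Y) axis -> roughly portrait (or upside down)
else
  // most of gravity lies along the short (X) axis -> roughly landscape
end if

// for the in-between angles (e.g. to keep a background image level), the rotation of the
// screen away from upright portrait can be estimated as
dim screenAngle as Double = ATan2(-gx, -gy) // radians; check the sign convention of the reported gravity values[/code]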

[quote=309427:@Richard Allmendinger]With the rotation matrix, you can determine the position of any vector relative to the default coordinate system. That is, the rotation matrix performs a linear transformation between two coordinate systems. The direction cosines of a vector in the old coordinate system, multiplied by the components of the transformation matrix, give you the direction cosines of the same vector in the new coordinate system.

I need a true north reference frame because I’m working on a geological app where I use the orientation of the phone itself to measure things. Thus, the vector perpendicular to the phone (the Z axis in Apple’s coordinate system; Y is the long axis of the phone and X the short axis, both parallel to the phone’s face) gives the orientation of the plane of the phone (i.e., the face) relative to the default coordinate system. In Xojo’s implementation of Core Motion, the initial reference frame has -Z downward (parallel to gravity) and X and Y determined by the orientation of the phone when iOSMotion was first enabled. A user holding the phone in portrait mode would want to check that a vector parallel to the Y axis is approximately vertical, the X axis horizontal, and +Z pointing towards the user.

If all you want is to determine whether the phone is held in portrait or landscape mode, I would think that using GravityAccelerationX (and Y and Z) would be sufficient. In landscape you would see the largest component of the gravity acceleration parallel to X, and in portrait mode parallel to Y. You would only need the rotation matrix if you wanted to determine the exact number of degrees that the phone has been rotated. If you want to get the rotation of a line parallel to the Y axis (i.e., the long axis of the phone) relative to its initial reference frame, you would multiply [0, 1, 0] by the transpose of the rotation matrix that I listed in my previous post. For the face of the phone you would multiply the unit vector perpendicular to the face, [0, 0, 1].[/quote]

Wow… that probably looks more complex than it actually is. I guess I just have to re-read it a couple of times in order to get it. I started a new thread explaining my situation.
You made many things a lot clearer to me, though. I guess I have to do a lot more testing to get everything right.

And as mentioned… I want to display an image that stays level.
Actually… I want to create a “virtual world” that stays level, except for a few elements that are needed to control those level elements.

Is it possible to see an example to clarify some of these motion principles? For example, if we put an object in the middle of a view and tilt the screen, the object slides in that direction.