Internet of Things (IoT) and embedded device developers often come from a software engineering background. We’re not prepared to answer some robotics questions that are fundamental to our devices: “where am I pointed?” or “how am I oriented?”
Fortunately, there is a large existing body of research and code surrounding these problems. As input, your device will require a 6 or 9 Degree of Freedom (DOF) inertial measurement unit (IMU). An IMU is a combination of sensors, potentially including an accelerometer, a gyroscope, and a magnetometer. Count the three major geometric axes (X, Y, and Z) for each sensor and you get the total number of degrees of freedom. A 6 DOF IMU is usually an accelerometer and a gyroscope. A 9 DOF IMU also includes a magnetometer.
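To make the terminology concrete, here is one way you might represent a single raw reading in code. This is a sketch, not any particular vendor's API: the type name, fields, and units are my own assumptions, but the 6-vs-9 DOF distinction falls out of whether a magnetometer reading is present.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]  # one value per geometric axis (X, Y, Z)

@dataclass
class ImuSample:
    """One raw IMU reading. Units are typical choices, not universal."""
    accel: Vec3                 # linear acceleration, m/s^2
    gyro: Vec3                  # angular velocity, rad/s
    mag: Optional[Vec3] = None  # magnetic field, microtesla; None on a 6 DOF unit

    @property
    def dof(self) -> int:
        # 3 axes per sensor: accel + gyro = 6, plus magnetometer = 9
        return 9 if self.mag is not None else 6

six_dof = ImuSample(accel=(0.0, 0.0, 9.81), gyro=(0.0, 0.0, 0.0))
nine_dof = ImuSample(accel=(0.0, 0.0, 9.81), gyro=(0.0, 0.0, 0.0),
                     mag=(22.0, 5.0, -42.0))
```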
Great, what can you do with these raw sensor readings? The most obvious output is an Attitude and Heading Reference System (AHRS). Normally, this is roll, pitch, and yaw. You can think of each one as a rotation around one of the major geometric axes. Stack Overflow shows you some great code for deriving these AHRS values from raw IMU data. However, this assumes that you can trust each IMU reading. You can’t. Depending on the sample rate and external stimuli, you may find the readings from your accelerometer, gyroscope, and magnetometer incredibly variable. How do you get stable readings? Sensor fusion.
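The naive, Stack-Overflow-style derivation looks something like this: when the device is static, the accelerometer measures only gravity, so roll and pitch fall out of a couple of `atan2` calls. The function name and axis convention here are my own (conventions vary by board); the point is that a single noisy sample feeds straight into the angles, which is exactly why the output jitters.

```python
import math

def roll_pitch_from_accel(ax: float, ay: float, az: float):
    """Estimate roll and pitch (radians) from one accelerometer sample,
    assuming the device is static so only gravity is measured.
    Uses one common axis convention; yours may differ."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

# Flat on a table: gravity entirely on Z, so roll = pitch = 0.
level = roll_pitch_from_accel(0.0, 0.0, 9.81)

# Tilted 30 degrees nose-up: gravity splits between X and Z.
g, theta = 9.81, math.radians(30)
tilted = roll_pitch_from_accel(-g * math.sin(theta), 0.0, g * math.cos(theta))
```

Yaw cannot be recovered from the accelerometer alone (gravity is symmetric around the vertical axis), which is one reason 9 DOF units add a magnetometer.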
Sensor fusion algorithms combine the input from multiple sensors to stabilize and smooth AHRS output. For a 6 DOF IMU, the classic choice is the Kalman filter, a recursive state estimator. For a 9 DOF IMU, a popular choice is the Madgwick filter, which uses gradient descent to correct the gyroscope’s orientation estimate. Luckily, you don’t have to implement these yourself. Mario Garcia has a thorough and well-documented Python library to help you through this. In addition to implementing these filters, his library is built to handle 3D rotations in a variety of formats. Really impressive and thorough.
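If you want a feel for what fusion buys you before reaching for a library, the simplest cousin of these filters is the complementary filter: trust the gyroscope short-term (smooth, but it drifts as you integrate) and the accelerometer long-term (noisy, but anchored to gravity). This is a minimal single-axis sketch, not the Kalman or Madgwick algorithm; the function name and the 0.98 weight are illustrative assumptions.

```python
def complementary_filter(prev_angle: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Fuse a gyro rate (smooth but drifting) with an accelerometer
    angle (noisy but drift-free) for one axis, in degrees.
    alpha weights the integrated gyro path; (1 - alpha) slowly pulls
    the estimate toward the accelerometer's absolute reference."""
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Drift-correction demo: the gyro reports no rotation, but our estimate
# starts 10 degrees off. The small accelerometer term steadily pulls the
# estimate back toward the true angle of 0.
angle = 10.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=0.0, dt=0.01)
```

The Kalman and Madgwick filters are doing a far more principled version of this trade-off across all axes at once, which is why handing the problem to a maintained library is the right call in production.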