Craig Federighi on building the 3D Touch screen, a marvel of complication:
It starts with the idea that, on a device this thin, you want to detect force. I mean, you think you want to detect force, but really what you’re trying to do is sense intent. You’re trying to read minds. And yet you have a user who might be using his thumb, his finger, might be emotional at the moment, might be walking, might be lying on the couch. These things don’t affect intent, but they do affect what a sensor [inside the phone] sees. So there are a huge number of technical hurdles. We have to do sensor fusion with accelerometers to cancel out gravity—but when you turn [the device] a different way, we have to subtract out gravity. … Your thumb can read differently to the touch sensor than your finger would. That difference is important to understanding how to interpret the force. And so we’re fusing both what the force sensor is giving us with what the touch sensor is giving us about the nature of your interaction. So down at even just the lowest level of hardware and algorithms—I mean, this is just one basic thing. And if you don’t get it right, none of it works.
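To give a feel for the kind of "subtract out gravity" fusion Federighi describes, here is a minimal, purely illustrative sketch. This is not Apple's implementation; the smoothing factor, the calibration constant, and both function names are hypothetical. The idea is standard sensor fusion: a low-pass filter over the accelerometer isolates the slow-moving gravity component, and its projection along the screen normal is subtracted from the raw force reading so that reorienting the device doesn't register as a change in pressure.

```python
# Illustrative sketch only -- not Apple's algorithm. All constants and
# names below are hypothetical, chosen to demonstrate the technique.

ALPHA = 0.9          # low-pass smoothing factor for the gravity estimate (assumed)
GRAVITY_GAIN = 0.05  # hypothetical calibration: how strongly gravity along
                     # the screen normal biases the force sensor's reading

def update_gravity(gravity, accel, alpha=ALPHA):
    """Exponential low-pass filter over accelerometer samples.

    The slowly varying component of acceleration approximates gravity;
    fast components (walking, shaking) are filtered out.
    """
    return tuple(alpha * g + (1 - alpha) * a for g, a in zip(gravity, accel))

def corrected_force(raw_force, gravity):
    """Subtract gravity's contribution along the screen normal (z axis)."""
    return raw_force - GRAVITY_GAIN * gravity[2]
```

A caller would feed each accelerometer sample through `update_gravity` and then correct every force sample with the current gravity estimate; when the phone is turned, the estimate tracks the new orientation and the correction changes sign or magnitude accordingly.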