In Sparrow (and in UIKit, which Sparrow mimics in this respect), the "target" of a touch is defined at the moment the touch begins. No matter where you move your finger afterwards, that target stays the same.
I think the reason Apple chose this behavior is that your finger often occludes the object you are touching, so you can't always see whether you actually hit it. With the touch logic used on iOS, you can touch the object, then move your finger away a little (while still touching) to "peek" at where you touched and check whether you hit the right object.
On an iOS button, you can move the finger away a little and the button will still be pressed; to abort the click, you move the finger even further away. That's also how a Sparrow button works if you use the "SP_EVENT_TYPE_TRIGGERED" event instead of a touch event, and I recommend using that event type for buttons.
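Here's a minimal sketch of that, assuming Sparrow 1.x; the texture file name, the button position and the "onButtonTriggered:" selector are just placeholders I made up for the example:

```objc
// Minimal sketch, Sparrow 1.x. Assumes this code lives in an SPSprite
// subclass that imports "Sparrow.h"; file name and selector are placeholders.

- (void)setupButton
{
    SPTexture *upState = [SPTexture textureWithContentsOfFile:@"button_up.png"];
    SPButton *button = [SPButton buttonWithUpState:upState text:@"Start"];
    button.x = 20.0f;
    button.y = 20.0f;

    // "triggered" is dispatched when the touch ends on (or close to) the
    // button; if the finger has moved too far away, the click is aborted
    // automatically, without any extra code.
    [button addEventListener:@selector(onButtonTriggered:)
                    atObject:self
                     forType:SP_EVENT_TYPE_TRIGGERED];
    [self addChild:button];
}

- (void)onButtonTriggered:(SPEvent *)event
{
    NSLog(@"button was triggered");
}
```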
So, many situations are easier to handle thanks to that logic. Other situations become harder, though; e.g. when you've got (say) a row of buttons and moving the finger across all of them (without losing contact with the screen) should trigger each one. In that case, I recommend adding a transparent quad over those objects as a "touch catcher". In its touch handler, you look at the position of the touch(es) on the screen and change the state of the touched objects manually, as in the sketch below.
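A rough sketch of such a touch catcher, again assuming Sparrow 1.x; "_buttons" (an array of the covered display objects), the quad size and the alpha highlight are placeholders, and the real state change would of course depend on your objects:

```objc
// Rough sketch, Sparrow 1.x. Assumes "self" is the containing SPSprite and
// "_buttons" is an NSArray of the display objects covered by the catcher.

- (void)setupTouchCatcher
{
    // A fully transparent quad stretched over the row of objects. As far as
    // I know, Sparrow's hit test checks 'visible' and 'touchable' but not
    // alpha, so the invisible quad still receives the touches.
    SPQuad *touchCatcher = [SPQuad quadWithWidth:320 height:60];
    touchCatcher.alpha = 0.0f;
    [touchCatcher addEventListener:@selector(onCatcherTouch:)
                          atObject:self
                           forType:SP_EVENT_TYPE_TOUCH];
    [self addChild:touchCatcher]; // added last, so it lies on top
}

- (void)onCatcherTouch:(SPTouchEvent *)event
{
    SPTouch *touch = [[event touchesWithTarget:self] anyObject];
    if (!touch) return;

    SPPoint *position = [touch locationInSpace:self];
    BOOL touchActive = (touch.phase != SPTouchPhaseEnded &&
                        touch.phase != SPTouchPhaseCancelled);

    // Find out which of the covered objects lies under the finger and
    // update its state manually (here: a simple alpha highlight).
    for (SPDisplayObject *object in _buttons)
    {
        BOOL hit = touchActive &&
                   [[object boundsInSpace:self] containsPoint:position];
        object.alpha = hit ? 0.5f : 1.0f;
    }
}
```

Triggering the actual button actions instead of just highlighting works the same way: keep track of which object is currently hit and fire its action when the finger moves onto it.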
I hope that has made it a little clearer! 😉