Lately I ran into some problems with iPhone and iPad screencasting. All current solutions for making iPhone or iPad screencasts require a jailbroken device, and the program I'm using to display and record my touches on the phone doesn't work very reliably in its current version. So I was looking for alternatives. Having already spent a short amount of time creating my own click effects, I realized that I had to come up with my own system for iOS as well.
Here’s what I did.
How would I want to visualize a tap on an iPhone? I didn't know, but I remembered Matt Gemmell stumbling into the same kind of problem earlier. I figured that instead of reinventing the wheel, if there was one, I could just take his approach and build upon it. It turned out that it doesn't work that way: his system was meant to be used in written documents, whereas mine has to work in a video. Having his advice, though, turned out to be a really great help and a starting point for my tap effects.
I liked the look of Matt's Tap and Tap & Hold:
Tap & Hold
In a screencast every tap is basically a Tap & Hold – at a certain point at least – because you have to tap and then release. My idea was to take his Tap & Hold and animate it so that it looks as if a finger touches the device, without the hold.
(Please remember this is just a first attempt and the look of the final version will be improved)
You can see that I briefly increase the size of the inner circle before it settles back at its "zoomed-in" level. That's an important detail when it comes to double taps!
Tap & Hold
How would I display a tap & hold, then? That's rather simple to explain: since the inner circle sort of takes on the role of a finger, it can also act as an indicator for "finger on screen". The inner circle simply stays at the zoomed-in level until the finger is released:
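Both behaviors can be described as a handful of keyframes on the inner circle's scale. Here is a minimal Python sketch of that idea; the exact timings and scale values are my own illustrative guesses, not the ones used in the actual effect:

```python
def tap_keyframes(hold_duration=0.0):
    """Return (time_in_seconds, inner_circle_scale) keyframes for a tap indicator.

    The inner circle plays the role of the finger: it zooms in on
    touch-down, briefly overshoots its resting "zoomed-in" scale
    (the detail that matters for double taps), and shrinks away on
    release. For a tap & hold, it simply stays at the zoomed-in
    scale for `hold_duration` seconds before the release.
    """
    ZOOM_IN, OVERSHOOT = 1.0, 1.15  # assumed values, purely illustrative
    frames = [
        (0.00, 0.0),        # touch down: circle appears
        (0.10, OVERSHOOT),  # grow slightly past the zoomed-in level
        (0.18, ZOOM_IN),    # settle at the "finger on screen" scale
    ]
    release = 0.18 + hold_duration
    if hold_duration > 0:
        frames.append((release, ZOOM_IN))   # hold: stay until release
    frames.append((release + 0.12, 0.0))    # finger lifted: circle shrinks away
    return frames
```

Calling `tap_keyframes()` yields a quick tap, while `tap_keyframes(hold_duration=1.5)` keeps the circle on screen for the duration of the hold.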
This demo shows a still image of my Twitter timeline in Osfoora
Double taps turned out to be much more difficult. In fact, I realized again how different screencasting on an iPhone is from screencasting on a desktop OS. I find that iPad or iPhone screencasting is "slower" than on the desktop: a tap needs time to be recognized.
On a desktop OS the mouse pointer is always visible; it wouldn't make sense to make it disappear. It is part of the whole control scheme of a desktop operating system anyway. Users know it is visible, and it would look odd if there weren't a mouse pointer somewhere!
On an iPhone, however, it wouldn't make sense to have a finger on screen at all times, so making that finger indicator visible all the time wouldn't make sense either. That means that when displaying a simple tap, there needs to be enough time to a) let people realize there was a tap and b) let them recognize which kind of tap was performed.
The only solution I have found is to first play a single-tap visual, in its full length, and then display another tap.
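Sequenced that way, a double tap is just two complete single-tap animations played back to back. A quick Python sketch of the timing, assuming a single tap is represented as a list of (time, scale) keyframes and using a made-up gap between the two taps:

```python
def double_tap_keyframes(tap, gap=0.05):
    """Play one full single-tap animation, then a second one.

    `tap` is a list of (time_in_seconds, scale) keyframes for a single
    tap. The second tap only starts `gap` seconds after the first one
    has completely finished, so viewers can tell the two taps apart.
    """
    end = tap[-1][0]  # when the first tap animation finishes
    second = [(t + end + gap, s) for (t, s) in tap]
    return tap + second
```

The important design point is that the offset is derived from the *end* of the first animation, never from the moment of the second physical tap, which keeps the first visual intact.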
That's the reason why I said screencasting is slower. People don't see my finger all the time; they only see it when I'm pressing. If something happened the moment I pressed, they would be confused about what I had just done.
Whenever I screencast an iPhone or iPad app, I try to put my finger on the device first, making my finger visible, wait a little so that viewers notice the change, and only then confirm my action.
Displaying two or more simultaneous taps or even swipes will result in two animations running at the same time. But since I don't know many apps that make use of double or even triple taps, I haven't tested this yet. Speaking of which, Matt has already figured out a very interesting way of representing multi-touch gestures on multi-touch interfaces.
The downside of this new, inspiring, yet unfamiliar touch-device world is that nobody has figured out yet how standard user interaction works. On the desktop we are familiar with things like double-click, right-click, single left click, menus, the Dock, etc. We know all that already, but how is it going to work on an iPhone?
Do humans prefer double taps over touch-and-hold gestures? Nobody has figured that out yet, not even Apple, but it's interesting to observe where certain implementations are heading. Exciting times indeed!