The prospect of gesture operation

When analyzing the strengths and weaknesses of iOS products, I once proposed the concept of button-compound operation, analogous to pairing a single-button mouse with the Ctrl key: for example, holding a button while swiping could drag, grab, or lock an object. Apple's trend is to eliminate buttons; other manufacturers could do the opposite. More gesture modes would effectively differentiate their products, work around Apple's patent restrictions, and give users more choices. This kind of operation is entirely feasible: a single-finger swipe combined with two buttons could reproduce the common multi-finger gestures.

This kind of operation is easy to implement on the Mac with a keyboard and trackpad.
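
To illustrate, here is a minimal sketch of button-compound input in AppKit. CompoundGestureView is a hypothetical view, and the Ctrl key stands in for the compound button: a plain trackpad pan scrolls, while the same pan with Ctrl held grabs and drags the content instead.

```swift
import AppKit

// A hypothetical view demonstrating button-compound operation:
// plain pan = scroll, Ctrl + pan = grab and drag.
final class CompoundGestureView: NSView {
    private var offset = CGPoint.zero

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        let pan = NSPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        addGestureRecognizer(pan)
    }

    required init?(coder: NSCoder) { fatalError("not used in this sketch") }

    @objc private func handlePan(_ gesture: NSPanGestureRecognizer) {
        let delta = gesture.translation(in: self)
        if NSEvent.modifierFlags.contains(.control) {
            // Button + swipe: grab and drag the content.
            offset.x += delta.x
            offset.y += delta.y
            needsDisplay = true
        } else {
            // Plain swipe: ordinary scrolling (just logged in this sketch).
            print("scroll by \(delta)")
        }
        gesture.setTranslation(.zero, in: self)
    }
}
```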

The iPad and iPhone can rely heavily on tapping, that is, on fixed-point click operations, but it is difficult for Apple TV and the Mac to adopt them, because the surface the finger operates on is separated from the surface being viewed.

In the traditional interaction model based on mouse positioning, clicking is the most basic operation, and a huge system of UI commands grew out of it, with buttons and shortcut keys expressing complex operations. The iPad and iPhone are transitional forms of this model.

Direct manipulation on a tablet and operation through a screen plus a control surface are two completely different types of interaction.

The tablet's advantage is directness. When the control surface and the screen are separate, visual markers are needed to connect the two, so the control cannot lock directly onto the object being operated. For that reason, there will be many innovations ahead in the control interfaces of Mac and TV products.

Pinch-to-zoom, rotation, multi-finger swipes, and similar operations are gradually making this kind of in-air operation possible. Kinect products have an obvious advantage here as well; doing away with the control surface entirely is itself an innovation.

By comparison, the Mac is a desktop product, so elaborate bodily gestures are impractical: the control surface is essentially fixed to the desk. A TV can be more flexible.

For remote operation, the object being operated on should be explicitly displayed as the stroke unfolds, rather than graphics merely responding statically at a pointer position.

For example, in a board game, as the stroke progresses, focus shifts to the piece being manipulated, or the positions it can move to are clearly highlighted: operable objects receive focus and highlighting. This is different from mouse operation. The design of the iPhone's Remote app controlling the Apple TV interface already has a corresponding structure: handing over and confirming the controlled object does not depend on a mapped position on the screen. The mouse was never designed this way; it dispatches position events according to the cursor's location. That design shaped many of today's interface patterns, but it is not fully suited to future gesture operation.
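
To make the contrast concrete, here is a minimal sketch of focus-driven selection; FocusModel and Direction are hypothetical types, not an existing API. A swipe moves focus to the next operable object and a tap confirms it, so no screen coordinates are involved at all.

```swift
// Focus-driven (rather than cursor-driven) selection: a sketch.
enum Direction { case left, right, up, down }

struct FocusModel<Item> {
    private(set) var items: [Item]
    private(set) var focusedIndex = 0

    init(items: [Item]) { self.items = items }

    // A swipe shifts focus to a neighboring item; the view layer
    // highlights items[focusedIndex] in response.
    mutating func swipe(_ direction: Direction) {
        switch direction {
        case .left, .up:
            focusedIndex = max(focusedIndex - 1, 0)
        case .right, .down:
            focusedIndex = min(focusedIndex + 1, items.count - 1)
        }
    }

    // A tap anywhere confirms the focused item; its position is irrelevant.
    func confirm() -> Item { items[focusedIndex] }
}
```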

Gesture operation involves two things: the gesture itself and the object it acts on. Every gesture is an instruction addressed either to an object or to the scene as a whole. Apple's Photos design demonstrates the strength of this model clearly. The pinch gesture, for example, plays two roles: zooming a picture in and out, and opening or closing an album. That is, in the list view, pinching in returns to the album, while pinching out enters the full picture display.
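
A minimal sketch of this dual role, assuming a hypothetical GalleryViewController and arbitrary scale thresholds: a single pinch recognizer either opens and closes the album or zooms the photo, depending on the current mode.

```swift
import UIKit

// One pinch gesture, two roles: open/close an album, or zoom a photo.
final class GalleryViewController: UIViewController {
    enum Mode { case albumGrid, photo }
    private var mode: Mode = .albumGrid

    override func viewDidLoad() {
        super.viewDidLoad()
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        view.addGestureRecognizer(pinch)
    }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard gesture.state == .ended else { return }
        switch (mode, gesture.scale) {
        case (.albumGrid, let s) where s > 1.3:
            mode = .photo                     // pinch out on the grid: open the album
        case (.photo, let s) where s < 0.7:
            mode = .albumGrid                 // pinch in on a photo: close back to the grid
        case (.photo, let s):
            print("zoom photo to scale \(s)") // otherwise: ordinary photo zooming
        default:
            break                             // pinch in on the grid: nothing to close
        }
    }
}
```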

On a tablet, however, the zoom gesture can act on a clearly identified on-screen object. For mapped structures such as a screen plus touchpad, a focus object has to be provided, and sliding can move that focus without any traditional click operation.
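
Continuing the hypothetical FocusModel sketch above, moving focus by sliding instead of clicking might look like this:

```swift
// Trackpad swipes move focus along a row of albums; no positional
// click is ever needed to select one.
var focus = FocusModel(items: ["Album A", "Album B", "Album C"])
focus.swipe(.right)        // focus: "Album B"
focus.swipe(.right)        // focus: "Album C"
print(focus.confirm())     // prints "Album C"
```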

The same pattern appears in SketchBook. A three-finger swipe left is undo, and a swipe right is redo; a three-finger swipe down makes the pen narrower, and a swipe up makes it wider; two fingers move the canvas; one finger draws. It is fair to say that Autodesk deeply understands what a gesture-operation interface means.
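
A minimal sketch of this finger-count dispatch, assuming a hypothetical CanvasView: the number of fingers, not where they land, selects the command.

```swift
import UIKit

final class CanvasView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        // One recognizer per finger count: 1 draws, 2 pans, 3 issues commands.
        for count in 1...3 {
            let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
            pan.minimumNumberOfTouches = count
            pan.maximumNumberOfTouches = count
            addGestureRecognizer(pan)
        }
    }

    required init?(coder: NSCoder) { fatalError("not used in this sketch") }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard gesture.state == .ended else { return } // a real canvas would also track .changed
        let delta = gesture.translation(in: self)
        switch gesture.minimumNumberOfTouches {
        case 1: print("draw stroke by \(delta)")      // one finger: draw
        case 2: print("pan canvas by \(delta)")       // two fingers: move the canvas
        case 3:                                       // three fingers: commands
            if abs(delta.x) > abs(delta.y) {
                print(delta.x < 0 ? "undo" : "redo")  // left: undo, right: redo
            } else {
                // In UIKit, y grows downward: down narrows the pen, up widens it.
                print(delta.y > 0 ? "narrow pen" : "widen pen")
            }
        default: break
        }
    }
}
```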

The iOS API does not yet have a complete system for responding to focus shifts between objects. I believe that as gesture operation deepens, APIs along these lines will keep appearing.