The ability to recognize the shape and motion of hands can improve the user experience across a wide range of technical domains and platforms: it can form the basis for sign-language understanding and gesture-based control, and it can enable digital content to be overlaid on the physical world in augmented reality. Here I present a real-time, on-device hand gesture recognition solution that controls a system's graphical user interface (GUI) with static and dynamic hand gestures, which can be trained to trigger a set of actions similar to those performed with a mouse and keyboard. The system is built on MediaPipe Hands, which detects the palm and extracts hand landmark points. The landmark data is passed through a pipeline of preprocessing functions and used to train two models: one for static gesture recognition and one for dynamic gesture recognition. The trained models then recognize these gestures in real time, on-device, from a video capture device such as a webcam.
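As a minimal sketch of the preprocessing step described above, the snippet below shows one plausible way to turn the 21 (x, y) landmarks that MediaPipe Hands emits per frame into a feature vector for a gesture classifier. The function name and the exact normalization scheme (wrist-relative translation followed by max-absolute scaling) are illustrative assumptions, not the solution's confirmed implementation; only the landmark layout (index 0 is the wrist) follows MediaPipe's documented hand landmark model.

```python
import numpy as np

def preprocess_landmarks(landmarks):
    """Normalize 21 (x, y) hand landmarks into a flat feature vector.

    Hypothetical preprocessing sketch: translate all points so the wrist
    (MediaPipe landmark index 0) becomes the origin, then scale by the
    largest absolute coordinate so every value lies in [-1, 1].
    Returns a length-42 vector suitable as classifier input.
    """
    pts = np.asarray(landmarks, dtype=np.float32)  # shape (21, 2)
    pts = pts - pts[0]                  # wrist-relative coordinates
    scale = np.abs(pts).max() or 1.0    # guard against all-zero input
    return (pts / scale).flatten()
```

In a live setup, such a function would be applied to `results.multi_hand_landmarks` from each webcam frame before feeding the static-gesture model; the dynamic-gesture model would consume a short history of these vectors across frames.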