The goal of this project is to provide a human-computer interaction system that addresses the communication difficulties faced by hearing- and speech-impaired people. Hand gesture recognition is critical for human-computer interaction, and gesture recognition is one of the essential techniques for building user-friendly interfaces.

In our system, the hand region is extracted from the scene using background subtraction. Because the algorithm does not depend on the appearance of the background, it is robust to changes in the background image. The palm and fingers are then segmented so that individual fingers can be detected and recognised, and finally a rule classifier predicts the hand gesture label. The system can process a variety of hand shapes, identify the number of extended fingers, and perform the corresponding tasks, and it supports real-time gesture recognition. Experiments on a data set of 1300 images show that our method is accurate and efficient; moreover, on a second data set of hand gestures, our approach outperforms a state-of-the-art method. The key objectives stated in this paper were met, although certain obstacles remain to be addressed in future work.
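The pipeline described above (background subtraction, finger detection, rule classification) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: it assumes a static greyscale background, counts fingers by scanning foreground runs in the top row of the mask, and uses a hypothetical finger-count-to-label rule table.

```python
def subtract_background(frame, background, threshold=30):
    """Per-pixel background subtraction: mark a pixel as foreground (1)
    where the frame differs from the stored background by more than
    `threshold`, else background (0)."""
    return [[1 if abs(f - b) > threshold else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def count_fingers(mask):
    """Toy finger counter: count separate horizontal runs of foreground
    pixels in the topmost row of the mask (one run per raised finger)."""
    runs, inside = 0, False
    for px in mask[0]:
        if px and not inside:
            runs += 1
            inside = True
        elif not px:
            inside = False
    return runs

def classify(n_fingers):
    """Rule classifier mapping a finger count to a gesture label
    (labels here are purely illustrative)."""
    rules = {0: "fist", 1: "one", 2: "two", 3: "three", 4: "four", 5: "open palm"}
    return rules.get(n_fingers, "unknown")

# Synthetic 8x10 scene: uniform background; the "hand" raises two fingers
# (columns 2 and 6) above a palm block across the bottom rows.
background = [[0] * 10 for _ in range(8)]
frame = [row[:] for row in background]
for r in range(4):
    for c in (2, 6):
        frame[r][c] = 200
for r in range(4, 8):
    for c in range(1, 8):
        frame[r][c] = 200

mask = subtract_background(frame, background)
print(classify(count_fingers(mask)))  # two
```

A real system would replace the thresholded difference with a learned background model and the run-counting heuristic with palm/finger segmentation, but the control flow (subtract, segment, classify by rules) is the same.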