In this paper, we propose a hybrid image-based visual servoing scheme for 6-degree-of-freedom (DoF) robot manipulators that avoids the drawbacks of classical position-based visual servoing. Unlike position-based visual servoing, the proposed method does not require any knowledge of the geometric 3D model of the object; it does, however, require the depth of the object. In the proposed approach, a Kinect sensor is used as the camera, providing the depth of the object from its point cloud. The method tracks the position of the target object. It is simulated in the Gazebo platform with a Kinect sensor mounted on a 6-DoF Universal Robot 5 (UR5) manipulator, where all the physical parameters of the robot and the Kinect sensor are taken into account. The solution is developed in C++ and integrated with ROS and OpenCV. The method is illustrated with a variety of simulation results for an eye-in-hand robotic system, which show the convergence of the system and the potential of our approach.