In shared autonomy, human-robot handovers for product delivery are crucial. Accurate robot predictions of human hand motion and intention enhance collaboration efficiency, whereas low prediction accuracy increases the mental and physical demands on the user. In this work, we propose a system for predicting human hand motion and the intended target during handovers in a reaching task using Inverse Reinforcement Learning (IRL). We design a set of feature functions that explicitly capture users’ preferences during the task. The proposed approach was experimentally validated through user studies. Results indicate that the proposed method outperformed other state-of-the-art methods (PI-IRL, BP-HMT, RNNIK-MKF, and CMk=5), with users comfortably reaching up to 60% of the total distance to the target for handover and target prediction accuracy reaching 95%.
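To make the prediction idea concrete, the following is a minimal illustrative sketch, not the paper's implementation, of how IRL-learned feature weights can be combined with preference-style feature functions to infer an intended handover target from a partial hand trajectory. The feature functions (`path_length`, `goal_distance`), the weights `w`, the candidate targets, and the softmax temperature `beta` are all placeholder assumptions introduced here for illustration.

```python
import numpy as np

# Hypothetical feature functions over a partial hand trajectory xi (T x 3 array
# of wrist positions) and a candidate handover target g. These stand in for the
# paper's preference features; the actual features are not specified here.
def path_length(xi, g):
    steps = np.diff(xi, axis=0)
    return np.sum(np.linalg.norm(steps, axis=1)) + np.linalg.norm(g - xi[-1])

def goal_distance(xi, g):
    return np.linalg.norm(g - xi[-1])

FEATURES = (path_length, goal_distance)

def feature_cost(xi, g, w):
    """Linear cost: weighted sum of feature values (weights w assumed learned by IRL)."""
    return sum(wi * f(xi, g) for wi, f in zip(w, FEATURES))

def target_posterior(xi, targets, w, beta=1.0):
    """Maximum-entropy-style target inference: P(g | xi) is proportional to
    exp(-beta * cost(xi, g)), assuming a uniform prior over candidate targets."""
    costs = np.array([feature_cost(xi, g, w) for g in targets])
    logits = -beta * (costs - costs.min())  # shift by the minimum for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Example: two candidate handover targets, weights assumed already learned by IRL.
targets = [np.array([0.6, 0.0, 0.9]), np.array([0.4, 0.3, 1.1])]
w = np.array([0.7, 0.3])  # placeholder IRL weights
xi = np.array([[0.0, 0.0, 1.0], [0.2, 0.05, 0.95], [0.35, 0.1, 0.93]])
print(target_posterior(xi, targets, w))  # posterior over intended targets
```

In this sketch, the robot could commit to a handover once the posterior over one target exceeds a confidence threshold, which is one way a high target-prediction accuracy would let the user stop short of the full reaching distance.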