With recent advances in data analytics, healthcare predictive analytics (HPA) is attracting growing interest from practitioners and researchers. However, blindly accepting a model's results is risky, and users will not adopt an HPA model whose transparency is not guaranteed. To address this challenge, we propose the RObust Local EXplanations (ROLEX) method, which provides robust, instance-level explanations for any HPA model. We demonstrate the applicability of the ROLEX method on the fragility fracture prediction problem. Analysis with a large real-world dataset shows that our method outperforms state-of-the-art methods in terms of local fidelity. Beyond fragility fracture prediction, the ROLEX method applies to a wide range of HPA problems: it works with any type of supervised learning model and provides fine-grained explanations that can deepen understanding of the phenomenon of interest. Finally, we discuss the theoretical implications of our study for healthcare IS, big data, and design science.