Modern digital platforms rely on recommender systems to match users with content, yet these systems face a fundamental trade-off between predictive accuracy and explainability. While black-box models achieve high performance, their lack of transparency limits trust and adoption in many business settings. Existing approaches to explainable AI typically treat explanations as post-hoc outputs and often reduce predictive accuracy, leaving this trade-off unresolved. In this paper, we propose that explanations can instead be designed as an integral component of AI decision systems and, when properly aligned with prediction outcomes, can improve \emph{both} interpretability and performance. We introduce RecPIE (Recommendation with Prediction-Informed Explanations), a framework that jointly optimizes recommendation predictions and natural-language explanations generated by large language models (LLMs). RecPIE embeds explanation generation into the learning loop: predictions guide the generation of explanations (\emph{prediction-informed explanations}), which are then fed back to improve subsequent predictions (\emph{explanation-informed predictions}) through an alternating training procedure. Specifically, the LLM is fine-tuned with LoRA, a parameter-efficient adaptation method, and with reinforcement learning using a reward signal derived from recommendation accuracy. Drawing on multi-environment statistical learning theory, we show that explanation generation and prediction can be mutually reinforcing. Empirically, on large-scale point-of-interest recommendation data from Google Maps, RecPIE improves predictive accuracy by 3–4\% over state-of-the-art baselines and matches the best-performing baseline using only about 12\% of the training data. Human evaluations with 566 participants show that RecPIE’s explanations are preferred 61.5\% of the time (versus 16.6\% for the best baseline) and are closer to human-generated explanations.
Our findings demonstrate that explanations need not constrain performance but can instead serve as a tool for improving learning in AI systems. This perspective reframes explainability as a design lever for digital platforms, with implications for trust, data efficiency, and the deployment of AI in marketplace environments.