Recommender systems are central to how modern digital platforms connect users with content, yet they face a fundamental trade-off between predictive accuracy and explainability. Black-box models achieve strong performance but lack the interpretability needed for trust and adoption in many business settings. Existing explainable AI approaches typically treat explanations as post-hoc additions, a practice that often comes at the cost of predictive accuracy, leaving this trade-off unresolved. We challenge this view and propose that explanations, when designed as an integral component of a learning system and aligned with prediction outcomes, can improve \emph{both} interpretability and performance. We introduce RecPIE (Recommendation with Prediction-Informed Explanations), a framework that jointly optimizes recommendation predictions and natural-language explanations generated by large language models (LLMs). At its core, RecPIE embeds explanation generation into the learning loop: predictions guide the generation of explanations (\emph{prediction-informed explanations}), which are then fed back to refine subsequent predictions (\emph{explanation-informed predictions}) through an alternating training procedure. The LLM is fine-tuned using LoRA and reinforcement learning with a customized reward derived from recommendation accuracy. Drawing on multi-environment statistical learning theory, we provide formal grounding for why explanation generation and prediction can be mutually reinforcing. We evaluate RecPIE on large-scale point-of-interest recommendation data from Google Maps, a challenging setting where user preferences span diverse place categories rather than concentrating within a single one. RecPIE improves predictive accuracy by 3--4\% over state-of-the-art baselines and matches the best-performing model using only about 12\% of the training data.
In human evaluations with 566 participants, RecPIE’s explanations are preferred 61.5\% of the time (versus 16.6\% for the best baseline) and are rated closer to human-generated explanations. Together, these results reframe explainability not as a constraint on performance but as a design lever for improving AI systems, with broad implications for trust, data efficiency, and AI deployment in marketplace environments.
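The alternating procedure the abstract describes can be illustrated with a toy loop. Every name below (`predict`, `explain`, `reward`, `refine`) and the simple score-update scheme are hypothetical stand-ins for exposition only; the paper's actual system couples a recommender model with a LoRA-fine-tuned LLM trained by reinforcement learning, not the string matching used here.

```python
def predict(weights, items):
    """Step 1: score candidate items and recommend the highest-scoring one."""
    return max(items, key=lambda i: weights.get(i, 0.0))

def explain(pred):
    """Step 2: prediction-informed explanation (toy stand-in for the LLM)."""
    return f"Suggesting '{pred}' based on the user's visit history."

def reward(pred, true_item):
    """Reward derived from recommendation accuracy (drives the RL update)."""
    return 1.0 if pred == true_item else 0.0

def refine(weights, items, true_item, explanation, r, lr=0.5):
    """Step 3: explanation-informed prediction update. Toy version: move
    scores toward the observed interaction, with a small extra nudge for
    items the explanation mentions when the round's reward was positive."""
    new = dict(weights)
    for i in items:
        target = 1.0 if i == true_item else 0.0
        new[i] = new.get(i, 0.0) + lr * (target - new.get(i, 0.0))
        if r > 0 and i in explanation:
            new[i] += 0.1  # explanation fed back into the predictor
    return new

def alternating_train(items, true_item, rounds=5):
    """Alternate prediction -> explanation -> reward -> refinement."""
    weights = {items[0]: 1.0}  # start biased toward an arbitrary item
    for _ in range(rounds):
        pred = predict(weights, items)
        expl = explain(pred)
        r = reward(pred, true_item)
        weights = refine(weights, items, true_item, expl, r)
    return weights
```

The point of the sketch is the loop structure: each round's prediction conditions the explanation, and the explanation (together with an accuracy-based reward) flows back into the next round's prediction, so the two objectives are optimized jointly rather than explanation being bolted on after training.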