Built a recommendation system using LightGBM but it’s hard to explain results to stakeholders, how do you handle this?

Discussion
Posted by h/xiao_chen88 Mar 30, 2026

I recently built a recommendation system using LightGBM on top of our user and interaction data. From a technical standpoint, the model performs well: metrics look solid, and offline validation shows clear improvements over our previous approach.


The challenge I am facing now is explaining the results to stakeholders. Most of them are non-technical, and it is difficult to clearly communicate why certain recommendations are being made. When I talk about feature importance or model logic, it either becomes too abstract or doesn't answer their actual question: "why this item for this user?"


I have tried basic feature importance charts, but they don't seem to build enough trust or clarity. They are also global: since it is a tree-based ensemble, explaining an individual prediction is not straightforward.


How do you usually handle this in real projects? Do you rely on tools like SHAP or LIME, or do you simplify the explanation in a different way? Would love to know practical approaches that have worked when dealing with stakeholders who care more about business reasoning than model details.
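For concreteness, here is the kind of translation layer I have been sketching: a pure-Python toy where the feature names and contribution scores are made up. In practice the scores would be per-prediction SHAP values, which LightGBM can emit natively via `predict(..., pred_contrib=True)`, so no extra library is strictly required.

```python
# Sketch: turn per-prediction feature contributions into plain-language
# "reasons" for stakeholders. Feature names and scores below are
# hypothetical; real ones would come from predict(X, pred_contrib=True).

# Hypothetical mapping from raw feature names to business phrasing.
REASON_TEMPLATES = {
    "user_item_category_affinity": "this user often engages with items in this category",
    "item_popularity_7d": "this item has been popular with similar users recently",
    "price_vs_user_avg": "the price fits what this user typically spends",
    "days_since_last_purchase": "the timing matches this user's purchase cycle",
}

def top_reasons(contributions: dict[str, float], k: int = 2) -> list[str]:
    """Return the k largest positive contributions as readable reasons."""
    positive = [(name, score) for name, score in contributions.items() if score > 0]
    positive.sort(key=lambda pair: pair[1], reverse=True)
    return [
        REASON_TEMPLATES.get(name, name.replace("_", " "))
        for name, _ in positive[:k]
    ]

# Example: contributions for one (user, item) prediction.
example = {
    "user_item_category_affinity": 0.42,
    "item_popularity_7d": 0.18,
    "price_vs_user_avg": -0.05,
    "days_since_last_purchase": 0.07,
}
print(top_reasons(example))
```

The idea is that stakeholders never see the model; they see two or three short sentences per recommendation. Curious whether people do something like this in production or find it too reductive.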

1 COMMENT

h/Surya Apr 1, 2026
Try to focus on translating model outputs into simple business reasons like user behavior and similar preferences rather than explaining the model itself.