At HelioCampus, we go to extra lengths to make our predictive models as transparent and explainable as possible. We train multiple models using different input fields, algorithms, and settings to verify that we're producing predictions that make sense and are useful for answering the question at hand. We've recently implemented a new set of tools that allow us to compare model performance more easily and effectively, and to assess which input features have the most influence on each student's score. Aggregated across students, these tools can also help identify areas of concern that may be addressed at the policy level.
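To give a flavor of what per-student feature influence means, here is a minimal, hypothetical sketch (not HelioCampus's actual tooling; the model, feature names, and weights are invented for illustration). For a linear risk score, each feature's influence on one student can be read off directly as the weight times how far that student's value sits from the cohort average:

```python
from statistics import mean

# Illustrative linear "risk score": score = intercept + sum(w_i * x_i).
# Negative weights lower risk (e.g. higher GPA), positive weights raise it.
WEIGHTS = {"gpa": -1.2, "credits_attempted": -0.3, "absences": 0.8}
INTERCEPT = 0.5

def score(student: dict) -> float:
    """Raw risk score for one student."""
    return INTERCEPT + sum(w * student[f] for f, w in WEIGHTS.items())

def contributions(student: dict, cohort: list) -> dict:
    """Decompose one student's score into per-feature pushes relative to
    the cohort average: contribution_i = w_i * (x_i - mean_i). The
    contributions sum exactly to (student's score - average student's score)."""
    means = {f: mean(s[f] for s in cohort) for f in WEIGHTS}
    return {f: WEIGHTS[f] * (student[f] - means[f]) for f in WEIGHTS}

# Example: the second student's risk is driven mostly by absences.
cohort = [
    {"gpa": 3.0, "credits_attempted": 15, "absences": 2},
    {"gpa": 2.0, "credits_attempted": 12, "absences": 6},
    {"gpa": 3.5, "credits_attempted": 15, "absences": 1},
]
top_feature = max(contributions(cohort[1], cohort),
                  key=contributions(cohort[1], cohort).get)
```

For non-linear models this clean decomposition no longer holds, which is where model-agnostic explainability methods (for example, permutation importance or Shapley-value approaches) come in; the additive "per-feature push" intuition above is the same one those methods generalize.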
In this webinar, we’ll highlight the tools our Data Science team uses for model evaluation and explainability, how we use them internally to check our model results, and what we provide to our clients to help increase understanding and adoption of our predictions.