

lime
Local Interpretable Model-Agnostic Explanations (R port of original Python package)
The lime package provides model-agnostic explanations for black-box machine learning models by identifying which features in the input data drove an individual prediction. It works with any classifier and supports tabular data, images, and text.
The package implements the Local Interpretable Model-agnostic Explanations (LIME) methodology in an R-native API. It integrates with popular ML frameworks such as caret, parsnip, and mlr out of the box, and can be extended to support custom models. For each prediction, lime fits a simple, interpretable surrogate model in the neighbourhood of the observation to determine which features were most influential. Built-in visualization functions present the resulting feature weights, and an interactive Shiny interface supports exploring explanations of text models.
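A minimal tabular workflow, sketched below, pairs `lime()` (which builds an explainer from the training data and model) with `explain()` (which fits the local surrogates). The example assumes a caret-trained random forest on the built-in iris data; the specific split and argument values are illustrative:

```r
library(caret)  # one of the ML frameworks supported out of the box
library(lime)

# Hold out a few observations to explain; train on the rest
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]

# Fit a black box classifier, e.g. a random forest via caret
model <- train(iris_train, iris_lab, method = "rf")

# Create an explainer from the training data and the model
explainer <- lime(iris_train, model)

# Explain the held-out predictions: for each one, fit a local
# surrogate model and report the most influential features
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# Visualize the feature weights for each observation
plot_features(explanation)
```

`explain()` returns a tidy data frame with one row per explained feature, so the output can also be inspected or post-processed directly rather than plotted.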



