I’m interested in whether it’s possible to deploy XGBoost models for inference on an AWS Lambda function. AWS Lambda has strict limits on the size of deployment packages. My code depends on xgboost and several other libraries and currently exceeds this limit. Can anyone provide some advice on building xgboost in such a way that it only contains what’s necessary for inference? I’m hoping it would then fit comfortably within the AWS Lambda limits.
Might be possible through Treelite, I guess — it compiles the trained trees down to standalone C code, so it should work as long as Lambda can run the compiled shared library (e.g. via a custom runtime, or by loading the .so from a Python handler).