Deploying AI into Production with FastAPI
Matt Eckerle
Software and Data Engineering Leader
from fastapi import FastAPI

sentiment_model = None

def load_model():
    global sentiment_model
    sentiment_model = SentimentAnalyzer("trained_model.joblib")
    print("Model loaded successfully")

load_model()
Output:
Model loaded successfully
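The snippet above assumes a `SentimentAnalyzer` wrapper class that is not shown in the talk. A minimal, hypothetical sketch of what it might look like (the class shape, `SentimentResult`, and the keyword fallback are all assumptions; a real version would deserialize the trained model with `joblib.load`):

```python
from dataclasses import dataclass

@dataclass
class SentimentResult:
    label: str
    score: float

class SentimentAnalyzer:
    """Hypothetical wrapper that hides model deserialization behind .predict()."""

    def __init__(self, model_path: str):
        # In a real app this would be: self.model = joblib.load(model_path).
        # A tiny keyword model stands in here so the sketch runs anywhere.
        self.model_path = model_path
        self._positive = {"good", "great", "love", "excellent"}

    def predict(self, text: str) -> SentimentResult:
        # Score by how many positive keywords appear in the input.
        hits = len(set(text.lower().split()) & self._positive)
        score = min(1.0, hits / 2)
        label = "positive" if score >= 0.5 else "negative"
        return SentimentResult(label=label, score=score)
```

Keeping deserialization inside the wrapper means `load_model()` and the endpoints never need to know which ML framework produced the artifact.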
from contextlib import asynccontextmanager

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: load the ML model before requests are served
    load_model()
    yield
    # Shutdown: cleanup would go here

app = FastAPI(lifespan=lifespan)
@app.get("/health")
def health_check():
    if sentiment_model is not None:
        return {"status": "healthy",
                "model_loaded": True}
    return {"status": "unhealthy",
            "model_loaded": False}
Curl command:
curl -X GET \
  "http://localhost:8080/health" \
  -H "accept: application/json"
Output:
{
  "status": "healthy",
  "model_loaded": true
}