End-to-End Machine Learning
Joshua Stapleton
Machine Learning Engineer

Logistic Regression
sklearn.linear_model.LogisticRegression
Support Vector Classifier
sklearn.svm.SVC
Decision Tree
sklearn.tree.DecisionTreeClassifier
Random Forest
sklearn.ensemble.RandomForestClassifier
Deep learning models
K-Nearest Neighbors (KNN)
XGBoost
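As a minimal sketch (assuming only scikit-learn is installed; the deep-learning and XGBoost options need separate libraries), the listed classifiers can all be instantiated behind the same fit/predict interface:

```python
# Instantiating the scikit-learn classifiers from the list above.
# Hyperparameters shown (max_iter, n_neighbors, random_state) are
# illustrative defaults, not values prescribed by the slides.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

models = {
    "Logistic Regression": LogisticRegression(max_iter=200),
    "Support Vector Classifier": SVC(probability=True),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "K-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
}

# Every entry exposes the same .fit / .predict API, so they can be
# swapped into the training code below without structural changes.
for name, model in models.items():
    print(f"{name}: {type(model).__name__}")
```

Because scikit-learn estimators share one interface, model selection often reduces to looping over such a dictionary and comparing validation scores.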
Model:
Principles:
Use sklearn.model_selection.train_test_split:

# Importing necessary libraries
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Split the data into training and testing sets (80:20)
X_train, X_test, y_train, y_test = train_test_split(
    features, heart_disease_y, test_size=0.2, random_state=42
)

# Define the model
logistic_model = LogisticRegression(max_iter=200)

# Train the model
logistic_model.fit(X_train, y_train)
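A hedged end-to-end sketch of the split-and-train step: since the slides' heart-disease `features`/`heart_disease_y` arrays aren't reproduced here, synthetic stand-in data is used purely for illustration, and the held-out 20% is scored afterwards:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the heart-disease dataset (illustrative only)
rng = np.random.default_rng(42)
features = rng.normal(size=(200, 3))
heart_disease_y = (features[:, 0] + features[:, 1] > 0).astype(int)

# Same 80:20 split and model as on the slide
X_train, X_test, y_train, y_test = train_test_split(
    features, heart_disease_y, test_size=0.2, random_state=42
)
logistic_model = LogisticRegression(max_iter=200).fit(X_train, y_train)

# Accuracy on the unseen 20% estimates how well the model generalizes
y_pred = logistic_model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, y_pred):.2f}")
```

Evaluating only on the held-out split is the point of the 80:20 division: training accuracy alone would overstate performance.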
# Jane Doe's health data, e.g.: [age, cholesterol, blood pressure, ...]
jane_doe_data = [45, 230, 120, ...]

# Reshape to 2D, since scikit-learn expects a 2D array
# (a plain list has no .reshape, so convert via NumPy first)
import numpy as np
jane_doe_data = np.array(jane_doe_data).reshape(1, -1)

# Predict Jane's heart-disease probabilities and class with the model
jane_doe_probabilities = logistic_model.predict_proba(jane_doe_data)
jane_doe_prediction = logistic_model.predict(jane_doe_data)
# Print the probabilities
print(f"Jane Doe's predicted probabilities: {jane_doe_probabilities[0]}")
print(f"Jane Doe's predicted health condition: {jane_doe_prediction[0]}")
Jane Doe's predicted probabilities: [0.2 0.8]
Jane Doe's predicted health condition: 1
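The two numbers from predict_proba line up with the classifier's class labels in the order given by its classes_ attribute. A small sketch with toy data (not the heart-disease model above) shows the mapping:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy one-feature dataset, illustrative only
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

# predict_proba returns one probability column per entry of classes_
probs = model.predict_proba(np.array([[2.5]]))[0]
for cls, p in zip(model.classes_, probs):
    print(f"P(class={cls}) = {p:.2f}")
```

So an output like [0.2 0.8] means 20% probability for class 0 and 80% for class 1, and predict simply returns the class with the larger column.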