Multi-Modal Models with Hugging Face
James Chapman
Curriculum Manager, DataCamp



pip install "huggingface_hub[cli]"
Log in to access the models in your account:
huggingface-cli login
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models()
task: "image-classification", "text-to-image", etc.
sort: e.g., "likes" or "downloads"
limit: maximum number of entries to return
tags: extra tags associated with the model

models = api.list_models(
    task="text-to-image",
    author="CompVis",
    tags="diffusers:StableDiffusionPipeline",
    sort="downloads",
)
top_model = list(models)[0]
print(top_model)
ModelInfo(id='CompVis/stable-diffusion-v1-4', private=False, downloads=1097285,
likes=6718, library_name='diffusers', ...
top_model_id = top_model.id
print(top_model_id)
CompVis/stable-diffusion-v1-4
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(top_model_id)
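Once loaded, the pipeline can be called with a text prompt to generate an image. A minimal sketch, assuming the model weights have downloaded (several GB) and ideally a GPU is available; `generate_image` and the example prompt are illustrative names, not part of the library:

```python
def generate_image(prompt, model_id="CompVis/stable-diffusion-v1-4"):
    # Hypothetical helper: wraps pipeline loading and generation in one call.
    # Downloads the model weights on first use; a GPU is strongly recommended.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(model_id)
    if torch.cuda.is_available():
        pipe = pipe.to("cuda")
    # The pipeline output's .images attribute holds a list of PIL images
    return pipe(prompt).images[0]
```

Usage would look like `image = generate_image("an astronaut riding a horse")`, after which the PIL image can be saved with `image.save("out.png")`.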
import json
from urllib.request import urlopen
url = "https://huggingface.co/api/tasks"
with urlopen(url) as response:
    tasks = json.load(response)
print(tasks.keys())
dict_keys(['any-to-any',
'audio-classification',
'audio-to-audio', ...])
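Since the response is an ordinary dictionary keyed by task name, it can be filtered like any other dict. A short sketch (requires network access to huggingface.co):

```python
import json
from urllib.request import urlopen

# Fetch task metadata from the Hub's public tasks endpoint
with urlopen("https://huggingface.co/api/tasks") as response:
    tasks = json.load(response)

# Filter the task names for image-related tasks
image_tasks = sorted(name for name in tasks if "image" in name)
print(image_tasks)
```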