Sitemap - 2023 - Julien’s Newsletter
Building a Retrieval-Augmented Generation (RAG) Chatbot with LangChain, Hugging Face, and AWS
Maximize Hugging Face training efficiency with QLoRA
For a fistful of dollars: fine-tune LLaMA 2 7B with QLoRA
Fine-tune Stable Diffusion with LoRA for as little as $1
Azure ML: start experimenting with Hugging Face models in minutes!
SageMaker JumpStart: start experimenting with large language models in minutes!
Accelerating Stable Diffusion with Optimum Neuron and AWS Inferentia2
Video: Transformer training shootout, part 2: AWS Trainium vs. NVIDIA V100
Video: Accelerating Transformers with Optimum Neuron, AWS Trainium and AWS Inferentia2
Video: Keynote @ PyCon Sweden 2022
Video: Accelerate Transformer inference with AWS Inferentia 2
Video: Summarizing legal documents with Hugging Face and Amazon SageMaker
Video: Accelerating Stable Diffusion Inference on Intel CPUs with Hugging Face (part 2)
Video: Accelerating Stable Diffusion Inference on Intel CPUs with Hugging Face
Video: Transformer training shootout: AWS Trainium vs. NVIDIA A10G
Video: Training Transformers with AWS Trainium and the Hugging Face Neuron AMI
Video: Fast and accurate language identification with Hugging Face and Intel OpenVINO
Video: An Introduction to Computer Vision with Hugging Face
Video: Accelerate PyTorch Transformers with Intel Sapphire Rapids, part 1