LLM Chronicles #1: Introduction
(3:42)
LLM Chronicles #4.3: Language Modelling
(9:44)
LLM Chronicles #6.4: LLM Agents with ReAct (Reason + Act)
(15:46)
LLM Chronicles #5.6: Limitations & Challenges of LLMs
(15:13)
LLM Chronicles #6.3: Multi-Modal LLMs for Image, Sound and Video
(23:52)
The Limits of LLMs (and where traditional ML / data science comes in)
(1:26)
LLM Chronicles #6.1: RAG (Retrieval Augmented Generation) - Part 1
(24:27)
LangChain vs LangGraph: A Tale of Two Frameworks
(9:55)
Self-Driving Car with JavaScript Course – Neural Networks and Machine Learning
(2:32:40)
LLM Course – Build a Semantic Book Recommender (Python, OpenAI, LangChain, Gradio)
(2:15:04)
I Built an AI Agent That Does EVERYTHING for You! (100% Automated)
(48:35)
Create a Large Language Model from Scratch with Python – Tutorial
(5:43:41)
Large Language Models (LLMs) - Everything You NEED To Know
(25:20)
How to Fine-Tune and Train LLMs With Your Own Data EASILY and FAST - GPT-LLM-Trainer
(10:41)
Hugging Face + Langchain in 5 mins | Access 200k+ FREE AI models for your AI apps
(9:48)
LangSmith Tutorial - LLM Evaluation for Beginners
(36:10)
Unlock AI Agent real power?! Long term memory & Self improving
(22:10)
LLM Chronicles #4.5: Encoder / Decoder RNN for Language Translation
(9:47)
LLM Chronicles #5.1: The Transformer Architecture
(14:54)
LLM Chronicles #3.3: Training with Gradient Descent (Mini-batch Updates, Momentum, SGD, ADAM)
(8:49)
LLM Chronicles #5.4: GPT, Instruction Fine-Tuning, RLHF
(18:28)
LLM Chronicles #3.5: Evaluation, Overfitting and Underfitting + Bonus Lab
(16:29)
LLM Chronicles #2.1: Neural Networks and Multi-Layer Perceptrons
(13:14)
LLM Chronicles #6.2: RAG (Self-Query/Parent Document/HyDE) - Part 2
(26:12)
LLM Chronicles #5.2: Making LLMs from Transformers Part 1: BERT, Encoder-based
(19:25)
LLM Chronicles #6.6: Hallucination Detection and Evaluation for RAG systems (RAGAS, Lynx)
(12:43)
LLM Chronicles #5.1: The Transformer Architecture
(14:38)