This repository was archived by the owner on Jul 30, 2025. It is now read-only.

mantavyam/appdevLLM-NVIDIA


Rapid Application Development with Large Language Models (LLMs)

Resources and learnings from the NVIDIA Deep Learning Institute training "Rapid Application Development with Large Language Models (LLMs)" by Vadim Kudlay.

Course Prerequisites:

  • Introductory deep learning, with comfort with PyTorch and transfer learning preferred. Content covered by DLI’s Getting Started with Deep Learning or Fundamentals of Deep Learning courses, or similar experience, is sufficient.
  • Intermediate Python experience, including object-oriented programming and libraries. Content covered by the Python Tutorial (w3schools.com) or similar experience is sufficient.

Tools, libraries, and frameworks used: Python, PyTorch, HuggingFace Transformers, LangChain, and LangGraph

Learning Objectives

You will:

  • Find, pull in, and experiment with models from the HuggingFace model repository via the Transformers API.
  • Use encoder models for tasks like semantic analysis, embedding, question-answering, and zero-shot classification.
  • Work with conditioned decoder-style models to take in and generate interesting data formats, styles, and modalities.
  • Kickstart and guide generative AI solutions for safe, effective, and scalable natural data tasks.
  • Explore the use of LangChain and LangGraph for orchestrating data pipelines and environment-enabled agents.
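As a small illustration of the embedding objective above, here is a minimal sketch of how encoder embeddings support semantic tasks. The vectors and document names are toy values invented for illustration; in the course, embeddings would come from an actual encoder model.

```python
# Sketch (not course code): semantic search over embedding vectors.
# Real embeddings come from an encoder model; these are toy 3-D values.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.0]            # toy embedding of a user query
docs = {
    "gpu guide": [0.8, 0.2, 0.1],  # semantically close to the query
    "recipes":   [0.0, 0.1, 0.9],  # unrelated content
}

# Retrieve the document whose embedding is nearest to the query.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # → "gpu guide"
```

The same nearest-by-cosine pattern underlies embedding-based classification and retrieval, just with model-produced vectors and larger collections.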

Topics Covered

The workshop covers large language models from beginning to end, starting with the fundamentals of transformers, progressing into foundational large language models, and finishing with model and agentic orchestration. Each of these sections is designed to equip participants with the knowledge and skills necessary to progress further in developing useful LLM-powered applications.

Session Topics/Activities

Course Introduction: Overview of workshop topics and schedule. Introduction to HuggingFace and Transformers. Discuss how LLMs can enhance enterprise applications.

Transformers and LLMs: Introduce and motivate the transformer-style architecture from deep learning first principles. Understand input-output processing with tokenizers, embeddings, and attention mechanisms.

Task-Specific Pipelines: Profile encoder models for the NLP tasks where they are most useful. Investigate the use of lightweight models for natural language embedding, classification, subsetting, and zero-shot prediction.

Seq2Seq with Decoders: Introduce GPT-style decoder models for sequence generation and autoregressive tasks. Apply encoder-decoder architectures for applications like machine translation and few-shot task completion.

Multimodal Architectures: Integrate different data modalities (text, images, audio) into LLM workflows. Explore multimodal models like CLIP for cross-modal learning, visual language models for image question-answering, and diffusion models for text-guided image generation.

Scaling Text Generation: Explore LLM inference challenges and deployment strategies, including optimized server deployments. Incorporate LLMs into applications that can scale to larger repositories and user bases.

Orchestration and Agentics: Introduce LangChain for LLM orchestration and agentic workflows. Investigate the use of agentics and tool-calling for integrating natural language with standard applications and data.

Final Assessment: Build an LLM-based application integrating text generation, multimodal learning, and agentic orchestration.
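The attention mechanism mentioned under "Transformers and LLMs" can be sketched from first principles in a few lines. This is a single-head, pure-Python toy with 2-D vectors, not the course's implementation; real transformers do the same computation batched over matrices.

```python
# Sketch: scaled dot-product attention from first principles.
# Single head, toy 2-D vectors; illustrative only.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """For each query, mix the values weighted by query-key similarity."""
    d_k = len(keys[0])  # key dimension, used for score scaling
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# One query attends over two key/value pairs; it matches the first key,
# so the output leans toward the first value.
out = attention([[1.0, 0.0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[10.0, 0.0], [0.0, 10.0]])
```

Because the softmax weights sum to one, each output row is a convex combination of the value vectors, which is the core intuition behind "attending" to the most relevant tokens.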
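The autoregressive generation loop from "Seq2Seq with Decoders" can likewise be sketched with a stand-in model. Here a hypothetical bigram table replaces the decoder; a real GPT-style model scores the entire context at each step, but the generate-append loop is the same.

```python
# Sketch: autoregressive (decoder-style) generation with a toy model.
# The bigram table is a hypothetical stand-in for a real decoder.
bigram = {"<s>": "large", "large": "language",
          "language": "models", "models": "</s>"}

def generate(start="<s>", max_tokens=10):
    """Greedy decoding: repeatedly append the model's next token."""
    tokens = [start]
    for _ in range(max_tokens):
        nxt = bigram.get(tokens[-1], "</s>")  # "model" predicts next token
        if nxt == "</s>":                      # stop at end-of-sequence
            break
        tokens.append(nxt)
    return tokens[1:]  # drop the start-of-sequence marker

print(" ".join(generate()))  # → "large language models"
```

Swapping the table lookup for a model call (and greedy argmax for sampling) turns this loop into real text generation.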
