Application Development with LLMs on Google Cloud
Contact us to book this course
Learning Track
Generative AI
Delivery methods
On-Site, Virtual
Duration
1 day
In this course, you'll dive into the details of using Large Language Models (LLMs) in your applications. You'll start by exploring the core principles that underpin prompting LLMs. Next, you will focus on Google's latest family of models, Gemini. You'll explore the various Gemini models and their multimodal capabilities, including a deep dive into effective prompt design and engineering within the Vertex AI Studio environment. The course then moves to application development frameworks and how to implement these concepts in your applications.
Course objectives
- Explore the different options available for using generative AI on Google Cloud
- Use Vertex AI Studio to test prompts for large language models
- Develop LLM-powered applications using generative AI
- Apply prompt engineering techniques to improve the output from LLMs
- Build a multi-turn chat application using the Gemini API and LangChain
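The last objective, a multi-turn chat application, rests on one core pattern: accumulating conversation history and sending it with every model call. The sketch below shows only that pattern; the `generate` stub is a hypothetical stand-in for a real Gemini API call, which the course implements with LangChain.

```python
# Multi-turn chat memory pattern (sketch). The `generate` function is a
# deterministic stub standing in for a real Gemini API call.

def generate(history: list[dict]) -> str:
    """Hypothetical stand-in for a model call; a real client would
    receive the full history so the model can use prior turns."""
    user_turns = len([m for m in history if m["role"] == "user"])
    return f"(model reply to turn {user_turns})"

class ChatSession:
    def __init__(self) -> None:
        self.history: list[dict] = []  # ordered user/model messages

    def send(self, text: str) -> str:
        # Append the user turn, call the model with the whole history,
        # then record the reply so the next turn sees it too.
        self.history.append({"role": "user", "parts": text})
        reply = generate(self.history)
        self.history.append({"role": "model", "parts": reply})
        return reply

session = ChatSession()
session.send("Hello")
session.send("Tell me more")
print(len(session.history))  # 4: two user turns, two model replies
```

Frameworks like LangChain package this history management (and persistence of it) for you; the point here is only what "memory" means in a chat application.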
Audience
Application developers and others who wish to leverage LLMs in applications.
Prerequisites
Completion of Introduction to Developer Efficiency on Google Cloud or equivalent knowledge.
Course outline
- What is Generative AI?
- Vertex AI on Google Cloud
- Generative AI Options on Google Cloud
- Introduction to the Course Use Case
- Introduction to Vertex AI Studio
- Designing and Testing Prompts
- Data Governance in Vertex AI Studio
- Lab: Getting Started with the Vertex AI Studio User Interface
- Introduction to Grounding
- Integrating the Vertex AI Gemini APIs
- Chat, Memory, and Grounding
- Search Principles
- Lab: Getting Started with LangChain + Vertex AI Gemini API
- Review of Few-Shot Prompting
- Chain-of-Thought Prompting and Thinking Budgets
- Meta Prompting, Multistep, and Panel Prompts
- RAG and ReAct
- Lab: Advanced Prompt Architectures
- LangChain for Chatbots
- ADK for Chatbots
- Chat Retrieval
- Lab: Implementing RAG Using LangChain
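The prompt-design modules in the outline start from few-shot prompting, which is simply prepending labeled examples so the model infers the task and output format. A minimal sketch of assembling such a prompt (the sentiment task and examples are illustrative, not from the course):

```python
# Few-shot prompt assembly (sketch). The examples and labels are
# illustrative placeholders for whatever task you are prompting.

EXAMPLES = [
    ("The battery dies in an hour.", "negative"),
    ("Setup took thirty seconds.", "positive"),
]

def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a prompt: task instruction, labeled examples, then the
    unlabeled query, leaving the final label for the model to fill in."""
    lines = [task]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}")
    lines.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of each text.", EXAMPLES, "I love this phone."
)
print(prompt)
```

Tools like Vertex AI Studio let you iterate on exactly this kind of template interactively before committing it to application code.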