Rakshit Raj Singh

🚀 My Learning Journey in the Google AI Agents Intensive — Building a Multi-Agent Concierge System

Hi DEV Community,
This post is my submission for the Google AI Agents Writing Challenge: Learning Reflections.

Over the past few days, I participated in the Google x Kaggle 5-Day AI Agents Intensive, and it turned out to be one of the most transformative learning experiences I’ve had this year. The program helped me understand how modern AI agents work—how they reason, call tools, store memory, and interact with users.

To apply everything I learned, I built a full Concierge Multi-Agent System using Python, Google’s Gemini API, and SQLite. In this article, I’m sharing my experience, my key takeaways, and how my understanding of agents evolved throughout the challenge.

🌟 What I Built: The Concierge Multi-Agent System

As my capstone project, I developed a multi-agent application that acts like an AI concierge, capable of helping users with different tasks using specialized agents.

✅ The system includes 5 dedicated agents:

Meal Planner Agent – Generates healthy meal plans for any day.

Travel Planner Agent – Creates full 2-day itineraries for any city.

Study Companion Agent – Provides simple explanations & study notes.

Routine Automator Agent – Builds a productive daily routine for students.

Health Agent – Gives general health advice based on user input.

Each agent inherits core functionality (like state management and API access) from a shared BaseAgent class.
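To give a feel for the structure, here's a stripped-down sketch of that base class. The attribute names and constructor arguments are simplified for this post, and the Gemini call itself is shown later on:

```python
# Stripped-down sketch of the shared base class. Attribute names and
# constructor arguments here are simplified, not the full project code.


class BaseAgent:
    """Core functionality shared by all five agents."""

    def __init__(self, agent_name: str):
        self.agent_name = agent_name  # used as the key when persisting state
        self.state = {}               # in-memory context for the current session

    def ask_google_ai(self, prompt: str) -> str:
        """Call Gemini and return plain text (full version shown later in the post)."""
        ...

    def update_state(self, key: str, value) -> None:
        """Remember something about this session, e.g. the last plan generated."""
        self.state[key] = value
```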

🧠 Tech stack I used

Python 3.10+

Google AI Studio (Gemini 2.5 Flash Lite model)

SQLite database for storing agent state

Command-line interface (CLI) menu

OOP architecture with agent classes

📌 Key Features from My Code

Persistent agent memory using SQLite

Modular agent design (every agent has its own methods)

Real-time content generation via Google AI

CLI-based agent selection and interaction

Automatic saving/loading of all agent states

This project helped me truly understand how real-world AI agents are structured.

🧩 What I Learned During the 5-Day Intensive
1️⃣ Understanding Agent Architecture

Before this course, I thought agents were just “chatbots with extra steps.”
Now I understand:

Agents have states

Agents can call tools

Agents can store and retrieve memory

Agents can act in multiple steps, not just respond to text

Building the BaseAgent class helped me see how inheritance lets multiple agents share core logic while staying specialized.
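For example, a specialized agent only has to add its own task-specific method on top of BaseAgent. The class below is a simplified illustration; the method name and prompt wording are not the exact project code:

```python
# Illustrative subclass: method name and prompt wording are simplified.


class MealPlannerAgent(BaseAgent):
    """Specialized agent: only knows how to plan meals."""

    def plan_meals(self, day: str) -> str:
        prompt = (
            f"Create a healthy meal plan for {day}. "
            "Include breakfast, lunch, dinner and one snack, with short reasons."
        )
        plan = self.ask_google_ai(prompt)          # shared Gemini access from BaseAgent
        self.update_state("last_meal_plan", plan)  # shared state handling from BaseAgent
        return plan
```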

2️⃣ Tool Calling & Reasoning

One of the most powerful things I learned was how agents think before they act.
The hands-on labs showed how:

The model identifies the user intent

It decides whether to call a tool

It waits for the tool output

Then generates the final response

In my project, the “tool” was the Google AI model itself, accessed through the ask_google_ai() method.
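Here's a simplified version of that method, using the google-generativeai Python SDK. The model ID string and the fallback message are illustrative, and it assumes genai.configure(api_key=...) already ran at startup:

```python
import google.generativeai as genai

# Simplified version of the method. Assumes genai.configure(api_key=...)
# was called at startup; the model ID string is based on the
# "Gemini 2.5 Flash Lite" model mentioned above.


class BaseAgent:
    # ...continuing the sketch from earlier; only the Gemini call is shown here.

    def ask_google_ai(self, prompt: str) -> str:
        """Send a prompt to Gemini and return plain text without crashing the CLI."""
        try:
            model = genai.GenerativeModel("gemini-2.5-flash-lite")
            response = model.generate_content(prompt)
            return response.text
        except Exception as exc:  # network errors, quota limits, blocked/empty responses
            return f"[AI error] Could not get a response: {exc}"
```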

3️⃣ Memory & State Persistence

Using SQLite to save agent states taught me:

Not all memory has to be in the model

Agents can store context locally

You can reload previous conversations or plans

This made my system feel more like a “real assistant” instead of a one-time chatbot.
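Here's a minimal sketch of the save/load pattern: serialize the state dict to JSON and keep one row per agent in SQLite. The table and column names below are simplified for this post:

```python
import json
import sqlite3

# Minimal persistence sketch: one table, one row per agent.
DB_PATH = "concierge.db"


def save_state(agent_name: str, state: dict) -> None:
    """Serialize an agent's state dict to JSON and upsert it into SQLite."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS agent_state (name TEXT PRIMARY KEY, state TEXT)"
        )
        conn.execute(
            "INSERT OR REPLACE INTO agent_state (name, state) VALUES (?, ?)",
            (agent_name, json.dumps(state)),
        )


def load_state(agent_name: str) -> dict:
    """Load an agent's state back from SQLite, or start fresh if none exists."""
    with sqlite3.connect(DB_PATH) as conn:
        row = conn.execute(
            "SELECT state FROM agent_state WHERE name = ?", (agent_name,)
        ).fetchone()
    return json.loads(row[0]) if row else {}
```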

4️⃣ Hands-on Experience with Google’s Gemini Models

The labs helped me understand:

How to structure prompts

Why concise instructions matter

How to handle exceptions and API errors

By the time I implemented all 5 agents, I felt confident using Gemini for multiple workflows.
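For the record, here's the kind of prompt pattern I converged on: a role line, an explicit output format, and a length constraint. The wording below is an illustrative example, not the exact prompt from my project:

```python
# Illustrative prompt pattern: one role line, an explicit output format,
# and a hard length limit.
prompt = (
    "You are a study companion for a computer science student. "
    "Explain the topic below in simple language, then give five bullet-point notes. "
    "Keep the whole answer under 200 words.\n\n"
    "Topic: binary search trees"
)
# In the real flow this string is passed to the agent's ask_google_ai(prompt).
```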

🔧 Challenges I Faced (and Overcame)

Like any real project, I hit a few roadblocks:

❌ API Errors

Sometimes the model didn’t return output or the request failed.
I added exception handling in ask_google_ai() to prevent app crashes.

❌ JSON & State Management

Saving and loading agents required converting states to/from JSON.
This helped me understand serialization better.

❌ Designing a Clean CLI

I wanted something simple but useful, so I built a clean 1–7 menu with interactive prompts.
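Here's a trimmed-down version of that loop. The real menu has seven numbered options; the sketch below only shows the selection pattern, and the agent classes are the illustrative ones from earlier in the post:

```python
# Trimmed-down CLI loop. Only MealPlannerAgent is sketched earlier in the
# post; the other agents are registered the same way.
AGENTS = {
    "1": ("Meal Planner", MealPlannerAgent("meal_planner")),
    # "2": ("Travel Planner", TravelPlannerAgent("travel_planner")), and so on
}


def main() -> None:
    while True:
        print("\n=== AI Concierge ===")
        for key, (label, _) in AGENTS.items():
            print(f"{key}. {label}")
        print("0. Exit")

        choice = input("Choose an option: ").strip()
        if choice == "0":
            break
        if choice in AGENTS:
            _, agent = AGENTS[choice]
            request = input("What do you need? ")
            print(agent.ask_google_ai(request))
        else:
            print("Invalid choice, please try again.")


if __name__ == "__main__":
    main()
```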

These challenges helped me improve my debugging, problem-solving, and Python architecture skills.

🚀 How This Intensive Changed My Understanding of AI Agents

This wasn’t just a coding challenge—it completely changed how I understand AI systems.

Now I see that agents are:

Modular

Context-aware

Tool-driven

Memory-enabled

Multi-step reasoners

I also realized that even a simple Python project can become a powerful agent system if designed with the right structure.

🎯 Final Thoughts

The Google x Kaggle AI Agents Intensive helped me grow as a developer and gave me the confidence to build real AI-powered applications. My Concierge Multi-Agent System was the perfect playground to apply the concepts taught in the labs.

This challenge taught me:

How to build agents

How to structure them

How to use real tools

How to integrate AI with external systems

How to store memory and create persistent assistants

I’m excited to keep improving this system and explore more advanced agent workflows in the future.

Thanks for reading — and big thanks to Google and Kaggle for this amazing learning experience! 🙌✨
