🧠 I Built a Telegram Weather Bot Using n8n + LLM + OpenWeather (And Here’s What I Learned)

If you want to understand how AI agents, automation workflows, and API orchestration actually work in production — this guide is for you.

I recently built a Telegram bot that:

  • Detects user intent using a local LLM

  • Extracts city dynamically

  • Routes logic programmatically

  • Calls the OpenWeather API

  • Sends structured climate data back to the user

All of this with:

No SaaS AI dependency.
No hardcoded decision trees.
No Zapier shortcuts.

This post breaks down the architecture, lessons, and product-level thinking behind it.

Why This Project Matters (Beyond a Weather Bot)

This isn’t about temperature.

This is about:

  • Intent routing

  • LLM output normalization

  • Deterministic branching

  • API orchestration

  • Automation system design

These are the same building blocks used in:

  • AI assistants

  • Internal enterprise tools

  • Multi-step agent systems

  • Conversational SaaS products

If you’re a Product Manager exploring AI systems — this is foundational knowledge.

System Architecture Overview

Telegram Trigger
↓
Basic LLM Chain (Intent Detection)
↓
Edit Fields (Structured JSON Parsing)
↓
Code Node (Deterministic Routing)
↓
HTTP Request (OpenWeather API)
↓
Telegram Send Message

Step 1: Using LLM as an Intent Routing Engine

Instead of manually checking keywords, I used a structured LLM prompt:

You are a routing engine.
Respond ONLY with valid JSON.
Do not include explanations or text outside JSON.

Schema:
{
  "intent": "weather | chat",
  "city": ""
}

Rules:
- If the user asks about weather, intent = "weather"
- Extract the city if mentioned, else city = ""
- For anything else, intent = "chat"

This ensures:

  • Structured output

  • Controlled schema

  • Reduced hallucination

  • Predictable routing behavior

Example response:

{
  "intent": "weather",
  "city": "Delhi"
}

This converts unstructured chat into structured machine-readable input.

That’s AI orchestration.

Step 2: Parsing LLM JSON Safely in n8n

One critical issue:

LLMs often return JSON as a string, not as a parsed object.

If you don’t normalize it, your IF conditions fail.

In the Edit Fields node, I parsed safely:

{{$json.text.intent || JSON.parse($json.text).intent}}
{{$json.text.city || JSON.parse($json.text).city}}

This prevents silent failures and makes the workflow production-safe.

Lesson:
AI output must be normalized before routing.
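The same normalization can be done in a Code node instead of inline expressions. Here is a minimal sketch; `normalizeLlmOutput` is a hypothetical helper name, and the fallback-to-"chat" behavior is one reasonable design choice, not the workflow's exact implementation:

```javascript
// Hypothetical normalizer for the LLM output: accepts either an already-parsed
// object or a raw JSON string and always returns a clean { intent, city }.
function normalizeLlmOutput(text) {
  let parsed = text;
  if (typeof text === 'string') {
    try {
      parsed = JSON.parse(text);
    } catch (e) {
      // Unparseable output: fail safe into the generic chat route.
      return { intent: 'chat', city: '' };
    }
  }
  return {
    intent: parsed.intent === 'weather' ? 'weather' : 'chat',
    city: typeof parsed.city === 'string' ? parsed.city : '',
  };
}

// Inside an n8n Code node this would wrap the incoming item, e.g.:
// return [{ json: normalizeLlmOutput($json.text) }];
```

The try/catch means a malformed model response degrades gracefully instead of crashing the workflow mid-execution.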

Step 3: Deterministic Routing Using Code Node

Instead of relying on fragile IF configurations, I used a Code node:

if ($json.intent === 'weather') {
  return [{ json: { route: 'weather', city: $json.city } }];
}
return [{ json: { route: 'chat' } }];

Why?

Because product systems need explicit logic.

LLMs decide intent.
Code enforces execution.

That separation is important.

Step 4: Weather API Integration (OpenWeather)

The HTTP Request node calls:

https://api.openweathermap.org/data/2.5/weather

Query parameters:

  • q = {{$json.city}}

  • units = metric

  • appid = YOUR_API_KEY
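For reference, the request the node assembles is roughly equivalent to this sketch (`buildWeatherUrl` is an illustrative helper, and `YOUR_API_KEY` remains a placeholder exactly as in the node configuration):

```javascript
// Build the same OpenWeather "current weather" URL the HTTP Request node calls.
// URLSearchParams handles encoding of multi-word city names like "New Delhi".
function buildWeatherUrl(city, apiKey) {
  const params = new URLSearchParams({
    q: city,          // city name from the routing step
    units: 'metric',  // Celsius output
    appid: apiKey,    // OpenWeather API key
  });
  return `https://api.openweathermap.org/data/2.5/weather?${params}`;
}
```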

Important failure case:

If city is empty → API returns:

400 - Nothing to geocode

So routing must prevent weather API calls when city is missing.

This is real-world error handling.
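One way to enforce that guardrail is a small pre-flight check before the HTTP Request node. This is a sketch under my own naming assumptions (`guardWeatherCall` and the `ask_city` route are illustrative, not from the workflow):

```javascript
// Hypothetical guardrail: block weather API calls when no city was extracted,
// instead of letting OpenWeather return "400 - Nothing to geocode".
function guardWeatherCall(item) {
  const city = (item.city || '').trim();
  if (item.route === 'weather' && city === '') {
    // Divert to a clarifying question rather than a failing API call.
    return { route: 'ask_city', message: 'Which city should I check?' };
  }
  return { ...item, city };
}
```

Asking the user a follow-up question is friendlier than surfacing a raw 400 error in chat.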

Step 5: Structured Telegram Output

Final formatted response: 

🌤 Weather today: Singapore
🌡 Temp: 23.54°C
🤒 Feels like: 23.02°C
⬇️ Min Temp: 22.5°C
⬆️ Max Temp: 24.62°C
💧 Humidity: 41%
👀 Visibility: 6000m

Clean.

Readable.

Dynamic.

This improves user experience significantly compared to raw JSON.
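A formatter like the one below could produce that message; it assumes OpenWeather's documented response shape for current weather (`main.temp`, `main.feels_like`, `visibility`, etc. with `units=metric`), and `formatWeatherMessage` is my own illustrative name:

```javascript
// Build the Telegram message from an OpenWeather current-weather response.
function formatWeatherMessage(data) {
  const m = data.main;
  return [
    `🌤 Weather today: ${data.name}`,
    `🌡 Temp: ${m.temp}°C`,
    `🤒 Feels like: ${m.feels_like}°C`,
    `⬇️ Min Temp: ${m.temp_min}°C`,
    `⬆️ Max Temp: ${m.temp_max}°C`,
    `💧 Humidity: ${m.humidity}%`,
    `👀 Visibility: ${data.visibility}m`,
  ].join('\n');
}
```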

Product Lessons From This Build

1. LLMs Are Not Controllers

They are classifiers and extractors.
Execution logic must remain deterministic.


2. JSON Validation Is Critical

Never trust AI output blindly.
Normalize it before branching.


3. API Guardrails Are Non-Negotiable

Validate inputs before calling external services.


4. Debugging Is System Learning

The errors I hit were not failures.
They were architecture lessons.


Am I On the Right Path?

Yes — and here’s why.

You are moving from:

“Using automation tools”

to

“Designing AI-driven systems.”

You are learning:

  • Prompt engineering for structured output

  • Workflow orchestration

  • External API integration

  • Error containment

  • Modular routing design

This is AI Product Builder territory.

Not beginner automation.

What’s Next If I Were Scaling This?

If I were taking this past a production release, my plan of action would be:

  • Add memory (last used city)

  • Add conversation history

  • Implement caching

  • Add retry logic

  • Add centralized error handler

  • Store logs in database

  • Deploy as reusable AI agent template
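As a taste of the retry item above, here is a minimal sketch of exponential-backoff retry logic; `withRetry` is a hypothetical wrapper I'd put around the weather fetch, not code from the current workflow:

```javascript
// Retry an async call (e.g. the OpenWeather request) with exponential backoff.
async function withRetry(fetchFn, attempts = 3, baseDelayMs = 500) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fetchFn();
    } catch (err) {
      lastError = err;
      // Wait 500ms, 1000ms, 2000ms, ... between attempts.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError; // all attempts exhausted
}
```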

That’s how bots evolve into platforms.

Final Thoughts

Building AI workflows is not about stacking nodes.

It’s about:

  • Clear intent extraction

  • Structured outputs

  • Deterministic logic

  • Clean API orchestration

  • Thoughtful error handling

This weather bot is a small system.

But the architecture thinking behind it is what matters.

And that’s where real AI product building begins.

🚀 If you found this useful, don’t forget to share it with fellow product enthusiasts!