by Keith Uy
**What it's for:** A base template for anyone building a Telegram AI Agent. It accepts multiple input types (voice, picture, video, and text) and routes them to an AI model of your choosing, giving you a working starting point. From there, you can connect any tools you see fit to the AI Agent in your n8n workflows.

**How it works:**
- **Input:** Telegram message sent to a bot chat.
- **Processing:** A Switch node determines the message type: voice, picture, video, or text. (Currently uses OpenAI and Gemini to analyze voice/photo/video content, but feel free to swap these nodes for other models.)
- **AI Agent processing:** The LLM of your choosing examines the message and, based on the system prompt, generates an output.
- **Output:** The AI output is sent back as a Telegram message.

**How to use:**
1. Create your chat bot and generate an access token: search for BotFather in Telegram, type "/newbot", follow the instructions, and copy the access token.
2. Create credentials in n8n: open the Telegram Trigger node, click Create Credential, paste the access token, and save.
3. Create an LLM access token (this differs per LLM; search "your LLM + API" on Google): create an account on the LLM platform, buy credits to use the LLM API, generate an access token, and paste it into the LLM node.

**Requirements:**
- Telegram Bot access token
- Google Gemini access token (for picture and video messages)
- OpenAI access token (for voice messages)
- LLM access token (your preference for the AI Agent)

**Customizing this workflow:**
- To personalize the AI output, adjust the system prompt (give context or directions on the AI's role).
- Add tools to the AI Agent to give it more utility beyond a personalized LLM (e.g., calendars, databases).
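The Switch node's routing of message types can be sketched as a small Code-node function. The field names (`voice`, `photo`, `video`, `text`) follow the Telegram Bot API message object; the routing labels themselves are illustrative:

```javascript
// Classify an incoming Telegram message by which payload key is present.
// Field names follow the Telegram Bot API; labels mirror the Switch branches.
function classifyTelegramMessage(message) {
  if (message.voice) return "voice";
  if (message.photo) return "photo";
  if (message.video) return "video";
  if (message.text) return "text";
  return "unsupported"; // anything else (stickers, documents, etc.)
}

console.log(classifyTelegramMessage({ voice: { file_id: "abc" } })); // "voice"
console.log(classifyTelegramMessage({ text: "hello" }));             // "text"
```

In n8n itself a Switch node with four "field exists" rules does the same job; a Code node like this is only needed if you want custom fallback handling.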
by Davide
This workflow automates transforming user-submitted photos (even bad selfies) into professional CV and LinkedIn headshots using the Nano Banana Pro AI model.

*(Example images: from selfie to CV/LinkedIn headshot.)*

**Key Advantages**
1. ✅ **Fully Automated Professional Image Enhancement**: From receiving a photo to delivering a polished LinkedIn-style headshot, the workflow requires zero manual intervention.
2. ✅ **Seamless Telegram Integration**: Users can simply send a picture via Telegram; no need to log into dashboards or upload images manually.
3. ✅ **Secure Access Control**: Only the authorized Telegram user can trigger the workflow, preventing unauthorized usage.
4. ✅ **Reliable API Handling with Auto-Polling**: A robust status-checking mechanism waits for the Fal.ai model to finish, automatically retries until the result is ready, and minimizes the chance of failures or partial results.
5. ✅ **Flexible Input Options**: Run the workflow via Telegram, or manually by setting the image URL if no FTP space is available. This makes it usable in multiple environments.
6. ✅ **Dual Storage Output (Google Drive + FTP)**: Processed images are automatically stored in Google Drive (organized and timestamped) and on FTP (ideal for websites, CDN delivery, or automated systems).
7. ✅ **Clean and Professional Output**: Detailed prompt engineering consistently produces realistic headshots with studio-style lighting, clean backgrounds, and professional attire adjustments. Perfect for LinkedIn, CVs, or corporate profiles.
8. ✅ **Modular and Easy to Customize**: Each step is isolated and can be modified: change the prompt, replace the storage destination, add extra validation, or adjust resolution and output formats.

**How It Works**
The workflow supports two input methods:
- **Telegram Trigger path:** Users send photos via Telegram, which are then uploaded over FTP and transformed into professional headshots.
- **Manual Trigger path:** Users trigger the workflow manually with an image URL, bypassing the Telegram/FTP steps for direct processing.

The core process:
1. Receive an input image (from Telegram or a manual URL).
2. Send the image to Fal.ai's Nano Banana Pro API with specific prompts for professional headshot transformation.
3. Poll the API for completion status, using a conditional check to ensure processing is complete before downloading results.
4. Download the generated image and upload it to both Google Drive and FTP storage.

**Set Up Steps**
- **Authorization setup:**
  - Replace the placeholder in the "Sanitaze" node with your actual Telegram user ID.
  - Configure the Fal.ai API key in the "Create Image" node (Header Auth: `Authorization: Key YOURAPIKEY`).
  - Set up Google Drive and FTP credentials in their respective nodes.
- **Storage configuration:** In the "Set FTP params" node, configure `ftp_path` (your server directory path, e.g., `/public_html/images/`) and `base_url` (the corresponding base URL, e.g., `https://website.com/images/`). Configure the Google Drive folder ID in the "Upload Image" node.
- **Input method selection:** For Telegram usage, ensure the Telegram bot is properly configured. For manual usage, set the image URL in the "Fix Image Url" node or use the manual trigger.
- **API endpoints:** Ensure all Fal.ai API endpoints are correctly configured in the HTTP Request nodes for creating images, checking status, and retrieving results.
- **File naming:** Generated files use timestamp-based naming (`yyyyLLddHHmmss-filename.ext`). Output format is PNG at 1K resolution.

The workflow handles the complete pipeline from image submission through AI processing to storage distribution, with proper error handling and status checking throughout.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
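The timestamp-based naming above (`yyyyLLddHHmmss-filename.ext` uses Luxon format tokens, and n8n Code nodes ship with Luxon, so `$now.toFormat("yyyyLLddHHmmss")` would also work). A plain-JavaScript sketch of the same scheme:

```javascript
// Build a yyyyLLddHHmmss-prefixed filename, as described in "File naming".
// Plain Date is used here to stay self-contained; the function name is illustrative.
function timestampedName(filename, date = new Date()) {
  const pad = (n) => String(n).padStart(2, "0");
  const stamp =
    date.getFullYear() +
    pad(date.getMonth() + 1) + // LL: zero-padded month
    pad(date.getDate()) +
    pad(date.getHours()) +
    pad(date.getMinutes()) +
    pad(date.getSeconds());
  return `${stamp}-${filename}`;
}

console.log(timestampedName("headshot.png", new Date(2024, 0, 15, 9, 5, 3)));
// → "20240115090503-headshot.png"
```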
by Toshiya Minami
**Who's it for**
Teams building health/fitness apps, coaches running check-ins in chat, and anyone who needs quick, structured nutrition insights from food photos without manual logging.

**What it does / How it works**
This workflow accepts a food image (URL or Base64), uses a vision-capable LLM to infer likely ingredients and rough gram amounts, estimates per-ingredient calories, and returns a strict JSON summary with total calories and a short nutrition note. It normalizes different payloads (e.g., Telegram/LINE/Webhook) into a common format, handles transient errors with retries, and avoids hardcoded secrets by using credentials/env vars.

**Requirements**
- Vision-capable LLM credentials (e.g., gpt-4o or equivalent)
- One input channel (Webhook, Telegram, or LINE)
- Environment variables for model name/temperature and optional request validation

**How to set up**
1. Connect your input channel and enable the Webhook (copy the test URL).
2. Add LLM credentials and set `LLM_MODEL` and `LLM_TEMPERATURE` (e.g., 0.3).
3. Turn on the workflow, send a sample payload with `imageUrl`, and confirm the strict JSON output.
4. (Optional) Configure a reply node (Telegram/Slack or HTTP Response) and a logger (Google Sheets/Notion).

**How to customize the workflow**
- **Outputs**: Add macros (protein/fat/carb) or micronutrient fields.
- **Units**: Convert portion descriptions (piece/slice) to grams with your own mapping.
- **Languages**: Toggle multilingual output (ja/en).
- **Policies**: Tighten validation (reject low-confidence parses) or add manual review steps.
- **Security**: Use signed/temporary URLs for private images; mask PII in logs.

**Data model (strict JSON)**
`{ "dishName": "string", "ingredients": [{ "name": "string", "amount": 0, "calories": 0 }], "totalCalories": 0, "nutritionEvaluation": "string" }`

**Notes**
Rename all nodes clearly, include sticky notes explaining the setup, and never commit real IDs, tokens, or API keys.
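Since the workflow promises strict JSON, a validation step before passing results downstream is a natural fit. A sketch against the data model above (field names come from this template's schema; the checks themselves are illustrative):

```javascript
// Validate an LLM result against the strict JSON data model before logging
// or replying. Returns { valid, errors } rather than throwing, so the
// workflow can branch to a retry or manual-review path.
function validateNutritionResult(data) {
  const errors = [];
  if (typeof data.dishName !== "string") errors.push("dishName must be a string");
  if (!Array.isArray(data.ingredients)) {
    errors.push("ingredients must be an array");
  } else {
    data.ingredients.forEach((ing, i) => {
      if (typeof ing.name !== "string") errors.push(`ingredients[${i}].name must be a string`);
      if (typeof ing.amount !== "number") errors.push(`ingredients[${i}].amount must be a number`);
      if (typeof ing.calories !== "number") errors.push(`ingredients[${i}].calories must be a number`);
    });
  }
  if (typeof data.totalCalories !== "number") errors.push("totalCalories must be a number");
  if (typeof data.nutritionEvaluation !== "string") errors.push("nutritionEvaluation must be a string");
  return { valid: errors.length === 0, errors };
}
```

This pairs well with the "reject low-confidence parses" policy: invalid shapes can be routed back to the LLM with the error list appended to the prompt.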
by Ronald
Sometimes you need the rich text field to be in HTML instead of Markdown. This template syncs either a single record or all records at once. YouTube tutorial
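A minimal sketch of the kind of Markdown-to-HTML conversion this sync performs. A real workflow would use a full converter; this hypothetical function handles only bold, italic, and links for illustration:

```javascript
// Convert a few common Markdown constructs to HTML (illustrative only:
// bold, italic, and links). Bold is replaced before italic so that "**"
// pairs are consumed first.
function mdToHtml(md) {
  return md
    .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>")
    .replace(/\*(.+?)\*/g, "<em>$1</em>")
    .replace(/\[(.+?)\]\((.+?)\)/g, '<a href="$2">$1</a>');
}

console.log(mdToHtml("**bold** and [link](https://example.com)"));
// → <strong>bold</strong> and <a href="https://example.com">link</a>
```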
by Mo AlBarrak
A professional AI equity analysis automation built on n8n that transforms structured financial data and real-time news into disciplined, risk-adjusted price targets and actionable BUY/HOLD/SELL signals, delivered through automation channels like Telegram or dashboards.

**Key Features**
- **Automated Fundamental & News Parsing:** Ingests financial metrics and analyst-grade news streams into a unified valuation engine.
- **Phase-Aware Valuation Logic:** Recognizes growth vs. mature companies, applying appropriate valuation methods (EPS, revenue multiples, fundamentals) to avoid unrealistic targets.
- **Implied P/E Sanity Gate:** Prevents misleading EPS-based valuations on growth/transition-phase stocks.
- **Bear/Base/Bull Scenario Generation:** Produces three price targets with institutional-standard bands and logic.
- **Risk & Confidence Scoring:** Combines the F-Score with qualitative risk extraction from news to produce a confidence index (20–90).
- **Structured JSON Output:** Designed for automation; feeds dashboards, alerts, APIs, or downstream analytics.
- **Cross-Model Verification (optional):** Works with multi-model LLM consensus (e.g., GPT + Gemini) for enhanced reliability.

**Ideal For**
- Asset managers & analysts who want automated equity valuations
- Retail platforms seeking a disciplined valuation engine
- Fintech products integrating AI-powered stock insights
- Educators and research teams needing structured valuation tools

⚙️ **Technical Notes (Best Practice for Production)**
**Rate limits & timers for long lists:** If processing a long watchlist via HTTP requests (e.g., financial APIs, news APIs), add timers (Wait nodes) or rate-limit controls before each HTTP request to respect API quotas and avoid throttling, reduce workflow errors under heavy load, and improve reliability for automated batch runs. This is especially important for workflows that fetch quotes, historical data, or news articles for multiple stocks in sequence.
📌 **Use Cases**
- 🔹 **Daily Watchlist Runner:** Run nightly analysis on a portfolio and distribute targets + risk insights via Telegram or email.
- 🔹 **API Feed:** Expose JSON results via webhook/API for downstream apps and dashboards.
- 🔹 **Research & Alerts:** Trigger alerts when confidence shifts, price targets are breached, or news alters the thesis.

🧠 **Why This Is Valuable**
Unlike simple "chat" bots that give generic responses, this workflow encodes institutional valuation discipline: no hallucinated price points, no fuzzy narratives, just structured, defensible outputs. This makes it compelling for professional users, startup investors, SaaS subscription customers, and fintech integrators.
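One way the bounded confidence index could be derived is shown below. Only the 20–90 range comes from the template description; the Piotroski-style F-Score input (0–9) and the linear weighting with a news-risk penalty are assumptions for illustration:

```javascript
// Hypothetical confidence-index calculation. The 20–90 clamp matches the
// template; the mapping of F-Score (0–9) and news-risk penalty onto that
// range is an assumed weighting, not the template's actual formula.
function confidenceIndex(fScore, newsRiskPenalty) {
  const raw = 20 + fScore * 7.8 - newsRiskPenalty; // maps F-Score 0–9 onto ~20–90
  return Math.min(90, Math.max(20, Math.round(raw)));
}

console.log(confidenceIndex(9, 0));  // 90: strong fundamentals, no news risk
console.log(confidenceIndex(3, 10)); // 33: weak fundamentals plus news risk
```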
by Yoshino Haruki
**Overview**
This template is ideal for photographers, graphic designers, and creative professionals who manage large volumes of visual assets. It is also perfect for digital asset managers looking for a customizable, automated solution to organize files without manual tagging.

**What it does**
When a new image is uploaded to a designated "Inbox" folder in Google Drive, the workflow performs the following actions:
- **AI Analysis**: Uses GPT-4o to analyze the image content, generating a description, extracting dominant colors, and determining the category (e.g., Portrait vs. Landscape).
- **Safety Check**: Runs an AI-based NSFW filter. If inappropriate content is detected, the process stops and a warning is sent to Slack.
- **Smart Sorting**: Automatically moves the file into the correct subfolder based on its category.
- **Contextual Tagging**: Generates specific tags (e.g., "smile, natural light" for portraits) and updates the file metadata.
- **Archiving**: Creates a comprehensive entry in a Notion database with the image link, tags, and description.
- **Notification**: Sends a success alert to Slack with a summary of the archived asset.

**How to set up**
This workflow is designed to be plug-and-play using a central configuration node.
1. **Credentials**: Connect your Google Drive, OpenAI, Notion, and Slack accounts in n8n.
2. **Set variables**: Open the node named "Workflow Configuration" and replace the placeholder IDs with your actual folder IDs (for Inbox, Portraits, and Landscapes), Notion database ID, and Slack channel ID.
3. **Prepare Notion**: Create a database in Notion with the following properties: Category (Select), Description (Rich Text), Image URL (URL), Tags (Rich Text), Date (Date).

**Requirements**
- **n8n version**: 1.0 or later.
- **OpenAI API**: Access to the **gpt-4o** model is recommended for accurate vision analysis.
- **Google Drive**: A specific folder structure (Inbox, Portraits, Landscapes).
- **Notion**: A dedicated database for the portfolio.
- **Slack**: A channel for notifications.
**How to customize**
- **Add categories**: Expand the "Category Router" (Switch node) to include more specific genres like "Architecture," "Macro," or "Street," and add corresponding paths.
- **Adjust prompts**: Modify the system prompts in the AI nodes to change the language of the output or the style of the generated tags.
- **Change output**: Connect to Airtable or Excel instead of Notion if you prefer a different database system.
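The expanded "Category Router" decision can be pictured as a simple lookup. The extra genres and the Inbox fallback are the additions suggested in the customization notes; all folder IDs below are placeholders:

```javascript
// Map an AI-assigned category to a Google Drive destination folder.
// "Architecture" and "Macro" are example added genres; unknown categories
// fall back to the Inbox so nothing is lost. All IDs are placeholders.
function routeCategory(category) {
  const folders = {
    Portrait: "PORTRAITS_FOLDER_ID",
    Landscape: "LANDSCAPES_FOLDER_ID",
    Architecture: "ARCHITECTURE_FOLDER_ID", // added genre
    Macro: "MACRO_FOLDER_ID",               // added genre
  };
  return folders[category] ?? "INBOX_FOLDER_ID";
}

console.log(routeCategory("Portrait")); // "PORTRAITS_FOLDER_ID"
console.log(routeCategory("Street"));   // "INBOX_FOLDER_ID" (no path yet)
```

In the actual workflow this lives in the Switch node plus the "Workflow Configuration" variables; the code form just makes the fallback behavior explicit.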
by Vedad Sose
Generate 50 Meta ad copy variations informed by target audience insights, then validate them with real human feedback to identify the top 10 performers, all before spending a dollar on ads.

**Why This Matters**
Traditional ad copy generation relies on AI guesswork about what might resonate. This workflow grounds every variation in real human insight from your target audience: what actually matters to them, their specific concerns, the language they respond to, the benefits they care about. Instead of burning ad budget testing generic variations, you start with copy shaped by authentic audience perspectives, then pre-validated by those same people before you spend a single dollar. The top 10 ads aren't AI's best guesses; they're ranked by real human feedback on what would genuinely make your audience stop scrolling and click.

**How It Works**
This workflow uses real human perspectives at two critical stages:
1. **Generation (audience-informed)**: AI queries Digital Twins from your target demographic to understand their preferences, concerns, and emotional drivers. These insights directly shape the 50 ad variations, ensuring copy that speaks to real human motivations.
2. **Validation (pre-tested)**: Each variation is evaluated by Digital Twins matching your audience. They score each ad and provide specific feedback on what resonates and what falls flat. Only the top 10 make it to your Google Sheet.

The result: pre-validated ad copy ranked by actual target audience feedback, not AI assumptions.
**The Process**
1. **Submit your brief**: Product details and target audience description.
2. **AI gathers audience insights**: Queries Digital Twins to understand what matters to your demographic.
3. **Generate 50 informed variations**: Copy crafted around real preferences, pain points, and emotional triggers.
4. **Digital Twins validate**: The target audience evaluates each ad for resonance and effectiveness.
5. **Top 10 ranked output**: A Google Sheet with scores and specific human feedback.

Each variation includes:
- **Primary Text**: Main ad copy (the first 125 characters are the crucial hook).
- **Headline**: Bold text below the creative (under 40 characters).
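The two length constraints above can be enforced in a small check before variations are written to the sheet. The field names here are assumptions for illustration; only the 125-character hook window and the 40-character headline limit come from the text:

```javascript
// Check a generated ad against the Meta copy constraints described above:
// the first 125 chars of Primary Text carry the hook, and the Headline
// should stay under 40 chars. Field names are illustrative.
function checkAdCopy(ad) {
  return {
    headlineOk: ad.headline.length < 40,
    hook: ad.primaryText.slice(0, 125), // the visible portion before truncation
    hookTruncated: ad.primaryText.length > 125,
  };
}

const result = checkAdCopy({
  headline: "Stop guessing. Start validating.",
  primaryText: "What if your next ad was pre-tested by your exact audience?",
});
console.log(result.headlineOk);    // true
console.log(result.hookTruncated); // false
```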
by satoshi
**Analyze productivity metrics from Google Calendar and Todoist to Slack**

This workflow acts as an automated personal productivity coach. It aggregates data from your daily tools (Google Calendar, Todoist, and Slack) to provide AI-driven insights into your work habits. It runs daily to log metrics to Google Sheets and sends a summary to Slack. Additionally, every Friday it generates a comprehensive strategic weekly review.

**Who is this for?**
- **Remote workers & freelancers** who want to track their focus time and meeting load.
- **Productivity enthusiasts** looking to automate their "Quantified Self" data collection.
- **Managers** who want a high-level overview of their weekly throughput and communication volume without manual tracking.

**What it does**
1. **Daily trigger**: Runs automatically every weekday morning (default: 8 AM).
2. **Data collection**: Fetches today's meetings from Google Calendar, retrieves high-priority and overdue tasks from Todoist, and analyzes recent message activity from Slack.
3. **AI analysis**: Uses OpenAI to analyze the data, identifying focus blocks and potential overload risks.
4. **Logging**: Saves raw metrics (meeting hours, task counts, message volume) to a Google Sheet for historical tracking.
5. **Reporting**: Sends a "Daily Productivity Summary" to Slack with actionable advice. On Fridays, it pulls the last 7 days of data from Google Sheets to generate and send a Weekly Strategic Report to Slack.

**Requirements**
- **n8n** (self-hosted or Cloud)
- **Google Cloud Console** project with the Calendar and Sheets APIs enabled
- **Todoist** account
- **Slack** workspace
- **OpenAI** API key (GPT-4 is recommended for better analysis)

**How to set up**
1. **Configure credentials**: Set up your credentials in n8n for Google (OAuth2), Todoist, Slack, and OpenAI.
2. **Prepare the Google Sheet**: Create a new Google Sheet with the following header columns in the first row: date, meetingHours, tasksCount, slackMessages.
3. **Update nodes**: In the Log Daily Metrics node, select your spreadsheet and sheet name.
In the Fetch Last 7 Days Data node, select the same spreadsheet; in the Slack nodes, select the channel where you want to receive reports.
**Activate**: Toggle the workflow to Active.

**How to customize**
- **Adjust schedule**: Change the **Schedule Daily Execution** node to fit your preferred reporting time.
- **Modify AI persona**: Edit the system prompt in the **AI Analysis** node to change the tone of the report (e.g., make it more strict or more encouraging).
- **Add data sources**: You can easily chain additional nodes (like GitHub or Jira) into the **Aggregate Data** code node to include coding or project management metrics.
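The kind of merge the Aggregate Data code node performs can be sketched as follows: combining calendar events, Todoist tasks, and Slack messages into the four metrics logged to the sheet. The input shapes are assumptions based on the metric names in this template:

```javascript
// Aggregate raw items from the three sources into the sheet's four columns:
// date, meetingHours, tasksCount, slackMessages. Input shapes are assumed.
function aggregateMetrics(meetings, tasks, slackMessages, date = new Date()) {
  const meetingHours = meetings.reduce(
    (sum, m) => sum + (new Date(m.end) - new Date(m.start)) / 3.6e6, // ms to hours
    0
  );
  return {
    date: date.toISOString().slice(0, 10),
    meetingHours: Math.round(meetingHours * 10) / 10, // one decimal place
    tasksCount: tasks.length,
    slackMessages: slackMessages.length,
  };
}
```

Adding a data source (GitHub, Jira) means passing one more array in and emitting one more field, plus a matching header column in the sheet.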
by Robert Breen
A hands-on starter workflow that teaches beginners how to:
- Pull rows from a Google Sheet
- Append a new record that mimics a form submission
- Generate AI-powered text with GPT-4o based on a "Topic" column
- Write the AI output back into the correct row using an update operation

Along the way you'll learn the three essential Google Sheets operations in n8n (read → append → update), see how to pass sheet data into an OpenAI node, and document each step with sticky-note instructions: perfect for anyone taking their first steps in no-code automation.

0️⃣ **Prerequisites**
- **Google Sheets**: Open Google Cloud Console and create or select a project. Enable the Google Sheets API under APIs & Services. Create an OAuth Desktop credential and connect it in n8n. Share the spreadsheet with the Google account linked to the credential.
- **OpenAI**: Create a secret key at <https://platform.openai.com/account/api-keys>. In n8n, go to Credentials → New, choose OpenAI API, and paste the key.
- **Sample sheet to copy** (make your own copy and use its link): <https://docs.google.com/spreadsheets/d/15i9WIYpqc5lNd5T4VyM0RRptFPdi9doCbEEDn8QglN4/edit?usp=sharing>

1️⃣ **Trigger**
Manual Trigger lets you run on demand while learning. (Swap for a Schedule or Webhook once you automate.)

2️⃣ **Read existing rows**
**Node:** Get Rows from Google Sheets. Reads every row from Sheet1 of your copied file.

3️⃣ **Generate a demo row**
**Node:** Generate 1 Row of Data (Set node). Pretends a form was submitted: Name, Email, Topic, Submitted = "Yes".

4️⃣ **Append the new row**
**Node:** Append Data to Google. The append operation writes to the first empty line.

5️⃣ **Create a description with GPT-4o**
- OpenAI Chat Model: uses your OpenAI credential.
- Write description (AI Agent): the prompt is the Topic.
- Structured Output Parser: forces JSON like `{ "description": "…" }`.

6️⃣ **Update that same row**
**Node:** Update Sheets data. The update operation matches on the Email column to find the correct line, then writes the new Description cell returned by GPT-4o.
7️⃣ **Why this matters**
- Demonstrates the three core Google Sheets operations: read → append → update.
- Shows how to enrich sheet data with an AI step and push the result right back.
- Sticky notes provide inline docs so anyone opening the workflow understands the flow instantly.

👤 **Need help?**
Robert Breen – Automation Consultant
✉️ robert.j.breen@gmail.com
🔗 <https://www.linkedin.com/in/robert-breen-29429625/>
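Conceptually, the update step's "match on Email" works like the sketch below: find the row whose Email matches, then write the new Description cell. (n8n's Google Sheets node does this matching for you; the code only illustrates the idea, and the function name is hypothetical.)

```javascript
// Illustrate the update operation's row matching: rows whose Email matches
// get a new Description; all other rows pass through unchanged.
function updateRowByEmail(rows, email, description) {
  return rows.map((row) =>
    row.Email === email ? { ...row, Description: description } : row
  );
}
```

This is why the tutorial uses Email as the match column: it is unique per row, so the update lands on exactly one line.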
by Nishant
Automated daily swing-trade ideas from end-of-day (EOD) data, scored by an LLM, logged to Google Sheets, and pushed to Telegram.

**What this workflow does**
- **Fetches EOD quotes** for a chosen stock universe (example: **NSE-100** via RapidAPI).
- **Cleans & filters** the universe using simple technical/quality gates (e.g., price/volume sanity checks to avoid illiquid names).
- **Packages market context** and feeds it to **OpenAI** with a strict **JSON schema** to produce **top swing-trade recommendations** (entry, target, stop, rationale).
- **Splits structured output** into rows and **logs** them to a **Google Sheet** for tracking.
- **Sends an alert** with the day's trade ideas to **Telegram** (channel or DM).

**Ideal for**
- Retail traders who want a daily, hands-off idea generator.
- PMs/engineers prototyping LLM-assisted quant sidekicks.
- Creators who publish daily trade notes to their audience.

**Tech stack**
- **n8n** (orchestration)
- **RapidAPI** (EOD quotes; pluggable data source)
- **OpenAI** (LLM for idea generation)
- **Google Sheets** (logging & performance tracking)
- **Telegram** (alerts)

**Prerequisites**
- RapidAPI key with access to an EOD quotes endpoint for your exchange.
- OpenAI API key.
- Google account with a Sheet named Trade_Recommendations_Tracker (or update the node).
- Telegram bot token (via @BotFather) and destination chat ID.

> You can replace any of the above vendors with equivalents (e.g., Alpha Vantage, Twelve Data, Polygon, etc.). Only the HTTP Request + Format nodes need tweaks.
**Environment variables**

| Key | Example | Used in |
| -------------------- | -------------------------- | --------------------- |
| RAPIDAPI_KEY | xxxxxxxxxxxxxxxxxxxxxxxx | HTTP Request (quotes) |
| OPENAI_API_KEY | sk-… | OpenAI node |
| TELEGRAM_BOT_TOKEN | 123456:ABC-DEF… | Telegram node |
| TELEGRAM_CHAT_ID | 5357385827 | Telegram node |

**Google Sheet schema**
Create a Sheet (tab: EOD_Ideas) with the headers: Date, Symbol, Direction, Entry, Target, StopLoss, Confidence, Reason, SourceModel, UniverseTag

**Node map (name → purpose)**
1. **Trigger – Daily Market Close**: Fires daily after market close (e.g., 4:15 PM IST).
2. **Prepare Stock List (NSE 100)**: Provides stock symbols to analyze (static list, or from a Sheet/API).
3. **Fetch EOD Data (RapidAPI)**: Gets EOD data for all symbols in one or batched calls.
4. **Format EOD Data**: Normalizes the API response to a clean array (symbol, close, high, low, volume, etc.).
5. **Filter Valid Stock Data**: Drops illiquid/invalid rows (e.g., keeps volume > 200k and close > 50).
6. **Build LLM Prompt Input**: Creates compact market context and JSON instructions for the model.
7. **Generate Swing Trade Ideas (OpenAI)**: Returns strict JSON with top ideas.
8. **Split JSON Output (Trade-wise)**: Explodes the JSON array into individual items.
9. **Log Trade to Google Sheet**: Appends each idea as a row.
10. **Send Trade Alert to Telegram**: Publishes a concise summary to Telegram.
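The "Filter Valid Stock Data" gate can be sketched as a Code-node filter using the example thresholds from the node map (volume > 200k, close > 50). The row shape matches the normalized fields listed under "Format EOD Data"; the thresholds should be tuned to your universe:

```javascript
// Drop illiquid or invalid rows before building the LLM prompt. Rows must
// have finite numeric close/volume and clear the example thresholds from
// the node map; both thresholds are configurable.
function filterValidStocks(rows, minVolume = 200000, minClose = 50) {
  return rows.filter(
    (r) =>
      Number.isFinite(r.close) &&
      Number.isFinite(r.volume) &&
      r.volume > minVolume &&
      r.close > minClose
  );
}

const sample = [
  { symbol: "AAA", close: 120, volume: 500000 }, // passes
  { symbol: "BBB", close: 30, volume: 500000 },  // price too low
  { symbol: "CCC", close: 120, volume: 1000 },   // illiquid
];
console.log(filterValidStocks(sample).map((r) => r.symbol)); // ["AAA"]
```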
by Punit
This n8n workflow automates generating and publishing LinkedIn posts that align with your personal brand tone and trending tech topics. It uses OpenAI to create engaging content and matching visuals, posts directly to LinkedIn, and sends a confirmation via Telegram with the post details.

🔑 **Key Features**
- 🏷️ **Random Hashtag Selection**: Picks a trending tag from a custom list for post inspiration.
- ✍️ **AI-Generated Content**: GPT-4o crafts a LinkedIn-optimized post in your personal writing style.
- 🖼️ **Custom Image Generation**: Uses OpenAI to generate a relevant image for visual appeal.
- 📤 **Direct LinkedIn Publishing**: Posts are made automatically to your profile with public visibility.
- 📩 **Telegram Notification**: You get a real-time Telegram alert with the post URL, tag, and timestamp.
- 📚 **Writing Style Alignment**: Past posts are injected as examples to maintain a consistent tone.

**Ideal use case:** Automate your daily or weekly LinkedIn presence with minimal manual effort while maintaining high-quality, relevant, and visually engaging posts.
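The Random Hashtag Selection step can be implemented as a one-line Code-node snippet. The tag list below is illustrative; replace it with your own trending topics:

```javascript
// Pick one tag at random from a custom list, as the first step of the
// workflow. The list here is a placeholder for your own topics.
const tags = ["#AI", "#Automation", "#n8n", "#LLM", "#DevTools"];

function pickRandomTag(list) {
  return list[Math.floor(Math.random() * list.length)];
}

console.log(pickRandomTag(tags)); // one of the tags above, chosen at random
```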