by Ishan Gandhi
🚀 Description

Turn Telegram into your personal AI-powered LinkedIn content assistant. This workflow lets you create high-quality LinkedIn posts from a simple chat message, refine them with natural feedback, generate matching AI images, and publish directly to LinkedIn—all without leaving Telegram. Perfect for creators, founders, and professionals who want to stay consistent on LinkedIn without switching between tools.

⚙️ How it works

1. Start with an idea: Send a topic to your Telegram bot (e.g., “Write about AI in marketing”).
2. Instant LinkedIn draft: AI generates a structured, engaging post with a strong hook, insights, and CTA—ready to publish.
3. Refine with feedback: Ask for changes like “make it shorter” or “more professional,” and the workflow updates your draft instantly.
4. Add an AI-generated image (optional): Describe the visual you want, and the workflow creates a polished, LinkedIn-ready image.
5. Approve before publishing: Review and approve both the post and image to maintain full control.
6. Publish in one step: The workflow posts directly to LinkedIn, with or without the image.
7. Clean and reset automatically: Your draft is cleared after publishing, and inactive sessions are cleaned up automatically.

🛠️ Set up steps

Estimated time: 10–15 minutes

1. Create a Telegram bot and connect it in n8n
2. Add your OpenAI API key (for content and image generation)
3. Connect your LinkedIn account via OAuth
4. Activate the workflow and start chatting with your bot
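As a sketch of how the chat-side routing in "How it works" could be implemented, here is a minimal n8n Code node that classifies an incoming Telegram message before the AI step. The keyword patterns and the sessionDraft field are illustrative assumptions, not the template's actual logic:

```javascript
// Hypothetical intent router for incoming Telegram messages.
// `sessionDraft` and the keyword lists are assumptions for illustration.
const msg = $input.first().json;
const text = (msg.message?.text ?? '').toLowerCase().trim();
const hasDraft = Boolean(msg.sessionDraft); // assumed per-chat session state

let intent;
if (!hasDraft || /^\/(start|new)\b/.test(text)) {
  intent = 'new_post';          // step 1: a fresh topic starts a new draft
} else if (/\b(image|picture|visual)\b/.test(text)) {
  intent = 'generate_image';    // step 4: optional AI image
} else if (/\b(approve|publish)\b/.test(text)) {
  intent = 'publish';           // steps 5-6: approval and posting
} else {
  intent = 'refine_draft';      // step 3: natural-language feedback
}

return [{ json: { ...msg, intent } }];
```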
by WeblineIndia
Zoho CRM Deal Forecasting with External Market Factors

This workflow automatically fetches active deals from Zoho CRM, retrieves real-time market signals, calculates AI-enhanced forecast metrics, evaluates deal–market alignment, stores the data in a database, updates the CRM, and sends a summary alert to Slack.

The workflow runs weekly to help sales teams make data-driven decisions. It fetches all open deals from Zoho and calculates expected revenue using deal amount, probability, seasonal trends, and market signals. An AI node evaluates each deal's match ratio against current market conditions. Forecasts and AI insights are stored in a database and written back into Zoho. A Slack message summarizes the key metrics for easy review.

You receive:

- **Weekly automated deal forecasts**
- **AI-powered deal–market alignment insights**
- **Database storage for historical trends**
- **Slack summary notifications**

Ideal for sales teams wanting real-time insights into pipeline health and market alignment without manual calculations.

Quick Start – Implementation Steps

1. Import the provided n8n workflow JSON file.
2. Add your Zoho CRM credentials in all relevant nodes.
3. Add your AlphaVantage API key in the Market Signal node.
4. Connect your Slack credentials and select the channel for alerts.
5. Connect your Supabase (or preferred database) account for storing forecasts.
6. Activate the workflow — it will run automatically on the configured weekly schedule.

What It Does

This workflow automates deal forecasting with AI-enhanced insights:

- Fetches all active deals from Zoho CRM.
- Retrieves real-time market data (SPY index) from AlphaVantage.
- Combines deal and market data for forecast calculations.
- Calculates expected revenue using:
  - Deal amount
  - Probability
  - Seasonal factors
  - Market signals
- Sends deal data to an AI node for match ratio, confidence level, and reasoning.
- Parses AI output and merges it with forecast data.
- Stores forecast and AI metrics in a database (Supabase).
- Updates Zoho CRM with the adjusted forecast and AI insights.
- Sends a summary alert to Slack including:
  - Deal name and stage
  - Amount, probability, and expected revenue
  - Market signal and seasonal factor
  - AI match ratio and confidence

This ensures teams see clear, actionable sales insights every week.

Who's It For

This workflow is ideal for:

- Sales managers and CRM admins
- Revenue operations teams
- Forecasting analysts
- Teams using Zoho CRM and Slack for pipeline management
- Anyone wanting AI insights on market alignment for deals

Requirements to Use This Workflow

To run this workflow, you need:

- **n8n instance** (cloud or self-hosted)
- **Zoho CRM account** with API access
- **AlphaVantage API key** for market data
- **Slack workspace** with API permissions
- **Supabase or other database** for storing forecasts
- A basic understanding of deals, probabilities, and seasonal forecasting

How It Works

1. Weekly Trigger – The workflow runs automatically once a week.
2. Fetch Deals – Retrieves all active deals from Zoho CRM.
3. Get Market Signal – Fetches real-time market data.
4. Combine Deal & Market Info – Merges the deal and market datasets.
5. Generate Forecast Metrics – Calculates expected revenue using deal info, seasonality, and market influence.
6. AI Deal Match Evaluator – AI evaluates each deal's alignment with market conditions.
7. Parse AI Output & Merge Forecast – Parses the AI response and combines it with forecast data.
8. Store Forecast in Database – Saves the forecast and AI insights to Supabase.
9. Update Deal Forecast in Zoho – Updates deals with the adjusted forecast and AI insights.
10. Send Forecast Summary to Slack – Sends a clear summary with key metrics.
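A minimal sketch of what the Generate Forecast Metrics step (step 5) might compute. The Amount/Probability field names follow Zoho defaults, but the seasonal factors and the market-signal multiplier shown here are illustrative placeholders, not the template's actual values:

```javascript
// Illustrative forecast math; seasonal factors and signal weighting are
// assumed values, not the workflow's real configuration.
const seasonalFactors = { Q1: 0.9, Q2: 1.0, Q3: 0.95, Q4: 1.15 }; // assumed

// A market multiplier derived upstream from the SPY signal (assumed shape)
const marketSignal = $('Get Market Signal').first().json.signalMultiplier ?? 1.0;

return $input.all().map(item => {
  const deal = item.json;
  const quarter = 'Q' + (Math.floor(new Date(deal.Closing_Date).getMonth() / 3) + 1);
  const seasonalFactor = seasonalFactors[quarter] ?? 1.0;

  // Expected revenue = amount x win probability x seasonality x market signal
  const expectedRevenue =
    deal.Amount * (deal.Probability / 100) * seasonalFactor * marketSignal;

  return { json: { ...deal, seasonalFactor, marketSignal, expectedRevenue } };
});
```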
Setup Steps

1. Import the workflow JSON file into n8n.
2. Add Zoho credentials for the deal fetch and update nodes.
3. Add the AlphaVantage API key for the market signal node.
4. Configure the Supabase node to store forecast data.
5. Add Slack credentials and choose a channel for notifications.
6. Test the workflow manually to ensure metrics are calculated correctly.
7. Activate the weekly trigger.

How To Customize Nodes

- Forecast Calculation: Modify the Generate Forecast Metrics node to adjust seasonal factors or calculation logic.
- AI Match Evaluation: Tweak the prompts in Message a Model to adjust the AI scoring logic or reasoning output.
- Database Storage: The Supabase node can include additional fields such as timestamp, deal owner, notes or comments, and additional KPIs.
- Slack Alerts: Customize the message format, emojis, or mentions for team readability.

Add-Ons (Optional Enhancements)

- Integrate multiple market indices for more accurate forecasting.
- Add multi-stage probability adjustments.
- Create dashboards using the stored forecast data.
- Extend the AI evaluation for risk scoring or priority recommendations.

Use Case Examples

1. Pipeline Health – Quickly see which deals are aligned with market conditions.
2. Forecast Accuracy – Track historical vs. AI-enhanced forecasts for trend analysis.
3. Team Notifications – Slack summary alerts keep sales and leadership informed weekly.

Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No Slack alerts | Invalid credentials | Re-check the Slack API key and channel |
| Forecast not updating | Zoho API error | Verify Zoho OAuth credentials |
| AI node fails | Model misconfiguration | Check OpenAI API credentials and prompt format |
| Data not stored | Supabase connection issue | Verify credentials and table mapping |

Need Help?

If you need assistance setting up the workflow, modifying the AI forecast logic, or integrating Slack summaries, our n8n workflow development team at WeblineIndia can help. We provide workflow customization, advanced forecasting, and reporting solutions for Zoho CRM pipelines.
by Mohamed Abdelwahab
An end-to-end Retrieval-Augmented Generation (RAG) customer support workflow for n8n, using a cache-first strategy (LangCache) combined with a Redis vector store powered by OpenAI embeddings. This template is designed for fast, accurate, and cost-efficient customer support chatbots, internal help desks, and knowledge-base assistants.

Overview

This workflow implements a production-ready RAG architecture optimized for customer support use cases. Incoming chat messages are processed through a structured pipeline that prioritizes cached answers, falls back to semantic vector search when needed, and validates response quality before returning a final answer.

The workflow supports:

- Multi-question user inputs
- Intelligent query decomposition
- Cache reuse to reduce latency and cost
- High-precision retrieval from a Redis vector database
- Quality evaluation and controlled retries
- Final answer synthesis into a single, coherent response

Key Features

- **Chat-based RAG pipeline** using n8n's Chat Trigger
- **Query decomposition** for multi-topic questions
- **LangCache integration** (search + save)
- **Redis Vector Store** for semantic retrieval
- **OpenAI embeddings and chat models**
- **Quality scoring** with retry logic
- **Session memory buffers** for contextual continuity
- **Fallback-safe behavior** (no hallucinations)

How the Workflow Works

1. Chat Trigger – The workflow starts when a new chat message is received.
2. Configuration Setup – A centralized configuration node defines the LangCache base URL, cache ID, similarity threshold (default: 0.75), and maximum retrieval iterations (default: 2).
3. Query Decomposition – The user message is analyzed and decomposed into a single focused question or multiple independent sub-questions. This improves retrieval accuracy and cache reuse.
4. Cache-First Retrieval – Each sub-question is processed independently: the workflow first searches LangCache, and if a high-similarity cached answer is found, it is reused immediately.
5. Vector Retrieval (Cache Miss) – If no cache hit exists, the query is embedded using OpenAI embeddings, a semantic search is executed against the Redis vector index, and the retrieved knowledge-base documents are passed to a research-only agent.
6. Knowledge-Only Answering – The research agent answers strictly from the retrieved knowledge and returns "no info found" if no relevant data exists.
7. Quality Evaluation – Each generated answer is evaluated by a dedicated quality-check node that outputs a numerical SCORE (0.0–1.0) and textual feedback. Low scores can trigger limited retries.
8. Cache Update – High-quality answers are saved back to LangCache for future reuse.
9. Aggregation & Synthesis – All sub-answers are aggregated and synthesized into one final, user-facing response, or a polite fallback message if information is insufficient.

Main Nodes & Responsibilities

- **When Chat Message Received** — Entry point for user messages
- **LangCache Config** — Centralized configuration values
- **Decompose Query (LangChain Agent)** — Splits complex queries
- **Structured Output Parser** — Ensures valid JSON output
- **Search LangCache** — Cache lookup via HTTP
- **Redis Vector Store** — Semantic retrieval from Redis
- **Embeddings OpenAI** — Vector generation
- **Research Agent** — KB-only answering (no hallucinations)
- **Quality Evaluator** — Scores answer relevance
- **Save to LangCache** — Stores validated answers
- **Memory Buffers** — Session context handling
- **Response Synthesizer** — Final message generation
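To make step 4 concrete, here is a hedged sketch of the cache-first decision as a Code node might express it. The LangCache endpoint path and response shape are assumptions about its REST API, not confirmed specifics; the threshold and config names mirror the configuration node described above:

```javascript
// Cache-first lookup sketch; the /entries/search path and response fields
// are assumptions about the LangCache REST API, not confirmed endpoints.
const config = $('LangCache Config').first().json;
const question = $input.first().json.subQuestion;

const res = await this.helpers.httpRequest({
  method: 'POST',
  url: `${config.langcacheBaseUrl}/v1/caches/${config.langcacheCacheId}/entries/search`,
  body: { prompt: question },
  json: true,
});

const best = (res.data ?? [])[0];
if (best && best.similarity >= (config.similarityThreshold ?? 0.75)) {
  // Cache hit: reuse the stored answer and skip vector retrieval entirely
  return [{ json: { question, answer: best.response, source: 'cache' } }];
}

// Cache miss: pass the question on to OpenAI embeddings + Redis vector search
return [{ json: { question, source: 'vector' } }];
```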
Setup Instructions

1. Configure Credentials – Create the following credentials in n8n: **OpenAI API**, **Redis**, and **HTTP Bearer Auth** (for LangCache).
2. Prepare the Knowledge Base – Embed your documents using OpenAI embeddings, insert them into the configured Redis vector index, and ensure the documents are concise and well-structured.
3. Configure LangCache – Update the configuration node with langcacheBaseUrl, langcacheCacheId, and optional tuning for the similarity threshold and iterations.
4. Test the Workflow – Use the example data loader or schedule trigger, send test chat messages, and validate cache hits, vector retrieval, and final responses.

Recommended Tuning

- **Similarity Threshold:** 0.7–0.85
- **Max Iterations:** 1–3
- **Quality Score Cutoff:** 0.7
- **Model Choice:** Use faster models for low latency, stronger models for accuracy
- **Cache Policy:** Cache only high-confidence answers

Security & Compliance Notes

- Store API keys securely using n8n credentials
- Avoid caching sensitive or personally identifiable information
- Apply least-privilege access to Redis and LangCache
- Consider logging cache writes for audit purposes

Common Use Cases

- Customer support chatbots
- Internal help desks
- Knowledge-base assistants
- Self-service support portals
- AI-powered FAQ systems

Template Metadata (Recommended)

- **Template Name:** AI Customer Support — Redis RAG (LangCache + OpenAI)
- **Category:** Customer Support / AI / RAG
- **Tags:** customer-support, RAG, knowledge-base, redis, openai, langcache, chatbot, n8n-template
- **Difficulty Level:** Intermediate
- **Required Integrations:** OpenAI, Redis, LangCache
by NODA shuichi
Description: Your personal AI Book Curator that reads reviews, recommends books, and supports affiliate links. 📚🤖

This advanced workflow acts as a complete "Reading Assistant Application" with monetization features. It takes a book title via a form, researches it using Google APIs, and employs an OpenAI agent to generate a summary and recommendations.

Why use this template?

- Monetization Support: Just enter your Amazon Affiliate Tag in the config node, and all email links will automatically include your tag.
- Organized & Scalable: The workflow is clearly grouped into 4 sections (Input, Enrichment, AI, Delivery) with sticky notes for easy navigation.

How it works:

1. Input: The user submits a book title (e.g., "Atomic Habits").
2. Research: The workflow fetches book metadata and searches for real-world reviews.
3. Analyze: GPT-4o explains why the book is interesting and suggests 3 related reads.
4. Deliver: Generates a beautiful HTML email with purchase links and logs the request to Google Sheets.

Setup Requirements:

- Google Sheets: Create headers: date, book_title, author, ai_comment, user_email.
- Credentials: OpenAI, Google Custom Search, Gmail, Google Sheets.
- Config: Open the "1. Input & Config" section to enter API keys and IDs.
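As an illustration of the monetization step, a Code node along these lines could build tagged Amazon links from the Google Books metadata. The config node name, the affiliateTag field, and the search-URL pattern are assumptions for this sketch, not the template's exact implementation:

```javascript
// Hypothetical affiliate-link builder; the config node name, `affiliateTag`
// field, and URL pattern are illustrative assumptions.
const tag = $('1. Input & Config').first().json.affiliateTag; // e.g. "yourtag-20"
const book = $input.first().json;

const query = encodeURIComponent([book.title, book.author].filter(Boolean).join(' '));
const amazonUrl = `https://www.amazon.com/s?k=${query}&tag=${tag}`;

return [{ json: { ...book, amazonUrl } }];
```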
by Cheng Siong Chin
How It Works

Automates daily project monitoring by fetching project data, analyzing tasks and team capacity with Anthropic models, and generating resource optimization recommendations.

- Target audience: project managers, engineering leads, and resource planners managing complex team assignments.
- Problem solved: manual capacity planning misses bottlenecks; AI analysis identifies effort mismatches and delays proactively.

The workflow runs daily checks, merges project and team profiles, analyzes tasks via multiple Anthropic agents (breakdown, estimation, assignment), calculates effort allocation, detects delays, generates rebalancing recommendations, notifies stakeholders, and tracks milestones.

Setup Steps

1. Configure the daily trigger schedule.
2. Connect the project management system APIs.
3. Set Anthropic API keys with task analysis prompts.
4. Enable email notifications for managers.
5. Connect a reporting database for tracking.

Prerequisites

Anthropic API access, project management tool credentials, team capacity database.

Use Cases

SaaS teams managing feature backlogs; consulting firms balancing client projects.

Customization

Adjust the effort estimation models. Add Slack notifications for urgency.

Benefits

Detects delays 2–3 weeks early; improves team utilization by 25%.
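A rough sketch, under assumed field names, of how the delay-detection step could compare remaining effort against remaining capacity:

```javascript
// Illustrative delay check; estimatedHours, loggedHours, dueDate, and
// dailyCapacityHours are assumed field names from the merged profiles.
const now = Date.now();

return $input.all().map(item => {
  const task = item.json;
  const remainingHours = Math.max((task.estimatedHours ?? 0) - (task.loggedHours ?? 0), 0);
  const daysLeft = (new Date(task.dueDate).getTime() - now) / 86_400_000;
  const capacity = task.dailyCapacityHours ?? 6; // assumed default

  // Flag the task when remaining work cannot fit in the remaining capacity
  const atRisk = remainingHours > Math.max(daysLeft, 0) * capacity;

  return { json: { ...task, remainingHours, daysLeft: Math.round(daysLeft), atRisk } };
});
```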
by Chandan Singh
This workflow creates a daily, automated backup of all workflows in a self-hosted n8n instance and stores them in Google Drive. Instead of exporting every workflow on every run, it uses content hashing to detect meaningful changes and only updates backups when a workflow has actually been modified.

To keep Google Drive clean and predictable, the workflow intentionally deletes the existing backup file before uploading the updated version. This avoids duplicate files and ensures there is always one authoritative backup per workflow.

A Data Table is used as an index to track workflow IDs, hash values, and timestamps. This allows the workflow to quickly determine whether a workflow already exists, whether its content has changed, or whether it should be skipped entirely.

How it works

- Runs daily using a Cron Trigger.
- Fetches all workflows from the n8n API.
- Processes workflows one by one for reliability.
- Generates a SHA-256 hash for each workflow.
- Compares hashes against a stored Data Table.
- Deletes existing Google Drive backups when changes are detected.
- Uploads updated workflows and skips unchanged ones.
- Stores new or updated workflow details in the Data Table.
- Filters workflows based on the configured backup scope (all | active | tagged).
- Backs up all workflows, only active workflows, or only workflows matching a specific tag.
- Applies the scope filter before hashing and comparison, ensuring only relevant workflows are processed.

Setup steps

1. **Set the Cron schedule** – Open the Cron Trigger node and choose the time you want the backup to run (for example, once daily during off-peak hours).
2. **Create a Data Table** – Create a new n8n Data Table with the title defined in dataTableTitle. This table stores workflowId, workflowName, hashCode, and DriveFiveId.
3. **Configure the Set node** – In the Set Backup Configuration node, provide the following values:

```json
{
  "n8nHost": "https://your-n8n-domain",
  "apiKey": "your-n8n-api-key",
  "backupFolder": "/n8n/workflow-backups",
  "hashAlgorithm": "sha256",
  "dataTableTitle": "n8n_workflow_backup_index",
  "backupScope": "",
  "requiredTag": ""
}
```

   In the same node, choose how workflows should be selected for backup:

   - all – backs up every workflow (default)
   - active – backs up only enabled workflows
   - tagged – backs up only workflows containing a specific tag

   If using the tagged option, provide the required tag name to match:

```json
{
  "backupScope": "tagged",
  "requiredTag": "production"
}
```

4. **Connect Google Drive credentials** – Authorize your Google Drive account and ensure the backup folder exists.
5. **Activate the workflow** – Once enabled, backups run automatically with no further action required.
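For reference, the change-detection hash could look like the following in a Code node, matching the workflow's sha256 setting. Hashing only the behaviour-defining fields is an assumption of this sketch; the template may hash the full export:

```javascript
// SHA-256 change detection; uses Node.js's built-in crypto module, which
// n8n Code nodes expose when built-in modules are allowed.
const crypto = require('crypto');

return $input.all().map(item => {
  const wf = item.json;
  // Hash only behaviour-defining fields so cosmetic re-saves don't trigger
  // a re-upload (a simplification assumed for this sketch).
  const canonical = JSON.stringify({
    nodes: wf.nodes,
    connections: wf.connections,
    settings: wf.settings,
  });
  const hashCode = crypto.createHash('sha256').update(canonical).digest('hex');

  return { json: { workflowId: wf.id, workflowName: wf.name, hashCode } };
});
```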
by Cheng Siong Chin
How It Works

This workflow automates the complete end-to-end processing of daily revenue transactions for finance and accounting teams. It systematically retrieves, validates, and standardizes transaction data from multiple sources, computes applicable tax obligations, identifies anomalies, and generates regulatory compliance reports. Designed primarily for accountants and financial analysts, it significantly reduces manual workload, improves the accuracy of tax calculations, and automates submission to the relevant authorities. Transaction data flows through integrated sources, undergoes validation and AI-driven tax assessment, and ultimately produces well-formatted reports ready for secure archiving or automated email distribution.

Setup Steps

1. Connect Google Sheets/SQL for transactions.
2. Configure tax rules in the workflow.
3. Set up Gmail/Drive for report delivery.
4. Activate the schedule for daily execution.

Prerequisites

Accounts and API credentials for Google Sheets, Gmail, and Drive; access to the transaction database; tax rule configuration.

Use Cases

Daily financial reconciliation, automated tax calculation, anomaly detection in revenue streams.

Customization

Adjust connectors, validation rules, and tax logic to match local regulations or additional data sources.

Benefits

Reduces manual effort, improves accuracy, ensures timely compliance, and enables proactive anomaly detection.
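As a toy illustration of the tax-computation stage (the actual rules live in the workflow's tax configuration), a Code node could apply per-category rates like this; the rates and field names below are placeholders only:

```javascript
// Placeholder tax logic; rates, categories, and field names are assumptions,
// not real tax rules. Configure these to match your jurisdiction.
const taxRates = { standard: 0.2, reduced: 0.05, exempt: 0 }; // assumed rates

return $input.all().map(item => {
  const tx = item.json;
  const rate = taxRates[tx.taxCategory] ?? taxRates.standard;
  const taxDue = Math.round(tx.amount * rate * 100) / 100; // round to cents

  return { json: { ...tx, rate, taxDue, netRevenue: tx.amount - taxDue } };
});
```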
by Incrementors
Description

Automatically analyze Upwork SEO job posts, detect hidden screening questions, generate personalized cover letters with portfolio examples using GPT-4 Turbo, DeepSeek & Claude AI — all saved to Google Docs instantly.

Auto-Generate Winning Upwork SEO Proposals with GPT-4, DeepSeek & Claude AI

Automate the entire Upwork proposal process — from analyzing a job post and detecting hidden screening questions, to generating a personalized cover letter backed by your real portfolio data, running it through a 10-point quality check, and saving the final polished version to Google Docs — all without writing a single word manually. Perfect for SEO freelancers, agencies, and Upwork consultants who want to send high-quality, personalized proposals at scale without spending 45–60 minutes on each one.

What This Workflow Does

This automation handles five key tasks:

1. Analyzes job posts — GPT-4 Turbo extracts structured job data including title, industry, client history, budget, required skills, and the client's SEO pain points from raw Upwork job text.
2. Detects hidden screening questions — Automatically identifies and highlights any hidden verification tests clients embed in job descriptions (e.g., "Start your proposal with the word Avocado"), which most freelancers miss.
3. Generates cover letters with portfolio proof — DeepSeek writes a 150–250 word personalized cover letter, then pulls relevant ranking-keyword examples and industry case studies from your Pinecone vector database to add real proof.
4. Runs a 10-point quality check — Another DeepSeek agent evaluates the cover letter against a strict checklist and flags only the missing or weak elements for improvement.
5. Polishes and saves to Google Docs — Claude 3.7 Sonnet applies the QC feedback with minimal changes and saves both the final cover letter and the screening Q&A answers to your Google Doc, ready to copy-paste.

How It Works

The workflow begins when you submit a job through a simple form — paste the Upwork job URL, copy-paste the raw job post text, and select the job type (SEO, Agency, or Automation).

GPT-4 Turbo analyzes the job post and outputs a fully structured breakdown: job title, industry focus, primary SEO problems, the client's current SEO status, required skills, client history patterns, and strategic notes. It also detects any hidden screening questions and marks them prominently with ⚠️ ATTENTION markers.

Once analysis is complete, the workflow splits into three parallel branches that run simultaneously:

- Branch A — Screening Q&A Writer: DeepSeek reads the detected screening questions and writes direct, concise answers (under 200 words each). It pulls up to 3 relevant examples from your Pinecone databases when helpful. The answers are formatted in clean HTML and saved immediately to your Google Doc.
- Branch B — Cover Letter Generator: DeepSeek generates a personalized 150–250 word cover letter that mirrors the client's exact language, tone, and terminology. It searches your Pinecone vector databases — one holding case studies with Google Doc URLs, one holding portfolio websites with their ranking keywords — and adds 2 portfolio examples plus 1 industry-matched case study in a structured format. All URLs are validated to ensure no angle brackets or broken formatting (see the sketch below).

Both the job analysis output and the generated cover letter then flow into the Quality Control pipeline.
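A hedged sketch of the URL validation mentioned in Branch B: strip angle brackets and drop anything that does not parse as an http(s) URL. The field names are assumptions; the template's actual validator may differ:

```javascript
// URL cleanup sketch; caseStudyUrl / portfolioUrl are assumed field names.
return $input.all().map(item => {
  const out = { ...item.json };
  for (const key of ['caseStudyUrl', 'portfolioUrl']) {
    if (typeof out[key] !== 'string') continue;
    const cleaned = out[key].replace(/[<>]/g, '').trim(); // remove angle brackets
    try {
      const u = new URL(cleaned);
      // Keep only well-formed http(s) links; anything else is dropped
      out[key] = ['http:', 'https:'].includes(u.protocol) ? cleaned : null;
    } catch {
      out[key] = null; // unparseable URL: drop rather than break formatting
    }
  }
  return { json: out };
});
```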
In the Quality Control pipeline, a Merge node combines the job analysis and the generated cover letter, an Aggregate node bundles everything into a single input, and DeepSeek's Cover Quality Checker evaluates the proposal against a 10-point checklist covering client name, job terminology, opening strength, keyword usage, industry relevance, skills match, process outline, and call to action. It outputs only the specific changes needed.

Finally, the QC feedback and the original cover letter are merged again and passed to Claude 3.7 Sonnet for the final polish. Claude applies the suggestions with minimal edits — preserving the client's vocabulary and tone — formats the output in clean HTML, and the workflow saves it to your Google Doc. A 1-minute read-ready cover letter, complete with real portfolio proof, is waiting for you.

Setup Requirements

Accounts needed:

- n8n instance (self-hosted or cloud)
- OpenAI account with GPT-4 Turbo API access (for job analysis + embeddings)
- DeepSeek account with API access (for cover writing, Q&A, and QC)
- Anthropic API key for Claude 3.7 Sonnet (for the final polish)
- Pinecone account with two indexes: casestudiesdatabase and websitewithrankingkeywords-v2
- Google account with Google Docs access

Estimated setup time: 15–20 minutes

Setup Steps

1. Import Workflow
   - Copy the workflow JSON.
   - Open n8n → Workflows → Import from JSON.
   - Paste and import.
   - Verify all nodes are properly connected across the three parallel branches.
2. Configure OpenAI (GPT-4 Turbo + Embeddings)
   - Add an OpenAI API credential in n8n and enter your API key.
   - The credential is used by three nodes: GPT-4 Turbo LLM (Job Analyzer), OpenAI Embeddings (Case Studies), and OpenAI Embeddings (Keywords).
   - Test the connection before proceeding.
3. Configure DeepSeek
   - Add a DeepSeek API credential in n8n and enter your DeepSeek API key.
   - The credential is used by three nodes: DeepSeek LLM (Cover Writer), DeepSeek LLM (Q&A Writer), and DeepSeek LLM (QC Checker).
   - Test the connection.
4. Configure Anthropic (Claude 3.7 Sonnet)
   - Add an Anthropic API credential in n8n and enter your Anthropic API key.
   - The model is set to claude-3-7-sonnet-20250219.
   - The credential is used by the Claude 3.7 Sonnet LLM (Final Cover Polish) node.
   - Test the connection.
5. Set Up Pinecone Vector Databases
   - Create two Pinecone indexes: casestudiesdatabase and websitewithrankingkeywords-v2.
   - Add your Pinecone API credential in n8n.
   - Case Studies DB: Upload your industry case studies with Google Doc URLs — do NOT modify these URLs or the links will break.
   - Ranking Keywords DB: Upload your portfolio websites with their ranking keywords (the workflow retrieves the top 20 results per query).
   - Verify both indexes appear in the Case Studies DB (Pinecone) and Ranking Keywords DB (Pinecone) nodes.
6. Connect Google Docs
   - Create two Google Docs — one for cover letters, one for Q&A answers.
   - Add a Google Docs OAuth2 credential in n8n and complete the OAuth flow.
   - Paste your cover letter Google Doc URL in the Save Final Cover to Docs node.
   - Paste your Q&A Google Doc URL in the Save Q&A to Docs node.
   - Test by triggering the workflow and verifying content appears in both documents.
7. Test and Activate
   - Open the Job Input Form webhook URL in your browser.
   - Paste a real Upwork SEO job post text and submit.
   - Check the execution logs for all three parallel branches.
   - Verify your Google Doc shows both the final cover letter and the Q&A answers.
   - Activate the workflow once the output is confirmed correct.

What Gets Analyzed and Generated

From the Upwork job post:

- Job title, industry focus, and niche
- Primary SEO problems the client wants solved
- Client's current SEO status and gaps
- Required skills ranked by importance
- Client country (for regional SEO approach)
- Client hiring history and industry patterns with confidence scores
- Budget and preferred engagement model
- Hidden screening questions (with ⚠️ ATTENTION markers)
- Strategic SEO project type (technical / content / link building)

AI-generated outputs:

- Structured job analysis with industry pattern matching
- 150–250 word personalized cover letter with portfolio examples
- 2 portfolio website examples with 3 ranking keywords each
- 1 industry-matched case study with metrics and a Google Doc link
- Direct answers to all screening questions (under 200 words each)
- 10-point QC evaluation with specific improvement suggestions
- Final HTML-formatted cover letter ready to copy-paste

Use Cases

- High-volume Upwork freelancers: Send 5–10 personalized, data-backed proposals daily without manual writing — each one tailored to the client's exact industry and pain points.
- SEO agencies on Upwork: Scale proposal output across multiple team members using a shared workflow — everyone gets consistent, on-brand proposals.
- New Upwork SEO freelancers: Never miss a hidden screening question again, and always include relevant portfolio proof that matches the client's industry.
- Freelance business automation: Eliminate the most time-consuming part of freelancing — proposal writing — and redirect that time to client work.

Important Notes

- Replace all placeholder API keys and credential IDs before activating the workflow.
- Ensure all five credential types are tested successfully: OpenAI, DeepSeek, Anthropic, Pinecone, and Google Docs.
- Case study Google Doc URLs in Pinecone must never be modified — the workflow uses them as-is.
- The Pinecone databases must be populated with your own portfolio data before the workflow produces accurate examples.
- DeepSeek handles the majority of AI tasks for cost efficiency; Claude 3.7 Sonnet is used only for the final polish step.
- Each job submission generates one complete proposal set (cover letter + Q&A) in your Google Doc.
- Processing time is typically 60–120 seconds depending on Pinecone retrieval speed and AI response time.

Form Access

Access the workflow via the built-in n8n form at: https://your-n8n-instance.com/webhook/upwork-proposal-generator

Paste any Upwork job post text and submit to start the automation instantly.

Support

For questions or assistance:

- Email: info@incrementors.com
- Contact: https://www.incrementors.com/contact-us/
by Cheng Siong Chin
How It Works

This workflow automates intelligent routing of user queries to the optimal AI model (Anthropic or OpenAI) based on complexity analysis, then validates outputs through multi-stage quality assessment. Designed for teams managing high-volume AI operations, it solves the critical problem of balancing cost-efficiency with output quality: it automatically selects budget-friendly models for simple tasks while routing complex requests to premium models.

The system analyzes incoming queries via validation tools, routes them through specialized AI agents based on assessment scores, executes parallel quality checks across compliance, bias, and risk dimensions, aggregates the validation results, and stores flagged responses for human review. This ensures consistent, high-quality AI responses while optimizing computational costs and maintaining governance standards across diverse use cases.

Setup Steps

1. Connect Anthropic and OpenAI API credentials in the n8n credentials manager.
2. Configure the Google Sheets connection for storing validation results and flagged responses.
3. Set the Schedule Trigger interval (recommended: hourly or daily, based on volume).
4. Customize the classification thresholds in the validation nodes (confidence scores, risk levels).
5. Update the agent prompt templates to match your domain requirements.
6. Configure Slack/Gmail notifications for high-priority quality flags.

Prerequisites

Active API accounts for Anthropic Claude and OpenAI.

Use Cases

Customer support ticket routing and quality monitoring.

Customization

Adjust the classification logic by modifying the validation node expressions.

Benefits

Reduces AI costs by 40–60% through intelligent model selection.
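The routing decision itself can be as simple as a threshold check. A minimal sketch, assuming the upstream classifier emits a 0–1 complexityScore and a riskLevel; the 0.6 cutoff and the model names are illustrative defaults, not values prescribed by the workflow:

```javascript
// Complexity-based model selection; the score field, cutoff, and model
// names are assumptions for illustration.
return $input.all().map(item => {
  const q = item.json;
  const premium = (q.complexityScore ?? 0) >= 0.6 || q.riskLevel === 'high';

  return {
    json: {
      ...q,
      provider: premium ? 'anthropic' : 'openai',
      model: premium ? 'claude-3-7-sonnet-20250219' : 'gpt-4o-mini', // assumed
    },
  };
});
```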
by noda
AI Recommender: From Food Photo to Restaurant and Book (Google Books Integrated)

What it does

- Analyzes a food photo with an AI vision model to extract the dish name + category
- Searches nearby restaurants with Google Places and selects the single best one (rating → reviews tie-break)
- Finds a matching book via Google Books and posts a tidy summary to Slack

Who it's for

Foodies, bloggers, and teams who want a plug-and-play flow that turns a single food photo into a dining pick + themed reading.

How it works

1. Google Drive Trigger detects a new photo
2. Dish Classifier (Vision LLM) → JSON (dish_name, category, basic macros)
3. Search Google Places near your origin; Select Best Place (AI)
4. Recommend Book (AI) → Search Google Books → format details
5. Post to Slack (JP/EN both possible)

Requirements

Google Drive / Google Places / Google Books credentials, LLM access (OpenRouter/OpenAI), Slack OAuth.

Customize

Edit origin/radius in Set Origin & Radius, tweak the category→keyword mapping in Normalize Classification, and adjust the Slack channel & message in Post to Slack.
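The "rating → reviews tie-break" in Select Best Place can be expressed as a simple sort. A sketch assuming standard Google Places fields (rating, user_ratings_total); the template uses an AI node for this step, so treat this as the deterministic equivalent:

```javascript
// Deterministic version of the best-place pick: highest rating first,
// then most reviews as the tie-break. Field names follow the Places API.
const places = $input.all().map(i => i.json);

places.sort(
  (a, b) =>
    (b.rating ?? 0) - (a.rating ?? 0) ||
    (b.user_ratings_total ?? 0) - (a.user_ratings_total ?? 0)
);

return [{ json: places[0] }];
```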
by Alena - Prodigy AI Sol
How it works

- A form trigger accepts any YouTube URL: youtube.com/watch?v=ID, youtu.be/ID, youtube.com/embed/ID, or just a plain video ID
- A code node normalises the URL and extracts a clean video ID for the YouTube Data API v3 (see the sketch below)
- Calls the API to fetch video stats (title, description, tags, views, likes, comment count)
- Fetches comments from the video's commentThreads endpoint, capped by the form's Max Comments to Process field
- Flattens each comment into its own item, then merges two ranked lists deduped by commentId: the 5 newest comments by publishedAt and the 5 highest-engagement comments by likes plus replies
- A GPT-4 agent writes a natural, human-sounding reply for each top comment, responding as a fellow viewer (not as the channel owner)
- In parallel, a second GPT-4 agent writes one original top-level comment based on the video's title and description
- A structured output parser guarantees clean JSON for both agents
- Replies are merged with the top-level comment by videoId, normalised into row shape, and appended to Google Sheets, one row per reply

Set up steps

Setup takes about 5 to 10 minutes.

1. Connect a YouTube Data API OAuth2 credential (used by both the Get Video Statistics and Get Video Comments nodes)
2. Connect an OpenAI API credential (defaults to gpt-4.1-mini; swap to any model you prefer)
3. Connect a Google Sheets OAuth2 credential and pick your destination spreadsheet plus tab in the Save to Google Sheets node
4. Create a tab with these column headers in row 1: videoId, videoUrl, videoTitle, commentAuthor, commentText, myReply, myVideoComment, selectionType, videoTags, viewCount, engagementScore, videoDescription, likeCount, commentCount
5. Optional: tweak the system prompts inside each AI agent to match your niche, voice, and brand
6. Submit a YouTube URL via the form trigger to test

Detailed per-step notes live inside the workflow as sticky notes.
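A sketch of the normalisation step described above; the input field name is an assumption, but the accepted URL shapes match the ones listed:

```javascript
// Extract an 11-character YouTube video ID from any supported input shape.
// `youtubeUrl` is an assumed form-field name.
const input = ($input.first().json.youtubeUrl ?? '').trim();

const patterns = [
  /youtube\.com\/watch\?.*v=([\w-]{11})/, // youtube.com/watch?v=ID
  /youtu\.be\/([\w-]{11})/,               // youtu.be/ID
  /youtube\.com\/embed\/([\w-]{11})/,     // youtube.com/embed/ID
  /^([\w-]{11})$/,                        // plain video ID
];

let videoId = null;
for (const re of patterns) {
  const m = input.match(re);
  if (m) { videoId = m[1]; break; }
}
if (!videoId) throw new Error(`Could not extract a video ID from: ${input}`);

return [{ json: { videoId, videoUrl: `https://www.youtube.com/watch?v=${videoId}` } }];
```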
by Calvin Cunningham
Use Cases

- Personal or family budget tracking
- Small business expense logging via Telegram
- Hands-free logging (using voice messages)

How it works

- The trigger receives text or voice.
- An optional branch transcribes audio to text.
- AI parses the message into a structured array (an SOP enforces the schema); see the example below.
- Split Out produces 1 item per expense.
- Loop Over Items appends rows sequentially with a Wait, preventing missed writes.
- In parallel, Item Lists (Aggregate) builds a single summary string; Merge (Wait for Both) releases one final Telegram confirmation.

Setup Instructions

1. Connect credentials: Telegram, Google, OpenAI.
2. Sheets: Create a sheet with headers Date, Category, Merchant, Amount, Note. Copy the Spreadsheet ID + sheet name. Map the columns in Append to Google Sheet.
3. Pick models: set the Chat model (e.g., gpt-4o-mini) and Whisper for transcription if using audio.
4. Wait time: keep 500–1000 ms to avoid API race conditions.
5. Run: Send a Telegram message like: Gas 34.67, Groceries 82.45, Coffee 6.25, Lunch 14.90.

Customization ideas

- Add a categories map (Memory/Set) for consistent labeling.
- Add currency detection/formatting.
- Add an error-to-Telegram path for invalid schema.
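For the test message in step 5, the schema-enforced AI output would look something like this; the categories and date are illustrative, since your SOP prompt defines the real mapping:

```javascript
// Example of the structured array the parser should emit: one object per
// expense, matching the sheet headers Date, Category, Merchant, Amount, Note.
return [
  { json: { Date: '2025-01-15', Category: 'Auto', Merchant: 'Gas',       Amount: 34.67, Note: '' } },
  { json: { Date: '2025-01-15', Category: 'Food', Merchant: 'Groceries', Amount: 82.45, Note: '' } },
  { json: { Date: '2025-01-15', Category: 'Food', Merchant: 'Coffee',    Amount: 6.25,  Note: '' } },
  { json: { Date: '2025-01-15', Category: 'Food', Merchant: 'Lunch',     Amount: 14.90, Note: '' } },
];
```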