by Juan Carlos Cavero Gracia
This workflow transforms any video you drop into a Google Drive folder into a ready-to-publish YouTube upload. It analyzes the video with AI to craft 3 high-CTR title ideas, 3 long SEO-friendly descriptions (with timestamps), and 10–15 optimized tags. It then generates 4 thumbnail options using your face and lets you pick your favorite before auto-publishing to YouTube via Upload-Post.

## Who Is This For?

- **YouTube Creators & Editors:** Ship videos with winning titles, thumbnails, and SEO in minutes.
- **Agencies & Media Teams:** Standardize output and speed across channels and clients.
- **Founders & Solo Makers:** Maintain consistent publishing with minimal manual work.

## What Problem Does It Solve?

Producing SEO metadata and high-performing thumbnails is slow and inconsistent. This flow:

- **Generates High-CTR Options:** 3 distinct angles for title/description/tags.
- **Creates Thumbnails with Your Face:** 4 options ready for review in one pass.
- **Auto-Publishes Safely:** Human selection gates reduce risk before going live.

## How It Works

1. **Google Drive Trigger:** Watches a folder for new video files.
2. **AI Video Analysis (Gemini):** Produces an in-depth Spanish description and timestamps.
3. **Concept Generation:** Returns 3 JSON concepts (title, thumbnail prompt, description, tags).
4. **User Review #1:** Pick your favorite concept in a simple form.
5. **Thumbnail Generation (fal.ai):** Creates 4 thumbnails using your face (provided image URL).
6. **User Review #2:** Choose the best thumbnail.
7. **Upload to YouTube (Upload-Post):** Publishes the video with your chosen title, description, tags, and thumbnail.

## Setup

- **Credentials** (all offer free trials, no credit card required):
  - Google Gemini (chat/vision for analysis)
  - fal.ai API (thumbnail generation)
  - Upload-Post (connect your YouTube channel and generate API keys)
  - Google Drive OAuth (folder watch + file download)
- **Provide Your Face Image URL(s):** Used by fal.ai to integrate your face into thumbnails.
- **Select the Google Drive Folder:** Where you'll drop videos to process.
- **Pick & Publish:** Use the built-in forms to choose concept and thumbnail.

## Requirements

- **Accounts:** Google (Drive + Gemini), fal.ai, Upload-Post, n8n.
- **API Keys:** Gemini, fal.ai; Upload-Post credentials; Google Drive OAuth.
- **Assets:** At least one clear face image for thumbnails.

## Features

- **Three SEO Angles:** Distinct title/description sets to test different intents.
- **Rich Descriptions with Timestamps:** Ready for YouTube SEO and viewer navigation.
- **Face-Integrated Thumbnails:** 4 options aligned with the selected title.
- **Human-in-the-Loop Controls:** Approve concepts and thumbnails before publishing.
- **Auto-Publish via Upload-Post:** One click to push live to YouTube.
- **Start Free:** All API calls can run on free trials, no credit card required.

**Video demo:** https://www.youtube.com/watch?v=EOOgFveae-U
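The concept-generation step returns 3 JSON concepts (title, thumbnail prompt, description, tags). A Code node along these lines could validate the model's output before the first review form. This is a minimal sketch; the field names and shape are assumptions based on the description, not the template's actual schema.

```javascript
// Minimal sketch of validating the 3 generated concepts before review.
// The field names (title, thumbnailPrompt, description, tags) are assumptions
// based on the template description, not the template's actual schema.
function validateConcepts(raw) {
  const concepts = typeof raw === 'string' ? JSON.parse(raw) : raw;
  if (!Array.isArray(concepts) || concepts.length !== 3) {
    throw new Error(`Expected 3 concepts, got ${Array.isArray(concepts) ? concepts.length : typeof concepts}`);
  }
  for (const c of concepts) {
    for (const key of ['title', 'thumbnailPrompt', 'description', 'tags']) {
      if (!(key in c)) throw new Error(`Concept missing "${key}"`);
    }
    // The description promises 10-15 optimized tags per concept.
    if (!Array.isArray(c.tags) || c.tags.length < 10 || c.tags.length > 15) {
      throw new Error('Each concept should carry 10-15 tags');
    }
  }
  return concepts;
}
```

Failing loudly here (rather than in the review form) makes it easy to retry the generation step when the model returns malformed JSON.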
by Trung Tran
# Multi-Agent Architecture: Free Bootstrap Template for Beginners

Free template to learn and reuse a multi-agent architecture in n8n. The company metaphor: a CEO (orchestrator) delegates to Marketing, Operations, and Finance to produce a short sales-season plan, export it to PDF, and share it.

## Who's it for

- Builders who want a clear, minimal pattern for multi-agent orchestration in n8n.
- Teams demoing/teaching agent collaboration with one coordinator + three specialists.
- Anyone needing a repeatable template to generate plans from multiple "departments".

## How it works / What it does

1. **Trigger (Manual)** — Click Execute workflow to start.
2. **Edit Fields** — Provide brief inputs (company, products, dates, constraints, channels, goals).
3. **CEO Agent (Orchestrator)** — Reads the brief, calls 3 tool agents once, merges results, resolves conflicts.
4. **Marketing Agent** — Proposes top campaigns + channels + content calendar.
5. **Operations Agent** — Outlines inventory/staffing readiness, fulfillment steps, risks.
6. **Finance Agent** — Suggests pricing/discounts, budget split, targets.
7. **Compose Document** — CEO produces Markdown; node converts to Google Doc → PDF.
8. **Share** — Upload the PDF to Slack (or Drive) for review.

## Outputs

- **Markdown plan** with sections (Summary, Timeline, Marketing, Ops, Pricing, Risks, Next Actions).
- **Compact JSON** for automation (campaigns, budget, dates, actions).
- **PDF** file for stakeholders.

## How to set up

**Add credentials**
- OpenAI (or your LLM provider) for all agents.
- Google (Drive/Docs) to create the document and export PDF.
- Slack (optional) to upload/share the PDF.

**Map nodes (suggested)**
- When clicking 'Execute workflow' → Edit Fields (form with: company, products, audience, start_date, end_date, channels, constraints, metrics).
- CEO Agent (AI Tool Node) → calls Marketing Agent, Operations Agent, Finance Agent (AI Tool Nodes).
- Configure metadata (doc title from company + window).
- Create document file (Google Docs API) with CEO Markdown.
- Convert to PDF (export).
- Upload a file (Slack) to share.
**Prompts (drop-in)**
- CEO (system): orchestrate 3 tools; request concise JSON+Markdown; merge & resolve; output sections + JSON.
- Marketing / Operations / Finance (system): each returns a small JSON per its scope (campaigns/calendar; staffing/steps/risks; discounts/budget/targets).

**Test** — Run once; verify the PDF and Slack message.

## Requirements

- n8n (current version with AI Tool Node).
- LLM credentials (e.g., OpenAI).
- Google credentials for Docs/Drive (to create & export).
- Optional Slack bot token for file uploads.

## How to customize the workflow

- **Swap roles**: Replace departments (e.g., Product, Legal, Support) or add more tool agents.
- **Change outputs**: Export to DOCX/HTML/Notion; add a cover page; attach brand styles.
- **Approval step**: Insert Slack "Send & Wait" before PDF generation for review/edits.
- **Data grounding**: Add RAG (Sheets/DB/Docs) so agents cite inventory, pricing, or past campaign KPIs.
- **Automation JSON**: Extend the schema to match your CRM/PM tool and push next_actions into Jira/Asana.
- **Scheduling**: Replace the manual trigger with a cron (weekly/monthly planning).
- **Localization**: Add a Translation agent or set language via input field.
- **Guardrails**: Add length limits, cost caps (max tokens), and validation on agent JSON.
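The "validation on agent JSON" guardrail can be sketched as a small helper the CEO path runs on each tool agent's reply before merging. This is an illustrative sketch; the required keys shown (campaigns, budget) are hypothetical examples, not the template's actual schema.

```javascript
// Guardrail sketch: verify a tool agent returned parseable JSON with the keys
// its scope requires, before the CEO merges results. The key names used in the
// tests (campaigns, budget) are hypothetical examples.
function validateAgentReply(text, requiredKeys) {
  let parsed;
  try {
    parsed = JSON.parse(text);
  } catch (e) {
    return { ok: false, error: 'Agent did not return valid JSON' };
  }
  const missing = requiredKeys.filter((k) => !(k in parsed));
  return missing.length
    ? { ok: false, error: `Missing keys: ${missing.join(', ')}` }
    : { ok: true, data: parsed };
}
```

Returning a structured `{ ok, error }` result (instead of throwing) lets the orchestrator decide whether to retry the agent or flag the gap in the final plan.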
by Roshan Ramani
# 🛒 Smart Telegram Shopping Assistant with AI Product Recommendations

## Workflow Overview

**Target User Role:** E-commerce Business Owners, Affiliate Marketers, Customer Support Teams

**Problem Solved:** Businesses need an automated way to help customers find products on Telegram without manual intervention, while providing intelligent recommendations that increase conversion rates.

**Opportunity Created:** Transform any Telegram channel into a smart shopping assistant that can handle both product queries and customer conversations automatically.

## What This Workflow Does

This workflow creates an intelligent Telegram bot that:

- 🤖 **Automatically detects** whether users are asking about products or just chatting
- 🛒 **Scrapes Amazon** in real-time to find the best matching products
- 🎯 **Uses AI to analyze and rank** products based on price, ratings, and user needs
- 📱 **Delivers perfectly formatted** recommendations optimized for Telegram
- 💬 **Handles casual conversations** professionally when users aren't shopping

## Real-World Use Cases

- **E-commerce Support**: Reduce customer service workload by 70%
- **Affiliate Marketing**: Automatically recommend products with tracking links
- **Telegram Communities**: Add shopping capabilities to existing channels
- **Product Discovery**: Help customers find products they didn't know existed

## Key Features & Benefits

### 🧠 Intelligent Intent Detection

- Uses Google Gemini AI to understand user messages
- Automatically routes to product search or conversation mode
- Handles multiple languages and casual typing styles

### 🛒 Real-Time Product Data

- Integrates with Apify's Amazon scraper for live data
- Fetches prices, ratings, reviews, and product details
- Processes up to 10 products per search instantly

### 🎯 AI-Powered Recommendations

- Analyzes multiple products simultaneously
- Ranks by relevance, value, and user satisfaction
- Provides top 5 personalized recommendations with reasoning

### 📱 Telegram-Optimized Output

- Perfect formatting with emojis and markdown
- Respects character limits for mobile viewing
- Includes direct purchase links for easy buying

## Setup Requirements

### Required Credentials

- **Telegram Bot Token** - Free from @BotFather
- **Google Gemini API Key** - Free tier available at AI Studio
- **Apify API Token** - Free tier includes 100 requests/month

### Required n8n Nodes

- @n8n/n8n-nodes-langchain (for AI functionality)
- Built-in Telegram, HTTP Request, and Code nodes

## Quick Setup Guide

### Step 1: Telegram Bot Creation

1. Message @BotFather on Telegram
2. Create a new bot with the /newbot command
3. Copy the bot token to your credentials

### Step 2: AI Configuration

1. Sign up for Google AI Studio
2. Generate an API key for Gemini
3. Add credentials to all three AI model nodes

### Step 3: Product Scraping Setup

1. Register for a free Apify account
2. Get the API token from the dashboard
3. Add the token to the "Amazon Product Scraper" node

### Step 4: Activation

1. Import the workflow JSON
2. Add your credentials
3. Activate the Telegram Trigger
4. Test with a product query!

## Workflow Architecture

1. 📱 **Message Entry Point** - Telegram Trigger receives all messages
2. 🧹 **Query Preprocessing** - Cleans and normalizes user input for better search results
3. 🤖 **AI Intent Classification** - Determines if the message is product-related or conversational
4. 🔀 **Smart Routing** - Directs to the appropriate workflow path based on intent
5. 💬 **Conversation Path** - Handles greetings, questions, and general support
6. 🛒 **Product Search Path** - Scrapes Amazon → Processes data → AI analysis → Recommendations
7. 📤 **Optimized Delivery** - Formats and sends responses back to Telegram

## Customization Opportunities

### Easy Modifications

- **Multiple Marketplaces**: Add eBay, Flipkart, or local stores
- **Product Categories**: Specialize for electronics, fashion, etc.
- **Language Support**: Translate for different markets
- **Branding**: Customize responses with your brand voice

### Advanced Extensions

- **Price Monitoring**: Set up alerts for price drops
- **User Preferences**: Remember customer preferences
- **Analytics Dashboard**: Track popular products and queries
- **Affiliate Integration**: Add commission tracking links

## Success Metrics & ROI

### Performance Benchmarks

- **Response Time**: 3-5 seconds for product queries
- **Accuracy**: 90%+ relevant product matches
- **User Satisfaction**: 85%+ positive feedback in testing

### Business Impact

- **Reduced Support Costs**: Automate 70% of product inquiries
- **Increased Conversions**: Personalized recommendations boost sales
- **24/7 Availability**: Never miss a customer inquiry
- **Scalability**: Handle unlimited concurrent users

## Workflow Complexity

**Intermediate Level** - Requires API setup but includes detailed instructions. Perfect for users with basic n8n experience who want to create something powerful.
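The Query Preprocessing step in the architecture above could look roughly like this as a Code node. The specific cleaning rules (emoji stripping, punctuation removal, lowercasing) are assumptions; the template's own node may clean differently.

```javascript
// Hypothetical sketch of the query-preprocessing step: strip emoji and
// punctuation noise, collapse whitespace, and lowercase so the Amazon search
// receives a consistent query. The actual template's rules may differ.
function preprocessQuery(text) {
  return text
    .replace(/[\u{1F300}-\u{1FAFF}\u{2600}-\u{27BF}]/gu, '') // drop emoji
    .replace(/[^\p{L}\p{N}\s\-]/gu, ' ')                     // keep letters, digits, hyphens
    .replace(/\s+/g, ' ')                                    // collapse whitespace
    .trim()
    .toLowerCase();
}
```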
by Denis
## How it works

- Multi-modal AI image generator powered by Google's Nano Banana (Gemini 2.5 Flash Image) - the latest state-of-the-art image generation model
- Accepts text, images, voice messages, and PDFs via Telegram for maximum flexibility
- Uses OpenAI GPT models for conversation and image analysis, then Nano Banana for stunning image generation
- Features conversation memory for iterative image modifications ("make it darker", "change to blue")
- Processes different input types: analyzes uploaded images, transcribes voice messages, extracts PDF text
- All inputs are converted to optimized prompts specifically tuned for Nano Banana's capabilities

## Set up steps

1. Create a Telegram bot via @BotFather and get the API token
2. Set up a Google Gemini API key from Google AI Studio for Nano Banana image generation (~$0.04/image)
3. Configure an OpenAI API key for GPT models (conversation, image analysis, voice transcription)
4. Import the workflow and configure all three API credentials in n8n
5. Update the bot tokens in the HTTP Request nodes for file downloads
6. Test with text prompts, image uploads, voice messages, and PDF documents
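The "processes different input types" step amounts to routing each Telegram message by what it carries. A sketch of that routing, using the Telegram Bot API's message object fields (the routing labels themselves are illustrative, not the template's node names):

```javascript
// Sketch of routing a Telegram update by input type. Field names (voice,
// photo, document, text) follow the Telegram Bot API message object; the
// returned labels are illustrative, not the template's actual branch names.
function classifyTelegramMessage(message) {
  if (message.voice) return 'voice';                          // → transcribe
  if (message.photo && message.photo.length) return 'image';  // → analyze
  if (message.document && message.document.mime_type === 'application/pdf') {
    return 'pdf';                                             // → extract text
  }
  if (message.text) return 'text';                            // → use as prompt
  return 'unsupported';
}
```

In n8n this would typically be a Switch node rather than code, but the branching logic is the same.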
by Jameson Kanakulya
# Automated Content Page Generator with AI, Tavily Research, and Supabase Storage

> ⚠️ Self-Hosted Disclaimer: This template requires a self-hosted n8n installation and external service credentials (OpenAI, Tavily, Google Drive, NextCloud, Supabase). It cannot run on n8n Cloud due to dependency requirements.

## Overview

Transform simple topic inputs into professional, multi-platform content automatically. This workflow combines AI-powered content generation with intelligent research and seamless storage integration to create website content, blog articles, and landing pages optimized for different audiences.

## Key Features

- **Automated Research**: Uses Tavily's advanced search to gather relevant, up-to-date information
- **Multi-Platform Content**: Generates optimized content for websites, blogs, and landing pages
- **Image Management**: Downloads from Google Drive and uploads to NextCloud with public URL generation
- **Database Integration**: Stores all content in Supabase for easy retrieval
- **Error Handling**: Built-in error management workflow for reliability
- **Content Optimization**: AI-driven content strategy with trend analysis and SEO optimization

## Required Services & APIs

- **n8n**: Self-hosted instance (required)
- **OpenAI**: GPT-4 API access for content generation
- **Tavily**: Research API for content discovery
- **Google Drive**: Image storage and retrieval
- **Google Sheets**: Content input and workflow triggering
- **NextCloud**: Image hosting and public URL generation
- **Supabase**: Database storage for generated content

## Setup Instructions

## Prerequisites

Before setting up this workflow, ensure you have:

- A self-hosted n8n installation
- API credentials for all required services
- The database table created in Supabase

## Step 1: Service Account Configuration

**OpenAI Setup**
1. Create an OpenAI account at platform.openai.com
2. Generate an API key from the API Keys section
3. In n8n, create new OpenAI credentials using your API key
4. Test the connection to ensure GPT-4 access

**Tavily Research Setup**
1. Sign up at tavily.com
2. Get your API key from the dashboard
3. Add Tavily credentials in n8n
4. Configure search depth to "advanced" for best results

**Google Services Setup**
1. Create a Google Cloud project
2. Enable the Google Drive API and Google Sheets API
3. Create OAuth2 credentials
4. Configure Google Drive and Google Sheets credentials in n8n
5. Share your input spreadsheet with the service account

**NextCloud Setup**
1. Install NextCloud or use a hosted solution
2. Create an application password for API access
3. Configure NextCloud credentials in n8n
4. Create an /images/ folder for content storage

**Supabase Setup**
1. Create a Supabase project at supabase.com
2. Create a table with the following structure:

```sql
CREATE TABLE works (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  image_url TEXT,
  category TEXT,
  created_at TIMESTAMP DEFAULT NOW()
);
```

3. Get the project URL and service key from settings
4. Configure Supabase credentials in n8n

## Step 2: Google Sheets Input Setup

Create a Google Sheets document with the following columns:

- **TITLE**: Topic or title for content generation
- **IMAGE_URL**: Google Drive sharing URL for the associated image

Example format:

| TITLE | IMAGE_URL |
| --- | --- |
| AI Chatbot Implementation | https://drive.google.com/file/d/your-file-id/view |
| Digital Marketing Trends 2024 | https://drive.google.com/file/d/another-file-id/view |

## Step 3: Workflow Import and Configuration

1. Import the workflow JSON into your n8n instance
2. Configure all credential connections:
   - Link OpenAI credentials to the "OpenAI_GPT4_Model" node
   - Link Tavily credentials to the "Tavily_Research_Agent" node
   - Link Google credentials to the "Google_Sheets_Trigger" and "Google_Drive_Image_Downloader" nodes
   - Link NextCloud credentials to the "NextCloud_Image_Uploader" and "NextCloud_Public_URL_Generator" nodes
   - Link Supabase credentials to the "Supabase_Content_Storage" node
3. Update the Google Sheets Trigger node:
   - Set your spreadsheet ID in the documentId field
   - Configure polling frequency (default: every minute)
4. Test each node connection individually before activating

## Step 4: Error Handler Setup
(Optional) The workflow references an error handler workflow (GWQ4UI1i3Z0jp3GF). Either:

- Create a simple error notification workflow with this ID
- Remove the error handling references if not needed
- Update the workflow ID to match your error handler

## Step 5: Workflow Activation

1. Save all node configurations
2. Test the workflow with a sample row in your Google Sheet
3. Verify content generation and storage in Supabase
4. Activate the workflow for continuous monitoring

## How It Works

## Workflow Process

1. **Trigger**: Google Sheets monitors for new rows with content topics
2. **Research**: Tavily searches for 3 relevant articles about the topic
3. **Content Generation**: The AI agent creates multi-platform content (website, blog, landing page)
4. **Content Cleaning**: Text processing removes formatting artifacts
5. **Image Processing**: Downloads the image from Google Drive, uploads it to NextCloud
6. **URL Generation**: Creates public sharing links for images
7. **Storage**: Saves the final content package to the Supabase database

## Content Output Structure

Each execution generates:

- **Optimized Title**: SEO-friendly, platform-appropriate headline
- **Multi-Platform Content**:
  - Website content (professional, authority-building)
  - Blog content (educational, SEO-optimized)
  - Landing page content (conversion-focused)
- **Category Classification**: Automated content categorization
- **Image Assets**: Processed and publicly accessible images

## Customization Options

## Content Strategy Modification

- Edit the AI agent's system message to change content style
- Adjust character limits for different platform requirements
- Modify category classifications for your industry

## Research Parameters

- Change Tavily search depth (basic, advanced)
- Adjust the number of research sources (1-10)
- Modify the search topic focus

## Storage Configuration

- Update the Supabase table structure for additional fields
- Change the NextCloud folder organization
- Modify image naming conventions

## Troubleshooting

## Common Issues

**Workflow not triggering:**
- Check Google Sheets permissions
- Verify polling frequency settings
- Ensure the spreadsheet format matches requirements

**Content generation errors:**
- Verify the OpenAI API key and credits
- Check GPT-4 model access
- Review system message formatting

**Image processing failures:**
- Confirm Google Drive sharing permissions
- Check NextCloud storage space and permissions
- Verify that file formats are supported

**Database storage issues:**
- Validate the Supabase table structure
- Check API key permissions
- Review field mapping in the storage node

## Performance Optimization

- Adjust polling frequency based on your content volume
- Monitor API usage to stay within limits
- Consider batch processing for high-volume scenarios

## Support and Updates

This template is designed for self-hosted n8n environments and requires technical setup. For issues:

- Check the n8n community forums
- Review service-specific documentation
- Test individual nodes in isolation
- Monitor execution logs for detailed error information
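The IMAGE_URL column holds Google Drive sharing links in the `https://drive.google.com/file/d/FILE_ID/view` form; to download the file via the Drive API, the file ID has to be extracted first. A small helper sketch (the template's own image-downloader node may resolve the URL differently):

```javascript
// Sketch: extract the file ID from a Google Drive sharing URL so it can be
// passed to a Drive download node/API call. Supports the /file/d/ID/view and
// ?id=ID sharing formats; the template's own node may handle this internally.
function driveFileId(url) {
  const m = url.match(/\/file\/d\/([^\/]+)/) || url.match(/[?&]id=([^&]+)/);
  if (!m) throw new Error(`Not a recognized Drive URL: ${url}`);
  return m[1];
}
```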
by Don Jayamaha Jr
Instantly access live OKX Spot Market data directly in Telegram! This workflow integrates the OKX REST v5 API with Telegram and optional GPT-4.1-mini formatting, delivering real-time insights such as latest prices, order book depth, candlesticks, trades, and mark prices — all in clean, structured reports.

## 🔎 How It Works

1. A Telegram Trigger node listens for incoming user commands.
2. The User Authentication node validates the Telegram ID to allow only authorized users.
3. The workflow creates a Session ID from chat.id to manage session memory.
4. The OKX AI Agent orchestrates data retrieval via HTTP requests to OKX endpoints:
   - Latest Price (/api/v5/market/ticker?instId=BTC-USDT)
   - 24h Stats (/api/v5/market/ticker?instId=BTC-USDT)
   - Order Book Depth (/api/v5/market/books?instId=BTC-USDT&sz=50)
   - Best Bid/Ask (book ticker snapshot)
   - Candlesticks / Klines (/api/v5/market/candles?instId=BTC-USDT&bar=15m)
   - Average / Mark Price (/api/v5/market/mark-price?instType=SPOT&instId=BTC-USDT)
   - Recent Trades (/api/v5/market/trades?instId=BTC-USDT&limit=100)
5. Utility tools refine the data:
   - Calculator → spreads, % change, normalized volumes.
   - Think → reshapes raw JSON into clean text.
   - Simple Memory → stores sessionId, symbol, and state for multi-turn interactions.
6. A message splitter ensures Telegram output stays under 4000 characters.
7. Final results are sent to Telegram in a structured, human-readable format.

## ✅ What You Can Do with This Agent

- Get the latest price and 24h stats for any Spot instrument.
- Retrieve order book depth with configurable size (up to 400 levels).
- View best bid/ask snapshots instantly.
- Fetch candlestick OHLCV data across intervals (1m → 1M).
- Monitor recent trades (up to 100).
- Check the mark price as a fair average reference.
- Receive clean, Telegram-ready reports (auto-split if too long).

## 🛠️ Setup Steps

**Create a Telegram Bot**
- Use @BotFather to generate a bot token.

**Configure in n8n**
- Import OKX AI Agent v1.02.json.
- Replace the placeholder in the User Authentication node with your Telegram ID.
- Add Telegram API credentials (bot token).
- Add your OpenAI API key for GPT-4.1-mini.
- Add your OKX API key (optional).

**Deploy and Test**
- Activate the workflow in n8n.
- Send a query like BTC-USDT to your bot.
- Instantly get structured OKX Spot data back in Telegram.

## 📺 Setup Video Tutorial

Watch the full setup guide on YouTube:

⚡ Unlock real-time OKX Spot Market insights directly in Telegram — no private API keys required!

## 🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
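The message-splitter step described above can be sketched as a Code node. Telegram caps messages at 4096 characters, which is why the workflow targets a safe 4000-character limit; splitting on the nearest newline (with a hard cut as fallback) is an assumption about how the template's splitter behaves.

```javascript
// Sketch of the message-splitting step: break a long report into chunks of at
// most `limit` characters, preferring to cut at a newline so report sections
// stay intact. The exact strategy in the template may differ.
function splitForTelegram(text, limit = 4000) {
  const chunks = [];
  let rest = text;
  while (rest.length > limit) {
    let cut = rest.lastIndexOf('\n', limit);
    if (cut <= 0) cut = limit;            // no newline found: hard cut
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, '');
  }
  if (rest.length) chunks.push(rest);
  return chunks;
}
```

Each chunk is then sent as a separate Telegram message, in order.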
by Jitesh Dugar
Transform college admissions from an overwhelming manual process into an intelligent, efficient, and equitable system that analyzes essays, scores applicants holistically, and identifies top candidates—saving 40+ hours per week while improving decision quality.

## 🎯 What This Workflow Does

Automates comprehensive application review with AI-powered analysis:

- 📝 **Application Intake** - Captures complete college applications via Jotform
- 📚 **AI Essay Analysis** - Deep analysis of personal statements and supplemental essays for:
  - Writing quality, authenticity, and voice
  - AI-generated content detection
  - Specificity and research quality
  - Red flags (plagiarism, inconsistencies, generic writing)
- 🎯 **Holistic Review AI** - Evaluates applicants across five dimensions:
  - Academic strength (GPA, test scores, rigor)
  - Extracurricular profile (leadership, depth, impact)
  - Personal qualities (character, resilience, maturity)
  - Institutional fit (values alignment, contribution potential)
  - Diversity contribution (unique perspectives, experiences)
- 🚦 **Smart Routing** - Automatically categorizes and routes applications:
  - Strong Admit (85-100): Slack alert → Director email → Interview invitation → Fast-track
  - Committee Review (65-84): Detailed analysis → Committee discussion → Human decision
  - Standard Review (<65): Acknowledgment → Human verification → Standard timeline
- 📊 **Comprehensive Analytics** - All applications logged with scores, recommendations, and outcomes

## ✨ Key Features

### AI Essay Analysis Engine

- **Writing Quality Assessment**: Grammar, vocabulary, structure, narrative coherence
- **Authenticity Detection**: Distinguishes genuine voice from AI-generated content (GPT detectors)
- **Content Depth Evaluation**: Self-awareness, insight, maturity, storytelling ability
- **Specificity Scoring**: Generic vs tailored "Why Us" essays with research depth
- **Red Flag Identification**: Plagiarism indicators, privilege blindness, inconsistencies, template writing
- **Thematic Analysis**: Core values, motivations, growth narratives, unique perspectives

### Holistic Review Scoring (0-100 Scale)

- **Academic Strength (35%)**: GPA in context, test scores, course rigor, intellectual curiosity
- **Extracurricular Profile (25%)**: Quality over quantity, leadership impact, commitment depth
- **Personal Qualities (20%)**: Character, resilience, empathy, authenticity, self-awareness
- **Institutional Fit (15%)**: Values alignment, demonstrated interest, contribution potential
- **Diversity Contribution (5%)**: Unique perspectives, life experiences, background diversity

### Intelligent Candidate Classification

- **Admit**: Top 15% - clear admit, exceptional across multiple dimensions
- **Strong Maybe**: Top 15-30% - competitive, needs committee discussion
- **Maybe**: Top 30-50% - solid but not standout, waitlist consideration
- **Deny**: Below threshold - does not meet competitive standards (always human-verified)

### Automated Workflows

- **Priority Candidates**: Immediate Slack alerts, director briefs, interview invitations
- **Committee Cases**: Detailed analysis packets, discussion points, voting workflows
- **Standard Processing**: Professional acknowledgments, timeline communications
- **Interview Scheduling**: Automated invitations with candidate-specific questions

## 💼 Perfect For

- **Selective Colleges & Universities**: 15-30% acceptance rates, holistic review processes
- **Liberal Arts Colleges**: Emphasis on essays, personal qualities, institutional fit
- **Large Public Universities**: Processing thousands of applications efficiently
- **Graduate Programs**: MBA, law, medical school admissions
- **Scholarship Committees**: Evaluating merit and need-based awards
- **Honors Programs**: Identifying top candidates for selective programs
- **Private High Schools**: Admissions teams with holistic processes

## 🎓 Admissions Impact

### Efficiency & Productivity

- **40-50 hours saved per week** on initial application review
- **70% faster** essay evaluation with AI pre-analysis
- **3x more applications** processed per reader
- **Zero data entry** - all information auto-extracted
- **Consistent evaluation** across thousands of applications
- **Same-day turnaround** for top candidate identification

### Decision Quality Improvements

- **Objective scoring** reduces unconscious bias
- **Consistent criteria** applied to all applicants
- **Essay authenticity checks** catch AI-written applications
- **Holistic view** considers all dimensions equally
- **Data-driven insights** inform committee discussions
- **Fast-track top talent** before competitors

### Equity & Fairness

- **Standardized evaluation** ensures fair treatment
- **First-generation flagging** provides context
- **Socioeconomic consideration** in holistic scoring
- **Diverse perspectives valued** in diversity score
- **Bias detection** in essay analysis
- **Audit trail** for compliance and review

### Candidate Experience

- **Instant acknowledgment** of application receipt
- **Professional communication** at every stage
- **Clear timelines** and expectations
- **Interview invitations** for competitive candidates
- **Respectful process** for all applicants regardless of outcome

## 🔧 What You'll Need

### Required Integrations

- **Jotform** - Application intake forms (create your form for free on Jotform using this link)
- **OpenAI API** - GPT-4o for analysis (~$0.15-0.25 per application)
- **Gmail/Outlook** - Applicant and staff communication (free)
- **Google Sheets** - Application database and analytics (free)

### Optional Integrations

- **Slack** - Real-time alerts for strong candidates ($0-8/user/month)
- **Google Calendar** - Interview scheduling automation (free)
- **Airtable** - Advanced application tracking (alternative to Sheets)
- **Applicant Portal Integration** - Status updates via API
- **CRM Systems** - Slate, TargetX, Salesforce for higher ed

## 🚀 Setup Guide (3-4 Hours)

### Step 1: Create Application Form (60 min)

Build a comprehensive Jotform with sections:

**Basic Information**
- Full name, email, phone
- High school, graduation year
- Intended major

**Academic Credentials**
- GPA (weighted/unweighted, scale)
- SAT score (optional)
- ACT score (optional)
- Class rank (if available)
- Academic honors

**Essays (Most Important!)**
- Personal statement (650 words max)
- "Why Our College" essay (250-300 words)
- Supplemental prompts (program-specific)

**Activities & Achievements**
- Extracurricular activities (list with hours/week, years)
- Leadership positions (with descriptions)
- Honors and awards
- Community service hours
- Work experience

**Additional Information**
- First-generation college student (yes/no)
- Financial aid needed (yes/no)
- Optional: demographic information
- Optional: additional context

### Step 2: Import n8n Workflow (15 min)

1. Copy the JSON from the artifact
2. n8n: Workflows → Import → Paste
3. Includes all nodes + 7 detailed sticky notes

### Step 3: Configure OpenAI API (20 min)

1. Get an API key: https://platform.openai.com/api-keys
2. Add it to both AI nodes (Essay Analysis + Holistic Review)
3. Model: gpt-4o (best for nuanced analysis)
4. Temperature: 0.3 (consistency with creativity)
5. Test with a sample application
6. Cost: $0.15-0.25 per application (essay analysis + holistic review)

### Step 4: Customize Institutional Context (45 min)

Edit the AI prompts to reflect YOUR college.

In the Holistic Review prompt, update:
- College name and type
- Acceptance rate
- Average admitted student profile (GPA, test scores)
- Institutional values and culture
- Academic programs and strengths
- What makes your college unique
- Desired student qualities

In the Essay Analysis prompt, add:
- Specific programs to look for mentions of
- Faculty names applicants should reference
- Campus culture keywords
- Red flags specific to your institution

### Step 5: Setup Email Communications (30 min)

1. Connect Gmail/Outlook OAuth
2. Update all recipient addresses:
   - admissions-director@college.edu
   - admissions-committee@college.edu
   - Email addresses for strong candidate alerts
3. Customize email templates:
   - Add college name, logo, branding
   - Update contact information
   - Adjust tone to match institutional voice
   - Include decision release dates
   - Add applicant portal links

### Step 6: Configure Slack Alerts (15 min, Optional)

1. Create channel: #admissions-strong-candidates
2. Add webhook URL or bot token
3. Test with a mock strong candidate
4. Customize alert format and recipients

### Step 7: Create Admissions Database (30 min)

Google Sheet with columns:
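The five-dimension weights and routing thresholds described earlier can be sketched as plain code. Note this is illustrative only: in the template, the scoring is produced by the Holistic Review AI prompt, so the exact mechanics are an assumption.

```javascript
// Sketch of the holistic score (0-100): weighted sum of five dimension scores
// (each 0-100) using the weights from the description, then threshold routing.
// The template computes this inside its AI prompts; this standalone version is
// illustrative.
const WEIGHTS = { academic: 0.35, extracurricular: 0.25, personal: 0.20, fit: 0.15, diversity: 0.05 };

function holisticScore(scores) {
  return Object.entries(WEIGHTS).reduce(
    (sum, [dim, w]) => sum + w * (scores[dim] ?? 0), 0);
}

function route(score) {
  if (score >= 85) return 'Strong Admit';      // Slack alert + fast-track
  if (score >= 65) return 'Committee Review';  // committee discussion, human decision
  return 'Standard Review';                    // human verification, standard timeline
}
```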
by aditya vadaganadam
This n8n template turns chat questions into structured financial reports using Gemini and posts them to a Discord channel via webhook. Ask about tickers, sectors, or theses (e.g., "NVDA long-term outlook?" or "Gold ETF short-term drivers?") and receive a concise, shareable report.

## Good to know

- Not financial advice: Use for insights only; verify independently.
- Model availability can vary by region. If you see "model not found," it may be geo-restricted.
- Costs depend on model and tokens. Check current Gemini pricing for updates.
- Discord messages are limited to ~2000 characters per post; long reports may need splitting.
- Rate limits: Discord webhooks are rate-limited; add short waits for bursts.

## How it works

1. Chat Trigger collects the user's question (public chat supported when the workflow is activated).
2. Conversation Memory keeps a short window of recent messages to maintain context.
3. Connect Gemini provides the LLM (e.g., gemini-2.5-flash-lite) and parameters (temperature, tokens).
4. Agent (agent1) applies a financial-analysis System Message to produce structured insights.
5. Structured Output Parser enforces a simple JSON schema: idea (one-line thesis) + analysis (Markdown sections).
6. Code formats a Discord-ready Markdown report (title, question, executive summary, sections, disclaimer).
7. Edit Fields maps the formatted report to a clean content field.
8. Discord Webhook posts the final report to your channel.

## How to use

- Start with the built-in Chat Trigger: click Open chat, ask a question, and verify the Discord post.
- Replace or augment with a Cron or Webhook trigger for scheduled or programmatic runs.
- For richer context, add HTTP Request nodes (prices, news, filings) and pass summaries to the agent.

## Requirements

- n8n instance with internet access
- Google AI (Gemini) API key
- Discord server with a webhook URL

## Customising this workflow

- System Message: Adjust tone, depth, risk profile, and required sections (Summary, Drivers, Risks, Metrics, Next Steps, Takeaway).
- Model settings: Switch models or tune temperature/tokens in Connect Gemini.
- Schema: Extend the parser and formatter with fields like drivers[], risks[], or metrics{}.
- Formatting: Edit the Code node to change headings, emojis, disclaimers, or add timestamps.
- Operations: Add retries, message splitting for long outputs, and rate-limit handling for Discord.
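The report-formatting Code node (step 6) can be sketched roughly as below. The `idea` and `analysis` field names come from the schema described in "How it works"; the report layout itself and the trim-to-2000-characters behavior are assumptions about one reasonable implementation.

```javascript
// Sketch of the report-formatting Code node: take the parsed {idea, analysis}
// output plus the user's question and build a Discord-ready Markdown report,
// trimmed to Discord's ~2000-character message limit. The exact layout the
// template uses may differ.
function formatReport(question, { idea, analysis }) {
  const report = [
    '📊 **Financial Report**',
    `**Question:** ${question}`,
    `**Executive Summary:** ${idea}`,
    '',
    analysis,
    '',
    '_Not financial advice. Verify independently._',
  ].join('\n');
  // Discord rejects messages over 2000 chars; trim with an ellipsis marker.
  return report.length <= 2000 ? report : report.slice(0, 1997) + '...';
}
```

For very long analyses, a splitter that posts multiple webhook messages (as the "Operations" bullet suggests) would preserve the full text instead of trimming.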
by Vinay Gangidi
# Cash Reconciliation with AI

This template automates daily cash reconciliation by comparing your open invoices against bank statement transactions. Instead of manually scanning statements line by line, the workflow uses AI to:

- Match transactions to invoices and assign confidence scores
- Flag unapplied or review-needed payments
- Produce a reconciliation table with clear metrics (match %, unmatched count, etc.)

The end result: faster cash application, fewer errors, and better visibility into your cash flow.

## Good to know

- Each AI transaction-match call will consume credits from your OpenAI account. Check OpenAI pricing for costs.
- OCR is used to extract data from PDF bank statements, so you'll need a Mistral OCR API key.
- This workflow assumes invoices are stored in an Excel or CSV file. You may need to tweak column names to match your file headers.

## How it works

1. **Import files:** The workflow pulls your invoice file (Excel/CSV) and daily bank statement (from OneDrive, Google Drive, or local storage).
2. **Extract and normalize data:** OCR is applied to bank statements if needed. Both data sources are cleaned and aligned into comparable formats.
3. **AI matching:** The AI agent compares statement transactions against invoice records, assigns a confidence score, and flags items that require manual review.
4. **Reconciliation output:** A ready-made table shows matched invoices (with amounts and confidence), unmatched items, and summary stats.

## How to use

- Start with the manual trigger node to test the flow. Once validated, replace it with a schedule trigger to run daily.
- Adjust thresholds (like date tolerances or amount variances) in the code nodes to fit your business rules.
- Review the reconciliation table each day; most of the work is automated, you just handle exceptions.
## Requirements

- OpenAI API key
- Mistral OCR API key (for PDF bank statements)
- Microsoft OneDrive API key and Microsoft Excel API key
- Access to your invoice file (Excel/CSV) and daily bank statement source

## Setup steps

1. **Connect accounts:** Enter your API keys (OpenAI, Mistral OCR, OneDrive, Excel).
2. **Configure input nodes:** Point the Excel/CSV node to your invoice file. Connect the Get Bank Statement node to your statement storage.
3. **Configure AI agent:** Add your OpenAI API credentials to the AI node.

## Customize if needed

- Update column mappings if your file uses different headers.
- Adjust matching thresholds and tolerance logic.
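The column-mapping tweak mentioned above usually amounts to a small normalization step in a code node. The column names (`Amount`, `Date`, `Reference`) below are assumptions; rename them to whatever headers your invoice file and bank statement actually use.

```javascript
// Sketch: normalize rows from both sources (invoice file and OCR'd bank
// statement) into a comparable shape: numeric amount, ISO date, and an
// uppercased reference. Column names are illustrative.
function normalizeRow(row) {
  return {
    amount: Number(String(row.Amount).replace(/[$,\s]/g, "")),
    date: new Date(row.Date).toISOString().slice(0, 10),
    reference: String(row.Reference ?? "").trim().toUpperCase(),
  };
}
```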
by Rahul Joshi
## 📊 Description

Ensure your GitHub repositories stay configuration-accurate and documentation-compliant with this intelligent AI-powered validation workflow. 🤖 This automation monitors repository updates, compares configuration files against documentation references, detects inconsistencies, and alerts your team instantly, streamlining DevOps and compliance reviews.

## ⚡ What This Template Does

- **Step 1:** Triggers automatically on GitHub push or pull_request events. 🔄
- **Step 2:** Fetches both configuration files (config/app-config.json and faq-config.json) from the repository. 📂
- **Step 3:** Uses GPT-4o-mini to compare configurations and detect mismatches, missing keys, or deprecated fields. 🧠
- **Step 4:** Categorizes issues by severity (critical, high, medium, or low) and generates actionable recommendations. 🚨
- **Step 5:** Logs all discrepancies to Google Sheets for tracking and audit purposes. 📑
- **Step 6:** Sends Slack alerts summarizing key issues and linking to the full report. 💬

## Key Benefits

✅ Prevents production incidents due to config drift
✅ Ensures documentation stays in sync with code changes
✅ Reduces manual review effort with AI-driven validation
✅ Improves team response with Slack-based alerts
✅ Maintains audit logs for compliance and traceability

## Features

- Real-time GitHub webhook integration
- AI-powered config comparison using GPT-4o-mini
- Severity-based issue classification
- Automated Google Sheets logging
- Slack alerts with detailed issue context
- Error handling for malformed JSON or parsing issues

## Requirements

- GitHub OAuth2 credentials with repo and webhook permissions
- OpenAI API key (GPT-4o-mini or compatible model)
- Google Sheets OAuth2 credentials
- Slack API token with chat:write permissions

## Target Audience

- DevOps teams ensuring consistent configuration across environments
- Engineering leads maintaining documentation accuracy
- QA and Compliance teams tracking configuration changes and risks

## Setup Instructions

1. Create GitHub OAuth2 credentials and enable webhook access.
2. Connect your OpenAI API key under credentials.
3. Add your Google Sheets and Slack integrations.
4. Update file paths (config/app-config.json and faq-config.json) if your repo uses different names.
5. Activate the workflow; it will start validating on every push or PR. 🚀
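The missing-key and mismatch detection in Step 3 can be sketched as a deterministic diff; a pass like this is a useful pre-filter before asking the model for severity ratings and recommendations. The issue `type` and `severity` labels below are illustrative, not the template's exact output schema.

```javascript
// Sketch: compare two parsed JSON configs key by key, flagging keys that
// exist in only one file and keys whose values differ. Severity labels
// are illustrative placeholders.
function diffConfigs(appConfig, faqConfig) {
  const issues = [];
  const allKeys = new Set([...Object.keys(appConfig), ...Object.keys(faqConfig)]);
  for (const key of allKeys) {
    if (!(key in appConfig)) {
      issues.push({ key, type: "missing_in_app_config", severity: "high" });
    } else if (!(key in faqConfig)) {
      issues.push({ key, type: "missing_in_faq_config", severity: "medium" });
    } else if (JSON.stringify(appConfig[key]) !== JSON.stringify(faqConfig[key])) {
      // Deep-compare via serialization; fine for config-sized objects.
      issues.push({ key, type: "value_mismatch", severity: "high" });
    }
  }
  return issues;
}
```

Wrap the `JSON.parse` of each fetched file in a try/catch to get the malformed-JSON error handling the Features list mentions.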
by explorium
# Explorium Agent for Slack

AI-powered Slack bot for business intelligence queries using the Explorium API through MCP.

## Prerequisites

- Slack workspace with admin access
- Anthropic API key (you can replace it with another LLM chat model)
- Explorium API key

## 1. Create Slack App

**Create app**
- Go to api.slack.com/apps
- Click Create New App → From scratch
- Give it a name (e.g., "Explorium Agent") and select a workspace

**Bot permissions (OAuth & Permissions)**

Add these Bot Token Scopes: app_mentions:read, channels:history, channels:read, chat:write, emoji:read, groups:history, groups:read, im:history, im:read, mpim:history, mpim:read, reactions:read, users:read

**Enable events**
- Event Subscriptions → Enable
- Add the Request URL (from the n8n Slack Trigger node)
- Subscribe to bot events: app_mention, message.channels, message.groups, message.im, message.mpim, reaction_added

**Install app**
- Install App → Install to Workspace
- Copy the Bot User OAuth Token (xoxb-...)

## 2. Configure n8n

**Import & setup**
- Import this JSON template.
- Slack Trigger node: add a Slack credential with the Bot Token, copy the webhook URL, and paste it into the Slack Event Subscriptions Request URL.
- Anthropic Chat Model node: add an Anthropic API credential. Model: claude-haiku-4-5-20251001 (you can replace it with another chat model).
- MCP Client node: endpoint https://mcp.explorium.ai/mcp, with Header Auth carrying your Explorium API key.

## Usage Examples

- @ExploriumAgent find tech companies in SF with 50-200 employees
- @ExploriumAgent show Microsoft's technology stack
- @ExploriumAgent get CMO contacts at healthcare companies
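When you add the Request URL, Slack verifies it by POSTing a `url_verification` event and expecting the `challenge` value echoed back. The n8n Slack Trigger node handles this handshake for you; this sketch only shows what it looks like in case you ever wire a plain webhook instead. The dispatch shape and return fields are illustrative.

```javascript
// Sketch of Slack's event handshake and dispatch. Slack's url_verification
// payload and event_callback envelope are real; the return object shape is
// an assumption for illustration.
function handleSlackEvent(body) {
  if (body.type === "url_verification") {
    // Echo the challenge so Slack accepts the Request URL.
    return { statusCode: 200, body: body.challenge };
  }
  if (body.type === "event_callback" && body.event?.type === "app_mention") {
    // Hand the mention text off to the agent (placeholder).
    return { statusCode: 200, body: "ok", mentionText: body.event.text };
  }
  return { statusCode: 200, body: "ignored" };
}
```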
by Automate With Marc
# 🎨 Instagram Carousel & Caption Generator on Autopilot (GPT-5 + Nano Banana + Blotato + Google Sheets)

Watch the full step-by-step tutorial on YouTube: https://youtu.be/id22R7iBTjo

**Disclaimer (self-hosted requirement):** This template assumes you have valid API credentials for OpenAI, Wavespeed/Nano Banana, Blotato, and Google. If using n8n Self-Hosted, ensure HTTPS access and credentials are set in your instance.

## How It Works

1. **Chat Trigger** – Receives a topic/idea (e.g. "5 best podcast tips").
2. **Image Prompt Generator (GPT-5)** – Creates 5 prompts using the "Hook → Problem → Insight → Solution → CTA" framework.
3. **Structured Output Parser** – Formats output into a JSON array.
4. **Generate Images (Nano Banana)** – Converts prompts into high-quality visuals.
5. **Wait for Render** – Ensures image generation completes.
6. **Fetch Rendered Image URLs** – Retrieves image links.
7. **Upload to Blotato** – Hosts and prepares images for posting.
8. **Collect Media URLs** – Gathers all uploaded image URLs.
9. **Log to Google Sheets** – Stores image URLs + timestamps for tracking.
10. **Caption Generator (GPT-5)** – Writes an SEO-friendly caption.
11. **Merge Caption + Images** – Combines the data.
12. **Post Carousel (Blotato)** – Publishes directly to Instagram.

## Step-by-Step Setup Instructions

### 1) Prerequisites

- n8n (Cloud or Self-Hosted)
- OpenAI API key (GPT-5)
- Wavespeed API key (Nano Banana)
- Blotato API credentials (connected to Instagram)
- Google Sheets OAuth credentials

### 2) Add Credentials in n8n

- OpenAI: Settings → Credentials → Add "OpenAI API"
- Wavespeed: HTTP Header Auth (e.g. Authorization: Bearer <API_KEY>)
- Blotato: Add "Blotato API"
- Google Sheets: Add "Google Sheets OAuth2 API"

### 3) Configure & Test

- Run with an idea like "Top 5 design hacks".
- Check generated images, caption, and logged sheet entry.
- Confirm posting works via Blotato.

### 4) Optional

- Add a Schedule Trigger for weekly automation.
- Insert a Slack approval loop before posting.

## Customization Guide

- ✏️ **Change design style:** Modify adjectives in the Image Prompt Generator.
- 📑 **Adjust number of slides:** Change the Split node loop count.
- 💬 **Tone of captions:** Edit the Caption Generator's system prompt.
- ⏱️ **Adjust render wait time:** If image generation takes longer, increase the Wait node duration from 30 seconds to 60 seconds or more.
- 🗂️ **Log extra data:** Add columns in Google Sheets for campaign or topic.
- 🔁 **Swap posting tool:** Replace Blotato with your scheduler or email node.

## Requirements

- OpenAI API key (GPT-5 or compatible)
- Wavespeed API key (Nano Banana)
- Blotato API credentials
- Google Sheets OAuth credentials
- n8n account (Cloud or Self-Hosted)
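A small guard between the Structured Output Parser and the image step can catch a model that returns too few prompts or breaks the Hook → Problem → Insight → Solution → CTA order before any render credits are spent. The `role`/`prompt` fields below are an assumed schema for illustration, not necessarily what your parser emits.

```javascript
// Sketch: validate the parsed JSON array of slide prompts. Throws early
// so the workflow fails before generating images from bad output.
const EXPECTED_ROLES = ["hook", "problem", "insight", "solution", "cta"];

function validateSlidePrompts(slides) {
  if (!Array.isArray(slides) || slides.length !== EXPECTED_ROLES.length) {
    throw new Error(`Expected ${EXPECTED_ROLES.length} slide prompts, got ${slides?.length}`);
  }
  slides.forEach((slide, i) => {
    if (slide.role !== EXPECTED_ROLES[i] || !slide.prompt) {
      throw new Error(`Slide ${i + 1} is malformed or out of order`);
    }
  });
  return slides;
}
```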