by Port IO
## RBAC for AI agents with n8n and Port

This workflow implements role-based access control for AI agent tools using Port as the single source of truth for permissions. Different users get access to different tools based on their roles, without needing a separate permission database. For example, developers might have access to PagerDuty and AWS S3, while support staff only gets Wikipedia and a calculator. The workflow checks each user's permissions in Port before letting the agent use any tools.

For the full guide with blueprint setup and detailed configuration, see *RBAC for AI Agents with n8n and Port* in the Port documentation.

## How it works

The n8n workflow orchestrates the following steps:

1. **Slack trigger** — Listens for @mentions and extracts the user ID from the message.
2. **Get user profile** — Fetches the user's Slack profile to get their email address.
3. **Port authentication** — Requests an access token from the Port API using client credentials.
4. **Permission lookup** — Queries Port for the user entity (by email) and reads their `allowed_tools` array.
5. **Unknown user check** — If the user doesn't exist in Port, sends an error message and stops.
6. **Permission filtering** — The "Check permissions" node compares each connected tool against `allowed_tools` and replaces unauthorized ones with a stub that returns "You are not authorized to use this tool."
7. **AI agent** — Runs with only permitted tools, using GPT-4 and chat memory.
8. **Response** — Posts the agent output back to the Slack channel.

## Setup

- [ ] Connect your Slack account and set the channel ID in the trigger node
- [ ] Add your OpenAI API key
- [ ] Register for free on Port.io
- [ ] Create the `rbacUser` blueprint in Port (see the full guide for blueprint setup)
- [ ] Add user entities using email as the identifier
- [ ] Replace `YOUR_PORT_CLIENT_ID` and `YOUR_PORT_CLIENT_SECRET` in the "Get Port access token" node
- [ ] Connect credentials for any tools you want to use (PagerDuty, AWS, etc.)
- [ ] Update the channel ID in the Slack nodes
- [ ] Invite the bot to your Slack channel
- [ ] You should be good to go!

## Prerequisites

- A Port account with the onboarding process completed
- A working n8n instance (self-hosted) with LangChain nodes available
- A Slack workspace with bot permissions to receive mentions and post messages
- An OpenAI API key for the LangChain agent
- A Port client ID and secret for API authentication
- (Optional) PagerDuty, AWS, or other service credentials for tools you want to control

⚠️ This template is intended for self-hosted instances only.
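The permission-filtering step can be sketched as a pure function: every connected tool that is not in the user's `allowed_tools` list is swapped for a stub. This is a minimal illustration, not the workflow's actual node code; the function name and tool representation are assumptions.

```python
# Hypothetical sketch of the "Check permissions" node's logic. The stub message
# matches the workflow description; everything else here is illustrative.
STUB_MESSAGE = "You are not authorized to use this tool."

def filter_tools(connected_tools, allowed_tools):
    """Return the agent's tool set, replacing unauthorized tools with a stub."""
    filtered = {}
    for name, tool in connected_tools.items():
        if name in allowed_tools:
            filtered[name] = tool                          # pass through unchanged
        else:
            filtered[name] = lambda *a, **k: STUB_MESSAGE  # deny with a fixed reply
    return filtered

# Example: a support user whose Port entity allows only Wikipedia and a calculator
connected = {"pagerduty": lambda q: "paged", "wikipedia": lambda q: "article"}
tools = filter_tools(connected, allowed_tools=["wikipedia", "calculator"])
```

Because the agent still sees a tool named `pagerduty`, it can call it, but the call only ever returns the refusal string, which the agent then relays to the user.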
by Cheng Siong Chin
## How It Works

This workflow streamlines academic paper development through a multi-agent AI architecture that collects references, drafts individual sections autonomously, compiles the manuscript, and exports a professionally formatted DOCX file. Tailored for researchers, faculty members, and postgraduate students, it reduces the effort required to plan, write, and format scholarly articles from the ground up.

Upon receiving a paper title and abstract, the system initiates web-based literature retrieval and reference extraction, handled by a Research Agent leveraging tools such as Google Scholar. A central Orchestration Agent then coordinates six dedicated writing agents, covering the Introduction, Related Work, Methodology, Results, Discussion, and Conclusion. The generated sections are consolidated with an automatically formatted bibliography, converted into a DOCX document via a document automation script, and prepared for download.

## Setup Steps

1. Configure the Research Paper Input node with topic, keywords, and paper parameters.
2. Add Anthropic (Claude) API credentials to all Claude Model nodes.
3. Set up Google Scholar Search Tool credentials or an API key for literature retrieval.
4. Connect the Google Docs Script node with a service account for DOCX generation.
5. Configure the workflow output path for DOCX file download or Drive storage.

## Prerequisites

- Google Scholar API or search tool access
- Google Docs Script or DOCX generation service

## Use Cases

- Automated first-draft generation for academic journal submissions

## Customization

- Swap Claude for OpenAI GPT-4 or NVIDIA NIM across the writing agents

## Benefits

- Generates complete, structured research papers fully automatically
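The consolidation step, where six per-agent drafts and a bibliography become one manuscript body, can be sketched as follows. The section names come from the description above; the data shapes and function name are assumptions, not the template's actual script.

```python
# Illustrative sketch of the manuscript-consolidation step. Section order
# follows the workflow description; everything else is an assumption.
SECTION_ORDER = ["Introduction", "Related Work", "Methodology",
                 "Results", "Discussion", "Conclusion"]

def assemble_manuscript(title, sections, references):
    """Merge per-agent section drafts and a numbered bibliography into one body."""
    parts = [title, ""]
    for name in SECTION_ORDER:
        parts.append(name)
        parts.append(sections.get(name, "[section missing]"))  # flag gaps for review
    parts.append("References")
    parts.extend(f"[{i}] {ref}" for i, ref in enumerate(references, 1))
    return "\n".join(parts)

doc = assemble_manuscript(
    "A Study",
    {"Introduction": "Intro text.", "Conclusion": "Done."},
    ["Smith et al., 2024"])
```

In the real workflow the resulting text is handed to the document automation script for DOCX conversion; this sketch only shows the ordering and bibliography numbering.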
by NODA shuichi
## Description

Automate your grant research with an AI Agent that reads, analyzes, and scores opportunities. 🏛️🤖

This advanced workflow transforms the tedious task of finding business subsidies into an automated intelligence stream. Unlike simple keyword scrapers, it uses an OpenAI Agent (GPT-4o) to read full articles, extract key details (deadlines, budgets), and evaluate their importance for SMEs on a 1–10 scale.

## Key Features

- **Intelligent Scoring**: The AI assigns an "Importance Score" and "Urgency" level to each subsidy, filtering out noise.
- **Structured Data Extraction**: Converts unstructured news text into clean JSON (deadlines, requirements, agencies).
- **Smart Alerts**: High-scoring subsidies (score 7+) trigger a priority alert (🚨) sent directly to Chatwork.
- **State Management**: Uses Google Sheets to track history and prevent duplicate notifications.
- **Organized Layout**: Nodes are clearly grouped into sections (Setup, Aggregation, Analysis, Logic) for easy customization.

## How it works

1. **Aggregate**: Collects the latest articles from Google News, J-Net21, and Mirasapo Plus RSS feeds.
2. **Analyze**: The AI Agent reads the content to extract fields like `applicationDeadline` and `targetRecipients`, while calculating an importance score.
3. **Deduplicate**: Checks the URL against a Google Sheets database to ensure only new information is processed.
4. **Filter & Tag**: High-value items are automatically tagged as "High Priority".
5. **Notify**: Saves the data to Google Sheets and sends a formatted message to Chatwork.

## Setup Requirements

1. **Google Sheets**: Create a sheet named `Subsidies` with the following headers in the first row: `subsidyName`, `targetRecipients`, `applicationDeadline`, `budgetAmount`, `urgency`, `importanceScore`, `priorityTag`, `sourceUrl`
2. **Credentials**:
   - OpenAI: API key (GPT-4o recommended)
   - Chatwork: API token
   - Google Sheets: OAuth2 connection
3. **Configuration**: Open the "Sticky Note Setup" section (first node) and enter your Chatwork Room ID, Chatwork API Token, and Google Spreadsheet ID.
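The Deduplicate and Filter & Tag steps reduce to a small amount of logic: skip any `sourceUrl` already recorded in the sheet, then tag items whose score clears the 7+ threshold. A minimal sketch, with field names taken from the sheet headers above and the function name assumed:

```python
# Sketch of the dedup + tagging logic. The score threshold (7+) and field
# names follow the workflow description; the function itself is illustrative.
def process_items(items, seen_urls, threshold=7):
    """Drop already-notified URLs and tag high-importance subsidies."""
    fresh = []
    for item in items:
        if item["sourceUrl"] in seen_urls:
            continue                       # already saved to the Google Sheet
        item["priorityTag"] = ("High Priority"
                               if item["importanceScore"] >= threshold else "")
        fresh.append(item)
        seen_urls.add(item["sourceUrl"])   # remember for the next run
    return fresh

fresh = process_items(
    [{"sourceUrl": "http://a", "importanceScore": 9},
     {"sourceUrl": "http://old", "importanceScore": 5},
     {"sourceUrl": "http://b", "importanceScore": 3}],
    seen_urls={"http://old"})
```

In n8n the "seen" set lives in the Google Sheet rather than in memory, but the comparison is the same.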
by Cheng Siong Chin
## How It Works

This workflow automates monthly tax processing by ingesting expense receipts alongside revenue data, extracting structured deduction details using GPT-4, and accurately matching expenses to their corresponding revenue periods. It retrieves receipts with built-in type validation, parses deduction information through OpenAI structured output extraction, and consolidates revenue records into a unified dataset. The system then intelligently aligns expenses with revenue timelines, calculates eligible deductions, and generates well-formatted tax reports that are automatically sent to designated agents via Gmail. Designed for accountants, tax professionals, and finance teams, it enables automated expense categorization and optimized deduction calculations.

## Setup Steps

1. Configure the receipt storage source and OpenAI Chat Model API key.
2. Connect Gmail for report delivery and set up the tax agent email.
3. Define expense categories, revenue periods, and deduction rules.
4. Schedule the monthly trigger and test the extraction.

## Prerequisites

- Expense receipt repository
- OpenAI API key
- Gmail account
- Revenue data source

## Use Cases

- Accountants automating receipt processing for multiple clients

## Customization

- Adjust the extraction prompts for industry-specific expenses; modify the deduction rules

## Benefits

- Eliminates manual receipt review; reduces categorization errors
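The expense-to-revenue-period alignment can be sketched as simple date bucketing: each receipt is assigned to the monthly revenue period its date falls in, and eligible deductions are summed per period. Field names and the bucketing granularity here are assumptions, not the workflow's actual schema.

```python
# Hedged sketch of the expense/revenue alignment step, assuming monthly
# revenue periods keyed by YYYY-MM and receipts carrying a date and amount.
from collections import defaultdict
from datetime import date

def deductions_by_period(receipts):
    """Sum eligible deduction amounts per YYYY-MM revenue period."""
    totals = defaultdict(float)
    for r in receipts:
        period = r["date"].strftime("%Y-%m")   # align with monthly revenue data
        totals[period] += r["deduction"]
    return dict(totals)

totals = deductions_by_period([
    {"date": date(2025, 1, 15), "deduction": 120.0},
    {"date": date(2025, 1, 30), "deduction": 80.0},
    {"date": date(2025, 2, 2), "deduction": 50.0},
])
```

The per-period totals would then feed the report-generation step before the Gmail delivery.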
by Rodrigo
## How it works

This workflow creates a complete AI-powered restaurant ordering system through WhatsApp. It receives customer messages, processes multimedia content (text, voice, images, PDFs, location), uses GPT-4 to understand customer intent and manage conversations, handles the complete ordering flow from menu selection to payment verification, and sends formatted orders to restaurant staff. The system maintains conversation memory, verifies payment receipts using OCR, and provides automated responses in multiple languages.

## Who's it for

Restaurant owners, food delivery services, and hospitality businesses looking to automate customer service and order management through WhatsApp without hiring additional staff.

## Requirements

- WhatsApp Business API account
- OpenAI API key (GPT-4/GPT-4o access recommended)
- Supabase account (for conversation memory and vector storage)
- Google Drive account (for menu images and QR codes)
- Google Maps API key (for location services)
- Gemini API key (for PDF processing)

## How to set up

1. **Configure credentials** - Add your WhatsApp Business API, OpenAI, Supabase, Google Drive, and Gemini API credentials to n8n
2. **Update phone numbers** - Replace `[PHONE_NUMBER]` placeholders with your actual restaurant and staff phone numbers
3. **Customize restaurant details** - Replace `[RESTAURANT_NAME]`, `[RESTAURANT_OWNER_NAME]`, and `[BANK_ACCOUNT_NUMBER]` with your information
4. **Upload menu images** - Add your menu images to Google Drive and update the file IDs
5. **Set up Supabase** - Create tables for chat memory and upload your menu/restaurant information to the vector database
6. **Configure AI prompts** - Update the restaurant information in the AI agent system messages
7. **Test the workflow** - Send test messages to verify all integrations work

## How to customize the workflow

- **Menu management**: Update Google Drive file IDs to display your current menu images
- **Payment verification**: Modify the receipt analysis logic to match your bank's receipt format
- **Order formatting**: Customize the order confirmation template sent to kitchen staff
- **AI personality**: Adjust the restaurant agent's tone and responses in the system prompts
- **Languages**: The AI supports multiple languages - customize welcome messages for your target market
- **Business hours**: Add time-based logic to handle orders outside operating hours
- **Delivery zones**: Integrate with your delivery area logic using the location processing features
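As an example of the kind of order-confirmation template sent to kitchen staff, here is a minimal sketch. The order structure and field names are assumptions for illustration, not the workflow's actual schema; `[PHONE_NUMBER]` is the same placeholder used in the setup steps.

```python
# Hypothetical order-confirmation formatter for kitchen staff; the dict
# layout here is assumed, not taken from the workflow's Supabase schema.
def format_kitchen_order(order):
    """Render a WhatsApp-ready text summary of a confirmed order."""
    lines = [f"New order #{order['id']} ({order['type']})"]
    for item in order["items"]:
        lines.append(f"- {item['qty']}x {item['name']}")
    lines.append(f"Total: {order['total']:.2f}")
    lines.append(f"Customer: {order['phone']}")
    return "\n".join(lines)

msg = format_kitchen_order({
    "id": 42, "type": "delivery", "phone": "[PHONE_NUMBER]",
    "items": [{"qty": 2, "name": "Margherita"}], "total": 18.50})
```

Adapting the template mostly means editing the line layout here (adding delivery address, payment status, and so on) to match what your kitchen needs at a glance.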
by hayatofujita
## Manage expenses with AI insights through LINE

### 👥 Who’s it for

This workflow is designed for small business owners, freelancers, and finance teams who want to eliminate manual data entry for expense tracking. It is ideal for users who want not just a record of their spending, but real-time AI-driven financial insights to help manage their cash flow.

### 🚀 How it works

This template acts as an autonomous finance assistant that connects LINE, Google Workspace, and OpenAI.

- **Data Capture**: Receives receipt images directly through the LINE Messaging API.
- **AI Analysis**: Uses OpenAI (GPT-4o-mini) to perform OCR and extract Date, Store Name, Amount, and Category with high precision.
- **Duplicate Prevention**: Automatically searches your Google Sheet to verify whether the receipt has already been registered (based on Date and Amount) to prevent double-counting.
- **Cloud Storage**: Renames and saves the receipt image to Google Drive for tax compliance and easy retrieval.
- **Automated Ledger**: Appends the structured data to a Google Sheets master file.
- **Financial Insights**: Calculates the current month's total spending, compares it to previous data, and generates sharp management advice via AI to help you stay on budget.

### ⚙️ Setup steps

1. **Prepare Google Sheet**: Create a sheet with the headers Date, Amount, Store, and Category.
2. **Prepare Google Drive**: Create a folder for receipt storage and copy its Folder ID.
3. **Configure Credentials**: Set up your credentials in n8n for OpenAI (API Key), Google Workspace (OAuth2), and the LINE Messaging API (Channel Access Token).
4. **Update Node Settings**: In the "Google Sheets" nodes, select your specific spreadsheet. In the "Google Drive" node, paste your Folder ID. In the "LINE" HTTP nodes, ensure your Authorization header is set to `Bearer YOUR_TOKEN`.
5. **Activate**: Set your n8n Webhook URL in the LINE Developers Console and toggle the workflow to "Active."

### 📦 Requirements

- n8n version 1.0 or later
- OpenAI API Key (GPT-4o / GPT-4o-mini)
- Google Workspace account (Drive, Sheets)
- LINE Developers account (Messaging API)

### 🎨 How to customize

- **Refine AI Advice**: Edit the System Message in the AI Agent node to change the tone of the advice (e.g., from "Strict Accountant" to "Friendly Growth Hacker").
- **Switch Channels**: Replace the final LINE node with Slack or Discord nodes if your team uses those platforms.
- **Budget Alerts**: Add a Filter node to trigger special notifications if monthly spending exceeds a certain threshold.
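The Duplicate Prevention check described above boils down to one comparison: a new receipt is skipped when a ledger row already has the same Date and Amount. A minimal sketch, using the sheet headers from the setup steps (the function name is assumed):

```python
# Sketch of the duplicate check: a receipt counts as already registered
# when Date and Amount both match an existing Google Sheets row.
def is_duplicate(receipt, ledger_rows):
    return any(row["Date"] == receipt["Date"] and row["Amount"] == receipt["Amount"]
               for row in ledger_rows)

ledger = [{"Date": "2025-11-01", "Amount": 1200, "Store": "Cafe", "Category": "Meals"}]
new_receipt = {"Date": "2025-11-01", "Amount": 1200, "Store": "Cafe"}
```

Note the deliberate trade-off: two different receipts from the same day for the same amount would be flagged as duplicates, which is usually the safer failure mode for a ledger.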
by Rahul Joshi
## 📊 Description

Automate your YouTube research workflow by extracting audio from any video, transcribing it with Whisper AI, and generating structured GEO (Goal–Execution–Outcome) summaries using GPT-4o-mini. 🎥🤖

This template transforms unstructured video content into actionable, searchable insights that are automatically stored in Notion with rich metadata. It’s ideal for creators, educators, analysts, and knowledge workers who want to convert long videos into concise, high-quality summaries without manual effort. Perfect for content indexing, research automation, and knowledge-base enrichment. 📚✨

## 🔁 What This Template Does

• Triggers on a schedule to continuously process new YouTube videos. ⏰
• Fetches video metadata (title, description, thumbnails, published date) via the YouTube API. 🎥
• Downloads audio using RapidAPI and prepares it for transcription. 🎧
• Transcribes audio into text using OpenAI Whisper. 📝
• Skips invalid entries when no transcript is generated. 🚫
• Merges the transcript with metadata for richer AI context. 🔗
• Uses GPT-4o-mini to generate Goal, Execution, Outcome, and Keywords via structured JSON. 🤖📊
• Parses the AI-generated JSON into Notion-friendly formats. 🔍
• Creates a Notion page with GEO sections, keywords, and video metadata. 📄🏷️
• Produces a fully searchable knowledge record for every processed video. 📚✨

## ⭐ Key Benefits

✅ Converts long YouTube videos into concise, structured knowledge
✅ AI-powered GEO summaries improve comprehension and recall
✅ Zero manual transcription or note-taking — 100% automated
✅ Seamless Notion integration creates a powerful video knowledge base
✅ Works on autopilot with scheduled triggers
✅ Saves hours for educators, researchers, analysts, and content teams

## 🧩 Features

- YouTube API integration for metadata retrieval
- RapidAPI audio downloader
- OpenAI Whisper transcription
- GPT-4o-mini structured analysis through LangChain
- Memory buffer + structured JSON parser for consistent results
- Automatic Notion page creation
- Fail-safe transcript validation (IF node)
- Metadata + transcript merging for richer AI context
- GEO (Goal–Execution–Outcome) summarization workflow

## 🔐 Requirements

- YouTube OAuth2 credentials
- OpenAI API key (Whisper + GPT-4o-mini)
- Notion API integration token + database ID
- RapidAPI key for YouTube audio downloading
- n8n with LangChain nodes enabled

## 🎯 Target Audience

- YouTubers and content creators archiving their content
- Researchers and educators summarizing long videos
- Knowledge managers building searchable Notion databases
- Automation teams creating video intelligence workflows

## 🛠️ Step-by-Step Setup Instructions

1. Add YouTube OAuth2, OpenAI, Notion, and RapidAPI credentials. 🔑
2. Replace the placeholder RapidAPI key in the “Get YouTube Audio” node. ⚙️
3. Update the Notion database ID where summaries should be stored. 📄
4. Configure the Schedule Trigger interval based on your needs. ⏰
5. Replace the hardcoded video ID (if present) with dynamic input or playlist logic. 🔗
6. Test with a sample video to verify the transcription + AI + Notion output. ▶️
7. Enable the workflow to run automatically. 🚀
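The "parses the AI-generated JSON into Notion-friendly formats" step can be sketched like this. The property value shapes (`title`, `rich_text`, `multi_select`) follow the public Notion API, but the property names and the GEO JSON keys here are assumptions about this template's schema:

```python
import json

# Sketch of the JSON-parsing step: turn GPT-4o-mini's structured GEO output
# into a Notion page-properties payload. Property names are illustrative.
def geo_to_notion_properties(ai_output, video_title):
    geo = json.loads(ai_output)          # structured JSON from the AI node
    return {
        "Name": {"title": [{"text": {"content": video_title}}]},
        "Goal": {"rich_text": [{"text": {"content": geo["goal"]}}]},
        "Execution": {"rich_text": [{"text": {"content": geo["execution"]}}]},
        "Outcome": {"rich_text": [{"text": {"content": geo["outcome"]}}]},
        "Keywords": {"multi_select": [{"name": k} for k in geo["keywords"]]},
    }

props = geo_to_notion_properties(
    '{"goal": "Learn X", "execution": "Steps", "outcome": "Result", "keywords": ["ai"]}',
    "Demo video")
```

A `json.loads` failure here is exactly what the fail-safe IF node is for: malformed AI output should skip Notion creation rather than write a broken page.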
by Don Jayamaha Jr
## Instantly access real-time Binance Spot Market data in Telegram!

This workflow connects the Binance REST API with Telegram and optional GPT-4.1-mini formatting, delivering structured insights such as latest prices, 24h stats, order book depth, trades, and candlesticks directly into chat.

## 🔎 How It Works

1. A Telegram Trigger listens for incoming user requests.
2. User Authentication validates the Telegram ID to restrict access.
3. A Session ID is generated from `chat.id` to manage session memory.
4. The Binance AI Agent executes HTTP calls to the Binance public API:
   - Latest Price (Ticker) → `/api/v3/ticker/price?symbol=BTCUSDT`
   - 24h Statistics → `/api/v3/ticker/24hr?symbol=BTCUSDT`
   - Order Book Depth → `/api/v3/depth?symbol=BTCUSDT&limit=50`
   - Best Bid/Ask Snapshot → `/api/v3/ticker/bookTicker?symbol=BTCUSDT`
   - Candlestick Data (Klines) → `/api/v3/klines?symbol=BTCUSDT&interval=15m&limit=200`
   - Recent Trades → `/api/v3/trades?symbol=BTCUSDT&limit=100`
5. Utility tools refine the outputs:
   - Calculator → computes spreads, midpoints, averages, and % changes.
   - Think → extracts and reformats JSON into human-readable fields.
   - Simple Memory → saves the symbol, sessionId, and user context.
6. A Message Splitter chunks outputs longer than 4000 characters for Telegram.
7. Final structured reports are sent back to Telegram.

## ✅ What You Can Do with This Agent

- Get real-time Binance Spot prices with 24h stats.
- Fetch order book depth and liquidity snapshots.
- View best bid/ask quotes.
- Retrieve candlestick OHLCV data across timeframes.
- Check recent trades (up to 100).
- Calculate spreads, mid-prices, and % changes automatically.
- Receive clean, structured messages instead of raw JSON.

## 🛠️ Setup Steps

1. **Create a Telegram Bot**: Use @BotFather and save the bot token.
2. **Configure in n8n**: Import `Binance AI Agent v1.02.json`. Update the User Authentication node with your Telegram ID. Add your Telegram credentials (bot token) and OpenAI API key. (Optional) Add a Binance API key.
3. **Deploy & Test**: Activate the workflow in n8n and send `BTCUSDT` to your bot. Instantly receive Binance Spot Market insights inside Telegram.

## 📤 Output Rules

- Group outputs by Price, 24h Stats, Order Book, Candles, Trades.
- Respect Telegram's 4000-char message limit (auto-split enabled).
- Only structured summaries — no raw JSON.

## 📺 Setup Video Tutorial

Watch the full setup guide on YouTube: ⚡ Unlock Binance Spot Market insights instantly in Telegram — clean, fast, and API-key free.

## 🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. The architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding is permitted.

🔗 For support: Don Jayamaha – LinkedIn
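The Message Splitter step can be sketched as follows. Telegram's hard cap is 4096 characters per message; the workflow uses a 4000-character budget, and this version additionally prefers to break on newlines so report sections are not cut mid-line (that preference is an assumption of this sketch, not a documented behavior of the workflow):

```python
# Sketch of the Message Splitter: chunk long reports under Telegram's limit,
# breaking on a line boundary when one exists inside the budget.
def split_message(text, limit=4000):
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)   # prefer a newline break
        if cut <= 0:
            cut = limit                    # fall back to a hard cut
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    chunks.append(text)
    return chunks
```

Each chunk is then sent as a separate Telegram message, in order.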
by Guilherme Campos
This n8n workflow automates the process of creating high-quality, scroll-stopping LinkedIn posts based on live research, AI insight generation, and Google Sheets storage. Instead of relying on recycled AI tips or boring summaries, this system combines real-time trend discovery via Perplexity, structured idea shaping with GPT-4, and content generation tailored to a bold, human LinkedIn voice.

The workflow saves each post idea (with image prompt, tone, and summary) to a Google Sheet, sends you a Telegram alert, and even formats your content for direct publishing. Perfect for solopreneurs, startup marketers, or anyone who posts regularly on LinkedIn and wants to sound original, not robotic.

## Who's it for

- Content creators and solopreneurs building an audience on LinkedIn
- Startup teams, PMs, and tech marketers looking to scale thought leadership
- Anyone tired of generic AI-generated posts and craving structured, edgy output

## How it works

1. A daily trigger at 6 AM starts the workflow.
2. Pulls recent post history from Google Sheets to avoid repeated ideas.
3. Perplexity AI scans the web.
4. Generates 3 structured post ideas (including tone, hook, visual prompt, and summary).
5. GPT-4 refines each into a bold, human-style LinkedIn post, following detailed brand voice rules.
6. Saves everything to Google Sheets (idea, content, image prompt, post status).
7. Sends a Telegram notification to alert you that new ideas are ready.

## How to set up

1. Connect your Perplexity, OpenAI, Google Sheets, and Telegram credentials.
2. Point to your preferred Google Sheet and sheet tab for storing post data.
3. Adjust the schedule trigger if you want more or fewer ideas per week.
4. (Optional) Tweak the content style prompt to match your personal tone or niche.

## Requirements

- Perplexity API account
- OpenAI API access (GPT-4 or GPT-4o-mini)
- Telegram bot connected to your account
- Google Sheets document with appropriate column headers

## How to customize the workflow

- Change the research sources or prompt tone (e.g., more tactical, more spicy, more philosophical)
- Add an image generation tool to turn prompts into visuals for each post
- Filter or tag ideas based on type (trend, tip, story, etc.)
- Post automatically via the LinkedIn API or a Buffer integration
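The "avoid repeated ideas" step amounts to comparing each candidate idea against the post history pulled from Google Sheets. One simple way to sketch it is a keyword-overlap (Jaccard) check; this metric and the threshold are assumptions of the sketch, not the workflow's actual method, which may simply pass the history to the prompt:

```python
# Illustrative novelty check: reject a candidate idea that shares too many
# words with a previously saved idea. Threshold and metric are assumptions.
def is_repeat(candidate, history, threshold=0.6):
    cand = set(candidate.lower().split())
    for old in history:
        prev = set(old.lower().split())
        overlap = len(cand & prev) / max(len(cand | prev), 1)  # Jaccard similarity
        if overlap >= threshold:
            return True
    return False

history = ["why ai agents will change devops"]
```

Anything more nuanced (paraphrased repeats, topic-level overlap) is better left to the GPT-4 step, which sees the full history in its prompt.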
by Avinash Raju
## How it works

When a meeting ends in Fireflies, the transcript is automatically retrieved and sent to OpenAI for analysis. The AI evaluates objection handling and call effectiveness, and extracts the key objections raised during the conversation. It then generates specific objection handlers for future calls. The analysis is formatted into a structured report and sent both to Slack for immediate visibility and to Google Drive for centralized storage.

## Set up steps

Prerequisites:

- Fireflies account with API access
- OpenAI API key
- Slack workspace
- Google Drive connected to n8n

Configuration:

1. Connect the Fireflies webhook to trigger on meeting completion
2. Add your OpenAI API key in the AI analysis nodes
3. Configure the Slack channel destination for feedback delivery
4. Set the Google Drive folder path for report storage
5. Adjust the AI prompts in the sticky notes to match your objection categories and sales methodology
by Stéphane Bordas
## How it Works

This workflow lets you build a Messenger AI Agent capable of understanding text, images, and voice notes, and replying intelligently in real time. It starts by receiving messages from a Facebook Page via a Webhook, detects the message type (text, image, or audio), and routes it through the right branch. Each input is then prepared as a prompt and sent to an AI Agent that can respond using text generation, perform quick calculations, or fetch information from Wikipedia. Finally, the answer is formatted and sent back to Messenger via the Graph API, creating a smooth, fully automated chat experience.

## Set Up Steps

1. **Connect credentials**: Add your OpenAI API key and Facebook Page Access Token in the n8n credentials.
2. **Plug in the webhook**: Copy the Messenger webhook URL from your workflow and paste it into your Facebook Page Developer settings (Webhook → Messages → Subscribe).
3. **Customize the agent**: Edit the System Message of the AI Agent to define its tone, temperature, and purpose (e.g. “customer support”, “math assistant”).
4. **Enable memory & tools**: Turn on Simple Memory to keep conversation context and activate tools like Calculator or Wikipedia.
5. **Test & deploy**: Switch to production mode, then test text, image, and voice messages directly from Messenger.

## Benefits

- 💬 **Multi-modal Understanding** — Handles text, images, and audio messages seamlessly.
- ⚙️ **Full Automation** — End-to-end workflow from Messenger to AI and back.
- 🧠 **Smart Replies** — Uses OpenAI + Wikipedia + Calculator for context-aware answers.
- 🚀 **No-Code Setup** — Build your first Messenger AI in less than 30 minutes.
- 🔗 **Extensible** — Easily connect more tools or APIs like Airtable, Google Sheets, or Notion.
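The final reply step posts to the Messenger Send API. The payload shape below (recipient PSID plus message text) follows Meta's documented Send API; the Graph API version in the comment is an assumption, and in the workflow this request is made by an HTTP node rather than Python:

```python
# Sketch of the Send API payload the reply node builds for each answer.
def build_send_payload(psid, text):
    return {
        "recipient": {"id": psid},         # the sender's page-scoped ID (PSID)
        "messaging_type": "RESPONSE",      # reply within the 24-hour window
        "message": {"text": text},
    }

# The payload is POSTed to e.g. https://graph.facebook.com/v19.0/me/messages
# with the Page Access Token passed as the access_token query parameter.
payload = build_send_payload("1234567890", "Hello from the agent!")
```

The PSID comes from the incoming webhook event, so the same structure works for all three branches (text, image, audio) once the agent's answer is ready.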
by Emir Belkahia
## 🎙️ Voice-to-Slides: Business Review Kickstarter for Customer Success

This workflow helps Customer Success Managers brain dump their client knowledge via voice notes and kickstart business review preparation by auto-generating a structured Google Slides draft in their official slide deck template.

## Who's it for

CSMs and Account Managers who want to capture meeting insights quickly via voice and get a head start on business review prep: not a finished presentation, but a solid first draft to build from.

## What it does (and doesn't do)

✅ It DOES:

- Transcribe your (potentially unstructured) voice notes accurately
- Organize your thoughts into Value Realized / Recommendations / Next Steps
- Create a Google Slides file in your official template
- Pre-populate placeholders with structured content

❌ It DOESN'T:

- Generate a client-ready presentation
- Add charts, metrics, or data visualizations
- Write polished, final copy
- Replace the actual work of crafting your business review

Think of it as a smart assistant that turns your brain dump into a structured starting point, not a finished product.

## How it works

1. **Brain dump via voice** - Speak freely to your Telegram bot about your client: wins, challenges, recommendations, next steps (no need to be perfectly organized)
2. **AI transcription** - Groq Whisper converts the audio to text
3. **Security check** - Scans for sensitive data (PII, confidential info) and alerts if found
4. **Content structuring** - AI categorizes your rambling into three sections
5. **Review & approve** - You receive an email with the extracted content to validate and add client details
6. **Template generation** - Creates a Google Slides file from your template in the client's Drive folder
7. **First draft ready** - Slides are populated with the placeholders filled: now you refine, add data, and polish

## Set up steps

Setup time: ~20 minutes

1. Create a Telegram bot via @BotFather
2. Prepare your own Google Slides template with the placeholders `value_realized_placeholder`, `recommendations_placeholder`, and `next_steps_placeholder`
3. Connect credentials: Telegram, Groq, OpenAI, Gmail, Google Drive, Google Slides
4. Update the template ID in the "Copy template to customer Folder" node
5. Set your company name in the "Set CSM's company name" node
6. Add your email in all "human in the loop" nodes

## Requirements

- Telegram account
- Groq API key (Whisper transcription)
- OpenAI API key
- Google Workspace (Gmail, Drive, Slides)
- Google Slides template with the required placeholders
- Client Google Drive folders (shared access)

## Cost breakdown

For a typical 3–5 minute voice note:

- **Transcription (Groq Whisper)**: Free
- **AI Processing (GPT-5-nano + GPT-5-mini)**: ~$0.005

💰 Bottom line: half a cent per business review. You could run 200+ business reviews for $1. The workflow uses cost-effective models (GPT-5-nano for security checks, GPT-5-mini for content extraction) to keep costs negligible while maintaining quality.

Note: Costs may vary based on voice note length and verbosity. Prices are based on GPT-5-nano and GPT-5-mini pricing as of Nov 2025.

## 💡 Pro tips

- **Be mindful of the guardrail**: It's designed to catch sensitive info (full names + company + financials), but it can sometimes be overzealous. If you find it blocking legitimate content, consider: adjusting the confidence threshold (currently 0.7) to be less strict; removing the guardrail entirely if you're experienced and know what to avoid; or reviewing the "Sensitive information" custom prompt to fine-tune the detection rules.
- **Structure your thoughts loosely**: While speaking, try to mentally organize around Value Realization → Recommendations → Next Steps. It's totally fine if things mix or overlap; the AI will reorganize, but having this structure in mind helps you cover everything.
- **Record with your tools open**: This is key! Have your previous BRs, CS platform, analytics dashboards, or CRM open while recording. Reference specific metrics, feature adoption rates, and data points directly from your systems. The AI can't look up data for you, so feed it the good stuff.
- **Don't overthink it**: Your first recording will probably feel awkward. That's normal. The AI is surprisingly good at cleaning up "umms," tangents, and unstructured rambling. Just brain dump.
- **Keep it under 5 minutes**: Better transcription accuracy, faster processing, and cheaper API costs. If you have more to say, split it into multiple voice notes.
- **Review the email summary carefully**: The AI extracts content well but loses the nuance and context you have. Use the email review step to catch misinterpretations before they hit the slides.

## What to do after the workflow runs

1. Open the generated slides in the client's folder
2. Refine the AI-generated text (add nuance, fix tone)
3. Add charts, screenshots, and data visualizations
4. Polish formatting and visual hierarchy
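The placeholder-population step maps the three structured sections onto the template's placeholders via Google Slides `batchUpdate` `replaceAllText` requests. The request shape below follows the public Slides API; the section keys and function name are assumptions of this sketch, and the placeholder strings must match your template exactly:

```python
# Sketch of building Slides batchUpdate requests that fill the template's
# three placeholders with the AI-structured content. Illustrative only.
def build_replace_requests(sections):
    placeholders = {
        "value_realized": "value_realized_placeholder",
        "recommendations": "recommendations_placeholder",
        "next_steps": "next_steps_placeholder",
    }
    return [
        {"replaceAllText": {
            "containsText": {"text": placeholders[key], "matchCase": True},
            "replaceText": text}}
        for key, text in sections.items()
    ]

requests = build_replace_requests({
    "value_realized": "Reduced churn by focusing on onboarding.",
    "next_steps": "Schedule Q1 roadmap review.",
})
# These would be sent via presentations().batchUpdate(body={"requests": requests}).
```

This is also why the template's placeholder names matter during setup: `replaceAllText` matches literal strings, so a typo in the template leaves a placeholder unfilled rather than raising an error.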