by Rahul Joshi
Description
Automatically consolidate Zendesk and Freshdesk ticket data into a unified performance dashboard with KPI calculations, Google Sheets logging, real-time Slack alerts, and weekly Gmail email reports. Provides complete visibility into support operations, SLA compliance, and customer satisfaction across multiple platforms. 📊💬📧

What This Template Does
- Runs weekly on a schedule to fetch tickets from both Zendesk and Freshdesk. ⏰
- Merges ticket data into a standardized JSON structure with normalized priorities, statuses, and channels. 🔄
- Logs all tickets and metadata into Google Sheets for audit-ready performance tracking. 📑
- Calculates advanced KPIs, including resolution rates, SLA breaches, CSAT score estimation, urgent ticket rates, and performance grading (see the calculation sketch at the end of this section). 📊
- Evaluates alert conditions (e.g., high SLA breach rate, low CSAT, backlog risk). 🚨
- Sends formatted Slack alerts with performance grades, key metrics, and recommendations. 💬
- Generates corporate-style HTML weekly reports and delivers them via Gmail. 📧

Key Benefits
- Unifies Zendesk and Freshdesk data into one consistent reporting flow. 🌐
- Provides actionable KPIs for SLA monitoring, customer satisfaction, and backlog health. ⏱️
- Ensures leadership visibility with Google Sheets logs and professional email reports. 🧾
- Alerts the support team instantly on Slack when performance drops. 🚨
- Reduces manual data analysis with automated grading and recommendations. 🤖

Features
- Multi-Platform Ticket Integration – Fetches tickets from Zendesk and Freshdesk. 🎫
- Data Normalization – Cleans descriptions, maps priorities/statuses, and detects escalations. 🧼
- Google Sheets Logging – Tracks tickets with IDs, URLs, tags, timestamps, and metadata. 📈
- KPI Calculation Engine – Computes SLA breach rate, resolution rate, CSAT, escalation %, and more. 🧮
- Performance Grading – Grades support performance (A–D) with detailed descriptions. 🏅
- Slack Alerts – Notifies with active alerts, recommendations, and emoji-based health signals. 📢
- Weekly Gmail Reports – Delivers branded HTML reports for management and audits. ✨

Requirements
- n8n instance (cloud or self-hosted)
- Zendesk API credentials with ticket read access
- Freshdesk API credentials with ticket read access
- Google Sheets OAuth2 credentials with spreadsheet write permissions
- Slack Bot API credentials with posting permissions
- Gmail OAuth2 credentials with send email permissions
- Pre-configured Google Sheet for KPI logging

Target Audience
- Support managers overseeing multi-platform ticketing systems. 👩‍💻
- Customer success teams monitoring SLA compliance and CSAT health. 🚀
- SMBs running Zendesk + Freshdesk who need unified dashboards. 🏢
- Remote/global support teams needing automated KPI visibility. 🌐
- Executives requiring weekly performance reports and recommendations. 📈

Step-by-Step Setup Instructions
1. Connect Zendesk, Freshdesk, Google Sheets, Slack, and Gmail credentials in n8n. 🔑
2. Update the Google Sheet ID in the “Log KPIs in Google Sheets” node. 📊
3. Configure the Slack channel ID for alerts (default: zendesk-churn-alerts). 💬
4. Replace {Enter Your Email} in the Gmail node with your recipient email. 📧
5. Adjust thresholds in the KPI calculation node (defaults: 4h response, 24h resolution). ⏱️
6. Test with sample tickets to validate Sheets logging, Slack alerts, and Gmail reports. ✅
7. Deploy on a schedule (default: weekly at 8 PM) for continuous tracking. 🗓️
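To make the KPI calculation step concrete, here is a minimal JavaScript sketch of the kind of math a Code node could run over normalized tickets. The ticket field names, thresholds, and grade cutoffs are illustrative assumptions, not the template's exact values.

```javascript
// Hypothetical normalized ticket shape; field names are illustrative.
const tickets = [
  { status: "solved", priority: "urgent", firstResponseMins: 120, resolutionMins: 900 },
  { status: "open",   priority: "normal", firstResponseMins: 300, resolutionMins: null },
  { status: "solved", priority: "high",   firstResponseMins: 60,  resolutionMins: 2000 },
];

const RESPONSE_SLA_MINS = 4 * 60;    // the template's default 4h first-response threshold
const RESOLUTION_SLA_MINS = 24 * 60; // the template's default 24h resolution threshold

const total = tickets.length;
const resolved = tickets.filter(t => t.status === "solved" || t.status === "closed").length;
const breaches = tickets.filter(t =>
  (t.firstResponseMins ?? 0) > RESPONSE_SLA_MINS ||
  (t.resolutionMins ?? 0) > RESOLUTION_SLA_MINS
).length;
const urgent = tickets.filter(t => t.priority === "urgent").length;

const kpis = {
  resolutionRate: +(100 * resolved / total).toFixed(1),
  slaBreachRate:  +(100 * breaches / total).toFixed(1),
  urgentRate:     +(100 * urgent / total).toFixed(1),
};

// Simple A–D grading from the two headline rates (cutoffs assumed).
kpis.grade =
  kpis.resolutionRate >= 90 && kpis.slaBreachRate <= 5  ? "A" :
  kpis.resolutionRate >= 75 && kpis.slaBreachRate <= 15 ? "B" :
  kpis.resolutionRate >= 50 && kpis.slaBreachRate <= 30 ? "C" : "D";

console.log(kpis); // in an n8n Code node you would `return [{ json: kpis }]` instead
```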
by Br1
Load Jira open issues with comments into Pinecone + RAG Agent (Direct Tool or MCP)

Who’s it for
This workflow is designed for support teams, data engineers, and AI developers who want to centralize Jira issue data in a vector database. It collects open issues and their associated comments, converts them into embeddings, and loads them into Pinecone for semantic search, retrieval-augmented generation (RAG), or AI-powered support bots. It is also published as an MCP tool, so external applications can query the indexed issues directly.

How it works
The workflow automates Jira issue extraction, comment processing, and vector storage in Pinecone. Importantly, the Pinecone index is recreated at every run so that it always reflects the current set of unresolved tickets.
1. Trigger – A schedule trigger runs the workflow at defined times (e.g., 8:00, 11:00, 14:00, and 17:00 on weekdays).
2. Issue extraction with pagination – Calls the Jira REST API to fetch open issues matching a JQL query (unresolved cases created in the last year). Pagination is fully handled: issues are retrieved in batches of 25, and the workflow keeps iterating until all open issues are loaded (see the pagination sketch after the customization list).
3. Data transformation – Extracts key fields (issue ID, key, summary, description, product, customer, classification, status, registration date).
4. Comments integration – Fetches all comments for each issue, filters out empty or irrelevant ones (images, dots, empty markdown), and merges them with the issue data.
5. Text cleaning – Converts HTML descriptions into clean plain text for processing.
6. Embedding generation – Uses the OpenAI Embeddings node to vectorize text.
7. Vector storage with index recreation – Loads embeddings and metadata into Pinecone under the jira namespace and the openissues index. The namespace is cleared at every run to ensure the index contains only unresolved tickets.
8. Document chunking – Splits long issue texts into smaller chunks (512 tokens, 50 overlap) for better embedding quality.
9. MCP publishing – Exposes the Pinecone index as an MCP tool (openissues), enabling external systems to query Jira issues semantically.

How to set up
- Jira – Configure a Jira account and generate an API token. Update the Jira node with credentials and adjust the JQL query if needed.
- OpenAI – Set up an OpenAI API key for embeddings. Configure embedding dimensions (default: 512).
- Pinecone – Create an index (e.g., openissues) with matching dimensions (512). Configure Pinecone API credentials and the namespace (jira). The index is cleared automatically at every run before unresolved issues are reloaded.
- Schedule – Adjust the cron expression in the Schedule Trigger to fit your update frequency.
- Optional MCP – If you want to query Jira issues via MCP, configure the MCP trigger and tool nodes.

Requirements
- Jira account with API access and permission to read issues and comments.
- OpenAI API key with access to the embedding model.
- Pinecone account with an index created (dimensions = 512).
- n8n instance with credentials set up for Jira, OpenAI, and Pinecone.

How to customize the workflow
- **JQL query**: Modify it to control which issues are extracted (e.g., by project, type, or time window).
- **Pagination size**: Adjust the maxResults parameter (default: 25) if you want larger or smaller batches per iteration.
- **Metadata fields**: Add or remove fields in the “Extract Relevant Info” code node.
- **Chunk size**: Adjust chunk size/overlap in the Document Chunker for different embedding strategies.
- **Embedding model**: Switch to a different embedding provider if preferred.
- **Vector store**: Replace Pinecone with another supported vector database if needed.
- **Downstream use**: Extend with notifications, dashboards, or AI assistants that consume the vector data.
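To make the pagination behavior in step 2 concrete, here is a minimal JavaScript sketch of looping over the Jira search API until every open issue is fetched. The base URL, credentials, and JQL string are placeholders; the startAt/maxResults/total mechanics follow the standard Jira REST search endpoint.

```javascript
// A minimal pagination sketch, assuming a Jira Cloud instance and an API token.
const JIRA_BASE = "https://your-domain.atlassian.net"; // assumption: your Jira URL
const AUTH = "Basic " + Buffer.from("user@example.com:API_TOKEN").toString("base64");
const JQL = "resolution = Unresolved AND created >= -365d"; // illustrative query

async function fetchAllOpenIssues() {
  const all = [];
  let startAt = 0;
  const maxResults = 25; // the template's default batch size
  while (true) {
    const res = await fetch(
      `${JIRA_BASE}/rest/api/2/search?jql=${encodeURIComponent(JQL)}` +
      `&startAt=${startAt}&maxResults=${maxResults}`,
      { headers: { Authorization: AUTH, Accept: "application/json" } }
    );
    const page = await res.json();
    all.push(...page.issues);
    startAt += page.issues.length;
    if (page.issues.length === 0 || startAt >= page.total) break; // last page reached
  }
  return all;
}

fetchAllOpenIssues().then(issues => console.log(`Loaded ${issues.length} open issues`));
```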
AI Chatbot for Jira open tickets with SLA insights

Who’s it for
This workflow is designed for commercial teams, customer support, and service managers who need quick, conversational access to unresolved Jira tickets. It lets them check whether a client has open issues, see related details, and understand SLA implications without manually browsing Jira.

How it works
- **Chat interface** – Provides a web-based chat that team members can use to ask natural-language questions such as: “Are there any issues from client ACME?” or “Do we have tickets that have been open for a long time?”
- **AI Agent** – Powered by OpenAI, it interprets questions and queries the Pinecone vector store (openissues index, jira namespace).
- **Memory** – Maintains short-term chat history for more natural conversations.
- **Ticket retrieval** – Uses Pinecone embeddings (dimension = 512) to fetch unresolved tickets enriched with metadata: issue key, description, customer, product, severity color, status, AM contract type, and SLA.
- **SLA integration** – Service levels (Basic, Advanced, Full Service, with optional Fast Support) are provided via the SLA node. The agent explains which SLA applies based on ticket severity, registration date, and contract type.
- **AI response** – Returns a friendly, collaborative summary of all tickets found, including: ticket identifier, description, customer and product, severity level (Red, Yellow, Green, White), ticket status, contract level, and an SLA explanation.

Setup
1. Configure the Jira → Pinecone index (openissues, 512 dimensions), already populated with unresolved tickets.
2. Provide OpenAI API credentials.
3. Ensure the SLA node includes the correct service-level definitions.
4. Adjust chat branding (title, subtitle, CSS) if desired.

Requirements
- Jira account with API access.
- Pinecone account with an index (openissues, dimensions = 512).
- OpenAI API key.
- n8n instance with LangChain and chatTrigger nodes enabled.

How to customize
- Change the SLA node text if your service levels differ.
- Adjust the chat interface design (colors, title, subtitle).
- Expand the metadata in Pinecone (e.g., add project type, priority, or assigned team).
- Add examples to the system message to refine the AI's behavior.
by Robin Geuens
Overview
Get a weekly report on website traffic driven by large language models (LLMs) such as ChatGPT, Perplexity, and Gemini. This workflow helps you track how these tools bring visitors to your site. A weekly snapshot can guide better content and marketing decisions.

How it works
1. The trigger runs every Monday.
2. Pull the number of sessions on your website by source/medium from Google Analytics.
3. The code node filters referral traffic from AI providers like ChatGPT, Perplexity, and Gemini using the following regex (see the filtering sketch below):

`/^.*openai.*|.*copilot.*|.*chatgpt.*|.*gemini.*|.*gpt.*|.*neeva.*|.*writesonic.*|.*nimble.*|.*outrider.*|.*perplexity.*|.*google.bard.*|.*bard.google.*|.*bard.*|.*edgeservices.*|.*astastic.*|.*copy.ai.*|.*bnngpt.*|.*gemini.google.*$/i`

4. Combine the filtered sessions into one list so they can be processed by an LLM.
5. Generate a short report using the filtered data.
6. Email the report to yourself.

Setup
1. Get or connect your OpenAI API key and set up your OpenAI credentials in n8n.
2. Enable Google Analytics and Gmail API access in the Google Cloud Console.
3. Set up your Google Analytics and Gmail credentials in n8n. If you're using the cloud version of n8n, you can log in with your Google account to connect them easily.
4. In the Google Analytics node, add your credentials and select the property for the website you’re working with. Alternatively, you can use your property ID, which can be found in the Google Analytics admin panel under Property > Property Details; the property ID is shown in the top-right corner. Add this to the property field.
5. Under Metrics, select the metric you want to measure. This workflow is configured to use sessions, but you can choose others.
6. Leave the dimension as-is, since the source/medium dimension is needed to filter LLM traffic.
7. (Optional) To expand the list of LLMs being filtered, adjust the regex in the code node by copying one of the existing patterns and modifying it. Example: `|.*example.*|`
8. The LLM node creates a basic report. If you’d like a more detailed version, adjust the system prompt to specify the details or formatting you want.
9. Add your email address to the Gmail node so the report is delivered to your inbox.

Requirements
- OpenAI API key for report generation
- Google Analytics API enabled in Google Cloud Console
- Gmail API enabled in Google Cloud Console

Customizing this workflow
- The regex used to filter LLM referral traffic can be expanded to include specific websites.
- The system prompt in the AI node can be customized to create a more detailed or styled report.
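Here is a small JavaScript sketch of what the code-node filter could look like, using an abbreviated version of the regex above. The row shape (`sourceMedium`, `sessions`) is an illustrative assumption, not the exact output of the Google Analytics node.

```javascript
// A minimal sketch of filtering GA rows by the LLM-referral regex.
const LLM_REFERRAL = /^.*openai.*|.*copilot.*|.*chatgpt.*|.*gemini.*|.*gpt.*|.*perplexity.*|.*bard.*$/i;

const rows = [ // hypothetical GA output, one row per source/medium
  { sourceMedium: "chatgpt.com / referral",   sessions: 42 },
  { sourceMedium: "google / organic",         sessions: 310 },
  { sourceMedium: "perplexity.ai / referral", sessions: 7 },
];

const llmRows = rows.filter(r => LLM_REFERRAL.test(r.sourceMedium));
const totalLlmSessions = llmRows.reduce((sum, r) => sum + r.sessions, 0);

console.log(llmRows, totalLlmSessions); // → the two AI referrers, 49 sessions
```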
by Growth AI
Intelligent chatbot with custom knowledge base

Who's it for
Businesses, developers, and organizations that need a customizable AI chatbot for internal documentation access, customer support, e-commerce assistance, or any use case requiring intelligent conversation over a specific knowledge base.

What it does
This workflow creates a fully customizable AI chatbot that can be deployed on any platform supporting webhook triggers (websites, Slack, Teams, etc.). The chatbot accesses a personalized knowledge base stored in Supabase and can perform advanced actions beyond simple conversation, such as sending emails, scheduling appointments, or updating databases.

How it works
The workflow combines several components:
- Webhook Trigger: Accepts messages from any platform that supports webhooks
- AI Agent: Processes user queries with a customizable personality and instructions
- Vector Database: Searches relevant information from your Supabase knowledge base
- Memory System: Maintains conversation history for context and traceability
- Action Tools: Performs additional tasks like email sending or calendar booking

Technical architecture
- The chat trigger connects directly to the AI Agent.
- The language model, memory, and vector store all connect as tools/components to the AI Agent.
- Embeddings connect specifically to the Supabase Vector Store for similarity search.

Requirements
- Supabase account and project
- AI model API key (any LLM provider of your choice)
- OpenAI API key (for embeddings; this is covered in Cole Medin's tutorial)
- n8n built-in PostgreSQL access (for conversation memory)
- Platform-specific webhook configuration (optional)

How to set up

Step 1: Configure your trigger
- The template uses n8n's default chat trigger.
- For external platforms: Replace it with a webhook trigger and configure your platform's webhook URL (see the request sketch after the deployment notes).
- Supported platforms: Any service with webhook capabilities (websites, Slack, Teams, Discord, etc.)

Step 2: Set up your knowledge base
- For creating and managing your vector database, watch Cole Medin's tutorial on document vectorization.
- The tutorial shows how to build a complete knowledge base on Supabase, covering document processing, embedding creation, and database optimization.
- Important: The video explains the OpenAI embeddings configuration required for vector search.

Step 3: Configure the AI agent
- Define your prompt: Customize the agent's personality and role. Example: "You are the virtual assistant for example.com. Help users by answering their questions about our products and services."
- Select your language model: Choose any AI provider you prefer (OpenAI, Anthropic, Google, etc.).
- Set behavior parameters: Define the response style, tone, and limitations.

Step 4: Connect the Supabase Vector Store
- Add the "Supabase Vector Store" tool to your agent and configure your Supabase project credentials.
- Mode: Set to "retrieve-as-tool" for automatic agent integration.
- Tool Description: Customize the description (default: "Database") to describe your knowledge base.
- Table configuration: Specify the table containing your knowledge base (the example uses "growth_ai_documents"); ensure the table name matches your actual knowledge base structure.
- Multiple tables: You can connect several tables for an organized data structure.
- The agent automatically decides when to search the knowledge base based on user queries.

Step 5: Set up conversation memory (recommended)
- Use "Postgres Chat Memory" with n8n's built-in PostgreSQL credentials.
- Configure the table name: Choose a name for your chat history table (it will be auto-created).
- Context Window Length: Set to 20 messages by default (adjust to your needs).
- Benefits: conversation traceability and analytics, context retention across messages, and unique conversation IDs for user sessions. History is stored in n8n's database, not Supabase.

How to customize the workflow

Basic conversation features
- Response style: Modify prompts to change personality and tone
- Knowledge scope: Update Supabase tables to expand or focus the knowledge base
- Language support: Configure for multiple languages
- Response length: Set limits for concise or detailed answers
- Memory retention: Adjust the context window length for longer or shorter conversation memory

Advanced action capabilities
The chatbot can be extended with additional tools for:
- Email automation: Send support emails when users request assistance
- Calendar integration: Book appointments directly in Google Calendar
- Database updates: Modify Airtable or other databases based on user interactions
- API integrations: Connect to external services and systems
- File handling: Process and analyze uploaded documents

Platform-specific deployments

Website integration
- Replace the chat trigger with a webhook trigger.
- Configure your website's chat widget to send messages to the n8n webhook URL.
- Handle response formatting for your specific chat interface.

Slack/Teams deployment
- Set up the webhook trigger with the Slack/Teams webhook URL.
- Configure response formatting for platform-specific message structures.
- Add platform-specific features (mentions, channels, etc.).

E-commerce integration
- Connect to product databases
- Add order tracking capabilities
- Integrate with payment systems
- Configure support ticket creation
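As a rough illustration of the website integration above, here is a JavaScript sketch of what a chat widget might send to the webhook trigger. The URL, field names (`sessionId`, `chatInput`), and response handling are assumptions; match them to your own webhook node configuration.

```javascript
// A hedged sketch of a widget posting a chat message to the n8n webhook.
const WEBHOOK_URL = "https://your-n8n.example.com/webhook/chatbot"; // assumption

async function sendChatMessage(sessionId, text) {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId, chatInput: text }),
  });
  return res.json(); // assumes the workflow's response node returns the agent's answer
}

sendChatMessage("user-123", "What is your refund policy?")
  .then(reply => console.log(reply));
```

Keeping `sessionId` stable per visitor is what lets the Postgres Chat Memory node group messages into one conversation.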
Results interpretation

Conversation management
- Chat history: All conversations are stored in n8n's PostgreSQL database with unique IDs.
- Context tracking: The agent maintains conversation flow and references previous messages.
- Analytics potential: Historical data is available for analysis and improvement.

Knowledge retrieval
- Semantic search: The vector database returns the most relevant information based on meaning, not just keywords.
- Automatic decision: The agent automatically determines when to search the knowledge base.
- Source tracking: Answers can be traced back to source documents.
- Accuracy improvement: Continuously refine the knowledge base based on user queries.

Use cases

Internal applications
- Developer documentation: Quick access to technical guides and APIs
- HR support: Employee handbook and policy questions
- IT helpdesk: Troubleshooting guides and system information
- Training assistant: Learning materials and procedure guidance

External customer service
- E-commerce support: Product information and order assistance
- Technical support: User manuals and troubleshooting
- Sales assistance: Product recommendations and pricing
- FAQ automation: Common questions and instant responses

Specialized implementations
- Lead qualification: Gather customer information and schedule sales calls
- Appointment booking: Healthcare, consulting, or service appointments
- Order processing: Take orders and update inventory systems
- Multi-language support: Global customer service with language detection

Workflow limitations
- Knowledge base dependency: Quality depends on the source documentation and embedding setup
- Memory storage: Requires an active n8n PostgreSQL connection for conversation history
- Platform restrictions: Some platforms may have webhook limitations
- Response time: Vector search may add a slight delay to responses
- Token limits: Large context windows may increase API costs
- Embedding costs: OpenAI embeddings are required for vector search functionality
by Atta
This workflow automates brand monitoring on X by analyzing both the text and the images in posts. It uses multi-modal AI to score brand relevance, filters out noise, logs important mentions in Airtable, and sends real-time alerts to a Telegram group for high-priority posts.

What it does
Traditional brand monitoring tools often miss the most authentic user content because they only track text. They can't "see" your logo in a photo or your product featured in a video without a direct keyword mention. This workflow acts as an AI agent that overcomes this blind spot. It finds mentions of your brand on X and then uses Google Gemini's multi-modal capabilities to analyze both the text and any attached images. This allows it to understand the full context of a mention, score its relevance to your brand, and take the appropriate action, creating a powerful "visual intelligence" system.

How it works
The workflow runs on a schedule to find, analyze, and triage brand mentions.
1. Get New Tweets: The workflow begins by using an Apify actor to scrape X for recent posts based on a defined set of search terms (e.g., Tesla OR $TSLA). It then filters these results to find unique mentions not already processed.
2. Check for Duplicates: It cross-references each found tweet with an Airtable base to ensure it hasn't been analyzed before, preventing duplicate work.
3. Analyze Post Content: For each new, unique post, the workflow performs two parallel analyses using Google Gemini:
   - Analyze the Photos: The AI examines the images in the post to describe the scene, identify logos or products, and determine the visual mood.
   - Analyze the Text: A separate AI call analyzes the text of the post to understand its context and sentiment.
4. Final Relevance Check: A "Head Strategist" AI node receives the outputs from both the visual and text analyses. It synthesizes this information to assign a final brand relevance score from 1 to 10.
5. Triage and Action: Based on this score, the workflow automatically triages the post (see the triage sketch at the end of this template):
   - High Relevance (Score > 7): The post is logged in the Airtable base, and an instant, detailed alert is sent to a Telegram monitoring group.
   - Medium Relevance (Score 4–7): The post is quietly logged in Airtable for later strategic review.
   - Low Relevance (Score < 4): The post is ignored, effectively filtering out noise.

Setup Instructions
To get this workflow running, you will need to configure your Airtable base and provide credentials for Apify, Google, and Telegram.

Required Credentials
- Apify: An Apify API token to run the X scraper.
- Airtable: Airtable API credentials to connect to your base.
- Google AI: Credentials for the Google AI APIs to use the Gemini models.
- Telegram: A bot token and the chat ID for the channel where you want to receive high-relevance alerts.

Step-by-Step Configuration
1. Set up Your Airtable Base: Before configuring the workflow, create a new table in your Airtable base. For the workflow to function correctly, this table must contain fields to store the analysis results.
   Create fields with the following names: postId, postURL, postText, postDateCreated, authorUsername, authorName, sentiment, relevanceScore, relevanceReasoning, mediaPhotosAnalysis, and status. Once the table is created, have your Base ID and Table ID ready to use in the Config node.
2. Edit the Config Node: Most of the setup is handled in the first Config node. Click on it and edit the following parameters in the "Expressions" tab:
   - searchTerms: Replace the example with the keywords, hashtags, and accounts you want to monitor. The field supports advanced search operators for complex queries; for a full list of available parameters, see the Twitter Advanced Search documentation.
   - airtableBaseId: Paste your Airtable Base ID here.
   - airtableTableId: Paste your Airtable Table ID here.
   - lang: Set the two-letter language code for the posts you want to find (e.g., "en" for English).
   - min_faves: Set the minimum number of "favorites" a post must have to be considered.
   - tweetsToScrape: Define the maximum number of posts the scraper should find in each run.
   - actorId: The specific Apify actor for scraping X. Leave this as is unless you intend to use a different one.
3. Configure the Telegram Node: In the final node, "Send High Relevance Posts to Monitoring Group", manually set the destination for the alerts by entering the Chat ID for your Telegram group or channel.

How to Adapt the Template
This workflow is a powerful framework that can be adapted to various monitoring needs.
- **Change the Source**: Replace the **Apify** node with a different trigger or data source. You could monitor Reddit, specific RSS feeds, or a news API for mentions.
- **Customize the AI Logic**: The core of this workflow is in the AI prompts. Edit the prompts in the **Google Gemini** nodes to change the analysis criteria. For example, you could instruct the AI to check for specific competitor logos, analyze the sentiment of comments, or identify whether the post is from an influential account.
- **Modify the Scoring**: Adjust the logic in the "Switch" node to change the thresholds for what counts as a high-, medium-, or low-relevance post to better fit your brand's needs.
- **Change the Actions**: Replace the **Telegram** node with a different action. Instead of sending an alert, you could create a ticket in a customer support system like Zendesk or Jira, send a summary email to your marketing team, or add the post to a content curation tool or social media management platform.
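Here is a minimal JavaScript sketch mirroring the Switch node's triage thresholds described above. The post shape and action names are illustrative, not the workflow's actual node outputs.

```javascript
// Triage a scored post into actions, using the template's thresholds.
function triage(post) {
  if (post.relevanceScore > 7) {
    return ["logToAirtable", "alertTelegram"]; // high relevance: log + instant alert
  }
  if (post.relevanceScore >= 4) {
    return ["logToAirtable"]; // medium relevance: quiet log for later review
  }
  return []; // low relevance: ignore as noise
}

console.log(triage({ relevanceScore: 9 })); // → ["logToAirtable", "alertTelegram"]
console.log(triage({ relevanceScore: 5 })); // → ["logToAirtable"]
console.log(triage({ relevanceScore: 2 })); // → []
```

Adjusting the two numeric cutoffs here corresponds to the "Modify the Scoring" customization above.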
by Jan Zaiser
Your inbox is overflowing with daily newsletters: Public Affairs, ESG, Legal, Finance, you name it. You want to stay informed, but reading 10 emails every morning? Impossible. What if you could get one single digest summarizing everything that matters, automatically?

❌ No more copy-pasting text into ChatGPT
❌ No more scrolling through endless email threads
✅ Just one smart, structured daily briefing in your inbox

Who Is This For
- Public Affairs Teams: Stay ahead of political and regulatory updates without drowning in emails.
- Executives & Analysts: Get daily summaries of key insights from multiple newsletters.
- Marketing, Legal, or ESG Departments: Repurpose this workflow for your own content sources.

How It Works
1. Gmail collects all newsletters from the day (based on sender or label).
2. HTML noise and formatting are stripped automatically (see the sketch after this template's notes).
3. Long texts are split into chunks and logged in Google Sheets.
4. An AI Agent (Gemini or OpenAI) summarizes all content into one clean daily digest.
5. The workflow structures the summary into an HTML email and sends it to your chosen recipients.

Setup Guide
• You’ll need Gmail and Google Sheets credentials.
• Add your own AI model (e.g., Gemini or OpenAI) with an API key.
• Adjust the prompt inside the “Public Affairs Consultant” node to fit your topic (e.g., Legal, Finance, ESG, Marketing).
• Customize the email subject and design inside the “Structure HTML-Mail” node.
• Optional: Use Memory3 to let the AI learn your preferred tone and style over time.

Cost & Runtime
Runs once per day. Typical cost: ~$0.10–0.30 per run (depending on model and input length). Average runtime: under 2 minutes.
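For the stripping and chunking steps, here is a small JavaScript sketch of the kind of logic a code node could apply. The tag regexes and chunk size are illustrative assumptions, not the template's exact values.

```javascript
// A hedged sketch of HTML stripping and chunking for newsletter bodies.
function stripHtml(html) {
  return html
    .replace(/<style[\s\S]*?<\/style>/gi, " ")  // drop embedded CSS
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop scripts
    .replace(/<[^>]+>/g, " ")                    // remove remaining tags
    .replace(/&nbsp;/g, " ")
    .replace(/\s+/g, " ")                        // collapse whitespace
    .trim();
}

function chunk(text, size = 4000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

const clean = stripHtml("<div><h1>EU AI Act</h1><p>Vote scheduled&nbsp;today.</p></div>");
console.log(chunk(clean, 20)); // → ["EU AI Act Vote sched", "uled today."]
```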
by Jitesh Dugar
👤 Who’s it for
This workflow is designed for employees who need to submit expense claims for business trips. It automates the process of extracting data from receipts/invoices, logging it to a Google Sheet, and notifying the finance team via email.

Ideal users:
- Employees submitting business trip expense claims
- HR or admins reviewing travel-related reimbursements
- Finance teams responsible for processing claims

⚙️ How it works / What it does
1. An employee submits a form with trip information (name, department, purpose, dates) and uploads one or more receipts/invoices (PDF).
2. Uploaded files are saved to Google Drive for record-keeping.
3. Each PDF is passed to a DocClaim Assistant agent, which uses GPT-4o and a structured parser to extract structured invoice data.
4. The data is transformed and formatted into a standard JSON structure (a sketch follows the customization notes below).
5. Two parallel paths are followed:
   - Invoice records are appended to a Google Sheet for centralized tracking.
   - A detailed HTML email summarizing the trip and expenses is generated and sent to the finance department for claim processing.

🛠 How to set up
1. Create a form to capture: Employee Name, Department, Trip Purpose, From Date / To Date, and a Receipt/Invoice File Upload (multiple PDFs).
2. Configure the file upload node to store files in a specific Google Drive folder.
3. Set up the DocClaim Agent using GPT-4o (or any LLM with document analysis capability) and an output parser for standardizing the extracted receipt data (e.g., vendor, total, tax, date).
4. Transform the extracted data into a structured claim record (Code node).
5. Path 1: Save records to a Google Sheet (one row per expense).
6. Path 2: Format the employee + claim data into a dynamic HTML email and use the Send Email node to notify the finance department (e.g., finance@yourcompany.com).

✅ Requirements
- Jotform account with the expense form set up (sign up for free)
- n8n running with access to:
  - Google Drive API (for file uploads)
  - Google Sheets API (for logging expenses)
  - Email node (SMTP or Gmail for sending)
- GPT-4o or an equivalent LLM with document parsing ability
- PDF invoices with clear formatting
- Shared Google Sheet for claim tracking
- Optional: Shared inbox for the finance team

🧩 How to customize the workflow
- **Add approval steps**: route the email to a manager before finance
- **Attach original PDFs**: include uploaded files in the email as attachments
- **Localize for other languages**: adapt form labels, email content, or parser prompts
- **Sync to an ERP or accounting system**: replace the Google Sheet with QuickBooks, Xero, etc.
- **Set limits/validation**: enforce a max claim per trip or required fields before submission
- **Auto-tag expenses**: add categories (e.g., travel, accommodation) for better reporting
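As a rough illustration of the Code-node transform in step 4, here is a JavaScript sketch that builds one claim row per extracted receipt. All field names (form fields and parser output alike) are assumptions for illustration, not the template's actual schema.

```javascript
// A hedged sketch of turning form data + parsed receipts into claim rows.
const form = { employee: "Jane Doe", department: "Sales", purpose: "Client visit",
               fromDate: "2024-05-01", toDate: "2024-05-03" };

const extracted = [ // hypothetical parser output, one object per PDF
  { vendor: "Hotel Astra", date: "2024-05-01", total: 240.0, tax: 38.4, currency: "EUR" },
  { vendor: "City Taxi",   date: "2024-05-02", total: 18.5,  tax: 1.2,  currency: "EUR" },
];

const claimRows = extracted.map(r => ({
  employee: form.employee,
  department: form.department,
  tripPurpose: form.purpose,
  tripDates: `${form.fromDate} – ${form.toDate}`,
  vendor: r.vendor,
  invoiceDate: r.date,
  amount: r.total,
  tax: r.tax,
  currency: r.currency,
  submittedAt: new Date().toISOString(),
}));

console.log(claimRows); // one Google Sheet row per expense
```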
by Tsubasa Shukuwa
How it works
This workflow automatically generates a new haiku every morning using AI, formats it in the 5-7-5 structure, saves it to Google Docs, and sends it to your email inbox.

Workflow steps:
1. Schedule Trigger – Runs daily at 7:00 AM.
2. AI Agent – Asks the AI to output four words (kigo, noun, verb1, verb2) in JSON format.
3. Code in JavaScript – Builds a 5-7-5 haiku from the AI-generated words and sets today’s title (see the sketch below).
4. Edit Fields – Prepares document fields (title and body) for Google Docs.
5. Create a document – Creates a new Google Document for the haiku.
6. Prepare Append – Collects the document ID and haiku text for appending.
7. Update a document – Inserts the haiku into the existing Google Doc.
8. Send a message – Sends the haiku of the day to your Gmail inbox.
9. OpenRouter Chat Model – Connects the OpenRouter model used by the AI Agent.

Setup steps
1. Connect your OpenRouter API key as a credential (used in the AI Agent node).
2. Update your Google Docs folder ID and Gmail account credentials.
3. Change the email recipient address in the “Send a message” node.
4. Adjust the Schedule Trigger time as you like.
5. Run the workflow once to test and verify document creation and email delivery.

Ideal for
- Writers and poets who want daily creative inspiration.
- Individuals seeking a fun morning ritual.
- Educators demonstrating AI text generation with a practical example.

⚙️ Note: Each node includes an English Sticky Note above it for clarity and documentation.
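Here is a hedged JavaScript sketch of what the haiku-building Code node might do with the four AI-generated words. The line templates are illustrative assumptions; counting real syllables (or Japanese morae) would need a dictionary, so this sketch simply slots the words into fixed 5-7-5-shaped templates.

```javascript
// A minimal sketch of assembling a haiku from the AI Agent's four words.
const words = { kigo: "autumn rain", noun: "window", verb1: "drips", verb2: "waits" };

const haiku = [
  `${words.kigo}`,                           // line 1: the season word (kigo)
  `the ${words.noun} ${words.verb1} softly`, // line 2
  `morning ${words.verb2} here`,             // line 3
].join("\n");

// Title the doc and email with today's date, as the Code node step describes.
const title = `Haiku of the Day – ${new Date().toISOString().slice(0, 10)}`;
console.log(title + "\n\n" + haiku);
```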
by Muhammad Ali
Who’s it for
Perfect for marketing agencies that manage multiple Facebook ad accounts and want to automate their weekly reporting. It eliminates manual data collection, analysis, and client updates by delivering a ready-to-share PDF report.

How it works
Every Monday, the workflow:
1. Fetches the previous week’s campaign metrics from the Facebook Graph API (see the request sketch below).
2. Formats and summarizes each campaign’s performance using OpenAI.
3. Merges all summaries into one comprehensive report with insights and next-week suggestions.
4. Converts the report into a polished PDF using any PDF generation API.
5. Sends the final PDF report to the client automatically via Gmail.

How to set up
1. Connect your Facebook, OpenAI, and Gmail accounts in n8n.
2. Add credentials for your preferred PDF generator (e.g., PDFCrowd, Placid, etc.).
3. Open the “Set Node” to customize the recipient email, date range, or report text.

Requirements
- Facebook Graph API access token
- OpenAI API key
- Gmail credentials
- API key for your PDF generation service

How to customize
You can modify the trigger day, personalize the report design, or include additional analytics such as ROAS, CPC, or conversion data for deeper insights.
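To make step 1 concrete, here is a JavaScript sketch of computing last week's Monday-to-Sunday range and building a Graph API insights request. The account ID and token are placeholders, and the field list is one common choice rather than the template's exact selection.

```javascript
// A hedged sketch of the previous-week date range and insights URL.
const today = new Date();
const end = new Date(today);
end.setDate(end.getDate() - ((end.getDay() + 6) % 7) - 1); // most recent Sunday
const start = new Date(end);
start.setDate(start.getDate() - 6);                        // that week's Monday

const fmt = d => d.toISOString().slice(0, 10);
const timeRange = encodeURIComponent(JSON.stringify({ since: fmt(start), until: fmt(end) }));

const url =
  `https://graph.facebook.com/v19.0/act_<AD_ACCOUNT_ID>/insights` +
  `?level=campaign&fields=campaign_name,impressions,clicks,spend,ctr,cpc` +
  `&time_range=${timeRange}&access_token=<ACCESS_TOKEN>`;

console.log(url); // paste into an HTTP Request node, or fetch(url) directly
```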
by Daniel Agrici
This workflow automates business intelligence. Submit one URL, and it scrapes the website, uses AI to perform a comprehensive analysis, and generates a professional report in Google Docs and PDF format. It's perfect for agencies, freelancers, and consultants who need to streamline client research or competitive analysis.

How It Works
The workflow is triggered by a form input, where you provide a single website URL.
1. Scrape: It uses Firecrawl to scrape the sitemap and get the full content from the target website.
2. Analyze: The main workflow calls a Tools Workflow (included below) that uses Google Gemini and Perplexity AI agents to analyze the scraped content and extract key business information.
3. Generate & Deliver: All the extracted data is formatted and used to populate a template in Google Docs. The final report is saved to Google Drive and delivered via Gmail.

What It Generates
The final report is a comprehensive business analysis, including:
- Business Overview: A full company description.
- Target Audience Personas: The demographic and psychographic profiles of ideal customers.
- Brand & UVP: The brand's personality matrix and its Unique Value Proposition (UVP).
- Customer Journey: A map of the typical customer journey from Awareness to Loyalty.

Required Tools
This workflow requires n8n and API keys/credentials for the following services:
- Firecrawl (for scraping)
- Perplexity (for AI analysis)
- Google Gemini (for AI analysis)
- Google Services (for Docs, Drive, and Gmail)

⚠️ Required: Tools Workflow
This workflow will not work without its "Tools" sub-workflow. Create a new, separate workflow in n8n, name it (e.g., "Business Analysis Tools"), and paste the following code into it.

```json
{ "name": "Business Analysis Workflow Tools", "nodes": [ { "parameters": { "workflowInputs": { "values": [ { "name": "function" }, { "name": "keyword" }, { "name": "url" }, { "name": "location_code" }, { "name": "language_code" } ] } }, "type": "n8n-nodes-base.executeWorkflowTrigger", "typeVersion": 1.1, "position": [ -448, 800 ], "id": "e79e0605-f9ac-4166-894c-e5aa9bd75bac", "name": "When Executed by Another Workflow" }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "8d7d3035-3a57-47ee-b1d1-dd7bfcab9114", "leftValue": "serp_search", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "serp_search" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "bb2c23eb-862d-4582-961e-5a8d8338842c", "leftValue": "ai_mode", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "ai_mode" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "4603eee1-3888-4e32-b3b9-4f299dfd6df3", "leftValue": "internal_links", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "internal_links" } ] }, "options": {} }, "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ -208, 784 ], "id":
"72c37890-7054-48d8-a508-47ed981551d6", "name": "Switch" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/organic/live/advanced", "authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword.replace(/[:'\"\\\\/]/g, '') }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"depth\": 10,\n \"group_organic_results\": true,\n \"load_async_ai_overview\": true,\n \"people_also_ask_click_depth\": 1\n }\n]", "options": { "redirect": { "redirect": { "followRedirects": false } } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 512 ], "id": "6203f722-b590-4a25-8953-8753a44eb3cb", "name": "SERP Google", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## SERP Google", "height": 272, "width": 688, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 432 ], "id": "81593217-034f-466d-9055-03ab6b2d7d08", "name": "Sticky Note3" }, { "parameters": { "assignments": { "assignments": [ { "id": "97ef7ee0-bc97-4089-bc37-c0545e28ed9f", "name": "platform", "value": "={{ $json.tasks[0].data.se }}", "type": "string" }, { "id": "9299e6bb-bd36-4691-bc6c-655795a6226e", "name": "type", "value": "={{ $json.tasks[0].data.se_type }}", "type": "string" }, { "id": "2dc26c8e-713c-4a59-a353-9d9259109e74", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "84c9be31-8f1d-4a67-9d13-897910d7ec18", "name": "results", "value": "={{ $json.tasks[0].result }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 512 ], "id": "a916551a-009b-403f-b02e-3951d54d2407", "name": "Prepare SERP output" }, { "parameters": { "content": "# Google Organic Search API\n\nThis API lets you retrieve real-time Google search results with a wide range of parameters and custom settings. \nThe response includes structured data for all available SERP features, along with a direct URL to the search results page. 
\n\n👉 Documentation\n", "height": 272, "width": 496, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 976, 432 ], "id": "87672b01-7477-4b43-9ccc-523ef8d91c64", "name": "Sticky Note17" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/ai_mode/live/advanced", "authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"device\": \"mobile\",\n \"os\": \"android\"\n }\n]", "options": { "redirect": { "redirect": {} } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 800 ], "id": "fb0001c4-d590-45b3-a3d0-cac7174741d3", "name": "AI Mode", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## AI Mode", "height": 272, "width": 512, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 720 ], "id": "2cea3312-31f8-4ff0-b385-5b76b836274c", "name": "Sticky Note11" }, { "parameters": { "assignments": { "assignments": [ { "id": "b822f458-ebf2-4a37-9906-b6a2606e6106", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "10484675-b107-4157-bc7e-b942d8cdb5d2", "name": "result", "value": "={{ $json.tasks[0].result[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 800 ], "id": "6b1e7239-ee2b-4457-8acb-17ce87415729", "name": "Prepare AI Mode Output" }, { "parameters": { "content": "# Google AI Mode API\n\nThis API provides AI-generated search result summaries and insights from Google. \nIt returns detailed explanations, overviews, and related information based on search queries, with parameters to customize the AI overview. 
\n\n👉 Documentation\n", "height": 272, "width": 496, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 800, 720 ], "id": "d761dc57-e35d-4052-a360-71170a155f7b", "name": "Sticky Note18" }, { "parameters": { "content": "## Input", "height": 384, "width": 544, "color": 7 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -528, 672 ], "id": "db90385e-f921-4a9c-89f3-53fc5825b207", "name": "Sticky Note" }, { "parameters": { "assignments": { "assignments": [ { "id": "b865f4a0-b4c3-4dde-bf18-3da933ab21af", "name": "platform", "value": "={{ $json.platform }}", "type": "string" }, { "id": "476e07ca-ccf6-43d4-acb4-4cc905464314", "name": "type", "value": "={{ $json.type }}", "type": "string" }, { "id": "f1a14eb8-9f10-4198-bbc7-17091532b38e", "name": "keyword", "value": "={{ $json.keyword }}", "type": "string" }, { "id": "181791a0-1d88-481c-8d98-a86242bb2135", "name": "results", "value": "={{ $json.results[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 800, 512 ], "id": "83fef061-5e0b-417c-b1f6-d34eb712fac6", "name": "Sort Results" }, { "parameters": { "content": "## Internal Links", "height": 272, "width": 272, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 1024 ], "id": "9246601a-f133-4ca3-aac8-989cb45e6cd2", "name": "Sticky Note7" }, { "parameters": { "method": "POST", "url": "https://api.firecrawl.dev/v2/map", "sendHeaders": true, "headerParameters": { "parameters": [ { "name": "Authorization", "value": "Bearer your-firecrawl-apikey" } ] }, "sendBody": true, "specifyBody": "json", "jsonBody": "={\n \"url\": \"https://{{ $json.url }}\",\n \"limit\": 400,\n \"includeSubdomains\": false,\n \"sitemap\": \"include\"\n }", "options": {} }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 368, 1104 ], "id": "fd6a33ae-6fb3-4331-ab6a-994048659116", "name": "Get Internal Links" }, { "parameters": { "content": "# Firecrawl Map API\n\nThis endpoint maps a website from a single URL and returns the list of discovered URLs (titles and descriptions when available) — extremely fast and useful for selecting which pages to scrape or for quickly enumerating site links. 
(Firecrawl)\n\nIt supports a search parameter to find relevant pages inside a site, location/languages options to emulate country/language (uses proxies when available), and SDK + cURL examples in the docs,\n\n👉 Documentation\n\n[1]: https://docs.firecrawl.dev/features/map \"Map | Firecrawl\"\n", "height": 272, "width": 624, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 560, 1024 ], "id": "08457204-93ff-4586-a76e-03907118be3c", "name": "Sticky Note24" } ], "pinData": { "When Executed by Another Workflow": [ { "json": { "function": "serp_search", "keyword": "villanyszerelő Largo Florida", "url": null, "location_code": 2840, "language_code": "hu" } } ] }, "connections": { "When Executed by Another Workflow": { "main": [ [ { "node": "Switch", "type": "main", "index": 0 } ] ] }, "Switch": { "main": [ [ { "node": "SERP Google", "type": "main", "index": 0 } ], [ { "node": "AI Mode", "type": "main", "index": 0 } ], [ { "node": "Get Internal Links", "type": "main", "index": 0 } ] ] }, "SERP Google": { "main": [ [ { "node": "Prepare SERP output", "type": "main", "index": 0 } ] ] }, "AI Mode": { "main": [ [ { "node": "Prepare AI Mode Output", "type": "main", "index": 0 } ] ] }, "Prepare SERP output": { "main": [ [ { "node": "Sort Results", "type": "main", "index": 0 } ] ] }, "Sort Results": { "main": [ [] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "6fce16d1-aa28-4939-9c2d-930d11c1e17f", "meta": { "instanceId": "1ee7b11b3a4bb285563e32fdddf3fbac26379ada529b942ee7cda230735046a1" }, "id": "VjpOW2V2aNV9HpQJ", "tags": [] }
```
by Tsubasa Shukuwa
How it works
This workflow automatically fetches the latest public grant information from the Ministry of Health, Labour and Welfare (MHLW) RSS feed. It uses AI to summarize and structure each grant post into a clear format, stores the results in Google Sheets, and sends a formatted HTML summary via Gmail.

Workflow summary
1. Schedule Trigger – Runs the flow daily or weekly.
2. RSS Feed Reader – Fetches the latest MHLW news and updates.
3. Text Classifier (AI) – Categorizes each item as “Grant/Subsidy”, “Labor-related”, or “Other”.
4. AI Agent – Extracts structured data such as title, summary, deadline, amount, target, and URL.
5. Google Sheets – Appends or updates the database, using the grant title as the key.
6. Code Node – Builds an HTML report summarizing new entries (see the sketch below).
7. Gmail – Sends a daily digest email to your inbox.

Setup steps
1. Add your OpenRouter API key as a credential (used in the AI Agent).
2. Replace the Google Sheets ID and sheet name with your own.
3. Update the recipient email address in the Gmail node.
4. Adjust the schedule trigger to match your preferred frequency.
5. (Optional) Add more RSS feeds if you want to monitor other sources.

Ideal for
- Consultants or administrators tracking subsidy and grant programs
- Small business owners who want automatic updates
- Anyone who wants a daily AI-summarized government grant digest

⚙️ Note: Detailed explanations and setup hints are included as Sticky Notes above each node inside the workflow.
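As a rough illustration of the Code node that builds the HTML digest, here is a JavaScript sketch. The grant field names mirror the extraction step above but are assumptions, as is the sample row.

```javascript
// A hedged sketch of rendering structured grant rows into an HTML table.
const grants = [ // hypothetical rows from the AI Agent / Google Sheets step
  { title: "Workplace Improvement Subsidy", summary: "Support for SME equipment upgrades.",
    deadline: "2024-09-30", amount: "up to ¥1,000,000", target: "SMEs",
    url: "https://www.mhlw.go.jp/example" },
];

const rows = grants.map(g => `
  <tr>
    <td><a href="${g.url}">${g.title}</a></td>
    <td>${g.summary}</td>
    <td>${g.deadline}</td>
    <td>${g.amount}</td>
    <td>${g.target}</td>
  </tr>`).join("");

const html = `
  <h2>New MHLW Grant Updates</h2>
  <table border="1" cellpadding="6">
    <tr><th>Title</th><th>Summary</th><th>Deadline</th><th>Amount</th><th>Target</th></tr>
    ${rows}
  </table>`;

console.log(html); // feed this into the Gmail node's HTML body
```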
by Cheng Siong Chin
Introduction
Automates travel planning by aggregating flights, hotels, activities, and weather via APIs, then uses AI to generate professional itineraries delivered through Gmail and Slack.

How It Works
A webhook receives requests; the workflow searches the APIs (Skyscanner, Booking.com, Kiwi, Viator, weather), merges the data, the AI builds itineraries and scores the options, and HTML emails are generated and delivered via Gmail/Slack.

Workflow Template
Webhook → Extract → Parallel Searches (Flights/Hotels/Activities/Weather) → Merge → Build Itinerary → AI Processing → Score → Generate HTML → Gmail → Slack → Response

Workflow Steps
1. Trigger & Extract: Receives destination, dates, and preferences, then extracts parameters.
2. Data Gathering: Parallel API calls fetch flights, hotels, activities, and weather, and the responses are merged.
3. AI Processing: Analyzes the data, creates an itinerary, and ranks recommendations (a scoring sketch follows below).
4. Delivery: Generates an HTML email, sends it via Gmail/Slack, and confirms completion.

Setup Instructions
1. API Configuration: Add keys for Skyscanner, Booking.com, Kiwi, Viator, OpenWeatherMap, and OpenRouter.
2. Communication: Connect Gmail OAuth2 and the Slack webhook.
3. Customization: Adjust endpoints, AI prompts, the HTML template, and scoring criteria.

Prerequisites
- API keys: Skyscanner, Booking.com, Kiwi, Viator, OpenWeatherMap, OpenRouter
- Gmail account
- Slack workspace
- n8n instance

Use Cases
- Corporate travel planning
- Vacation itinerary generation
- Group trip coordination

Customization
- Add sources (Airbnb, TripAdvisor)
- Filter by budget preferences
- Add PDF generation
- Customize the Slack format

Benefits
- Saves 3–5 hours per trip
- Real-time pricing aggregation
- AI-powered personalization
- Automated multi-channel delivery
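To illustrate the scoring step, here is a hedged JavaScript sketch of ranking itinerary options. The option shape, weights, and trip length are illustrative assumptions, not the template's actual criteria.

```javascript
// A minimal sketch of scoring and ranking itinerary options.
const options = [
  { flightPrice: 420, hotelPrice: 95,  activityRating: 4.7, rainProbability: 0.1 },
  { flightPrice: 310, hotelPrice: 140, activityRating: 4.2, rainProbability: 0.4 },
];

function score(o) {
  const costScore = 1000 / (o.flightPrice + o.hotelPrice * 3); // assumes a 3-night stay
  const weatherScore = 1 - o.rainProbability;                  // prefer dry forecasts
  // Weighted blend of cost, activity quality, and weather (weights assumed).
  return +(costScore * 0.5 + o.activityRating * 0.3 + weatherScore * 2 * 0.2).toFixed(2);
}

const ranked = options
  .map(o => ({ ...o, score: score(o) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0]); // the highest-scoring itinerary goes first in the email
```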