by Raquel Giugliano
This workflow automates currency rate uploads into SAP Business One via the Service Layer, using flexible input sources such as JSON (API), SQL Server, Google Sheets, or manual values. It leverages logic branching, AI validation, and logging for complete control and traceability.

++⚙️ HOW IT WORKS:++

🔹 1. Receive Data via Webhook
The workflow listens on the endpoint /formulario-datos via HTTP POST. The request body should include origen: one of JSON, SQL, GoogleSheets, or Manual. Depending on the value, the flow branches accordingly.

🔹 2. Authenticate with SAP Business One
A POST request is sent to SAP B1's Login endpoint. A session cookie (B1SESSION) is retrieved and used in all subsequent API calls.

🔹 3. Switch by Origin
The flow branches into four processing paths based on origen:
JSON: The payload is normalized using OpenAI to extract an array of rates. Each rate is sent to SAP individually after parsing.
SQL: The SQL query provided in the payload is executed on a connected Microsoft SQL Server. The results are checked by AI to validate the date format. If valid, rates are sent to SAP.
GoogleSheets: Rates are pulled from a connected spreadsheet. Each entry is sent to SAP in sequence.
Manual: Uses currency, rate, and rateDate directly from the webhook payload and sends the result directly to SAP.

🔹 4. AI-Powered Enhancements (Optional but enabled)
Normalize JSON: Uses OpenAI (LangChain node) to convert any messy structure into a uniform array under the key rate.
Date Formatting: Another OpenAI call ensures RateDate is in yyyyMMdd format (required by SAP), converting from ISO, timestamp, or other formats.

🔹 5. Send to SAP Business One (Service Layer)
All paths send a POST request to /SBOBobService_SetCurrencyRate with a payload such as:
{ "Currency": "USD", "Rate": "0.92", "RateDate": "20250612" }

🔹 6. Log Results
All success/failure results are appended to a Google Sheets log (LOGS_N8N). The log includes method, URL, sent payload, status code, and message.

++🛠 SETUP STEPS:++

1️⃣ Create Required Credentials:
Go to Credentials > + New Credential and configure:
SAP Business One (Service Layer): Type: HTTP Request Auth or Token; Base URL: https://<your-host>:50000/b1s/v1/; provide Username, Password, and CompanyDB via variables or fields.
Google Sheets: OAuth2 connection to a Google account with access.
Microsoft SQL Server: SQL login credentials and host.
OpenAI: API key with access to models like GPT-4o.

2️⃣ Environment Variables (Recommended)
Set these variables in n8n → Settings → Variables:
SAP_URL=https://<host>:50000/b1s/v1/
SAP_USER=your_username
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB

3️⃣ Prepare Google Sheets
Sheet 1: RATE (for loading the data) — Columns: Currency, Rate, RateDate
Sheet 2: LOGS_N8N (for success/failure logs) — Columns: workflow, method, url, json, status_code, message

4️⃣ Activate and Test
Deploy the webhook and grab the URL.

++✅ BONUS++
Built-in AI assistance for input validation and structure
Logs all results for compliance and audit
Flexible integration paths: perfect for hybrid or transitional systems
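For reference, here is a minimal sketch of the two Service Layer calls described above, assuming Node 18+ (global fetch) and the environment variables from the setup section; the sample rate values match the payload example:

```typescript
// Minimal sketch of the SAP B1 Service Layer calls this workflow makes.
// Self-signed certificates, error handling, and session expiry are out of scope.
const BASE = process.env.SAP_URL!; // e.g. https://<host>:50000/b1s/v1/

async function login(): Promise<string> {
  const res = await fetch(`${BASE}Login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      CompanyDB: process.env.SAP_COMPANY_DB,
      UserName: process.env.SAP_USER,
      Password: process.env.SAP_PASSWORD,
    }),
  });
  if (!res.ok) throw new Error(`SAP login failed: ${res.status}`);
  // The B1SESSION cookie authenticates every subsequent call.
  return res.headers.get("set-cookie") ?? "";
}

// RateDate must be yyyyMMdd; the workflow normalizes this with an AI step.
const toRateDate = (d: Date) => d.toISOString().slice(0, 10).replace(/-/g, "");

async function setCurrencyRate(cookie: string): Promise<void> {
  const res = await fetch(`${BASE}SBOBobService_SetCurrencyRate`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Cookie: cookie },
    body: JSON.stringify({
      Currency: "USD",
      Rate: "0.92",
      RateDate: toRateDate(new Date()),
    }),
  });
  console.log(`SetCurrencyRate status: ${res.status}`); // logged to LOGS_N8N in the workflow
}

login().then(setCurrencyRate);
```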
by CustomJS
This n8n template shows how to extract selected pages from a generated PDF with the PDF Toolkit by www.customjs.space. @custom-js/n8n-nodes-pdf-toolkit
Notice
Community nodes can only be installed on self-hosted instances of n8n.
What this workflow does
**Downloads** each PDF using an HTTP Request.
**Extracts** pages from the PDF file as needed.
Requirements
**Self-hosted** n8n instance
**CustomJS API key** for extracting PDF files
**PDF files** from which pages will be extracted
Workflow Steps:
Manual Trigger: Runs with user interaction.
Download PDF File: Pass URLs for the PDF files.
Extract Pages from PDF: Extract selected pages from a generated PDF.
Usage
Get API key from CustomJS: Sign up to the CustomJS platform, navigate to your profile page, and press the "Show" button to get your API key.
Set Credentials for CustomJS API on n8n: Copy and paste your API key generated from CustomJS here.
Design workflow: A Manual Trigger for starting the workflow, HTTP Request nodes for downloading PDF files, and Extract Pages from PDF.
You can replace the logic for triggering and returning results. For example, you can trigger this workflow by calling a webhook and get the result as a response from the webhook. Simply replace the Manual Trigger and Write to Disk nodes.
Perfect for
Taking note of specific pages from PDF files.
Splitting a PDF file into multiple parts.
by Omar Akoudad
This n8n workflow helps eCommerce businesses (especially in the Cash on Delivery space) send real-time order events to the Meta (Facebook) Conversions API, ensuring accurate event tracking and better ad attribution.
Features
**Webhook Listener**: Accepts incoming order data (name, phone, IP, user-agent, etc.) via HTTP POST/GET.
**Data Normalization**: Cleans and formats first_name, last_name, phone, and event_time according to Facebook's strict specs.
**Data Hashing**: Securely hashes sensitive user data (SHA256), as required by Meta.
**Full Custom Data Support**: Pass order value, currency, and more.
Ideal For:
Shopify, WooCommerce, custom stores (Laravel, Node, etc.)
Businesses using Meta Ads that need high-quality server-side tracking
Teams without access to full dev resources, but using n8n for automation
How It Works:
Receive Order from your store via Webhook or API.
Format & Normalize fields to match Facebook's expected structure.
Hash Sensitive Fields using SHA256 (name, phone, email).
Send to Facebook via the Conversions API endpoint.
Requirements:
A Meta Business Manager account with Conversions API access
Your Access Token and Pixel ID set up in n8n credentials
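For reference, a minimal sketch of the normalize, hash, and send steps, assuming Node 18+; the Graph API version, Pixel ID, access token, and all field values are placeholders:

```typescript
import { createHash } from "node:crypto";

// Normalization per Meta's spec (trim + lowercase) followed by SHA256 hashing.
const sha256 = (value: string): string =>
  createHash("sha256").update(value.trim().toLowerCase()).digest("hex");

const event = {
  data: [
    {
      event_name: "Purchase",
      event_time: Math.floor(Date.now() / 1000), // Unix seconds, per Meta's spec
      action_source: "website",
      user_data: {
        fn: [sha256("Jane")],          // first_name, normalized then hashed
        ln: [sha256("Doe")],           // last_name, normalized then hashed
        ph: [sha256("212600000000")],  // phone as digits only, then hashed
        client_ip_address: "203.0.113.7",  // IP and user-agent stay unhashed
        client_user_agent: "Mozilla/5.0",
      },
      custom_data: { currency: "USD", value: 49.9 },
    },
  ],
};

const res = await fetch(
  `https://graph.facebook.com/v19.0/${process.env.PIXEL_ID}/events` +
    `?access_token=${process.env.ACCESS_TOKEN}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  },
);
console.log(res.status, await res.json()); // events_received count on success
```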
by Hassan
Overview
Transform your customer support operations with an intelligent WhatsApp automation system that handles text, voice, and image messages across multiple languages. This comprehensive solution uses advanced AI to provide instant, accurate responses by accessing your company's knowledge base, while maintaining conversation context and supporting both English and Roman Urdu communications. Perfect for businesses serving diverse markets that need 24/7 customer support without the overhead costs.
Key Benefits
🤖 Multi-Modal AI Processing: Handle text messages, voice notes, and images seamlessly. The system automatically transcribes audio, analyzes images, and processes all content types through a single intelligent pipeline.
🌍 True Multilingual Support: Native support for English and Roman Urdu with intelligent language detection and matching responses. The AI automatically detects the customer's language and responds accordingly, making it perfect for South Asian markets.
📚 Dynamic Knowledge Base Integration: Real-time access to your Google Docs knowledge base ensures customers always receive current, accurate information about your products and services. No more outdated responses or manual updates needed.
💭 Conversation Memory & Context: Advanced memory system maintains conversation history for natural, contextual interactions. Customers can have flowing conversations without repeating information, improving satisfaction rates.
⚡ Instant Response Times: Automated responses within seconds of receiving messages, dramatically improving customer satisfaction and reducing response time from hours to seconds.
🔄 Zero Manual Intervention: Fully automated system that works 24/7 without human oversight. Handles inquiries, provides information, and maintains professional communication standards automatically.
📊 Scalable Architecture: Built on the enterprise-grade n8n platform with robust error handling and retry mechanisms. Can handle thousands of concurrent conversations without performance degradation.
💰 Cost-Effective Operations: Replace expensive customer support teams with intelligent automation. Reduce operational costs by up to 80% while improving response quality and availability.
How It Works
Phase 1: Message Reception & Classification
The system begins with a WhatsApp webhook trigger that captures all incoming messages in real-time. An intelligent switch node immediately analyzes each message to determine its content type - whether it's a text message, voice note, or image with an optional caption. This classification is crucial, as each media type requires a different processing approach to extract meaningful information.
Phase 2: Advanced Media Processing
For voice messages, the system retrieves the audio file URL, downloads the content using authenticated requests, and processes it through OpenAI's Whisper transcription service to convert speech to text. Image messages follow a similar pattern - the system downloads the image and uses GPT-4 Vision to analyze and describe the visual content in detail. Text messages are processed directly, while all media types are ultimately converted to a standardized text format for consistent AI processing. (A minimal sketch of the voice-note branch appears at the end of this template description.)
Phase 3: Intelligent Response Generation
The processed content is fed into a sophisticated AI agent powered by Claude Sonnet 4 via OpenRouter. This agent operates with a comprehensive system prompt that defines its role as a professional customer support representative, with specific instructions for tone, language handling, and response protocols.
The agent has access to a Google Docs tool that allows it to retrieve real-time information from your company's knowledge base.
Phase 4: Contextual Memory Management
A memory buffer system maintains conversation history for each unique phone number, allowing for natural, flowing conversations where the AI remembers previous interactions and can reference earlier parts of the conversation. This creates a more human-like experience and reduces customer frustration from having to repeat information.
Phase 5: Response Delivery
Generated responses are automatically sent back to the customer's WhatsApp number using the WhatsApp Business API, completing the conversation loop. The system maintains proper formatting and ensures message delivery confirmation.
Required Setup & Database Requirements
API Credentials Needed:
**WhatsApp Business API**: For receiving and sending messages
**OpenAI API**: For audio transcription and image analysis
**OpenRouter API**: For Claude Sonnet 4 language model access
**Google Docs API**: For knowledge base integration
**n8n Cloud/Self-hosted instance**: For workflow execution
Knowledge Base Setup:
**Google Docs Document**: Structured company information document
**Document Permissions**: Shared with the Google service account
**Content Organization**: FAQ format with clear sections for products, services, pricing, and contact information
WhatsApp Configuration:
**Business Phone Number**: Verified WhatsApp Business account
**Webhook URL**: Configured to point to the n8n webhook endpoint
**Message Templates**: Pre-approved for business communications
Business Use Cases
E-commerce Support: Handle product inquiries, order status checks, and return policies across multiple languages, perfect for businesses serving diverse customer bases.
Service Business Automation: Appointment scheduling, service explanations, and pricing inquiries for consultancies, agencies, and professional services.
Restaurant & Food Industry: Menu inquiries, order modifications, and delivery status updates with support for local language preferences.
Real Estate: Property inquiries, showing appointments, and market information, with the ability to process property images sent by clients.
Healthcare & Wellness: Appointment booking, service explanations, and general inquiries while maintaining professional communication standards.
Education & Training: Course information, enrollment processes, and student support with multilingual capabilities for international programs.
Revenue Potential
Direct Cost Savings: $3,000-8,000/month in customer support staff salaries
Increased Conversion: 25-40% improvement in lead response rates due to instant replies
Extended Availability: 24/7 operation captures international and after-hours inquiries worth $2,000-5,000/month in additional revenue
Scalability: Handle 10x more inquiries without proportional cost increases
Customer Satisfaction: Improved response times lead to a 15-30% increase in customer retention
ROI Timeline: Typical payback period of 2-3 months with ongoing monthly savings of $4,000-12,000 depending on business size.
Difficulty Level & Build Time
Complexity: Intermediate to Advanced (7/10)
Estimated Build Time: 4-6 hours for experienced n8n users
Setup Time: 2-3 hours for API configurations and testing
Maintenance: Minimal - primarily updating knowledge base content
Skills Required:
n8n workflow building experience
API credential management
WhatsApp Business API familiarity
Basic understanding of AI language models
Detailed Setup Steps
1. API Account Setup (60 minutes): Create and configure accounts for WhatsApp Business, OpenAI, OpenRouter, and Google Cloud Platform. Obtain all necessary API keys and configure proper permissions for each service.
2. n8n Credential Configuration (30 minutes): Add all API credentials to your n8n instance using the credential manager. Test each connection to ensure proper authentication and access permissions.
3. WhatsApp Business Integration (45 minutes): Configure your WhatsApp Business account with webhook URLs pointing to your n8n instance. Set up phone number verification and message template approvals.
4. Knowledge Base Creation (90 minutes): Structure your Google Docs knowledge base with comprehensive information about your business. Include FAQs, product details, pricing, and contact information in an organized format.
5. Workflow Import & Testing (60 minutes): Import the n8n workflow, configure all node parameters with your specific credentials and settings, then conduct thorough testing with different message types and languages.
6. Production Deployment (30 minutes): Deploy the workflow to production, monitor initial performance, and fine-tune system prompts based on real customer interactions.
Advanced Customization Options
Custom Language Support: Extend beyond English and Roman Urdu by modifying the system prompt and adding language detection for additional languages like Arabic, Hindi, or French.
Integration Expansions: Connect additional data sources like CRM systems, databases, or e-commerce platforms to provide more comprehensive customer information.
Advanced Analytics: Add logging nodes to track conversation metrics, response times, and customer satisfaction scores for continuous improvement.
Multi-Channel Support: Extend the system to handle Telegram, Facebook Messenger, or other messaging platforms using similar processing logic.
Escalation Protocols: Implement human handoff triggers for complex queries that require personal attention, with automatic notification systems for support teams.
Custom AI Models: Swap Claude Sonnet 4 for other models like GPT-4, Gemini, or open-source alternatives based on your specific needs and budget requirements.
This automation system represents the future of customer support - intelligent, scalable, and incredibly cost-effective while maintaining the personal touch that customers expect from quality businesses.
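As referenced in Phase 2 above, here is a hedged sketch of the voice-note branch, assuming Node 18+ and the official openai npm package; the media URL and the WhatsApp token variable are placeholders for values your webhook and credentials provide:

```typescript
import OpenAI, { toFile } from "openai";

// Phase 2 for voice notes: authenticated media download, then Whisper transcription.
const MEDIA_URL = "https://example.com/whatsapp-media/voice-note.ogg"; // placeholder
const WA_TOKEN = process.env.WHATSAPP_TOKEN!;

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const media = await fetch(MEDIA_URL, {
  headers: { Authorization: `Bearer ${WA_TOKEN}` },
});
const audio = await toFile(Buffer.from(await media.arrayBuffer()), "note.ogg");

const transcript = await openai.audio.transcriptions.create({
  model: "whisper-1",
  file: audio,
});
console.log(transcript.text); // standardized text handed to the AI agent
```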
by AppStoneLab Technologies LLP
Automated AI Research Assistant: From Query to Polished Report with Jina & Gemini
Turn a single research question into a comprehensive, multi-source report with proper citations. This workflow automates the entire research process by leveraging the web-crawling power of Jina AI and the advanced reasoning capabilities of Google's Gemini models. Simply input your query, and this AI-powered assembly line will search the web, scrape relevant sources, summarize the content, draft a structured research paper, and finally, evaluate and polish the report for accuracy and formatting.
✨ Key Features
🔎 **Dynamic Web Search**: Kicks off by searching the web with Jina AI based on your initial query.
📚 **Multi-Source Content Scraping**: Automatically reads and extracts content from the top 10 search results.
🧠 **AI-Powered Summarization**: Uses a Gemini agent to intelligently summarize each webpage, retaining the core information.
✍️ **Automated Report Generation**: A specialized "Generator Agent" synthesizes the summarized data into a structured research paper, complete with an executive summary, introduction, discussion, and conclusion.
✅ **Citation & Quality Verification**: A final "Evaluator Agent" meticulously checks the generated report for citation accuracy, logical flow, and markdown formatting, delivering a polished final document.
📈 **Rate-Limit Ready**: Includes a configurable Wait node to ensure stable execution when dealing with multiple API calls.
📝 What This Workflow Does
This workflow is designed to be your personal research assistant. It addresses the time-consuming process of gathering, reading, and synthesizing information from multiple online sources. Instead of spending hours manually searching, reading, and citing, you can delegate the entire task to this workflow and receive a well-structured and cited report as the final output. It's perfect for students, researchers, content creators, and analysts who need to quickly compile information on any given topic.
⚙️ How It Works (Step-by-Step)
Initiate with a Query: The workflow starts when you send your research question or topic to the Chat Trigger node.
Search the Web: The user's query is passed to the Jina AI node, which performs a web search and returns the top 10 most relevant URLs.
Scrape, Summarize, Repeat: The workflow then loops through each URL:
Read Content: The Jina AI node scrapes the full text content from the URL.
Summarize: A Summarizer Agent powered by Google Gemini reads the scraped content and the original user query, then generates a concise summary.
Wait: A one-second pause helps to avoid hitting API rate limits before processing the next URL.
Aggregate the Knowledge: Once the loop is complete, a Code node gathers all 10 individual summaries into a single, neatly structured list.
Draft the Research Report: This aggregated data is fed to the Generator Agent. Following a detailed prompt, this Gemini-powered agent writes a full research report, structuring it with headings and adding inline citations for every piece of information it uses.
Evaluate and Finalize: The generated draft is passed to the final Evaluator Chain. This agent acts as a quality control supervisor. It verifies that all claims are correctly cited, refines the content for clarity and academic tone, and polishes the markdown formatting to produce the final, ready-to-use report.
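For orientation, here is a hedged sketch of the search-and-scrape loop, assuming Jina's public s.jina.ai (search) and r.jina.ai (reader) endpoints; the JSON response shape and field names are assumptions, and the n8n Jina AI node handles these calls for you:

```typescript
// Search the web for the query, then read each result as clean text,
// pausing one second between reads to mirror the Wait node.
const JINA_KEY = process.env.JINA_API_KEY!;
const query = "impact of sleep on learning"; // example research topic

const search = await fetch(`https://s.jina.ai/?q=${encodeURIComponent(query)}`, {
  headers: { Authorization: `Bearer ${JINA_KEY}`, Accept: "application/json" },
}).then((r) => r.json());

for (const hit of search.data.slice(0, 10)) {
  // Scrape the page content for the Summarizer Agent.
  const page = await fetch(`https://r.jina.ai/${hit.url}`, {
    headers: { Authorization: `Bearer ${JINA_KEY}` },
  }).then((r) => r.text());

  // ...summarize `page` together with the original query via Gemini here...
  await new Promise((res) => setTimeout(res, 1000)); // avoids rate limits
}
```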
🚀 How to Use This Workflow
Credentials: Click on Use template, then configure your credentials for the following nodes:
Jina AI: You will need a Jina AI API key for the Search web and Read URL content nodes. Get your key from here: JinaAI API Key
Google Gemini: You will need a Google Gemini API key for the Summarizer Model, Generator Model, and Evaluator Model nodes. Get your key from here: Gemini API Key
Activate Workflow: Make sure the workflow is active in your n8n instance.
Start Research: Send a chat message with your research topic to the webhook URL provided in the When chat message received node.
Get Your Report: Check the output of the final node, Evaluator Chain, to find your completed and polished research report.
Nodes Used
Chat Trigger
Jina AI
Code (Python)
Split in Batches (Looping)
Wait
AI Agent
Basic LLM Chain
Google Gemini Chat Model
by David Harvey
iMessage AI-Powered Smart Calorie Tracker
> 📌 What it looks like in use: This image shows a visual of the workflow in action. Use it for reference when replicating or customizing the template.
This n8n template transforms a user-submitted food photo into a detailed, friendly, AI-generated nutritional report — sent back seamlessly as a chat message. It combines OpenAI's visual reasoning, Postgres-based memory, and real-time messaging with Blooio to create a hands-free calorie and nutrition tracker.
🧠 Use Cases
Auto-analyze meals based on user-uploaded images.
Daily/weekly/monthly diet summaries with no manual input.
Virtual food journaling integrated into messaging apps.
Nutrition companion for healthcare, fitness, and wellness apps.
📌 Good to Know
⚠️ This uses GPT-4 with image capabilities, which may incur higher usage costs depending on your OpenAI pricing tier. Review OpenAI's pricing.
The model uses visual reasoning and estimation to determine nutritional info — results are estimates and should not replace medical advice.
Blooio is used for sending/receiving messages. You will need a valid API key and a project set up with webhook delivery.
A Postgres database is required for long-term memory (optional but recommended). You can use any memory node with it.
⚙️ How It Works
Webhook Trigger: The workflow begins when a message is received via Blooio. This webhook listens for user-submitted content, including any image attachments.
Image Validation and Extraction: A conditional check verifies the presence of attachments. If images are found, their URLs are extracted using a Code node and prepared for processing.
Image Analysis via AI Agent: Images are passed to an OpenAI-based agent using a custom system prompt that identifies the meal, estimates portion sizes, calculates calories, macros, fiber, sugar, and sodium, scores the meal with a health and confidence rating, and responds in a chatty, human-like summary format.
Memory Integration: A Postgres memory node stores user interactions for recall and contextual continuity, allowing day/week/month reports to be generated based on cumulative messages.
Response Aggregation & Summary: Messages are aggregated and summarized by a second AI agent into a single concise message to be sent back to the user via Blooio.
Message Dispatch: The final message is posted back to the originating conversation using the Blooio Send Message API.
🚀 How to Use
The included webhook can be triggered manually or programmatically by linking Blooio to a frontend chat UI. You can test the flow using a manual POST request containing mock Blooio payloads. Want to use a different messaging app? Replace the Blooio nodes with your preferred messaging API (e.g., Twilio, Slack, Telegram).
✅ Requirements
OpenAI API access with GPT-4 Vision or equivalent multimodal support.
Blooio account with access to incoming and outgoing message APIs.
Optional: Postgres DB (e.g., via Neon) for tracking message context over time.
🛠️ Customising This Workflow
**Prompt Tuning**: Tailor the system prompt in the AI Agent node to fit specific diets (e.g., keto, diabetic), age groups, or regionally-specific foods.
**Analytics Dashboards**: Hook up your Postgres memory to a data visualization tool for nutritional trends over time.
**Multilingual Support**: Adjust the response prompt to translate messages into other languages or regional dialects.
**Image Preprocessing**: Insert a preprocessing node before sending images to the model to resize, crop, or enhance clarity for better results.
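For illustration, a hedged sketch of the image-analysis call using the OpenAI Node SDK; the model name, prompt wording, and image URL are placeholders to adapt to your setup:

```typescript
import OpenAI from "openai";

// A vision-capable model receives the meal photo URL (extracted from the
// Blooio webhook in the workflow) plus a nutrition-estimation prompt.
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "gpt-4o", // any vision-capable OpenAI model
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text:
            "Identify this meal, estimate portion size, calories, macros, " +
            "fiber, sugar, and sodium, and add health and confidence ratings.",
        },
        { type: "image_url", image_url: { url: "https://example.com/meal.jpg" } },
      ],
    },
  ],
});
console.log(completion.choices[0].message.content); // chatty nutrition summary
```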
by Khairul Muhtadin
Automate Outreach Prospect automates finding, enriching, and messaging potential partners (like restaurants, malls, and bars) using Apify Google Maps scraping, Perplexity enrichment, OpenAI LLMs, Google Sheets, Pinecone knowledge, and WhatsApp sending via GOWA. It turns a manual, slow outreach funnel into a repeatable pipeline so your team spends time closing deals instead of copy-pasting contact details.
⚠️ Important Disclaimer
This workflow uses community nodes for WhatsApp functionality: GOWA WhatsApp HTTP API
💡 Why Use Automate Outreach Prospect?
**Faster prospecting:** Scrape up to 150 leads per search (jumlah leads = 150) and queue them for outreach in minutes, cutting manual research time from days to hours.
**Fixes the busywork:** Automatically enrich missing contact data and only send messages to records with phone numbers, so you stop chasing dead leads.
**Measurable lift:** Enrich in batches (enrichment batch size = 20), improving outbound readiness and increasing contactable leads per campaign by dozens each run.
**Better conversions with context:** Use a searchable company knowledge base (Pinecone + LlamaIndex) so replies are handled with context — less robotic, more relevant. Yes, your bot can sound like a helpful human (minus the coffee breath).
⚡ Perfect For
**Sales Ops:** Teams that need to scale partner outreach without hiring a mini-empire of SDRs.
**Growth Marketers:** People who want repeatable local outreach campaigns (mall, restaurant, bar categories).
**Small Biz Owners:** Quick way to build partnership lists and automate first outreach without becoming a spreadsheet hermit.
🔧 How It Works
⏱ Trigger: Manual scrape start or scheduled jobs: Daily Outbound Schedule, Schedule Outbound message, or Knowledge Base Updated Trigger.
📎 Process: The Apify Google Maps Scraper gathers business listings (location, phone, socials). Results are fetched and saved to Google Sheets (Raw Data). Unenriched records are split and enriched via Perplexity, then saved back. (A sketch of the Apify call appears below.)
🤖 Smart Logic: An OpenAI LLM creates personalized initial messages, and a Reply Handler AI Agent uses Pinecone/knowledge embeddings to interpret replies and decide next actions (save PICs, request meeting, send proposal).
💌 Output: Outbound messages are sent over WhatsApp using GOWA nodes (typing indicators, simulated typing delays, read receipts) and replies are handled & stored; qualified PIC contacts are appended to a Leads sheet.
🗂 Storage: Google Sheets is the central datastore (Raw Data, Leads Collected). The knowledge base lives in Google Drive and Pinecone (index n8n-recharge, namespace CompanyKnowledgeBased). Conversation memory is stored in Postgres/Neon.
🔐 Quick Setup
Import Workflow: Import the JSON file to your n8n instance.
Add Credentials: Google Sheets OAuth2, Google Drive OAuth2, Apify API token, OpenAI API, Perplexity API, Pinecone API, Cohere API, LlamaIndex Cloud key, GOWA (WhatsApp) credentials, WAHA webhook (optional), PostgreSQL/Neon.
Customize Parameters: Scraping parameters (Location Category, lokasi, jumlah leads, minimum Stars, Skip Closed Place), message templates/time greetings, enrichment batch size.
Update Configuration: Google Drive doc ID, Google Sheets ID, Apify actor config, Pinecone index name, Pinecone namespaces, LlamaIndex endpoints (if used).
Test Setup: Run a manual scrape with a real location and send a single outbound message to verify WhatsApp delivery and reply handling.
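As a reference for the 📎 Process step, here is a hedged sketch of running an Apify Google Maps actor synchronously and collecting its dataset items; the actor ID and input field names are assumptions to match against your Apify actor configuration:

```typescript
// Run the actor and return its dataset items in one call.
const APIFY_TOKEN = process.env.APIFY_TOKEN!;
const ACTOR = "compass~crawler-google-places"; // assumed Google Maps scraper actor

const leads = await fetch(
  `https://api.apify.com/v2/acts/${ACTOR}/run-sync-get-dataset-items?token=${APIFY_TOKEN}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      searchStringsArray: ["restaurant"],  // Location Category
      locationQuery: "Jakarta",            // lokasi
      maxCrawledPlacesPerSearch: 150,      // jumlah leads = 150
    }),
  },
).then((r) => r.json());

console.log(`${leads.length} leads ready for the Raw Data sheet`);
```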
🧩 Required Services
Active n8n instance
Google Sheets & Google Drive accounts (OAuth2)
Apify account & actor access (Google Maps Scraper)
OpenAI API key (for LLMs & embeddings)
Perplexity API key (enrichment)
Pinecone account (vector index n8n-recharge)
Cohere API (reranker, optional)
LlamaIndex Cloud (optional document parsing)
GOWA / WA WhatsApp setup (or WAHA alternative)
PostgreSQL/Neon for conversation memory
🧠 Workflow Nodes
Triggers & Scheduling: Incoming message, Manual Trigger - Start Scraping, Daily Outbound Schedule, Schedule Outbound message, Knowledge Base Updated Trigger
Data Collection & Processing: Configure Scraping Parameters, Execute Google Maps Scraper, Fetch Scraped Business Data, Save Raw Business Leads, Get Unenriched Records, Limit Enrichment Batch Size, Split Records for Processing
Data Enrichment: Business Data Enrichment, Parse Enrichment Response, Save Enriched Business Data
Outbound Messaging: Get Outbound Candidates, Limit Outbound Batch Size, Validate Phone Number Exists, Prepare Outbound Session Data, Outbound Message Generator, Outbound Message LLM, Format Outbound Message Data
WhatsApp Communication: Show Typing Indicator - Outbound, Simulate Typing Delay - Outbound, Send Outbound WhatsApp Message, Mark as Contacted, Extract WhatsApp Session Data
Reply Handling: Reply Handler AI Agent, Reply Handler LLM, Format Reply Message Data, Show Typing Indicator - Reply, Simulate Typing Delay - Reply, Send WhatsApp Reply, Save Lead Contact Information
Knowledge Management: Store Knowledge Embeddings, Query Knowledge Base, Reply Conversation Memory, Outbound Conversation Memory
Made by: Khaisa Studio
Need custom work? Contact Me
by Friedemann Schuetz
Update 19-04-2025
Changed from OpenAI to the Claude 3.7 Sonnet module
Added the Think Tool
The update enables significantly better results. This is particularly noticeable with longer meetings!
What this workflow does
This workflow retrieves the Zoom meeting data from the last 24 hours. The transcript of the last meeting is then retrieved and processed, a summary is created using AI and sent to all participants by email. AI is then used to create tasks and follow-up appointments based on the content of the meeting.
Important: You need a Zoom Workspace Pro account and must have activated Cloud Recording/Transcripts!
This workflow has the following sequence:
Manual trigger (can be replaced by a scheduled trigger or a webhook)
Retrieval of Zoom meeting data
Filter the events of the last 24 hours
Retrieval of transcripts and extraction of the text
Creation of a meeting summary, formatted to HTML and sent per mail
Creation of tasks and a follow-up call (if discussed in the meeting) in ClickUp/Outlook (can be replaced by Gmail, Airtable, and so forth) via sub-workflow
Requirements:
Zoom Workspace (via API and HTTP Request): Documentation
Microsoft Outlook: Documentation
ClickUp: Documentation
AI API access (e.g. via OpenAI, Anthropic, Google or Ollama)
SMTP access data (for sending the mail)
You must set up the individual sub-workflows as separate workflows. Then set the "Execute workflow trigger" there and select the corresponding sub-workflow in the AI Agent Tools. You can select the number of domains yourself. If the data queries are not required, simply delete the corresponding tool (e.g. "Analytics_Domain_5").
Feel free to contact me via LinkedIn, if you have any questions!
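For reference, a hedged sketch of the two Zoom API calls this sequence relies on, assuming a valid OAuth access token; the response field names follow Zoom's recordings API and should be verified against your account:

```typescript
// List cloud recordings from the last 24 hours, then download the most
// recent meeting's transcript file (Cloud Recording/Transcripts must be on).
const ZOOM_TOKEN = process.env.ZOOM_ACCESS_TOKEN!;

const from = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString().slice(0, 10);
const { meetings } = await fetch(
  `https://api.zoom.us/v2/users/me/recordings?from=${from}`,
  { headers: { Authorization: `Bearer ${ZOOM_TOKEN}` } },
).then((r) => r.json());

const transcript = meetings?.[0]?.recording_files?.find(
  (f: { file_type: string }) => f.file_type === "TRANSCRIPT",
);
if (transcript) {
  const vtt = await fetch(transcript.download_url, {
    headers: { Authorization: `Bearer ${ZOOM_TOKEN}` },
  }).then((r) => r.text());
  // ...extract the plain text from the VTT and pass it to the AI summarizer...
}
```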
by Lorena
This workflow ensures gender inclusive language in Mattermost channels. If someone addresses the group with “guys” or “gals”, a bot promptly replies with: "May I suggest “folks” or “y'all”? We use gender inclusive language here. 😄".
**Webhook node**: triggers the workflow when a new message is posted in Mattermost.
**IF node**: verifies if the message includes the words "guys" or "gals". If false, it does not take any action. If true, it triggers the Mattermost node.
**Mattermost node**: posts the language warning message in the Mattermost channel.
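The IF-node check boils down to a word-boundary match; here is a minimal sketch of that logic in TypeScript:

```typescript
// Match "guys" or "gals" as whole words, case-insensitively,
// and build the bot's reply when a match is found.
const INCLUSIVE_NUDGE =
  "May I suggest “folks” or “y'all”? We use gender inclusive language here. 😄";

function checkMessage(text: string): string | null {
  return /\b(guys|gals)\b/i.test(text) ? INCLUSIVE_NUDGE : null;
}

console.log(checkMessage("Hey guys, standup in 5!"));  // the nudge
console.log(checkMessage("Hey folks, standup in 5!")); // null, no action
```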
by Ranjan Dailata
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
Description
This workflow automates the creation of structured eBooks by generating chapters, table of contents, and content using Google Gemini Flash 2.0.
Overview
This n8n workflow allows users to input a topic or outline, which is then processed by Google Gemini Flash 2.0 to generate chapter titles, a structured table of contents, and detailed section-wise content. The final output is formatted and exported into a Google Document, ready for review and further publishing.
Who This Workflow Is For
**Authors & Writers**: Save time by auto-generating chapter ideas, summaries, and full-length content based on a topic or outline—great for fiction and nonfiction alike.
**Content Marketers**: Rapidly create downloadable eBooks, whitepapers, or lead magnets for campaigns without relying on long production cycles.
**Educators & Course Creators**: Convert your syllabus, course modules, or learning outcomes into structured, well-formatted educational eBooks.
**Agencies & Freelancers**: Offer AI-powered eBook creation as a value-added service to clients in need of fast, professional content.
**Entrepreneurs & Coaches**: Turn your knowledge, frameworks, or training material into publish-ready books to promote your brand or monetize content.
**Technical Writers & Documentarians**: Generate structured documentation or guides from outlines, simplifying the technical writing process with the help of AI.
Tools Used
**n8n**: Orchestrates input handling, AI processing, formatting, and export.
**Google Gemini Flash 2.0**: Generates high-quality, structured content, including chapters, summaries, and body text.
**Google Docs**: Used to compile and format the full eBook in a collaborative document.
**Google Drive / Email**: Optional nodes for storing or delivering the final output.
How to Install
**Import the Workflow**: Download and import the .json file into your n8n instance.
**Configure Gemini Flash 2.0**: Add your API key and set the desired creativity, length, and tone options.
**Provide Input**: Use a webhook or manual trigger to define the eBook topic or structure.
**Customize Format**: Modify prompts or Gemini instructions to match your eBook format, voice, or domain (e.g., fiction, business, technical).
**Export to Google Docs**: Authenticate and configure the Google Docs node to write the output chapter-wise into a new or existing document.
**Optional Distribution**: Connect to Google Drive or Gmail to store or send the final eBook.
Use Cases
**Writers & Authors**: Quickly draft entire eBooks based on minimal input.
**Marketers**: Generate lead magnets, guides, and product documentation at scale.
**Educators**: Produce structured learning materials or course eBooks.
**Agencies**: Offer eBook creation services powered by AI.
**Entrepreneurs**: Turn knowledge into content assets without hiring ghostwriters.
Connect with Me
Email: ranjancse@gmail.com
LinkedIn: https://www.linkedin.com/in/ranjan-dailata/
Get Bright Data: Bright Data (Supports free workflows with a small commission)
#n8n #automation #ebookcreation #googleai #geminiflash #aiwriting #gdocs #contentautomation #ebookworkflow #nocode #contentmarketing #gemini #aiwriter #automatedpublishing #aicontent #bookcreation #geminiworkflow #ebookgenerator #gptalternative #flash20 #geminiflash2 #authorautomation #educationalcontent #aiinmarketing #n8nworkflow
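For illustration, a hedged sketch of the Gemini call at the heart of the workflow, using Google's @google/generative-ai Node SDK; the model name ("gemini-2.0-flash") and the prompt wording are assumptions you would tune to your format and tone settings:

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

// Ask Gemini Flash 2.0 for a structured table of contents for a topic.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

const topic = "Practical workflow automation for small teams"; // example input
const result = await model.generateContent(
  `Create an eBook outline for "${topic}": a table of contents with 8 chapter ` +
    `titles, each followed by 3 section headings, formatted as markdown.`,
);
console.log(result.response.text()); // written chapter-wise to Google Docs downstream
```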
by Priya Jain
This workflow provides an OAuth 2.0 auth token refresh process for better control. Developers can utilize it as an alternative to n8n's built-in OAuth flow to achieve improved control and visibility. In this template, I've used the Pipedrive API, but users can apply it with any app that requires the authorization_code for token access. This resolves the issue of manually refreshing the OAuth 2.0 token when it expires, or when n8n's native OAuth stops working.
What you need to replicate this
Your database with a pre-existing table for storing authentication tokens and associated information. I'm using Supabase in this example, but you can also employ a self-hosted MySQL. Here's a quick video on setting up the Supabase table.
Create a client app for your chosen application that you want to access via the API.
After duplicating the template:
a. Add credentials to your database and connect the DB nodes in all 3 workflows.
b. Enable/publish the first workflow, "1. Generate and Save Pipedrive tokens to Database."
c. Open your client app and follow the Pipedrive instructions to authenticate.
d. Click on Install and test. This will save your initial refresh token and access token to the database.
Please watch the YouTube video for a detailed demonstration of the workflow.
How it operates
Workflow 1: Captures the authorization_code, generates the access_token and refresh token, and saves the tokens to the database.
Workflow 2: Your primary workflow that fetches or posts data to/from your application. Note the logic that includes an if condition when an error occurs with an invalid token; this triggers the third workflow to refresh the token.
Workflow 3: Handles the token refresh. Remember to send the unique ID to the webhook to fetch the necessary tokens from your table.
Detailed demonstration of the workflow: https://youtu.be/6nXi_yverss
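For reference, Workflow 3's core operation is a standard refresh-token exchange; here is a hedged sketch against Pipedrive's token endpoint (the stored token appears as an environment variable here, whereas the workflow reads it from your database by unique ID):

```typescript
// Exchange the stored refresh_token for a new access_token.
// The same grant works for any app that uses the authorization_code flow.
const basic = Buffer.from(
  `${process.env.CLIENT_ID}:${process.env.CLIENT_SECRET}`,
).toString("base64");

const res = await fetch("https://oauth.pipedrive.com/oauth/token", {
  method: "POST",
  headers: {
    Authorization: `Basic ${basic}`,
    "Content-Type": "application/x-www-form-urlencoded",
  },
  body: new URLSearchParams({
    grant_type: "refresh_token",
    refresh_token: process.env.STORED_REFRESH_TOKEN!, // fetched by unique ID in the workflow
  }),
});

const { access_token, refresh_token, expires_in } = await res.json();
console.log(`new token expires in ${expires_in}s`);
// ...persist access_token and the rotated refresh_token back to the database...
```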
by Dataki
This is the first version of a template for a RAG/GenAI App using WordPress content. As creating, sharing, and improving templates brings me joy 😄, feel free to reach out on LinkedIn if you have any ideas to enhance this template!
How It Works
This template includes three workflows:
**Workflow 1**: Generate embeddings for your WordPress posts and pages, then store them in the Supabase vector store.
**Workflow 2**: Handle upserts for WordPress content when edits are made.
**Workflow 3**: Enable chat functionality by performing Retrieval-Augmented Generation (RAG) on the embedded documents.
Why use this template?
This template can be applied to various use cases:
Build a GenAI application that requires embedded documents from your website's content.
Embed or create a chatbot page on your website to enhance user experience as visitors search for information.
Gain insights into the types of questions visitors are asking on your website.
Simplify content management by asking the AI for related content ideas or checking if similar content already exists. Useful for internal linking.
Prerequisites
Access to Supabase for storing embeddings.
Basic knowledge of Postgres and pgvector.
A WordPress website with content to be embedded.
An OpenAI API key.
Ensure that your n8n workflow, Supabase instance, and WordPress website are set to the same timezone (or use GMT) for consistency.
Workflow 1: Initial Embedding
This workflow retrieves your WordPress pages and posts, generates embeddings from the content, and stores them in Supabase using pgvector.
Step 0: Create Supabase tables
Nodes:
Postgres - Create Documents Table: This table is structured to support OpenAI embedding models with 1536 dimensions.
Postgres - Create Workflow Execution History Table
These two nodes create tables in Supabase:
The documents table, which stores embeddings of your website content.
The n8n_website_embedding_histories table, which logs workflow executions for efficient management of upserts. This table tracks the workflow execution ID and execution timestamp.
Step 1: Retrieve and Merge WordPress Pages and Posts
Nodes:
WordPress - Get All Posts
WordPress - Get All Pages
Merge WordPress Posts and Pages
These three nodes retrieve all content and metadata from your posts and pages and merge them.
**Important:** Apply filters to avoid generating embeddings for all site content.
Step 2: Set Fields, Apply Filter, and Transform HTML to Markdown
Nodes:
Set Fields
Filter - Only Published & Unprotected Content
HTML to Markdown
These three nodes prepare the content for embedding by:
Setting up the necessary fields for content embeddings and document metadata.
Filtering to include only published and unprotected content (protected=false), ensuring private or unpublished content is excluded from your GenAI application.
Converting HTML to Markdown, which enhances performance and relevance in Retrieval-Augmented Generation (RAG) by optimizing document embeddings.
Step 3: Generate Embeddings, Store Documents in Supabase, and Log Workflow Execution
Nodes:
Supabase Vector Store (sub-nodes: Embeddings OpenAI, Default Data Loader, Token Splitter)
Aggregate
Supabase - Store Workflow Execution
This step involves generating embeddings for the content and storing it in Supabase, followed by logging the workflow execution details.
Generate Embeddings: The Embeddings OpenAI node generates vector embeddings for the content.
Load Data: The Default Data Loader prepares the content for embedding storage.
The metadata stored includes the content title, publication date, modification date, URL, and ID, which is essential for managing upserts.
⚠️ Important Note: Be cautious not to store any sensitive information in metadata fields, as this information will be accessible to the AI and may appear in user-facing answers.
Token Management: The Token Splitter ensures that content is segmented into manageable sizes to comply with token limits.
Aggregate: Ensure the last node runs only for one item.
Store Execution Details: The Supabase - Store Workflow Execution node saves the workflow execution ID and timestamp, enabling tracking of when each content update was processed.
This setup ensures that content embeddings are stored in Supabase for use in downstream applications, while workflow execution details are logged for consistency and version tracking.
This workflow should be executed only once for the initial embedding. Workflow 2, described below, will handle all future upserts, ensuring that new or updated content is embedded as needed.
Workflow 2: Handle document upserts
Content on a website follows a lifecycle—it may be updated, new content might be added, or, at times, content may be deleted. In this first version of the template, the upsert workflow manages:
**Newly added content**
**Updated content**
Step 1: Retrieve WordPress Content with a Regular CRON
Nodes:
CRON - Every 30 Seconds
Postgres - Get Last Workflow Execution
WordPress - Get Posts Modified After Last Workflow Execution
WordPress - Get Pages Modified After Last Workflow Execution
Merge Retrieved WordPress Posts and Pages
A CRON job (set to run every 30 seconds in this template, but you can adjust it as needed) initiates the workflow. A Postgres SQL query on the n8n_website_embedding_histories table retrieves the timestamp of the latest workflow execution. Next, the HTTP nodes use the WordPress API (update the example URL in the template with your own website's URL and add your WordPress credentials) to request all posts and pages modified after the last workflow execution date. This process captures both newly added and recently updated content. The retrieved content is then merged for further processing.
Step 2: Set Fields, Apply Filter
Nodes:
Set Fields2
Filter - Only published and unprotected content
The same as Step 2 in Workflow 1, except that HTML to Markdown is applied in a later step.
Step 3: Loop Over Items to Identify and Route Updated vs. Newly Added Content
Here, I initially aimed to use 'update documents' instead of the delete + insert approach, but encountered challenges, especially with updating both content and metadata columns together. Any help or suggestions are welcome! :)
Nodes:
Loop Over Items
Postgres - Filter on Existing Documents
Switch
Route existing_documents (if documents with matching IDs are found in metadata):
Supabase - Delete Row if Document Exists: Removes any existing entry for the document, preparing for an update.
Aggregate2: Aggregates documents on Supabase by ID to ensure that Set Fields3 is executed only once for each piece of WordPress content, avoiding duplicate execution.
Set Fields3: Sets fields required for embedding updates.
Route new_documents (if no matching documents are found with IDs in metadata):
Set Fields4: Configures fields for embedding newly added content.
In this step, a loop processes each item, directing it based on whether the document already exists.
The Aggregate2 node acts as a control to ensure Set Fields3 runs only once per WordPress content item, effectively avoiding duplicate execution and optimizing the update process.
Step 4: HTML to Markdown, Supabase Vector Store, Update Workflow Execution Table
The HTML to Markdown node mirrors Workflow 1 - Step 2. Refer to that section for a detailed explanation of how HTML content is converted to Markdown for improved embedding performance and relevance. Following this, the content is stored in the Supabase vector store to manage embeddings efficiently. Lastly, the workflow execution table is updated. These nodes mirror the **Workflow 1 - Step 3** nodes.
Workflow 3: An Example of a GenAI App with WordPress Content: a Chatbot to Embed on Your Website
Step 1: Retrieve Supabase Documents, Aggregate, and Set Fields After a Chat Input
Nodes:
When Chat Message Received
Supabase - Retrieve Documents from Chat Input
Embeddings OpenAI1
Aggregate Documents
Set Fields
When a user sends a message to the chat, the prompt (user question) is sent to the Supabase vector store retriever. The RPC function match_documents (created in Workflow 1 - Step 0) retrieves documents relevant to the user's question, enabling a more accurate and relevant response.
In this step:
The Supabase vector store retriever fetches documents that match the user's question, including metadata.
The Aggregate Documents node consolidates the retrieved data.
Finally, Set Fields organizes the data to create a more readable input for the AI agent.
Directly using the AI agent without these nodes would prevent metadata from being sent to the language model (LLM), but metadata is essential for enhancing the context and accuracy of the AI's response. By including metadata, the AI's answers can reference relevant document details, making the interaction more informative.
Step 2: Call AI Agent, Respond to User, and Store Chat Conversation History
Nodes:
**AI Agent** (sub-nodes: OpenAI Chat Model, Postgres Chat Memories)
**Respond to Webhook**
This step involves calling the AI agent to generate an answer, responding to the user, and storing the conversation history. The model used is gpt-4o-mini, chosen for its cost-efficiency.
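For illustration, a hedged sketch of this retrieval step outside n8n, using the OpenAI and Supabase JavaScript clients; the match_documents argument names are assumptions based on common pgvector setups and should be matched to the RPC created in Workflow 1 - Step 0:

```typescript
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

// Embed the visitor's question, then fetch the most similar documents.
const openai = new OpenAI();
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

const question = "How do I configure the plugin?"; // example chat input

const embedding = await openai.embeddings.create({
  model: "text-embedding-3-small", // any 1536-dimension OpenAI embedding model
  input: question,
});

const { data: documents } = await supabase.rpc("match_documents", {
  query_embedding: embedding.data[0].embedding,
  match_count: 5,
});
// documents (content + metadata such as title and URL) are aggregated and
// passed to the gpt-4o-mini agent as context for the final answer.
console.log(documents);
```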