by InfyOm Technologies
**What problem does this workflow solve?**

Missed return pickups create logistics delays, extra follow-ups, and unhappy customers for e-commerce teams. This workflow automates return pickup reminders, ensuring customers are notified on the day of pickup via WhatsApp messages and automated voice calls, without any manual effort.

**What does this workflow do?**

- Runs automatically on a daily schedule.
- Reads return pickup data from Google Sheets.
- Identifies customers with Pickup date = today and Status = Pending.
- Sends personalized WhatsApp reminders.
- Places automated voice call reminders when required.
- Updates reminder status in Google Sheets for clear tracking.

**How It Works: Step by Step**

1. **Scheduled Trigger.** The workflow starts at a fixed time every day (e.g., 9–10 AM) using a Schedule Trigger.
2. **Read Pickup Data from Google Sheets.** It fetches rows from Google Sheets where **Pickup Date** = today and **Status** = Pending. This ensures only relevant pickups are processed.
3. **Loop Through Pickups.** Each matching row is processed individually to send customer-specific reminders.
4. **Generate Personalized Messages.** Using a Code node, the workflow creates a WhatsApp text message and a voice message script. Messages include the customer name, product name, pickup address, return reason, and a pickup timing reminder.
5. **Send WhatsApp Reminder.** A personalized WhatsApp message is sent via Twilio, reminding the customer to keep the package ready.
6. **Place Voice Call Reminder.** If required, the workflow places an automated voice call using Twilio and reads out a clear pickup reminder using text-to-speech.
7. **Update Pickup Status.** Once notifications are sent, the workflow updates the Status column to "Reminder Sent", ensuring the same pickup is not notified again.

**Sample Google Sheet Columns**

| Order ID | Customer Name | Phone Number | Product | Pickup Date | Address | Return Reason | Status |
|----------|---------------|--------------|---------|-------------|---------|---------------|--------|

**Integrations Used**

- **Google Sheets**: Pickup data source and tracking
- **Twilio WhatsApp API**: Message delivery
- **Twilio Voice API**: Automated call reminders
- **n8n Schedule + Logic Nodes**: Automation orchestration

**Who can use this?**

Perfect for:

- E-commerce brands
- Reverse logistics teams
- Delivery & pickup operations
- Customer support teams

It also works well for service visits, deliveries, appointments, and field operations.

**Key Benefits**

- Fewer missed pickups
- Improved customer compliance
- Reduced manual follow-ups
- Clear tracking in Google Sheets
- Scalable and fully automated

**Ready to Use?**

Just connect:

- Google Sheets with pickup data
- Twilio credentials (WhatsApp + Voice)
- Schedule trigger time
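The message-generation step can be sketched as a small n8n Code node. This is a minimal illustration, not the template's actual code; the field names (`customer_name`, `product`, `address`, `return_reason`) are assumptions and should be matched to your sheet's columns.

```javascript
// Hypothetical sketch of the "Generate Personalized Messages" Code node.
// Column names below are assumptions; align them with your Google Sheet.
function buildReminders(row) {
  const whatsapp =
    `Hi ${row.customer_name}, this is a reminder that your return pickup for ` +
    `"${row.product}" is scheduled today at ${row.address}. ` +
    `Reason: ${row.return_reason}. Please keep the package ready.`;
  const voiceScript =
    `Hello ${row.customer_name}. Your return pickup for ${row.product} ` +
    `is scheduled for today. Please keep the package ready for the courier.`;
  return { whatsapp, voiceScript };
}

// In an n8n Code node you would typically map over the incoming items:
const items = [{ json: { customer_name: "Asha", product: "Headphones",
  address: "12 MG Road", return_reason: "Wrong size" } }];
const out = items.map(i => ({ json: { ...i.json, ...buildReminders(i.json) } }));
// An n8n Code node would end with: return out;
```

The `whatsapp` string feeds the Twilio WhatsApp node and `voiceScript` the text-to-speech voice call.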
by Fahmi Fahreza
**Automated Multi-Bank Balance Sync to BigQuery**

This workflow automatically fetches balances from multiple financial institutions (RBC, Amex, Wise, PayPal) using Plaid, maps them to QuickBooks account names, and loads structured records into Google BigQuery for analytics.

**Who's it for?**

Finance teams, accountants, and data engineers managing consolidated bank reporting in Google BigQuery.

**How it works**

1. The Schedule Trigger runs weekly.
2. Four Plaid API calls fetch balances from RBC, Amex, Wise, and PayPal.
3. Each response splits out individual accounts and maps them to QuickBooks names.
4. All accounts are merged into one dataset.
5. The workflow structures the account data, generates UUIDs, and formats SQL inserts.
6. The BigQuery node uploads the finalized records.

**How to set up**

Add Plaid and Google BigQuery credentials, replace client IDs and secrets with variables, test each connection, and schedule the trigger for your reporting cadence.
by Yusei Miyakoshi
**Who's it for**

This template is for teams that want to stay updated on industry trends, tech news, or competitor mentions without manually browsing news sites. It's ideal for marketing, development, and research teams who use Slack as their central hub for automated, timely information.

**What it does / How it works**

This workflow runs on a daily schedule (default 9 AM), fetches the top articles from Hacker News for a specific keyword you define (e.g., 'AI'), and uses an AI agent with OpenRouter to generate a concise, 3-bullet-point summary in Japanese for each article. The final formatted summary, including the article title, is then posted to a designated Slack channel. The entire process is guided by descriptive sticky notes on the canvas, explaining each configuration step.

**How to set up**

1. In the Configure Your Settings node, change the default keyword AI to your topic of interest and update the slack_channel to your target channel name.
2. Click the OpenRouter Chat Model node and select your OpenRouter API key from the Credentials dropdown. If you haven't connected it yet, you will need to create a new credential.
3. Click the Send Summary to Slack node and connect your Slack account using OAuth2 credentials.
4. (Optional) Adjust the schedule in the Trigger Daily at 9 AM node to change how often the workflow runs.
5. Activate the workflow.

**Requirements**

- An n8n instance (Cloud or self-hosted).
- A Slack account and workspace.
- An OpenRouter API key stored in your n8n credentials.
- If self-hosting, ensure the LangChain nodes are enabled.

**How to customize the workflow**

- **Change the News Source:** Replace the **Hacker News** node with an **RSS Feed Read** node or another news integration to pull articles from different sources.
- **Modify the AI Prompt:** In the **Summarize Article with AI** node, you can edit the system message to change the summary language, length, or tone.
- **Use a Different AI Model:** Swap the **OpenRouter** node for an **OpenAI**, **Anthropic**, or any other supported chat model.
- **Track Multiple Keywords:** Modify the workflow to loop through a list of keywords in the **Configure Your Settings** node to monitor several topics at once.
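The multi-keyword customization can be sketched as a Code-node variant of the settings step. The keyword list and channel value below are placeholder assumptions; the idea is simply that emitting one n8n item per keyword makes the downstream Hacker News fetch run once per topic.

```javascript
// Hypothetical multi-keyword version of the Configure Your Settings node:
// emit one item per keyword so downstream nodes process each topic in turn.
const keywords = ["AI", "Rust", "n8n"];      // assumption: your topics of interest
const slackChannel = "#tech-news";           // assumption: your target channel

const outputItems = keywords.map(keyword => ({
  json: { keyword, slack_channel: slackChannel },
}));
// In an n8n Code node you would end with: return outputItems;
```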
by Port IO
Complete incident workflow from detection through resolution to post-mortem, with full organizational context from Port's catalog. This template handles both incident triggered and resolved events from PagerDuty, automatically creating Jira tickets with context, notifying teams via Slack, calculating MTTR, and using Port AI Agents to schedule post-mortem meetings and create documentation.

**How it works**

The n8n workflow orchestrates the following steps:

**On Incident Triggered:**

1. **PagerDuty webhook**: Receives incident events from PagerDuty via POST request.
2. **Event routing**: Routes to the triggered or resolved flow based on event type.
3. **Port context enrichment**: Uses Port's n8n node to query your software catalog for service context, on-call engineers, recent deployments, runbooks, and past incidents.
4. **AI severity assessment**: OpenAI assesses severity based on Port context and recommends investigation actions.
5. **Escalation routing**: Critical incidents automatically escalate to the leadership Slack channel.
6. **Jira ticket creation**: Creates an incident ticket with full context, an investigation checklist, and recommended actions.
7. **Team notification**: Notifies the team's Slack channel with incident details and resources.

**On Incident Resolved:**

1. **Port context extraction**: Gets post-incident context from Port, including stakeholders and documentation spaces.
2. **MTTR calculation**: Calculates mean time to resolution from incident timestamps.
3. **Post-mortem generation**: AI generates a structured post-mortem template with a timeline.
4. **Port AI Agent scheduling**: Triggers a Port AI Agent to schedule the post-mortem meeting, invite stakeholders, and create documentation.
5. **Resolution notification**: Notifies the team with MTTR, the post-mortem document link, and meeting details.
6. **Metrics logging**: Logs MTTR metrics back to Port for service reliability tracking.
**Setup**

- [ ] Register for free on Port.io
- [ ] Configure Port with services, on-call schedules, and deployment history
- [ ] Set up Port AI agents for post-mortem scheduling
- [ ] Connect the PagerDuty webhook for incident events
- [ ] Configure a Jira project for incident tickets (use project key 'INC' or customize)
- [ ] Set up Slack channels for alerts (#incidents and #leadership-alerts)
- [ ] Add OpenAI credentials for severity assessment
- [ ] Test with a sample incident event
- [ ] You should be good to go!

**Prerequisites**

- You have a Port account and have completed the onboarding process.
- Port's integrations are configured (GitHub, Jira, PagerDuty if available).
- You have a working n8n instance (Cloud or self-hosted) with Port's n8n custom node installed.
- A PagerDuty account with webhook capabilities.
- A Jira Cloud account with appropriate project permissions.
- A Slack workspace with bot permissions to post messages.
- An OpenAI API key for severity assessment and post-mortem generation.

⚠️ This template is intended for self-hosted instances only.
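The MTTR calculation in the resolved flow is just a timestamp difference, which can be sketched as below. The field names `triggered_at`/`resolved_at` are assumptions standing in for whatever your PagerDuty payload actually carries.

```javascript
// Hypothetical sketch of the MTTR step: mean time to resolution in minutes
// from two ISO-8601 timestamps. Field names are assumed, not PagerDuty's exact schema.
function mttrMinutes(triggeredAt, resolvedAt) {
  const ms = new Date(resolvedAt) - new Date(triggeredAt);
  if (ms < 0) throw new Error("resolved_at precedes triggered_at");
  return Math.round(ms / 60000);
}

const mttr = mttrMinutes("2024-05-01T10:00:00Z", "2024-05-01T11:45:00Z"); // 105
```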
by Intuz
This n8n template from Intuz provides a complete solution to automate the syncing of new subscribers from Google Sheets to MailerLite. It intelligently identifies and adds only new contacts, preventing duplicates and ensuring your email lists are clean and accurate.

**Who's this workflow for?**

- Marketing Teams
- Email Marketers
- Small Business Owners
- Community Managers

**How it works**

1. **Read from Google Sheets:** The workflow begins by reading all contact rows from your designated Google Sheet.
2. **Check for Existing Subscribers:** For each contact, it performs a search in MailerLite to check if a subscriber with that email address already exists.
3. **Handle Duplicates:** If the subscriber is found in MailerLite, the workflow stops processing that specific contact, preventing any duplicates from being created.
4. **Create New Subscribers:** If the contact is not found, the workflow proceeds to create a new subscriber in MailerLite, using all the details from the Google Sheet (like name, company, and country) and assigns them to the specified group.

**Setup Instructions**

1. **Google Sheets Setup:** Connect your Google Sheets account to n8n. Create a sheet with the required columns: Email, first_name, last_name, Company, Country, and group_id. In the Get row(s) in sheet node, select your credentials and specify the Document ID and Sheet Name.
2. **MailerLite Setup:** Connect your MailerLite account to n8n using your API key. In both the Get a subscriber and Create subscriber... nodes, select your MailerLite credentials. Make sure the group_id values in your Google Sheet correspond to valid Group IDs in your MailerLite account.
3. **Activate Workflow:** Save the workflow and click "Execute workflow" to run the sync whenever you need to update your subscriber list.

**Connect with us**

- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz

For custom workflow automation, click here: Get Started
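The duplicate-handling logic amounts to a set-membership filter. A minimal sketch, assuming you have already collected the emails that exist in MailerLite (in the template this check happens per contact via the Get a subscriber node):

```javascript
// Hypothetical sketch of the dedupe step: keep only sheet rows whose Email
// is not already a MailerLite subscriber. Case-insensitive comparison.
function newSubscribers(rows, existingEmails) {
  const existing = new Set(existingEmails.map(e => e.toLowerCase()));
  return rows.filter(r => !existing.has(r.Email.toLowerCase()));
}

const toCreate = newSubscribers(
  [{ Email: "a@x.com" }, { Email: "B@x.com" }, { Email: "c@x.com" }],
  ["b@x.com"]
);
// toCreate keeps a@x.com and c@x.com; B@x.com is skipped as a duplicate
```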
by Michael Taleb
**Workflow Summary**

This automation keeps your Supabase vector database synchronized with documents stored in Google Drive, while also making the data contextual and vector-based for better retrieval.

When a file is added or modified, the workflow extracts its text, splits it into smaller chunks, and enriches each chunk with contextual metadata (such as summaries and document details). It then generates embeddings using OpenAI and stores both the vector data and metadata in Supabase. If a file changes, the old records are replaced with updated, contextualized content. The result is a continuously updated and context-aware vector database, enabling highly accurate hybrid search and retrieval.

**To set up**

1. **Connect Google Drive**
   - Create a Google Drive folder to watch.
   - Connect your Google Drive account in n8n and authorize access.
   - Point the Google Drive Trigger node to this folder (new/modified files trigger the flow).
2. **Configure Supabase**
   - Please refer to the Setting Up Supabase sticky note.
3. **Connect OpenAI (or your embedding model)**
   - Add your OpenAI API key in n8n credentials.
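The chunk-and-enrich step can be sketched as follows. This is a simplified assumption of the approach, not the template's actual code: chunk size, overlap, and the metadata fields (`file`, `summary`) are illustrative placeholders.

```javascript
// Hypothetical sketch: split document text into overlapping chunks and attach
// contextual metadata to each chunk before embedding. Sizes are assumptions.
function chunkWithContext(text, docMeta, size = 200, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push({
      content: text.slice(start, start + size),
      metadata: { ...docMeta, chunk_index: chunks.length, offset: start },
    });
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

const chunks = chunkWithContext("x".repeat(450), {
  file: "notes.pdf",            // assumed metadata fields
  summary: "Meeting notes",
});
```

Each chunk's `content` goes to the embedding model, while `metadata` is stored alongside the vector in Supabase so hybrid search can filter and re-rank on it.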
by Tushar Mishra
**1. Data Ingestion Workflow (Left Panel, Pink Section)**

This part collects data from the ServiceNow Knowledge Article table, processes it into embeddings, and stores it in Qdrant.

Steps:

1. **Trigger: When clicking "Execute workflow".** The workflow starts manually when you click Execute workflow in n8n.
2. **Get Many Table Records.** Fetches multiple records from the ServiceNow Knowledge Article table. Each record typically contains knowledge article content that needs to be indexed.
3. **Default Data Loader.** Takes the fetched data and structures it into a format suitable for text splitting and embedding generation.
4. **Recursive Character Text Splitter.** Splits large text (e.g., long knowledge articles) into smaller, manageable chunks for embeddings. This step ensures that each text chunk can be properly processed by the embedding model.
5. **Embeddings OpenAI.** Uses OpenAI's Embeddings API to convert each text chunk into a high-dimensional vector representation. These embeddings are essential for semantic search in the vector database.
6. **Qdrant Vector Store.** Stores the generated embeddings along with metadata (e.g., article ID, title) in the Qdrant vector database. This database will later be used for similarity searches during chatbot interactions.

**2. RAG Chatbot Workflow (Right Panel, Green Section)**

This section powers the Retrieval-Augmented Generation (RAG) chatbot that retrieves relevant information from Qdrant and responds intelligently.

Steps:

1. **Trigger: When chat message received.** Starts when a user sends a chat message to the system.
2. **AI Agent.** Acts as the orchestrator, combining memory, tools, and LLM reasoning. Connects to the OpenAI Chat Model and Qdrant Vector Store.
3. **OpenAI Chat Model.** Processes user messages and generates responses, enriched with context retrieved from Qdrant.
4. **Simple Memory.** Stores conversational history or context to ensure continuity in multi-turn conversations.
5. **Qdrant Vector Store1.** Performs a similarity search on stored embeddings using the user's query and retrieves the most relevant knowledge article chunks for the chatbot.
6. **Embeddings OpenAI.** Converts the user query into embeddings for vector search in Qdrant.
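To make the splitter step concrete, here is a minimal sketch of the recursive-splitting idea in the spirit of the LangChain node the workflow uses (not its actual implementation): try coarse separators first (paragraphs), fall back to finer ones (sentences, words), and hard-cut only as a last resort.

```javascript
// Illustrative recursive character text splitter: split on the coarsest
// separator that helps, recursing with finer separators for oversized parts.
function recursiveSplit(text, maxLen = 100, separators = ["\n\n", ". ", " "]) {
  if (text.length <= maxLen) return [text];
  const [sep, ...rest] = separators;
  if (sep === undefined) {
    // No separators left: hard-cut into fixed-size pieces.
    const out = [];
    for (let i = 0; i < text.length; i += maxLen) out.push(text.slice(i, i + maxLen));
    return out;
  }
  return text
    .split(sep)
    .filter(p => p.length > 0)
    .flatMap(part => recursiveSplit(part, maxLen, rest));
}

const pieces = recursiveSplit("First paragraph.\n\nSecond one. It has two sentences.", 20);
```

The real LangChain splitter additionally rejoins small neighboring pieces and supports chunk overlap; this sketch shows only the recursive descent through separators.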
by Harshil Agrawal
This workflow allows you to create, update, and get a post using the Discourse node.

- **Discourse node:** Creates a new post under a category. Based on your use case, you can select a different category.
- **Discourse1 node:** Updates the content of the post.
- **Discourse2 node:** Fetches the post that we created with the first Discourse node.

Based on your use case, you can add or remove nodes to connect Discourse to different services.
by Satva Solutions
**Automated Stripe Payment to QuickBooks Sales Receipt**

This n8n workflow seamlessly connects Stripe and QuickBooks Online to keep your accounting in perfect sync. Whenever a payment in Stripe succeeds, the workflow automatically checks if the corresponding customer exists in QuickBooks. If found, it instantly creates a Sales Receipt under that customer. If not, it creates the customer first, then logs the sale.

**Key Features:**

- **Real-Time Sync:** Automatically triggers when a Stripe payment intent succeeds.
- **Smart Customer Matching:** Searches for existing customers in QuickBooks to prevent duplicates.
- **Automated Sales Receipts:** Creates accurate sales records for every successful Stripe payment.
- **End-to-End Automation:** Handles customer creation, receipt generation, and data consistency without manual entry.

**Requirements:** A running n8n instance, plus active Stripe and QuickBooks Online accounts with API access.
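The find-or-create branch can be sketched as a small decision function. The lookup callback and the payload fields below are assumptions standing in for the QuickBooks search node and the Stripe webhook data; the only Stripe-specific fact used is that amounts arrive in the smallest currency unit (cents).

```javascript
// Hypothetical sketch of the customer-matching branch: decide whether to
// create a QuickBooks customer before writing the Sales Receipt.
function planSync(stripePayment, findCustomerByEmail) {
  const existing = findCustomerByEmail(stripePayment.customer_email);
  return {
    createCustomer: !existing,
    customerName: existing ? existing.DisplayName : stripePayment.customer_name,
    amount: stripePayment.amount / 100, // Stripe amounts are in cents
  };
}

const plan = planSync(
  { customer_email: "jo@shop.com", customer_name: "Jo", amount: 2599 },
  email => (email === "jo@shop.com" ? { DisplayName: "Jo Smith" } : null)
);
// plan: { createCustomer: false, customerName: "Jo Smith", amount: 25.99 }
```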
by Yaron Been
**CTO Agent with Engineering Team**

**Description**

Complete AI-powered engineering department with a Chief Technology Officer (CTO) agent orchestrating specialized engineering team members for comprehensive software development and technical operations.

**Overview**

This n8n workflow creates a comprehensive engineering department using AI agents. The CTO agent analyzes technical requests and delegates tasks to specialized agents for software architecture, DevOps, security, quality assurance, backend development, and frontend development.

**Features**

- Strategic CTO agent using OpenAI O3 for complex technical decision-making
- Six specialized engineering agents powered by GPT-4.1-mini for efficient execution
- Complete software development lifecycle coverage from architecture to deployment
- Automated DevOps pipelines and infrastructure management
- Security assessments and compliance frameworks
- Quality assurance and test automation strategies
- Full-stack development capabilities

**Team Structure**

- **CTO Agent:** Technical leadership and strategic delegation (O3 model)
- **Software Architect Agent:** System design, patterns, technology stack decisions
- **DevOps Engineer Agent:** CI/CD pipelines, infrastructure automation, containerization
- **Security Engineer Agent:** Application security, vulnerability assessments, compliance
- **QA Test Engineer Agent:** Test automation, quality strategies, performance testing
- **Backend Developer Agent:** Server-side development, APIs, database architecture
- **Frontend Developer Agent:** UI/UX development, responsive design, frontend frameworks

**How to Use**

1. Import the workflow into your n8n instance
2. Configure OpenAI API credentials for all chat models
3. Deploy the webhook for chat interactions
4. Send technical requests via chat (e.g., "Design a scalable microservices architecture for our e-commerce platform")
5. The CTO will analyze and delegate to appropriate specialists
6. Receive comprehensive technical deliverables

**Use Cases**

- **Full Stack Development:** Complete application architecture and implementation
- **System Architecture:** Scalable designs for microservices and distributed systems
- **DevOps Automation:** CI/CD pipelines, containerization, cloud deployment strategies
- **Security Audits:** Vulnerability assessments, secure coding practices, compliance
- **Quality Assurance:** Test automation frameworks, performance testing strategies
- **Technical Documentation:** API documentation, system diagrams, deployment guides

**Requirements**

- n8n instance with LangChain nodes
- OpenAI API access (O3 for the CTO, GPT-4.1-mini for specialists)
- Webhook capability for chat interactions
- Optional: integration with development tools and platforms

**Cost Optimization**

- O3 model used only for strategic CTO decisions
- GPT-4.1-mini provides 90% cost reduction for specialist tasks
- Parallel processing enables simultaneous agent execution
- Code template library reduces redundant development work

**Integration Options**

- Connect to development platforms (GitHub, GitLab, Bitbucket)
- Integrate with project management tools (Jira, Trello, Asana)
- Link to monitoring and logging systems
- Export to documentation platforms

**Contact & Resources**

- **Website:** nofluff.online
- **YouTube:** @YaronBeen
- **LinkedIn:** Yaron Been

**Tags**

#SoftwareEngineering #TechStack #DevOps #SecurityFirst #QualityAssurance #FullStackDevelopment #Microservices #CloudNative #TechLeadership #EngineeringAutomation #n8n #OpenAI #MultiAgentSystem #EngineeringExcellence #DevAutomation #TechInnovation
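The delegation pattern can be illustrated with a toy router. In the actual workflow the O3-powered CTO agent makes this decision with LLM reasoning; the keyword table below is purely a hypothetical stand-in to show the shape of request-to-specialist routing.

```javascript
// Purely illustrative delegation sketch (the real workflow uses an LLM, not
// keyword matching). Routing table keys are assumptions for demonstration.
const specialists = {
  architecture: "Software Architect Agent",
  pipeline: "DevOps Engineer Agent",
  security: "Security Engineer Agent",
  test: "QA Test Engineer Agent",
  api: "Backend Developer Agent",
  ui: "Frontend Developer Agent",
};

function delegate(request) {
  const lower = request.toLowerCase();
  const key = Object.keys(specialists).find(k => lower.includes(k));
  return key ? specialists[key] : "CTO Agent"; // unmatched requests stay with the CTO
}

const assignee = delegate("Design a scalable microservices architecture");
// assignee: "Software Architect Agent"
```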
by Davide
This workflow automates the entire process of creating, managing, and publishing AI-generated videos using OpenAI Sora2 Pro, Google Sheets, Google Drive, and YouTube.

**Advantages**

- **Fully Automated Video Pipeline:** From idea to YouTube publication with zero manual intervention after setup.
- **Centralized Control via Google Sheets:** Simple spreadsheet interface; no need to use APIs or dashboards directly.
- **AI-Powered Video Creation:** Uses OpenAI Sora2 Pro for generating professional-quality videos from text prompts.
- **SEO-Optimized Titles with GPT-5:** Automatically creates catchy, keyword-rich titles optimized for YouTube engagement.
- **Cloud Integration:** Seamless use of Google Drive for file management and YouTube for publishing.
- **Scalable and Repeatable:** Can handle multiple videos in sequence, triggered manually or at regular intervals.
- **Error-Resilient and Transparent:** Uses conditional checks (the "Completed?" node) and real-time updates in Google Sheets to ensure reliability and visibility.

**How it Works**

This workflow automates the entire process of generating AI videos and publishing them to YouTube, using a Google Sheet as the central control panel.

1. **Trigger & Data Fetch:** The workflow is triggered either manually or on a schedule. It starts by reading a specific Google Sheet to find new video requests. A new request is identified as a row where the "PROMPT" and "DURATION" columns are filled, but the "VIDEO" column is empty.
2. **AI Video Generation:** For each new request, it takes the prompt and duration, then sends a request to the Fal.ai Sora-2 Pro model via its API to generate the video. The system then enters a polling loop, checking the video generation status every 60 seconds until it is COMPLETED.
3. **Post-Processing & Upload:** Once the video is ready, the workflow performs three parallel actions:
   - **Fetch Video & Upload to Drive:** It retrieves the generated video file and uploads it to a specified folder in Google Drive for archiving.
   - **Generate YouTube Title:** It sends the original prompt to OpenAI's GPT-5 (or another specified model) to generate an optimized, SEO-friendly title for the YouTube video.
   - **Publish to YouTube:** It takes the generated video file and the AI-created title and uses the Upload-Post.com service to automatically publish the video to a connected YouTube channel.
4. **Update & Log:** Finally, the workflow updates the original Google Sheet row with the URL of the archived video in Google Drive and the newly created YouTube video URL, providing a complete audit trail.

**Set up Steps**

To configure this workflow, follow these steps:

1. **Prepare the Google Sheet:** Create a Google Sheet with at least these columns: PROMPT, DURATION, VIDEO, and YOUTUBE_URL. In the n8n "Get new video" and update nodes, configure the documentId and sheetName to point to your specific Google Sheet.
2. **Configure Fal.ai API Key:** Create an account on fal.ai and obtain your API key. In both the "Create Video" and "Get status" HTTP Request nodes, set up HTTP Header Authentication. Set the Name to Authorization and the Value to Key YOUR_API_KEY.
3. **Set up Upload-Post.com for YouTube:** Create an account on Upload-Post.com and get your API key. Connect your YouTube channel as a "profile". In the "HTTP Request" node (for uploading), configure the Header Auth with Name: Authorization and Value: Apikey YOUR_UPLOAD_POST_API_KEY. Replace YOUR_USERNAME in the node's body parameters with the profile name you created on Upload-Post.com (e.g., test1).
4. **Configure OpenAI (Optional but Recommended):** The "Generate title" node uses an OpenAI model. Ensure you have valid OpenAI API credentials set up in n8n for this node to function and create optimized titles.
5. **Finalize Paths and Activate:** In the "Upload Video" node, specify the correct Google Drive folder ID where you want the videos to be saved. Once all credentials and paths are set, you can activate the workflow and set the "Schedule Trigger" node to run at your desired interval (e.g., every 5 minutes).

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
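The new-request check from step 1 is a simple row filter, sketched below. The row objects are illustrative; in n8n the same condition would typically live in a Filter or IF node over the sheet rows.

```javascript
// Hypothetical sketch of the "find new video requests" filter: a row qualifies
// when PROMPT and DURATION are filled but VIDEO is still empty.
function pendingRequests(rows) {
  return rows.filter(r => r.PROMPT && r.DURATION && !r.VIDEO);
}

const pending = pendingRequests([
  { PROMPT: "A cat surfing", DURATION: "8", VIDEO: "" },                       // new request
  { PROMPT: "A dog skiing", DURATION: "8", VIDEO: "https://drive.google.com/abc" }, // already done
  { PROMPT: "", DURATION: "8", VIDEO: "" },                                    // incomplete row
]);
// pending contains only the first row
```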
by InfyOm Technologies
**What problem does this workflow solve?**

Salon staff often spend hours juggling appointment calls, managing bookings manually, and keeping track of customer preferences. This workflow automates your entire salon appointment system via WhatsApp, delivering a personalized and human-like booking experience using AI, memory, and Google Sheets.

**Main Use Cases**

- Offer personalized stylist recommendations by remembering customer preferences and past visits.
- Provide real-time availability and salon opening hour information.
- Book and update appointments directly from customer chat.
- Simplify appointment changes by recalling previous booking details.
- Enable context-aware, memory-driven conversations across multiple interactions.

**How It Works: Step-by-Step**

1. **Chat Message Trigger.** The workflow is triggered whenever a customer sends a message to your WhatsApp salon assistant.
2. **Memory Buffer for Context Management.** The assistant uses a Memory Buffer to recognize returning customers, avoid repeating questions, and maintain conversation flow across multiple sessions. This enables a seamless and intelligent dialogue with each customer.
3. **Stylist & Service Lookup.** When the customer asks for stylist suggestions, available time slots, or services, the workflow extracts request details using AI, queries a Google Sheet containing stylist availability, service types, and salon opening hours, and returns personalized recommendations based on preferences and availability.
4. **Appointment Booking.** Collects all necessary info (date, time, selected service, stylist, contact info), stores the appointment in Google Sheets, and sends a confirmation message to the customer on WhatsApp.
5. **Modify or Cancel Bookings.** Customers can update or cancel appointments. The bot matches records by phone number and modifies or deletes the appointment in the sheet accordingly.

**Integrations Used**

- **WhatsApp Integration** (via Twilio, Meta API, or another connector)
- **OpenAI/GPT Model** for natural conversation flow and extraction
- **Google Sheets** as a simple and effective appointment database
- **Memory Buffer** for ongoing context across chats

**Who can use this?**

Perfect for:

- Salons and barbershops
- Spas and beauty centers
- Wellness studios
- Developers building vertical AI assistants for SMBs

If you're looking to modernize your booking process and impress customers with an AI-powered, memory-enabled WhatsApp bot, this workflow delivers.

**Benefits**

- Save time for your staff
- Offer truly personalized experiences
- Book appointments 24/7 via WhatsApp
- Keep all records organized in Google Sheets
- Reduce human error and double bookings

**Ready to Launch?**

Just configure:

- Your WhatsApp number + webhook integration
- A Google Sheet with stylist and service data
- An OpenAI key for AI-powered conversation
- A Memory Buffer to enable smart replies

And your salon will be ready to offer automated, intelligent booking, right from a simple WhatsApp chat.
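The phone-number matching in the modify/cancel step can be sketched as below. Column names (`phone`, `stylist`, `time`) are assumptions; the key detail is normalizing both numbers to digits so "+1 (555) 010-2345" and "15550102345" match.

```javascript
// Hypothetical sketch of step 5: find a customer's booking by phone number,
// normalizing formatting so WhatsApp sender IDs match sheet entries.
function findBooking(rows, phone) {
  const normalize = p => p.replace(/[^\d]/g, ""); // keep digits only
  return rows.find(r => normalize(r.phone) === normalize(phone)) || null;
}

const booking = findBooking(
  [
    { phone: "+1 (555) 010-2345", stylist: "Maya", time: "2024-06-01 14:00" },
    { phone: "+1 (555) 010-9999", stylist: "Leo", time: "2024-06-01 15:00" },
  ],
  "15550102345"
);
// booking: the 14:00 appointment with Maya
```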