by System Admin
Define URLs in array.

```shell
curl -X POST https://api.firecrawl.dev/v1/scrape \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{ "url": "https://docs.firecrawl.dev", ...
```
by Ramdoni
🚀 ExamForge AI: Automated PDF to Structured Exam Generator (MCQ + Essay + Answer Key)

Generate structured exams automatically from text-based PDF materials using AI. ExamForge AI is a production-ready n8n workflow that transforms educational content into multiple-choice and essay questions with customizable difficulty and automatic answer key generation.

✨ Features
- 📄 Upload PDF via Webhook
- ✅ File size validation (default: max 5MB)
- 🧹 Automatic text cleaning
- 📏 Token length estimation & safety control
- 🎯 Customizable MCQ & Essay count
- 🧠 Difficulty selection (easy / medium / hard)
- 🌍 Language selection
- 📦 Structured JSON AI output
- 📝 Separate Exam PDF & Answer Key PDF
- 📲 Telegram delivery support (optional)
- 🔒 Parameter validation with structured error responses

🧠 What This Workflow Does
1. Accepts PDF upload via Webhook
2. Validates file size
3. Extracts and cleans text content
4. Estimates text length to prevent token overflow
5. Validates required parameters: mcq_count, essay_count, difficulty, language
6. Sends structured prompt to OpenAI
7. Parses JSON response
8. Formats exam and answer key separately
9. Converts both into PDF
10. Sends results via Telegram or Webhook response

⚙️ Requirements

Accounts Required
- OpenAI account (API key required)
- Telegram Bot (optional)
- PDF Munk (API key required)

Environment
- n8n (self-hosted or cloud)
- Node version compatible with your n8n installation

🔑 Credentials Setup

1️⃣ OpenAI
- Add OpenAI credentials inside n8n
- Insert your API key
- Select preferred model (e.g., GPT-4o / GPT-4)

2️⃣ Telegram (Optional)
- Create a Telegram Bot via BotFather
- Insert the Bot Token into the Telegram node
- Add your Chat ID

🛠 Webhook Configuration

Method: POST
Content-Type: multipart/form-data

Required Parameters

| Parameter | Type | Required | Description |
|-------------|--------|----------|-------------|
| file | File | Yes | PDF document |
| mcq_count | Number | Yes | Number of multiple-choice questions |
| essay_count | Number | Yes | Number of essay questions |
| difficulty | String | Yes | easy / medium / hard |
| language | String | Yes | Output language |

📥 Example Request

```shell
curl -X POST https://your-n8n-domain/webhook/examforge \
  -F "file=@document.pdf" \
  -F "mcq_count=20" \
  -F "essay_count=5" \
  -F "difficulty=medium" \
  -F "language=Indonesian"
```
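The parameter-validation step described above can be sketched as an n8n Code-node style function. This is a hypothetical sketch: the field names mirror the webhook parameters, but the exact limits and error format in the workflow may differ.

```javascript
// Hypothetical sketch of the workflow's parameter-validation step.
// Field names mirror the webhook parameters; limits are assumptions.
function validateExamRequest(params) {
  const errors = [];
  const mcq = Number(params.mcq_count);
  const essay = Number(params.essay_count);

  if (!Number.isInteger(mcq) || mcq < 1) errors.push("mcq_count must be a positive integer");
  if (!Number.isInteger(essay) || essay < 0) errors.push("essay_count must be a non-negative integer");
  if (!["easy", "medium", "hard"].includes(params.difficulty)) {
    errors.push("difficulty must be easy, medium, or hard");
  }
  if (!params.language || !params.language.trim()) errors.push("language is required");

  // Structured error response, as the workflow returns on invalid input
  return errors.length ? { ok: false, errors } : { ok: true };
}
```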
by Stefan Joulien
Who this template is for

This workflow is designed for teams and businesses that receive invoices in Google Drive and want to automatically extract structured financial data without manual processing. It is ideal for finance teams, operators, and founders who want a simple way to turn invoices into usable data. No accounting software is required, and the workflow works with common invoice formats such as PDFs and images.

What this workflow does

This workflow monitors a Google Drive folder for newly uploaded invoices. When a file is detected, it uses AI to extract key invoice information such as issuer, date, total amount, taxes, currency, and description. The extracted data is automatically cleaned, structured, and stored in Google Sheets, creating a centralized and searchable invoice database.

How it works
1. The workflow starts when a new file is added to a Google Drive folder
2. Each file is processed individually and classified based on its type (PDF or image)
3. The file is then downloaded and analyzed using an AI model optimized for document or image understanding
4. Key invoice fields such as issuer, date, total amount, taxes, currency, and description are extracted and normalized into structured fields
5. The AI output is appended to a Google Sheets table — a short wait step ensures reliable sequential writes when multiple invoices are processed at the same time

How to set up
1. Select the Google Drive folder where invoices will be uploaded
2. Connect your OpenAI credentials for document and image analysis
3. Choose the Google Sheets file that will store the extracted invoice data
4. Activate the workflow and upload an invoice to test it

Requirements
- Google Drive account
- Google Sheets account
- OpenAI API credentials
- n8n instance (cloud or self-hosted)

How to customize the workflow

You can adjust the fields extracted from invoices, add validation rules, connect the data to accounting tools, or extend the workflow with reporting and notification steps.
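The clean-and-structure step can be sketched as a small normalization function in n8n Code-node style. The field names (issuer, date, total, taxes, currency, description) come from the template description; the number-parsing heuristics are assumptions, not the template's actual logic.

```javascript
// Hypothetical sketch of normalizing AI-extracted invoice fields
// before appending them to Google Sheets. Parsing rules are assumptions.
function normalizeInvoice(raw) {
  const toNumber = (v) => {
    // Accept European "1.234,56", US "1,234.56", or plain numbers
    const s = String(v).replace(/[^\d.,-]/g, "");
    const normalized = /,\d{2}$/.test(s)
      ? s.replace(/\./g, "").replace(",", ".")
      : s.replace(/,/g, "");
    return Number(normalized);
  };
  return {
    issuer: String(raw.issuer || "").trim(),
    date: new Date(raw.date).toISOString().slice(0, 10), // YYYY-MM-DD
    total: toNumber(raw.total),
    taxes: toNumber(raw.taxes),
    currency: String(raw.currency || "").toUpperCase().trim(),
    description: String(raw.description || "").trim(),
  };
}
```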
by vinci-king-01
Public Transport Schedule & Delay Tracker with Microsoft Teams and Dropbox

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes public transport websites or apps for real-time schedules and service alerts, then pushes concise delay notifications to Microsoft Teams while archiving full-detail JSON snapshots in Dropbox. Ideal for commuters and travel coordinators, it keeps riders informed and maintains a historical log of disruptions.

Pre-conditions/Requirements

Prerequisites
- n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed
- Microsoft Teams incoming webhook configured
- Dropbox account with an app token created
- Public transit data source (website or API) that is legally scrapable or offers open data

Required Credentials
- **ScrapeGraphAI API Key** – enables web scraping
- **Microsoft Teams Webhook URL** – posts messages into a channel
- **Dropbox Access Token** – saves JSON files to Dropbox

Specific Setup Requirements

| Item | Example | Notes |
|------|---------|-------|
| Transit URL(s) | https://mycitytransit.com/line/42 | Must return the schedule or service alert data you need |
| Polling Interval | 5 min | Adjust via Cron node or external trigger |
| Teams Channel | #commuter-updates | Create an incoming webhook in channel settings |

How it works

Key Steps:
- **Webhook Trigger**: Starts the workflow (can be replaced with Cron for polling).
- **Set Node**: Stores target route IDs, URLs, or API endpoints.
- **SplitInBatches**: Processes multiple routes one after another to avoid rate limits.
- **ScrapeGraphAI**: Scrapes each route page/API and returns structured schedule & alert data.
- **Code Node (Normalize)**: Cleans & normalizes scraped fields (e.g., converts times to ISO).
- **If Node (Delay Detected?)**: Compares live data vs. expected timetable to detect delays.
- **Merge Node**: Combines route metadata with delay information.
- **Microsoft Teams Node**: Sends an alert message and rich card to the selected Teams channel.
- **Dropbox Node**: Saves the full JSON snapshot to a dated folder for historical reference.
- **StickyNote**: Documents the mapping between scraped fields and the final JSON structure.

Set up steps

Setup Time: 15-25 minutes
1. Clone or import the JSON workflow into your n8n instance.
2. Install the ScrapeGraphAI community node if you haven’t already (Settings → Community Nodes).
3. Open the Set node and enter your target routes or API endpoints (array of URLs/IDs).
4. Configure ScrapeGraphAI: add your API key in the node’s credentials section, and define CSS selectors or API fields inside the node parameters.
5. Add Microsoft Teams credentials: paste your channel’s incoming webhook URL into the Microsoft Teams node, and customize the message template (e.g., include route name, delay minutes, reason).
6. Add Dropbox credentials: provide the access token and designate a folder path (e.g., /TransitLogs/).
7. Customize the If node logic to match your delay threshold (e.g., ≥5 min).
8. Activate the workflow and trigger via the webhook URL, or add a Cron node (every 5 min).

Node Descriptions

Core Workflow Nodes:
- **Webhook** – External trigger for on-demand checks or recurring scheduler.
- **Set** – Defines static or dynamic variables such as route list and thresholds.
- **SplitInBatches** – Iterates through each route to control request volume.
- **ScrapeGraphAI** – Extracts live schedule and alert data from transit websites/APIs.
- **Code (Normalize)** – Formats scraped data, merges dates, and calculates delay minutes.
- **If (Delay Detected?)** – Branches the flow based on the presence of delays.
- **Merge** – Re-assembles metadata with computed delay results.
- **Microsoft Teams** – Sends formatted notifications to Teams channels.
- **Dropbox** – Archives complete JSON payloads for auditing and analytics.
- **StickyNote** – Provides inline documentation for maintainers.

Data Flow:

```
Webhook → Set → SplitInBatches → ScrapeGraphAI → Code (Normalize) → If (Delay Detected?)
  ├─ true → Merge → Microsoft Teams → Dropbox
  └─ false → Dropbox
```

Customization Examples

Change to Slack instead of Teams:

```javascript
// Replace the Microsoft Teams node with a Slack node
{
  "text": `🚊 ${$json.route} is delayed by ${$json.delay} minutes.`,
  "channel": "#commuter-updates"
}
```

Filter only major delays (>10 min):

```javascript
// In the If node, use:
return $json.delay >= 10;
```

Data Output Format

The workflow outputs structured JSON data:

```json
{
  "route": "Line 42",
  "expected_departure": "2024-04-22T14:05:00Z",
  "actual_departure": "2024-04-22T14:17:00Z",
  "delay": 12,
  "status": "delayed",
  "reason": "Signal failure at Main Station",
  "scraped_at": "2024-04-22T13:58:22Z",
  "source_url": "https://mycitytransit.com/line/42"
}
```

Troubleshooting

Common Issues
- ScrapeGraphAI returns empty data – Verify CSS selectors/API fields match the current website markup; update selectors after site redesigns.
- Teams messages not arriving – Ensure the Teams webhook URL is correct and the incoming webhook is still enabled.
- Dropbox writes fail – Check the folder path, token scopes (files.content.write), and available storage quota.

Performance Tips
- Limit SplitInBatches to 5-10 routes per run to avoid IP blocking.
- Cache unchanged schedules locally and fetch only alert pages for faster runs.

Pro Tips:
- Use environment variables for API keys & webhook URLs to keep credentials secure.
- Attach a Cron node set to off-peak hours (e.g., 4 AM) for daily full-schedule backups.
- Add a Grafana dashboard that reads the Dropbox archive for long-term delay analytics.
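The delay calculation performed in the Code (Normalize) node can be sketched as follows. This is a hypothetical sketch: the field names follow the workflow's documented JSON output, but the actual node code may differ.

```javascript
// Hypothetical sketch of the Code (Normalize) node's delay calculation.
// Field names follow the workflow's documented JSON output format.
function computeDelay(expectedIso, actualIso) {
  const expected = new Date(expectedIso);
  const actual = new Date(actualIso);
  // Difference in minutes, rounded to the nearest whole minute
  const delayMinutes = Math.round((actual - expected) / 60000);
  return {
    delay: Math.max(0, delayMinutes),
    status: delayMinutes > 0 ? "delayed" : "on_time",
  };
}
```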
by JJ Tham
Struggling with inaccurate Meta Ads tracking due to iOS 14+ and ad blockers? 📉 This workflow is your solution. It provides a robust, server-side endpoint to reliably send conversion events directly to the Meta Conversions API (CAPI). By bypassing the browser, you can achieve more accurate ad attribution and optimize your campaigns with better data. This template handles all the required data normalization, hashing, and formatting, so you can set up server-side tracking in minutes. ⚙️ How it works This workflow provides a webhook URL that you can send your conversion data to (e.g., from a web form, CRM, or backend). Once it receives the data, it: Sanitizes User Data: Cleans and normalizes PII like email and phone numbers. Hashes PII: Securely hashes the user data using SHA-256 to meet Meta's privacy requirements. Formats the Payload: Assembles all the data, including click IDs (fbc, fbp) and user info, into the exact format required by the Meta CAPI. Sends the Event: Makes a direct, server-to-server call to Meta, reliably logging your conversion event. 👥 Who’s it for? Performance Marketers: Improve ad performance and ROAS with more accurate conversion data. Lead Generation Businesses: Reliably track form submissions as conversions. E-commerce Stores: Send purchase events from your backend to ensure nothing gets missed. Developers: A ready-to-use template for implementing server-side tracking without writing custom code from scratch. 🛠️ How to set up Setup is straightforward. You'll need your Meta Pixel ID and a CAPI Access Token. For a complete walkthrough, check out the tutorial video for this workflow on YouTube: https://youtu.be/_fdMPIYEvFM The basic steps are to copy the webhook URL, configure your form or backend to send the correct data payload, and add your Meta Pixel ID and Access Token to the final HTTP Request node. 👉 For a detailed, step-by-step guide, please refer to the yellow sticky note inside the workflow.
by Daniel
Generate stunning 10-second AI-crafted nature stock videos on autopilot and deliver them straight to your Telegram chat—perfect for content creators seeking effortless inspiration without the hassle of manual prompting or editing. 📋 What This Template Does This workflow automates the creation and delivery of high-quality, 10-second nature-themed videos using AI generation tools. Triggered on a schedule, it leverages Google Gemini to craft precise video prompts, submits them to the Kie AI API for video synthesis, polls for completion, downloads the result, and sends it via Telegram. Dynamically generates varied nature scenes (e.g., misty forests, ocean sunsets) with professional cinematography specs. Handles asynchronous video processing with webhook callbacks for efficiency. Ensures commercial-ready outputs: watermark-free, portrait aspect, natural ambient audio. Customizable schedule for daily/weekly bursts of creative B-roll footage. 🔧 Prerequisites n8n instance with HTTP Request and LangChain nodes enabled. Google Gemini API access for prompt generation. Kie AI API account for video creation (supports Sora-like text-to-video models). Telegram Bot setup for message delivery. 🔑 Required Credentials Google Gemini API Setup Go to aistudio.google.com → Create API key. Ensure the key has access to Gemini 1.5 Flash or Pro models. Add to n8n as "Google Gemini API" credential type. Kie AI API Setup Sign up at kie.ai → Dashboard → API Keys. Generate a new API key with video generation permissions (sora-2-text-to-video model). Add to n8n as "HTTP Header Auth" credential (header: Authorization, value: Bearer [Your API Key]). Telegram Bot API Setup Create a bot via @BotFather on Telegram → Get API token. Note your target chat ID (use @userinfobot for personal chats). Add to n8n as "Telegram API" credential type. ⚙️ Configuration Steps Import the workflow JSON into your n8n instance. Assign the required credentials to the Gemini, Kie AI, and Telegram nodes. 
Update the Telegram node's chat ID with your target chat (e.g., personal or group). Adjust the Schedule Trigger interval (e.g., daily at 9 AM) via node settings. Activate the workflow and monitor the first execution for video delivery. 🎯 Use Cases Content creators automating daily social media B-roll: Generate fresh nature clips for Instagram Reels or YouTube intros without filming. Marketing teams sourcing versatile stock footage: Quickly produce themed videos for campaigns, like serene landscapes for wellness brands. Educational bots for classrooms: Deliver randomized nature videos to Telegram groups for biology lessons on ecosystems and wildlife. Personal productivity: Schedule motivational nature escapes to your chat for remote workers needing quick digital breaks. ⚠️ Troubleshooting Video generation fails with quota error: Check Kie AI dashboard for usage limits and upgrade plan if needed. Prompt output too generic: Tweak the Video Prompting Agent's system prompt for more specificity (e.g., add seasonal themes). Telegram send error: Verify bot token and chat ID; test with a simple message node first. Webhook callback timeout: Ensure n8n production URL is publicly accessible; use ngrok for local testing.
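The poll-for-completion step can be sketched as a decision function. This is a heavily hedged sketch: the status names, response fields, and retry interval are assumptions, not the actual Kie AI response schema.

```javascript
// Hypothetical sketch of the completion-polling decision for the video task.
// Status names and response shape are assumptions, not the real Kie AI schema.
function nextAction(task, attempt, maxAttempts = 30) {
  if (task.status === "completed" && task.videoUrl) {
    return { action: "download", url: task.videoUrl };
  }
  if (task.status === "failed") {
    return { action: "abort", reason: task.error || "generation failed" };
  }
  if (attempt >= maxAttempts) {
    return { action: "abort", reason: "timeout" };
  }
  // Still processing: wait before the next poll
  return { action: "wait", retryInSeconds: 15 };
}
```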
by Trung Tran
EC2 Lifecycle Manager with AI Chat Agent (Describe, Start, Stop, Reboot, Terminate)

Watch the demo video below:

Who’s it for

This workflow is designed for DevOps engineers and cloud administrators who want to manage AWS EC2 instances directly from chat platforms (Slack, Teams, Telegram, etc.) using natural language. It helps engineers quickly check EC2 instance status, start/stop servers, reboot instances, or terminate unused machines — without logging into the AWS console.

How it works / What it does
1. A chat message (command) from the engineer triggers the workflow.
2. The EC2 Manager AI Agent interprets the request using the AI chat model and memory.
3. The agent decides which AWS EC2 action to perform:
   - DescribeInstances → List or check status of EC2 instances.
   - StartInstances → Boot up stopped instances.
   - StopInstances → Gracefully shut down running instances.
   - RebootInstances → Restart instances without stopping them.
   - TerminateInstances → Permanently delete instances.
4. The selected tool (API call) is executed via an HTTP Request to the AWS EC2 endpoint.
5. The agent replies back in chat with the result (confirmation, instance status, errors, etc.).

How to set up
1. Add Chat Trigger: connect your chatbot platform (Slack/Telegram/Teams) to n8n and configure the “When chat message received” node.
2. Configure OpenAI Chat Model: select a supported LLM (GPT-4, GPT-4.1, GPT-5, etc.) and add system and user prompts to define behavior (EC2 assistant role).
3. Add Memory: use Simple Memory to keep track of context (e.g., instance IDs, region, last action).
4. Connect EC2 API Tools: create HTTP Request nodes for Describe Instances, Start Instance, Stop Instance, Reboot Instance, and Terminate Instance. Use AWS credentials with Signature V4 authentication. API endpoint: https://ec2.{region}.amazonaws.com/
5. Link Tools to Agent: attach all EC2 tools to the EC2 Manager AI Agent node and ensure the agent can choose which tool to call based on user input.

Requirements
- **n8n instance** (self-hosted or cloud).
- **Chat platform integration** (Slack, Teams, or Telegram).
- **OpenAI (or other LLM) credentials**.
- **AWS IAM user with EC2 permissions**:
  - ec2:DescribeInstances
  - ec2:StartInstances
  - ec2:StopInstances
  - ec2:RebootInstances
  - ec2:TerminateInstances
- **AWS region configured** for API calls.

How to customize the workflow
- **Add safety checks**: Require explicit confirmation before running Stop or Terminate.
- **Region flexibility**: Add support for multi-region management by letting the user specify the region in chat.
- **Tag-based filters**: Extend DescribeInstances to return only instances matching specific tags (e.g., env=dev).
- **Cost-saving automation**: Add scheduled rules to automatically stop instances outside working hours.
- **Enhanced chatbot UX**: Format responses into tables or rich messages in Slack/Teams.
- **Audit logging**: Store each action (who/what/when) into a database or Google Sheets for compliance.
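The HTTP Request tools call the EC2 Query API, which takes an `Action` parameter plus numbered `InstanceId.N` list parameters. A sketch of building that request body (SigV4 signing is handled by n8n's AWS credential, so it is omitted here):

```javascript
// Sketch of building an EC2 Query API request for the HTTP Request tool nodes.
// Version 2016-11-15 is the EC2 Query API version; SigV4 signing is handled
// by n8n's AWS credential and is not shown here.
function buildEc2Request(action, instanceIds, region) {
  const params = new URLSearchParams({ Action: action, Version: "2016-11-15" });
  // EC2 expects numbered list parameters: InstanceId.1, InstanceId.2, ...
  instanceIds.forEach((id, i) => params.set(`InstanceId.${i + 1}`, id));
  return {
    url: `https://ec2.${region}.amazonaws.com/`,
    method: "POST",
    body: params.toString(),
  };
}
```

For example, a "start instances" tool would call `buildEc2Request("StartInstances", ["i-0abc123"], "us-east-1")`.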
by Cheng Siong Chin
How It Works

This workflow automates the complex process of managing lawsuit responses through intelligent task validation and multi-authority coordination. Designed for legal departments, compliance teams, and government agencies handling litigation matters, it solves the critical challenge of ensuring timely, accurate responses while maintaining proper oversight across multiple organizational levels.

The system receives lawsuit notifications, validates critical information, and intelligently routes tasks based on authority levels. It orchestrates human oversight at strategic checkpoints, merges authority paths for comprehensive review, and generates detailed orchestration reports. By automating document preparation and multi-trail logging, it ensures accountability while reducing manual coordination overhead. The workflow seamlessly integrates validation results, manages execution plans, and prepares final responses through systematic processes, ultimately delivering compliant lawsuit responses through secure multi-trail communication channels.

Setup Steps
1. Configure the Workflow Execution Webhook trigger endpoint
2. Connect the Workflow Configuration node with workflow parameters
3. Set up the Prepare Request Data node with lawsuit data structure mapping
4. Configure the Fetch Authority Rules node with OpenAI/Nvidia API credentials
5. Connect the Check Validation Result node with boundary enforcement parameters
6. Configure the Human Checkpoint nodes (High/Medium Authority) with approval routing
7. Set up the Merge Authority Paths node for consolidation logic
8. Configure the Orchestration Export node with Google Sheets credentials

Prerequisites
- OpenAI or Nvidia API credentials for validation processing
- Google Sheets access for orchestration logging

Use Cases
- Government litigation departments managing multi-level approval workflows

Customization
- Modify authority routing logic for organizational hierarchies

Benefits
- Reduces response coordination time by 70%, eliminates manual routing errors
by EoCi - Mr.Eo
🎯 What This Does

This workflow automatically monitors a specific Google Drive folder for new images. When you drop a file in, it uses Google's Gemini AI to analyze the image, generate a creative title, and write a high-engagement description. It then posts the image and text to a Discord channel and organizes your Google Drive by renaming the file and moving it to a "Processed" folder.

🔄 How It Works
1. **Watch:** The workflow detects when a new image file is uploaded to a specific Google Drive folder.
2. **Analyze:** It downloads the image and sends it to a Google Gemini AI Agent to identify the "hook" and generate technical/marketing copy.
3. **Format:** The AI returns a structured title, description, and a new optimized filename.
4. **Publish:** The workflow posts the image and the AI-generated caption directly to your Discord channel as a new thread.
5. **Organize:** Finally, it renames the original file in Google Drive and moves it to a separate "Processed" folder to keep your workspace clean.

🚀 Setup Requirements
- **n8n Version:** Latest stable release recommended.
- **Google Cloud Console Project:** With the **Google Drive API** enabled.
- **Google Gemini API Key:** For the AI generation.
- **Discord Application:** A Bot Token with permissions to send messages/create threads in your server.
- **Estimated Setup Time:** ~15 minutes.

Set up steps
1. Configure Google Drive Credentials: Set up a project in Google Cloud Console and enable the Google Drive API. Create OAuth 2.0 credentials and add them to the Google Drive Trigger and Google Drive nodes in n8n.
2. Prepare Drive Folders: Create a folder in Google Drive for Input (where you drop files) and copy the Folder ID from the URL. Create a folder for Processed files and copy this Folder ID as well. Paste the Input Folder ID into the Google Drive Trigger node, and update the processed_folder_id value in the "Get File & Set Channel" (Set) node.
3. Configure AI Agent: Get your API Key from Google AI Studio, then add a new credential for Google PaLM API in the Chat Model node.
4. Set up Discord Bot: Go to the Discord Developer Portal, create a new Application/Bot, copy the Bot Token, and invite the bot to your server. Enable Developer Mode in your Discord User Settings so you can right-click a channel and "Copy Channel ID", then update the channel_id in the "Get File & Set Channel" node. Open the "Post To Discord Channel" (HTTP Request) node; under Authentication, select "Predefined Credential Type" → "Discord Bot API" and paste your token.
5. Test the Workflow: Click "Test Workflow" in n8n, upload an image to your Google Drive Input folder, and watch the execution. Check Discord for the new post and Drive to see the file move.

Nodes Used
- **Google Drive Trigger:** Watches for new content.
- **Google Drive:** Downloads, updates (renames), and moves files.
- **AI Agent (LangChain):** Orchestrates the analysis.
- **Google Gemini Chat Model:** Generates the creative text.
- **Structured Output Parser:** Ensures the AI replies in usable JSON.
- **HTTP Request:** Custom API call to Discord for advanced thread creation.
- **Set:** Manages variables and folder IDs.

Customization Guide
- **Change the Persona:** Edit the "System Message" in the **AI Agent** node to change the tone. Want a pirate narrator? Or a strictly professional corporate tone? Change it there!

🙏 Thank You for Trying This Workflow!

Your time and trust mean a lot! I truly appreciate you using this template. Your feedback shapes future updates:
- 💡 Suggestions for improvement
- 🆕 Ideas for new features
- 📝 Requests for other automation workflows

Please share your thoughts! Every idea helps shape the next update.

🙋‍♂️ Join & Follow For More Free Templates!

Discord Community: We Work Together. Get help, share builds, collaborate! Daily tips, tutorials, and updates.

Thank you again for being part of this journey! 🚀 Together, we automate better! 🤖✨
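The Structured Output Parser's job (ensuring the AI replies in usable JSON) can be sketched as a validation function. This is a hypothetical sketch: the title/description/filename fields come from the template description, but the exact schema the workflow enforces may differ.

```javascript
// Hypothetical sketch of validating the AI's structured JSON reply
// (title / description / filename). The actual parser schema may differ.
function parseAiReply(text) {
  // Strip a markdown code fence if the model wrapped its JSON in one
  const cleaned = text
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "");
  const data = JSON.parse(cleaned);
  for (const key of ["title", "description", "filename"]) {
    if (typeof data[key] !== "string" || !data[key].trim()) {
      throw new Error(`Missing required field: ${key}`);
    }
  }
  return data;
}
```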
by David Olusola
🧹 Auto-Clean CSV Uploads Before Import

This workflow automatically cleans, validates, and standardizes any CSV file you upload. Perfect for preparing customer lists, sales leads, product catalogs, or any messy datasets before pushing them into Google Sheets, Google Drive, or other systems.

⚙️ How It Works
1. CSV Upload (Webhook): Upload your CSV via webhook (supports form-data, base64, or binary file upload). Handles files up to ~10MB comfortably.
2. Extract & Parse: Reads raw CSV content, validates file structure and headers, and detects and normalizes column names (e.g. First Name → first_name).
3. Clean & Standardize Data: Removes duplicate rows (based on email or all fields) and deletes empty rows. Standardizes fields:
   - Emails → lowercased, validated format.
   - Phone numbers → normalized (xxx) xxx-xxxx or +1 format.
   - Names → capitalized (John Smith).
   - Text → trims spaces & fixes inconsistent spacing.
   Assigns each row a data quality score so you know how “clean” it is.
4. Generate Cleaned CSV: Produces a cleaned CSV file with the same headers, saves it to Google Drive (optional), ready for immediate import into Sheets or any app.
5. Google Sheets Integration (Optional): Clears out an existing sheet and re-imports the cleaned rows. Perfect for always keeping your “master sheet” clean.
6. Final Report: Logs a processing summary (rows before & after cleaning, duplicates removed, low-quality rows removed, average data quality score) and outputs a neat summary for auditing.

🛠️ Setup Steps
1. Upload Method: Use the webhook endpoint generated by the CSV Upload Webhook node. Send CSV via binary upload, base64 encoding, or JSON payload with csv_content.
2. Google Drive (Optional): Connect your Drive OAuth credentials and replace YOUR_DRIVE_FOLDER_ID with your target folder.
3. Google Sheets (Optional): Connect Google Sheets OAuth and replace YOUR_GOOGLE_SHEET_ID with your target sheet ID.
4. Customize Cleaning Rules: Adjust the Clean & Standardize Data code node if you want different cleaning thresholds (default = 30% minimum data quality).
📊 Example Cleaning Report

Input file: raw_leads.csv
Rows before: 2,450
Rows after cleaning: 1,982
Duplicates removed: 210
Low-quality rows removed: 258
Avg. data quality: 87%

✅ Clean CSV saved to Drive
✅ Clean data imported into Google Sheets
✅ Full processing report generated

🎯 Why Use This?
- Stop wasting time manually cleaning CSVs.
- Ensure high-quality, import-ready data every time.
- Works with any dataset: leads, contacts, e-commerce exports, logs, surveys.
- Completely free: a must-have utility in your automation toolbox.

⚡ Upload dirty CSV → Get clean, validated, standardized data instantly!
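The cleaning rules described above can be sketched in n8n Code-node style. This is a minimal sketch under assumptions: the US-centric phone format and the simple quality score mirror the template description, not its exact code.

```javascript
// Hypothetical sketch of the Clean & Standardize Data rules.
// Formats and the quality score are assumptions based on the description.
function cleanRow(row) {
  const email = String(row.email || "").trim().toLowerCase();
  const emailValid = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);

  // Normalize US-style phone numbers to (xxx) xxx-xxxx
  const digits = String(row.phone || "").replace(/\D/g, "").replace(/^1/, "");
  const phone = digits.length === 10
    ? `(${digits.slice(0, 3)}) ${digits.slice(3, 6)}-${digits.slice(6)}`
    : "";

  // Capitalize each word of the name, collapsing extra spaces
  const name = String(row.name || "").trim().replace(/\s+/g, " ")
    .replace(/\b\w/g, (c) => c.toUpperCase());

  // Simple data quality score: share of fields that are usable
  const fields = [emailValid, phone !== "", name !== ""];
  const quality = Math.round((fields.filter(Boolean).length / fields.length) * 100);

  return { name, email: emailValid ? email : "", phone, quality };
}
```

Rows scoring below the workflow's threshold (default 30%) would then be dropped.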
by Rohit Dabra
Shopify MCP AI Agent Workflow for n8n

Overview

This n8n workflow showcases a full-featured AI-powered assistant connected to a Shopify store through a custom MCP (Model Context Protocol) Server toolkit. It empowers users to automate comprehensive Shopify store management by leveraging AI to interact conversationally with their data and operations. The workflow can create, fetch, search, update, and delete Shopify products and orders, all triggered via simple chat messages, making day-to-day store operations frictionless and highly efficient.

Core capabilities include:
- Product and order management (CRUD) via chat commands.
- Smart retrieval: AI proactively fetches details instead of asking repeated questions.
- Contextual memory: AI uses n8n memory to provide context-aware, fluent responses.
- End-to-end automation: Connects Shopify, OpenAI, and n8n’s automation logic for seamless workflows.

This solution is ideal for Shopify merchants, agencies, and developers aiming to reduce manual overhead and enable conversational, AI-powered commerce automation in their operations.

🎬 Watch the Demo Video on YouTube

Step-by-Step Setup Guide

Follow these steps to import and configure the Shopify MCP AI Agent workflow in n8n:

1. Import the Workflow File
- Download the workflow file from this Creator Hub listing.
- In your n8n instance, go to Workflows > Import from File and upload the JSON.

2. Prepare Shopify Access
- Log in to your Shopify admin.
- Create a Custom App or use an existing app and retrieve the Admin API Access Token.
- Storefront access: Ensure your app has the relevant permissions for Products, Orders, Discounts, and Store Settings.

3. Set Up Credentials in n8n
- In n8n, navigate to Credentials and add a new Shopify API credential using your Access Token.
- Name it something memorable (e.g., Shopify Access Token account) to match the credential used in the workflow nodes.
4. Configure the MCP Server Connection
- Make sure your MCP Server is running and accessible with API endpoints for product/order management.
- Update any relevant connection endpoints in the workflow if you run your MCP Server locally or in a different location.

5. Connect OpenAI or Another LLM Provider
- Provide your API key for OpenAI GPT or a compatible model.
- Link the credential to the OpenAI Chat Model node (replace with other providers if required).

6. (Optional) Customize for Your Needs
- Tweak node logic, add new triggers, or extend memory features as required.
- Add, remove, or restrain the AI’s capabilities to fit your operational needs.
- Configure chat triggers for more personalized workflows.

7. Testing
- Use the “When chat message received” trigger or send HTTP requests to the workflow’s endpoint.
- Example: “Create an order for Jane Doe, 3 Black T-shirts” or “Show today’s fulfilled orders”.
- The workflow and AI Agent will handle context, fetch/store data, and reply accordingly.

8. Ready to Automate!
- Begin leveraging conversational automation to manage your Shopify store.
- For additional tips, consult the workflow’s internal documentation and n8n’s official guides.

Additional Notes
- This template includes all core Shopify product and order operations.
- The AI agent auto-resolves context, making routine admin tasks simple and quick.
- Extend or fork the workflow to suit niche scenarios: discounts, analytics, and more.
- A visual thumbnail and schematic are included for easy reference.
by Alex Pekler
What this template does Instantly reach new leads on WhatsApp when they submit a form (Typeform, JotForm, Google Forms, or any webhook-enabled form) using MoltFlow (https://molt.waiflow.app). Leads are also logged to Google Sheets for CRM tracking. How it works A form submission triggers this webhook Contact info is extracted (name, phone, interest) A personalized WhatsApp message is sent via MoltFlow The lead is logged to Google Sheets for follow-up tracking Set up steps Create a MoltFlow account (https://molt.waiflow.app) and connect your WhatsApp number Generate an API key in MoltFlow (Sessions page, API Keys tab) Activate this workflow in n8n and copy the webhook URL Configure your form tool to POST submissions to this webhook URL Map your form field names in the Parse Form Data code node (name, phone, email, interest) Set YOUR_SESSION_ID in the Parse Form Data code node Add your MoltFlow API key as an HTTP Header Auth credential (Header Name: X-API-Key) Optional: Connect Google Sheets to log leads automatically Prerequisites MoltFlow account with an active WhatsApp session Any form tool that supports webhooks (Typeform, JotForm, Google Forms, Tally, etc.) Optional: Google Sheets for lead tracking
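The Parse Form Data code node mentioned in the setup steps can be sketched as follows. This is a hypothetical sketch: the incoming field names depend entirely on your form tool, and YOUR_SESSION_ID is the placeholder you replace per the setup steps.

```javascript
// Hypothetical sketch of the Parse Form Data code node. Incoming field
// names depend on your form tool; YOUR_SESSION_ID is the placeholder
// mentioned in the setup steps and must be replaced.
function parseFormData(body) {
  // Keep digits and a leading "+" so international numbers survive
  const phone = String(body.phone || "").replace(/[^\d+]/g, "");
  return {
    sessionId: "YOUR_SESSION_ID",
    name: String(body.name || "there").trim(),
    phone,
    email: String(body.email || "").trim().toLowerCase(),
    interest: String(body.interest || "your request").trim(),
  };
}

// Personalized WhatsApp message used in the send step
function buildMessage(lead) {
  return `Hi ${lead.name}! Thanks for your interest in ${lead.interest}. We'll be in touch shortly.`;
}
```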