by RedOne
This workflow is designed for e-commerce store owners, operations managers, and developers who use Shopify as their e-commerce platform and want an automated way to track and analyze their order data. It is particularly useful for businesses that:

- Need a centralized view of all Shopify orders
- Want to analyze order trends without logging into Shopify
- Need to share order data with team members who don't have Shopify access
- Want to build custom reports based on order information

What Problem Is This Workflow Solving?

While Shopify provides excellent order management within its platform, many businesses need their order data available in other systems for various purposes:

- **Data accessibility**: Not everyone in your organization may have access to Shopify's admin interface
- **Custom reporting**: Google Sheets allows for flexible analysis and report creation
- **Data integration**: Having orders in Google Sheets makes it easier to combine with other business data
- **Backup**: Creates an additional backup of your critical order information

What This Workflow Does

This n8n workflow creates an automated bridge between your Shopify store and Google Sheets:

- Listens for new order notifications from your Shopify store via webhooks
- Processes the incoming order data and transforms it into a structured format (see the sketch after this section)
- Stores each new order in a dedicated Google Sheets spreadsheet
- Sends real-time notifications to Telegram when new orders are received or errors occur

Setup

Create a Google Sheet

1. Create a new Google Sheet to store your orders.
2. Add a sheet named "orders" with the following columns: orderId, orderNumber, created_at, processed, processed_at, json, customer, shippingAddress, lineItems, totalPrice, currency.

Set Up Telegram Bot

1. Create a Telegram bot using BotFather (send /newbot to @BotFather).
2. Save your bot token for use in n8n credentials.
3. Start a chat with your bot and get your chat ID (you can use @userinfobot).

Configure the Workflow

1. Set your Google Sheet ID in the "Edit Variables" node.
2. Enter your Telegram chat ID in the "Edit Variables" node.
3. Set up your Telegram API credentials in n8n.

Configure Shopify Webhook

1. In your Shopify admin, go to: Settings > Notifications > Webhooks.
2. Create a new webhook for "Order creation".
3. Set the URL to your n8n webhook URL (from the "Receive New Shopify Order" node).
4. Set the format to JSON.

How to Customize This Workflow to Your Needs

- **Additional data**: Modify the "Transform Order Data to Standard Format" function to extract more Shopify data
- **Multiple sheets**: Duplicate the Google Sheets node to store different aspects of orders in separate sheets
- **Telegram messages**: Customize the text in Telegram nodes to include more details or rich formatting
- **Data processing**: Add nodes to perform calculations or transformations on order data
- **Additional notifications**: Add more channels like Slack, Discord, or SMS
- **Integrations**: Extend the workflow to send order data to other systems like CRMs, ERPs, or accounting software

Final Notes

This workflow serves as a foundation that you can build upon to create a comprehensive order management system tailored to your specific business needs.
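To illustrate the transformation step, here is a minimal Python sketch of how a Shopify "orders/create" webhook payload could be flattened into the sheet's columns. The field names follow Shopify's standard order webhook payload; the actual workflow performs this mapping inside the "Transform Order Data to Standard Format" node, so treat this only as a reference for the shape of the data.

```python
import json
from datetime import datetime, timezone

def order_to_row(order: dict) -> dict:
    """Flatten a Shopify orders/create webhook payload into one sheet row."""
    customer = order.get("customer") or {}
    shipping = order.get("shipping_address") or {}
    items = order.get("line_items") or []
    return {
        "orderId": order.get("id"),
        "orderNumber": order.get("order_number"),
        "created_at": order.get("created_at"),
        "processed": True,
        "processed_at": datetime.now(timezone.utc).isoformat(),
        "json": json.dumps(order),  # keep the raw payload for later analysis
        "customer": f"{customer.get('first_name', '')} {customer.get('last_name', '')}".strip(),
        "shippingAddress": ", ".join(
            filter(None, [shipping.get("address1"), shipping.get("city"), shipping.get("country")])
        ),
        "lineItems": "; ".join(f"{i.get('quantity')}x {i.get('title')}" for i in items),
        "totalPrice": order.get("total_price"),
        "currency": order.get("currency"),
    }
```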
by Oneclick AI Squad
This n8n template demonstrates how to create a comprehensive marketing automation and booking system that combines Excel-based lead management with voice-powered customer interactions. The system uses VAPI for voice communication and Excel/Google Sheets for data management, making it ideal for restaurants seeking to automate marketing campaigns and streamline booking processes through intelligent voice AI technology.

Good to know

- Voice processing requires an active VAPI subscription with per-minute billing.
- Excel operations are handled in real time with immediate data synchronization.
- The system can handle multiple simultaneous voice calls and lead processing.
- All customer data is stored securely in Excel with proper formatting and validation.
- Marketing campaigns can be scheduled and automated based on lead data.

How it works

Lead Management & Marketing Automation Workflow

1. New Lead Trigger: Excel triggers capture new leads when customers are added to the lead management spreadsheet.
2. Lead Preparation: The system processes and formats lead data, extracting relevant details (name, phone, preferences, booking history).
3. Campaign Loop: An automated loop processes multiple leads for batch marketing campaigns.
4. Voice Marketing Call: VAPI initiates personalized voice calls to leads with tailored restaurant offers and booking invitations.
5. Response Tracking: All call results and lead responses are logged back to Excel for campaign analysis.

Booking & Order Processing Workflow

1. Voice Response Capture: A VAPI webhook triggers when customers respond to marketing calls or make direct booking requests.
2. Response Storage: Customer responses and booking preferences are immediately saved to Excel sheets.
3. Information Extraction: The system processes natural-language responses to extract booking details (party size, preferred times, special requests) - a simple parsing sketch follows at the end of this section.
4. Calendar Integration: Booking information is automatically scheduled in restaurant management systems.
5. Confirmation Loop: Automated follow-up voice messages confirm bookings and provide additional restaurant information.

Excel Sheet Structure

Lead Management Sheet

| Column | Description |
|--------|-------------|
| lead_id | Unique identifier for each lead |
| customer_name | Customer's full name |
| phone_number | Primary contact number |
| email | Customer email address |
| last_visit_date | Date of last restaurant visit |
| preferred_cuisine | Customer's food preferences |
| party_size_typical | Usual number of guests |
| preferred_time_slot | Preferred dining times |
| marketing_consent | Permission for marketing calls |
| lead_source | How the customer was acquired |
| lead_status | Current status (new, contacted, converted, inactive) |
| last_contact_date | Date of last marketing contact |
| notes | Additional customer information |
| created_at | Lead creation timestamp |

Booking Responses Sheet

| Column | Description |
|--------|-------------|
| response_id | Unique response identifier |
| customer_name | Customer's name from call |
| phone_number | Contact number used for call |
| booking_requested | Whether customer wants to book |
| party_size | Number of guests requested |
| preferred_date | Requested booking date |
| preferred_time | Requested time slot |
| special_requests | Dietary restrictions or special occasions |
| call_duration | Length of VAPI call |
| call_outcome | Result of marketing call |
| follow_up_needed | Whether additional contact is required |
| booking_confirmed | Final booking confirmation status |
| created_at | Response timestamp |

Campaign Tracking Sheet

| Column | Description |
|--------|-------------|
| campaign_id | Unique campaign identifier |
| campaign_name | Descriptive campaign title |
| target_audience | Lead segments targeted |
| total_leads | Number of leads contacted |
| successful_calls | Calls that connected |
| bookings_generated | Number of bookings from campaign |
| conversion_rate | Percentage of leads converted |
| campaign_cost | Total VAPI usage cost |
| roi | Return on investment |
| start_date | Campaign launch date |
| end_date | Campaign completion date |
| status | Campaign status (active, completed, paused) |

How to use

1. Setup: Import the workflow into your n8n instance and configure VAPI credentials.
2. Excel Configuration: Set up Excel/Google Sheets with the required sheet structure provided above.
3. Lead Import: Populate the Lead Management sheet with customer data from various sources.
4. Campaign Setup: Configure marketing message templates in the VAPI nodes to match your restaurant's branding.
5. Testing: Test voice commands such as "I'd like to book a table for tonight" or "What are your specials?"
6. Automation: Enable triggers to automatically process new leads and schedule marketing campaigns.
7. Monitoring: Track campaign performance through the Campaign Tracking sheet and adjust strategies accordingly.

The system can handle multiple concurrent voice calls and scales with your restaurant's marketing needs.

Requirements

- **VAPI account** for voice processing and natural language understanding
- **Excel/Google Sheets** for storing lead, booking, and campaign data
- **n8n instance** with Excel/Sheets and VAPI integrations enabled
- **Valid phone numbers** for lead contact and compliance with local calling regulations

Customising this workflow

- **Multi-location Support**: Adapt the voice AI automation for restaurant chains with location-specific offers.
- **Seasonal Campaigns**: Try popular use cases such as holiday promotions, special event marketing, or loyalty program outreach.
- **Integration Options**: The workflow can be extended to include CRM integration, SMS follow-ups, and social media campaign coordination.
- **Advanced Analytics**: Add nodes for detailed campaign performance analysis and customer segmentation.
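As a rough illustration of the Information Extraction step above, the sketch below pulls a party size and a time out of a transcribed customer reply with plain regexes. This is a toy stand-in: the real workflow would rely on VAPI's structured output or an LLM extraction step, and the patterns here are assumptions for demonstration only.

```python
import re

def extract_booking_details(response_text: str) -> dict:
    """Toy extraction of booking fields from a call transcript."""
    text = response_text.lower()
    party = re.search(r"(?:table|party|group)?\s*(?:for|of)\s+(\d+)", text)
    time = re.search(r"\b(\d{1,2}(?::\d{2})?\s*(?:am|pm))\b", text)
    wants_booking = any(w in text for w in ("book", "reserve", "reservation", "table"))
    return {
        "booking_requested": wants_booking,
        "party_size": int(party.group(1)) if party else None,
        "preferred_time": time.group(1) if time else None,
    }

# Example:
# extract_booking_details("I'd like to book a table for 4 at 7pm")
# -> {'booking_requested': True, 'party_size': 4, 'preferred_time': '7pm'}
```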
by Ranjan Dailata
Who this is for?

Extract Amazon Best Seller Electronic Info is an automated workflow that extracts best-seller data from Amazon's Electronics section using Bright Data Web Unlocker, transforms it into structured JSON using Google Gemini's LLM, and forwards the fully structured JSON response to a specified webhook for downstream use.

This workflow is tailored for:

- **eCommerce Analysts** who need to monitor Amazon best-seller trends in the Electronics category and track changes in real time or on a schedule.
- **Product Intelligence Teams** who want structured insights on competitor offerings, including rankings, prices, ratings, and promotions.
- **AI-powered Chatbot Developers** who are building assistants capable of answering product-related queries with fresh, structured data from Amazon.
- **Growth Hackers & Marketers** looking to automate competitive research and surface trending product data to inform pricing strategies.
- **Data Aggregators and Price Trackers** who need reliable and smart scraping of Amazon data enriched with AI-driven parsing.

What problem is this workflow solving?

Keeping up with Amazon's best sellers in Electronics is a time-consuming, error-prone task when done manually. This workflow automates the process by:

- Automating data extraction from Amazon Best Sellers using Bright Data, ensuring reliable access to real-time, structured data.
- Enhancing raw data with Google Gemini, turning product lists into structured JSON using the Google Gemini LLM.
- Sending results to a webhook, enabling seamless integration into dashboards, databases, or chatbots.

What this workflow does

The workflow performs the following steps:

1. Extracts Amazon Best Seller Electronics page info using Bright Data's Web Unlocker API (see the request sketch after this section).
2. Processes the unstructured content using Google Gemini's Flash Exp model to extract structured product data.
3. Sends the structured information to a webhook endpoint.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Amazon URL and the Bright Data zone in the "Amazon URL with the Bright Data Zone" node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible - whether you're a market researcher, e-commerce entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Change the Amazon Category**: Update the Amazon URL with the topic of your interest, such as Computers & Accessories, Home Audio, etc.
- **Customize the Gemini Prompt**: Update the Gemini prompt to get different styles of output, such as comparison tables, summaries, or feature highlights.
- **Send Output to Other Destinations**: Replace the Webhook URL to forward output to Google Sheets, Airtable, Slack or Discord, or custom API endpoints.
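For reference, a Web Unlocker request is a plain HTTPS call that names your zone and the target URL. The Python sketch below shows the shape of such a call under the assumption that your zone is called amazon_unlocker; the zone name and token are placeholders, and you should confirm the endpoint against Bright Data's current Web Unlocker documentation.

```python
import requests

BRIGHTDATA_TOKEN = "XXXXXXXXXXXXXX"  # placeholder: your Web Unlocker token
ZONE = "amazon_unlocker"             # placeholder: your Web Unlocker zone name

def fetch_page(url: str) -> str:
    """Fetch a page through Bright Data Web Unlocker (endpoint per Bright Data docs)."""
    resp = requests.post(
        "https://api.brightdata.com/request",
        headers={"Authorization": f"Bearer {BRIGHTDATA_TOKEN}"},
        json={"zone": ZONE, "url": url, "format": "raw"},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.text  # raw HTML of the target page

html = fetch_page("https://www.amazon.com/gp/bestsellers/electronics")
```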
by phil
This workflow automates web scraping of Amazon search result pages by retrieving raw HTML, cleaning it to retain only the relevant product elements, and then using an LLM to extract structured product data (name, description, rating, reviews, and price) before saving the results back to Google Sheets. It integrates Google Sheets to supply and collect URLs, Bright Data to fetch page HTML, a custom n8n Function node to sanitize the HTML, LangChain (OpenRouter GPT-4) to parse product details, and Google Sheets again to store the output.

Who Needs Amazon Search Result Scraping?

This scraping workflow is ideal for teams and businesses that need to monitor Amazon product listings at scale:

- **E-commerce Analysts** - Track competitor pricing, ratings, and inventory trends.
- **Market Researchers** - Collect data on product popularity and reviews for market analysis.
- **Data Teams** - Automate ingestion of product metadata into BI pipelines or data lakes.
- **Affiliate Marketers** - Keep affiliate catalogs up to date with the latest product details and prices.

If you need reliable, structured data from Amazon search results delivered directly into your spreadsheets, this workflow saves you hours of manual copy-and-paste.

Why Use This Workflow?

- **End-to-End Automation** - From URL list to clean JSON output in Sheets.
- **Robust HTML Cleaning** - Strips scripts, styles, unwanted tags, and noise.
- **Accurate Structured Parsing** - Leverages GPT-4 via LangChain for reliable extraction.
- **Scalable & Repeatable** - Processes thousands of URLs in batches.

Step-by-Step: How This Workflow Scrapes Amazon

1. Get URLs from Google Sheets - Reads a list of search result URLs.
2. Loop Over Items - Iterates through each URL in controlled batches.
3. Fetch Raw HTML - Uses Bright Data's Web Unlocker proxy to retrieve the page.
4. Clean HTML - A Function node removes doctype, scripts, styles, head, comments, classes, and non-whitelisted tags, collapsing extra whitespace (see the sketch after this section).
5. Extract with LLM - Passes cleaned HTML into LangChain → GPT-4 to output JSON for each product: name, description, rating, reviews, price.
6. Save Results - Appends the JSON fields as columns back into a "results" sheet in Google Sheets.

Customization: Tailor to Your Needs

- **Adaptable Sites** - This workflow can be adapted to any e-commerce or other website, for example Walmart or eBay.
- **Whitelist Tags** - Modify the allowedTags array in the Code node to keep additional HTML elements.
- **Schema Changes** - Update the Structured Output Parser schema to include more fields (e.g., availability, SKU).
- **Alternate Data Sink** - Instead of Sheets, route output to a database, CSV file, or webhook.

🔑 Prerequisites

- **Google Sheets Credentials** - OAuth credentials configured in n8n.
- **BrightData API token** - Stored in n8n credentials as BRIGHTDATA_TOKEN.
- **OpenRouter API Key** - Configured for the LangChain node to call GPT-4.
- **n8n Instance** - Self-hosted or cloud with sufficient quota for HTTP requests and LLM calls.

🚀 Installation & Setup

1. Configure Credentials
   - In n8n, set up Google Sheets OAuth under "Credentials."
   - Add the BrightData token as a new HTTP Request credential.
   - Create an OpenRouter API key credential for the LangChain node.
2. Import the Workflow
   - Copy the JSON workflow into n8n's "Import" dialog.
   - Map your Google Sheet IDs and GIDs to the {{WEB_SHEET_ID}}, {{TRACK_SHEET_GID}}, and {{RESULTS_SHEET_GID}} placeholders.
   - Ensure the BRIGHTDATA_TOKEN credential is selected on the HTTP Request node.
3. Test & Run
   - Add a few Amazon search URLs to your "track" sheet.
   - Execute the workflow and verify product data appears in your "results" sheet.
   - Tweak batch size or parser schema as needed.

⚠ Important

- **API Rate Limits** - Monitor your BrightData and OpenRouter usage to avoid throttling.
- **Amazon's Terms** - Ensure your scraping complies with Amazon's policies and legal requirements.

Summary

This workflow delivers a fully automated, scalable solution to extract structured product data from Amazon search pages directly into Google Sheets, streamlining your competitive analysis and data collection. 🚀

Phil | Inforeole
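The Clean HTML step runs as JavaScript inside the n8n Function node; the sketch below reproduces the same cleaning logic as standalone Python (using BeautifulSoup) so you can see what survives sanitization. The whitelist here is an assumption for illustration, mirroring the allowedTags array mentioned above.

```python
import re
from bs4 import BeautifulSoup, Comment  # pip install beautifulsoup4

# Assumed whitelist; mirror your Code node's allowedTags array.
ALLOWED_TAGS = {"div", "span", "a", "p", "ul", "li", "h1", "h2", "h3", "img"}

def clean_html(raw_html: str) -> str:
    soup = BeautifulSoup(raw_html, "html.parser")
    # Drop whole subtrees that carry no product content.
    for tag in soup(["script", "style", "head", "noscript"]):
        tag.decompose()
    # Remove HTML comments.
    for comment in soup.find_all(string=lambda s: isinstance(s, Comment)):
        comment.extract()
    # Unwrap non-whitelisted tags and strip class/style attributes.
    for tag in soup.find_all(True):
        if tag.name not in ALLOWED_TAGS:
            tag.unwrap()
        else:
            tag.attrs = {k: v for k, v in tag.attrs.items() if k in ("href", "src")}
    # Collapse runs of whitespace so the LLM receives a compact document.
    return re.sub(r"\s+", " ", str(soup)).strip()
```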
by Yang
Who is this for?

This workflow is for digital marketers, small business owners, lead generation agencies, and VAs who need a scalable way to find and store local business leads using AI. It's especially useful for teams that want to enrich leads with real-time news insights and save the structured data to Airtable.

What problem is this workflow solving?

Manually researching local businesses and staying up to date with relevant news is time-consuming and inefficient. This automation eliminates that burden by using Dumpling AI chat agents to generate leads and context, GPT-4o to summarize, and Airtable to store everything in one place.

What this workflow does

This AI workflow listens for a manual trigger in n8n and executes the following steps:

1. Extracts local business leads using a Local Business Agent from Dumpling AI.
2. Pulls current news related to the business type or location using a News Agent from Dumpling AI.
3. Uses GPT-4o to combine both responses into a human-readable summary.
4. Extracts structured lead data like name, category, and city.
5. Saves the summary and lead data into Airtable for easy follow-up.

Setup

1. Create AI Agents in Dumpling AI
   - Sign in at Dumpling AI.
   - Create two separate agents:
     - Local Business Agent: designed to respond with structured lists of businesses by location and category.
     - News Agent: designed to fetch relevant recent news and summaries about a specific industry or region.
   - After setting up each agent, copy the Agent Key from Dumpling AI. These keys are required in the headers of your HTTP Request nodes in n8n (see the request sketch at the end of this section).
2. Manual Trigger
   - This workflow begins with the "When chat message is received" trigger inside n8n. This makes it easy to test and reuse, especially during setup.
3. Get Local Business Data from Dumpling AI
   - The first HTTP Request node sends a prompt like "List 5 top real estate companies in Atlanta with full address and services."
   - Include your Local Business Agent Key in the x-agent-key header. The response returns a structured list of business leads.
4. Get News Context from Dumpling AI
   - The second HTTP Request node sends a prompt such as "Give me the latest news related to the real estate market in Atlanta."
   - Use your News Agent Key in the header. This fetches a brief set of recent news summaries relevant to the businesses being researched.
5. Use GPT-4o to Merge and Summarize
   - The GPT node combines the list of businesses and the news into one coherent summary. You can modify the prompt to output in paragraph format, bullet points, or structured notes.
6. Save Lead to Airtable
   - The Airtable node sends all structured fields into your selected base and table. Be sure to connect your Airtable account and confirm the columns match exactly.

How to customize this workflow

- Replace the prompt inside the HTTP node to focus on different types of businesses or cities.
- Expand the GPT output to include additional lead info like websites, phone numbers, or emails if the agent includes them.
- Add a webhook trigger to allow this flow to be run via a chatbot, external app, or button.
- Link to HubSpot or another CRM to sync the leads automatically.
- Duplicate the process to run for multiple industries in parallel.

Final Notes

- You must create and configure your Dumpling AI agents before running this workflow.
- The Agent Keys from Dumpling AI are required in both HTTP Request nodes.
- This flow is modular and flexible, ready for deeper CRM integrations.
- The manual trigger is great for testing, but you can add a Webhook node to automate it.
This workflow helps you launch an intelligent lead gen process that combines location-targeted business discovery, AI-generated insights, and structured CRM-friendly output, all powered by Dumpling AI and OpenAI.
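As a reference for the two HTTP Request nodes, the sketch below shows the shape of a call to a Dumpling AI agent with the x-agent-key header described above. The endpoint URL and request body here are placeholders, not Dumpling AI's documented API: copy the actual endpoint and payload format from your agent's page in the Dumpling AI dashboard.

```python
import requests

AGENT_KEY = "your-dumpling-agent-key"  # placeholder: copy from your Dumpling AI agent
AGENT_URL = "https://example.dumplingai.invalid/agents/chat"  # placeholder endpoint

def ask_agent(prompt: str) -> str:
    """Send a prompt to a Dumpling AI agent (body shape assumed for illustration)."""
    resp = requests.post(
        AGENT_URL,
        headers={"x-agent-key": AGENT_KEY, "Content-Type": "application/json"},
        json={"message": prompt},  # assumed body shape; match your agent's docs
        timeout=60,
    )
    resp.raise_for_status()
    return resp.text

leads = ask_agent("List 5 top real estate companies in Atlanta with full address and services.")
news = ask_agent("Give me the latest news related to the real estate market in Atlanta.")
```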
by TechDennis
Edit an existing image with OpenAI ImageGen1 via API Request

Transform your creative pipeline by letting n8n call OpenAI ImageGen1's edit-image endpoint, automatically replacing or augmenting parts of any image you supply and returning a brand-new version in seconds. Designers, marketers, and product teams can eliminate repetitive manual edits and test more variations, faster.

Who is this for?

- Content creators who need quick, on-brand image tweaks
- Marketers running A/B visual tests at scale
- Developers exploring the new ImageGen1 API inside low-code automations

Use case / problem solved

Opening design software to mask, fill, or swap objects is slow and error-prone. This workflow feeds an input image plus a prompt to OpenAI ImageGen1, receives the edited output, and passes it on to any service you like - perfect for bulk-editing product shots, social visuals, or UI mocks.

What this workflow does

1. Read or receive the source image (Webhook → Binary Data).
2. Call OpenAI ImageGen1 with an HTTP Request node, sending the image and edit prompt (see the sketch after this section).
3. Parse the JSON response to capture the returned image URL.
4. Download & hand off the edited file (e.g., upload to S3, post to Slack, or store in Drive).

Setup

1. Add your OpenAI API key in the API KEY node.
2. Follow the notes on the workflow for more information.
3. (Optional) Point the final node to your preferred storage or chat tool.

> 📝 A sticky note in the workflow summarizes these steps and links to the OpenAI documentation.

How to customize this workflow

- **Trigger alternatives**: Replace the Chat trigger with Google Drive, Airtable, etc.
- **Chained edits**: Loop the output back for successive prompts.
- **Conditional flows**: Add an If node to branch actions by image size or category.

With renamed nodes, color-coded sticky notes, and a concise setup guide, you'll be editing images via OpenAI ImageGen1 in under five minutes - no code, maximum creativity.
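If you want to see the underlying call outside n8n, here is a minimal Python sketch of OpenAI's image edit endpoint. It assumes the gpt-image-1 model id and a base64 response field; check OpenAI's image documentation for the current model name and response shape, since older image models return a URL instead.

```python
import base64
import requests

OPENAI_API_KEY = "sk-..."  # placeholder

def edit_image(image_path: str, prompt: str, out_path: str) -> None:
    """Send an image plus an edit prompt to OpenAI's /v1/images/edits endpoint."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.openai.com/v1/images/edits",
            headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
            files={"image": f},                                  # multipart upload
            data={"model": "gpt-image-1", "prompt": prompt},     # assumed model id
            timeout=300,
        )
    resp.raise_for_status()
    payload = resp.json()["data"][0]
    # gpt-image-1 returns base64 image data in b64_json.
    with open(out_path, "wb") as out:
        out.write(base64.b64decode(payload["b64_json"]))

edit_image("product.png", "Replace the background with a plain white studio backdrop", "edited.png")
```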
by Dhruv Dalsaniya
Description:

This n8n workflow automates a Discord bot to fetch messages from a specified channel and send AI-generated responses in threads. It ensures smooth message processing and interaction, making it ideal for managing community discussions, customer support, or AI-based engagement. The workflow leverages Redis for memory persistence, ensuring that conversation history is maintained even if the workflow restarts, providing a seamless user experience.

How It Works

- The bot listens for new messages in a specified Discord channel.
- It sends the messages to an AI model for response generation.
- The AI-generated reply is posted as a thread under the original message.
- The bot runs on an Ubuntu server and is managed using PM2 for uptime stability.
- The Discord bot (Python script) acts as the bridge, capturing messages from Discord and sending them to the n8n webhook. The n8n workflow then processes these messages, interacts with the AI model, and sends the AI's response back to Discord via the bot.

Prerequisites to Host the Bot

- Sign up on Pella, a managed hosting service for Discord bots (easy setup).
- A Redis instance for memory persistence. Redis is an in-memory data structure store, used here to store and retrieve conversation history, ensuring that the AI can maintain context across multiple interactions. This is crucial for coherent and continuous conversations.

Set Up Steps

1️⃣ Create a Discord Bot

1. Go to the Discord Developer Portal.
2. Click "New Application", enter a name, and create it.
3. Navigate to Bot > Reset Token, then copy the Bot Token.
4. Enable Privileged Gateway Intents (Presence, Server Members, Message Content).
5. Under OAuth2 > URL Generator, select the bot scope and required permissions.
6. Copy the generated URL, open it in a browser, select your server, and click Authorize.
2️⃣ Deploy the Bot on Pella

Create a new folder discord-bot and navigate into it.

Create and configure an .env file to store your bot token and the webhook URL (you can copy the webhook URL from the n8n workflow):

```
TOKEN=your-bot-token-here
WEBHOOK_URL=https://your-domain.tld/webhook/getmessage
```

Create a file main.py, copy the code below into it, and save:

```python
import discord
import requests
import json
import os
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
TOKEN = os.getenv("TOKEN")
WEBHOOK_URL = os.getenv("WEBHOOK_URL")

# Bot configuration
LISTEN_CHANNELS = ["YOUR_CHANNEL_ID_1", "YOUR_CHANNEL_ID_2"]  # Replace with your target channel IDs

# Intents setup
intents = discord.Intents.default()
intents.messages = True          # Enable message events
intents.guilds = True
intents.message_content = True   # Required to read messages

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f'Logged in as {client.user}')

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # Ignore bot's own messages
    if str(message.channel.id) in LISTEN_CHANNELS:
        try:
            fetched_message = await message.channel.fetch_message(message.id)  # Ensure correct fetching
            payload = {
                "channel_id": str(fetched_message.channel.id),     # Ensure it's a string
                "chat_message": fetched_message.content,
                "timestamp": str(fetched_message.created_at),      # Ensure proper formatting
                "message_id": str(fetched_message.id),             # Ensure ID is a string
                "user_id": str(fetched_message.author.id),         # Ensure user ID is also a string
            }
            headers = {'Content-Type': 'application/json'}
            response = requests.post(WEBHOOK_URL, data=json.dumps(payload), headers=headers)
            if response.status_code == 200:
                print(f"Message sent successfully: {payload}")
            else:
                print(f"Failed to send message: {response.status_code}, Response: {response.text}")
        except Exception as e:
            print(f"Error fetching message: {e}")

client.run(TOKEN)
```

Create requirements.txt listing the bot's dependencies (main.py also imports requests, so include it):

```
discord
python-dotenv
requests
```

3️⃣ Follow the video to set up the bot so it runs 24/7

Tutorial - https://www.youtube.com/watch?v=rNnK3XlUtYU

Note: The free plan expires after 24 hours, so please opt for a paid plan on Pella to keep your bot running.

4️⃣ n8n Workflow Configuration

The n8n workflow consists of the following nodes:

- **Get Discord Messages (Webhook)**: This node acts as the entry point for messages from the Discord bot. It receives the channel_id, chat_message, timestamp, message_id, and user_id from Discord when a new message is posted in the configured channel. Its webhook path is /getmessage and it expects a POST request.
- **Chat Agent (Langchain Agent)**: This node processes the incoming Discord message (chat_message). It is configured as a conversational agent, integrating the language model and memory to generate an appropriate response. It also has a prompt to keep the reply concise, under 1800 characters.
- **OpenAI -4o-mini (Langchain Language Model)**: This node connects to the OpenAI API and uses the gpt-4o-mini-2024-07-18 model for generating AI responses. It is the core AI component of the workflow.
- **Message History (Redis Chat Memory)**: This node manages the conversation history using Redis. It stores and retrieves chat messages, ensuring the Chat Agent maintains context for each user based on their user_id. This is critical for coherent multi-turn conversations.
- **Calculator (Langchain Tool)**: This node provides a calculator tool that the AI agent can use if a mathematical calculation is required within the conversation. This expands the capabilities of the AI beyond just text generation.
- **Response fromAI (Discord)**: This node sends the AI-generated response back to the Discord channel. It uses the Discord Bot API credentials and replies in a thread under the original message (message_id) in the specified channel_id.
- **Sticky Note1 through Sticky Note5, Sticky Note**: These are informational nodes within the workflow providing instructions, code snippets for the Discord bot, and setup guidance. They cover the .env file, requirements.txt, the Python bot code, and general recommendations for channel configuration and adding tools.

5️⃣ Setting up Redis

1. Choose a Redis hosting provider: you can use a cloud provider like Redis Labs or Aiven, or set up your own Redis instance on a VPS.
2. Obtain the Redis connection details: once your Redis instance is set up, you will need the host, port, and password (if applicable).
3. Configure the n8n Redis nodes: in your n8n workflow, configure the "Message History" node with your Redis connection details. Ensure the Redis credential ✅ redis-for-n8n is properly set up with your Redis instance details (host, port, password).

6️⃣ Customizing the Template

- **AI Model**: You can easily swap out the "OpenAI -4o-mini" node for any other AI service supported by n8n (e.g., Cohere, Hugging Face) to use a different language model. Ensure the new language model node is connected to the ai_languageModel input of the "Chat Agent" node.
- **Agent Prompt**: Modify the text parameter in the "Chat Agent" node to change the AI's persona, provide specific instructions, or adjust the response length.
- **Additional Tools**: The "Calculator" node is an example of an AI tool. You can add more Langchain tool nodes (e.g., search, data lookup) and connect them to the ai_tool input of the "Chat Agent" node to extend the AI's capabilities. Refer to "Sticky Note5" in the workflow for a reminder.
- **Channel Filtering**: Adjust the LISTEN_CHANNELS list in the main.py file of your Discord bot to include or exclude specific Discord channel IDs where the bot should listen for messages.
- **Thread Management**: The "Response fromAI" node can be modified to change how threads are created or managed, or to send responses directly to the channel instead of a thread. The current setup links the response to the original message ID (message_reference).

7️⃣ Testing Instructions

1. Start the Discord bot: ensure your main.py script is running on Pella.
2. Activate the n8n workflow: make sure your workflow is active and listening for webhooks (you can also simulate the bot with the test request sketched below).
3. Send a message in Discord: go to one of the LISTEN_CHANNELS in your Discord server and send a message.
4. Verify the response: the bot should capture the message, send it to n8n, receive an AI-generated response, and post it as a thread under your original message.
5. Check Redis: verify that the conversation history is being stored and updated correctly in your Redis instance. Look for keys related to user IDs.

✅ Now your bot is running in the background! 🚀
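To test the n8n side without running the bot at all, you can mimic the bot's payload with a short script. This is a minimal sketch assuming your webhook is reachable at the /getmessage path shown above; all IDs below are dummies with the same field names main.py sends.

```python
import requests

# Dummy payload with the same fields main.py posts to the n8n webhook
payload = {
    "channel_id": "123456789012345678",
    "chat_message": "Hello bot, what is 2 + 2?",
    "timestamp": "2024-01-01 12:00:00+00:00",
    "message_id": "234567890123456789",
    "user_id": "345678901234567890",
}

resp = requests.post(
    "https://your-domain.tld/webhook/getmessage",  # your n8n webhook URL
    json=payload,
    timeout=30,
)
print(resp.status_code, resp.text)
```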
by Khairul Muhtadin
❓ What Problem Does It Solve?

Manual exporting or copying of leads and newsletter signups from web forms to spreadsheets is time-consuming, error-prone, and delays follow-ups or marketing activities. Traditional workflows can lose data due to mistakes or lack of automation. The Fluentform Export workflow automates the capture and organization of form submissions and newsletter signups into Google Sheets.

💡 Why Use This Workflow?

- **Save Time**: Automate tedious manual data entry for form leads and newsletter signups
- **Avoid Data Loss**: Ensure all submissions are reliably logged with real-time updates
- **Organized Data**: Separate sheets for newsletter and contact form data maintain clarity
- **Easy Integration**: Works seamlessly with Fluentform submissions and Google Sheets
- **Flexible & Scalable**: Quickly adapt to changes in form structure or spreadsheet columns

⚡ Who Is This For?

- **Marketers & Growth Teams**: Automatically gather leads and newsletter contacts to fuel campaigns
- **Small to Medium Businesses**: Reduce overhead from manual data management and errors
- **Customer Support Teams**: Keep track of form submissions in a centralized, accessible place
- **Website Admins**: Simplify the data workflow from Fluentform plugins without coding

🔧 What This Workflow Does

- ⏱ **Trigger**: Listens for incoming POST requests from Fluentform via webhook
- 📎 **Step 2**: Evaluates whether the submission is a newsletter signup or a form, based on a specific token (see the branching sketch after this section)
- 🔄 **Step 3 (Newsletter Path)**: Maps the email from newsletter submissions and appends/updates the Google Sheets "News Letter" tab
- 🔄 **Step 3 (Form Path)**: Extracts full name, email, phone, subject, and message fields and appends/updates the Google Sheets "form" tab
- 💌 **Step 4**: Sends a JSON success response back to Fluentform confirming receipt

🔐 Setup Instructions

1. Import the provided .json workflow file into your n8n instance.
2. Set up credentials: a Google Sheets OAuth2 credential with access to your target spreadsheets.
3. Customize workflow elements:
   - Update the Fluentform webhook URL in your Fluentform settings to the n8n webhook URL generated.
   - Adjust field names or spreadsheet columns if your form structure changes.
   - Update the spreadsheet IDs and sheet names used in the Google Sheets nodes to match your own Sheets.
4. Test the workflow thoroughly with actual Fluentform submissions to verify data flows correctly.

🧩 Pre-Requirements

- Running n8n instance (Cloud or self-hosted)
- Google account with access to Google Sheets and OAuth credentials
- Fluentform installed on your website with the ability to set a webhook URL
- Target Google Sheets prepared with tabs named "News Letter" and "form" with the expected columns

🧠 Nodes Used

- Webhook (POST - Retrieve Leads)
- If (Form or newsletter?)
- Set (newsletter and form data preparation)
- Google Sheets (Append/update for newsletter and form sheets)
- Respond to Webhook

📞 Support

Made by: khaisa Studio
Tag: automation, Google Sheets, Fluentform, Leads
Category: Marketing

Need a custom? Contact Me
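The If node's routing decision can be pictured with the small sketch below. The form_token field, its newsletter value, and the form field names are hypothetical stand-ins: the real workflow keys off a token specific to your Fluentform setup, so map these names to your actual form inputs.

```python
def route_submission(payload: dict) -> tuple[str, dict]:
    """Decide which sheet a Fluentform submission belongs to.
    'form_token' and 'newsletter' are hypothetical; match your setup."""
    if payload.get("form_token") == "newsletter":
        return "News Letter", {"email": payload.get("email")}
    row = {
        "full_name": payload.get("full_name"),  # hypothetical field names:
        "email": payload.get("email"),          # align them with your form's
        "phone": payload.get("phone"),          # actual input names
        "subject": payload.get("subject"),
        "message": payload.get("message"),
    }
    return "form", row

sheet, row = route_submission({"form_token": "newsletter", "email": "jane@example.com"})
# -> ("News Letter", {"email": "jane@example.com"})
```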
by Dave Bernier
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

This template aims to ease the process of deploying workflows from GitHub. It has a companion repository that developers might find useful; see below for more details.

How it works

Automatically import and deploy n8n workflows from your GitHub repository to your production n8n instance using a secured webhook-based approach. This template enables teams to maintain version control of their workflows while ensuring seamless deployment through a CI/CD pipeline.

1. Receives webhook notifications from GitHub when changes are pushed to your repository
2. Lists all files in the repository and filters for .json workflow files
3. Downloads each workflow file and saves it locally
4. Imports all workflows into n8n using the CLI import command
5. Cleans up temporary files after successful import

To trigger the deployment, send a POST request to your webhook with the configured credentials (basic auth) and the following body (see the example request after this section):

{ "owner": "GITHUB_REPO_OWNER_NAME", "repository": "GITHUB_REPOSITORY_NAME" }

Set up steps

Once you have imported this template in n8n:

1. Set up the webhook basic auth credentials
2. Set up the GitHub credentials
3. Activate the workflow!

Companion repository

There is a companion repository located at https://github.com/dynamicNerdsSolutions/n8n-git-flow-template that has a GitHub Action already set up to work with this workflow. It provides a complete development environment with:

- Local n8n instance via Docker
- Automated workflow export and commit scripts
- Version control integration
- CI/CD pipeline setup

This setup allows teams to maintain a clean separation between development and production environments while ensuring reliable workflow deployment.
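Here is a minimal sketch of the trigger request, assuming a placeholder webhook path and the basic-auth user/password you configured in step 1 of the setup; only the JSON body comes from the template itself.

```python
import requests

# Placeholder URL and credentials: substitute your n8n webhook path and
# the basic-auth pair configured on the Webhook node.
resp = requests.post(
    "https://your-n8n.example.com/webhook/deploy-workflows",
    auth=("deploy-user", "deploy-password"),
    json={"owner": "GITHUB_REPO_OWNER_NAME", "repository": "GITHUB_REPOSITORY_NAME"},
    timeout=60,
)
resp.raise_for_status()
print(resp.status_code)
```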
by Lucas Correia
What Does This Flow Do?

This workflow demonstrates how to dynamically generate a line chart using the QuickChart node based on data provided in a JSON object, and then upload the resulting chart image to Google Drive.

Use Cases

- Use it in presentations, or request chart generation from other software via HTTP requests.
- Automated report generation (e.g., daily sales charts).
- Visualizing data fetched from APIs or databases.
- Simple monitoring dashboards.
- Adding charts to internal tools or notifications.

How it Works

1. Trigger: The workflow starts manually when you click 'Test workflow'.
2. Set Sample Data: A Set node (Edit Fields: Set JSON data to test) defines a sample JSON object named jsonData. This object contains:
   - reportTitle: A title (not used in the chart generation in this example, but useful for context).
   - labels: An array of strings representing the labels for the chart's X-axis (e.g., ["Q1", "Q2", "Q3", "Q4"]).
   - salesData: An array of numbers representing the data points for the chart's Y-axis (e.g., [1250, 1800, 1550, 2100]).
3. Generate Chart: The QuickChart node is configured to:
   - Create a line chart.
   - Dynamically read labels from the jsonData.labels array (Labels Mode: From Array).
   - Use the jsonData.salesData array as the input data. (Note: this configuration places data in the top-level 'Data' field. For more complex charts with multiple datasets or specific dataset options, configure datasets under 'Dataset Options' instead.)
   - The node outputs the generated chart image as binary data in a field named data.
4. Upload to Google Drive: The Google Drive node (Google Drive: Upload File):
   - Takes the binary data (data) from the QuickChart node.
   - Uploads the image to your specified Google Drive folder.
   - Dynamically names the file based on its extension (e.g., chart.png).

Setup Steps

1. Import: Import this template into your n8n instance.
2. Configure Google Drive Credentials:
   - Select the Google Drive: Upload File node.
   - You MUST configure your own Google Drive credentials. Click on the 'Credentials' dropdown and either select existing credentials or create new ones by following the authentication prompts.
3. (Optional) Customize Google Drive Folder: In the Google Drive: Upload File node, you can change the Drive ID and Folder ID to specify exactly where the chart should be uploaded.
4. Activate: Activate the workflow if you want it to run automatically based on a different trigger.

How to Use & Customize

- **Change Input Data**: Modify the labels and salesData arrays within the Edit Fields: Set JSON data to test node to use your own data. Ensure the number of labels matches the number of data points.
- **Use Real Data Sources**: Replace the Edit Fields: Set JSON data to test node with nodes that fetch data from real sources like HTTP Request (APIs), Postgres / MongoDB nodes (databases), or the Google Sheets node. Ensure the output data from your source node is formatted similarly (providing labels and salesData arrays). You might need another Set node to structure the data correctly before the QuickChart node.
- **Change Chart Type**: In the QuickChart node, modify the Chart Type parameter (e.g., change from line to bar, pie, doughnut, etc.).
- **Customize Chart Appearance**: Explore the Chart Options parameter within the QuickChart node to add titles, change colors, modify axes, etc., using QuickChart's standard JSON configuration options (see the configuration sketch after this section).
- **Use Datasets (Recommended for Complex Charts)**: For multiple lines/bars or more control, configure datasets explicitly in the QuickChart node:
  1. Remove the expression from the top-level Data field.
  2. Go to Dataset Options -> Add option -> Add dataset.
  3. Set the Data field within the dataset using an expression like {{ $json.jsonData.salesData }}. You can add multiple datasets this way.
- **Change Output Destination**: Replace the Google Drive: Upload File node with other nodes to handle the chart image differently:
  - Write Binary File: Save the chart to the local filesystem where n8n is running.
  - Slack / Discord / Telegram: Send the chart to messaging platforms.
  - Move Binary Data: Convert the image to Base64 to embed in HTML or return via webhook response.

Nodes Used

- Manual Trigger
- Set
- QuickChart
- Google Drive

Tags: QuickChart, Chart, Visualization, Line Chart, Google Drive, Reporting, Automation
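Under the hood, the QuickChart node builds a standard Chart.js configuration. The sketch below produces the same line chart directly against the quickchart.io API, using the sample labels and salesData above, so you can preview chart options before wiring them into the node.

```python
import json
import requests

config = {
    "type": "line",  # swap for "bar", "pie", "doughnut", etc.
    "data": {
        "labels": ["Q1", "Q2", "Q3", "Q4"],
        "datasets": [{"label": "Sales", "data": [1250, 1800, 1550, 2100]}],
    },
    "options": {"title": {"display": True, "text": "Quarterly Sales"}},
}

resp = requests.get(
    "https://quickchart.io/chart",
    params={"c": json.dumps(config)},  # the config travels as a URL parameter
    timeout=30,
)
resp.raise_for_status()
with open("chart.png", "wb") as f:
    f.write(resp.content)  # the response body is the rendered PNG
```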
by The O Suite
This n8n workflow automates website security audits. It combines direct website scanning, threat intelligence from AlienVault OTX, and advanced analysis from an OpenAI large language model (LLM) to generate and email a comprehensive security report.

How it Works (Workflow Flow):

1. Input: A user provides a website URL via a simple web form.
2. Data Collection:
   - An HTTP Request node visits the provided URL to gather initial data (status code, headers).
   - An AlienVault HTTP Request node queries AlienVault OTX for known threats associated with the website's hostname.
3. Data Preparation (Prepare Data for AI): A custom code node consolidates the collected website data and AlienVault intelligence, performing initial checks for common issues (e.g., error codes, missing security headers, AlienVault warnings); see the sketch after this section.
4. AI Analysis (Security Configuration Audit): The prepared data is sent to an OpenAI Chat Model, which acts as a cybersecurity expert. The AI analyzes the data to identify vulnerabilities, explain their impact, suggest exploitation methods, and outline mitigation steps.
5. Report Formatting (Format Report for Email): Another custom code node takes the AI's plain-text report and converts it into a structured HTML format suitable for email.
6. Delivery (Send Security Report): The final HTML report is sent via Gmail to a specified email address.

Setup Steps:

To use this workflow, you'll need an n8n instance and the following credentials:

1. n8n Instance: Ensure your n8n environment is running.
2. OpenAI API Key: Generate a key from OpenAI. Add an "OpenAI API" credential in n8n (e.g., "OpenAI account").
3. AlienVault OTX API Key: Obtain a key from your AlienVault OTX profile. Add an "AlienVault OTX API" credential in n8n (e.g., "AlienVault account").
4. Gmail Account: Set up a "Gmail OAuth2" credential in n8n for sending emails (recommended for security; involves Google Cloud setup).
5. Import Workflow: Copy the workflow's JSON code. In n8n, import the workflow via "Workflows" > "New" > "Import from JSON".
6. Configure Recipient: In the "Send Security Report" node, specify the email address where reports should be sent.
7. Activate: Enable the workflow to start processing submissions. Once activated, access the "On form submission" webhook URL to input a URL and trigger an audit.
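As a rough illustration of the Prepare Data for AI step, here is a small Python sketch that flags error status codes and missing security headers. The header list is an assumption for illustration, reflecting commonly checked response headers, not the exact checks inside the workflow's code node.

```python
import requests

# Headers whose absence the preparation step might flag (assumed list).
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
]

def basic_checks(url: str) -> dict:
    """Collect status code and simple header findings for one site."""
    resp = requests.get(url, timeout=30, allow_redirects=True)
    findings = []
    if resp.status_code >= 400:
        findings.append(f"Site returned error status {resp.status_code}")
    for header in EXPECTED_HEADERS:
        if header not in resp.headers:
            findings.append(f"Missing security header: {header}")
    return {"url": url, "status": resp.status_code, "findings": findings}

print(basic_checks("https://example.com"))
```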
by WeblineIndia
Automate Telegram Chat Responses Using Google Gemini

By WeblineIndia

⚡ TL;DR (Quick Steps)

1. Create a Telegram bot using @BotFather and copy the API token.
2. Obtain a Google Gemini API key via Google Cloud.
3. Set up the n8n workflow:
   - Trigger: Telegram message received.
   - AI Model: Google Gemini generates the response.
   - Output: AI reply sent back to the user via Telegram.
4. Customize the system prompt, model, or message handling to suit your use case.

🧠 Description

This n8n workflow enables seamless automation of real-time chat replies in Telegram by integrating with Google Gemini's Chat Model. Every time a user sends a message to your Telegram bot, the workflow routes it through Gemini, which analyzes it and crafts a professional response. The reply is then automatically delivered back to the user.

The setup acts as a lightweight but powerful chatbot system - ideal for businesses, customer service, or even personal productivity bots. You can easily modify its tone, intelligence level, or logging mechanisms to cater to specific domains such as sales, tech support, or general Q&A.

🎯 Purpose of the Workflow

The primary goal of this workflow is to automate intelligent, context-aware chat responses in Telegram using a robust AI model. It eliminates manual reply handling, enhances user engagement, and ensures 24/7 interaction capabilities - all through a no-code or low-code setup using n8n.

🛠️ Steps to Configure and Use

✅ Pre-Conditions / Requirements

- **Telegram Bot Token**: Get it from @BotFather.
- **Google Gemini API Key**: Available via Google Cloud PaLM/Gemini API access.
- **n8n Instance**: Hosted or local instance with the required nodes installed (Telegram, Basic LLM Chain, and Google Gemini support).

🔧 Setup Instructions

Step 1: Telegram Trigger - Listen for Incoming Messages

1. Add a Telegram Trigger node.
2. Select Trigger On: Message.
3. Authenticate using your Telegram Bot Token.
4. This will capture incoming messages from any user interacting with your bot.

Step 2: Google Gemini AI - Generate a Smart Reply

1. Add the Basic LLM Chain node.
2. Connect the input message ({{$json.message.text}}) from the Telegram Trigger.
3. System prompt:
   > "You are an AI assistant. Reply to the following user message professionally:"
4. Choose the Google Gemini Chat Model (models/gemini-1.5-pro).
5. Connect this node to receive the text input and pass it to Gemini for processing.

Step 3: Telegram Reply - Send the AI Response

1. Add a Telegram node (Operation: Send Message).
2. Set the Chat ID dynamically from the Telegram Trigger node.
3. Input the generated message from the Gemini output.
4. Enable Parse Mode as HTML for rich formatting. (The sketch after this section shows the equivalent raw Bot API call.)

Final Step: Link All Nodes

Receive Telegram Message → Generate AI Response → Send Telegram Reply.

> Tip: Test the workflow by sending a message to your Telegram bot and ensure you receive an AI-generated reply.

🧩 Customization Guidance

- ✏️ Modify the AI tone by updating the system prompt.
- 🤖 Use other AI models (e.g., OpenAI GPT-4o).
- 🔍 Add filters to respond differently based on specific keywords.
- 📊 Extend the workflow to store chats in Google Sheets, Airtable, or databases for audit or analytics.
- 🌐 Multi-language support: Add translation layers before and after AI processing.

🛠️ Troubleshooting Guide

- **No message received?** Check that your Telegram bot is active and the webhook is working.
- **AI not responding?** Validate your Google Gemini API key and usage quota.
- **Wrong replies?** Refine the system prompt or validate the message routing.
- **Formatting issues?** Ensure Parse Mode is correctly set to HTML.
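For reference, the Telegram node in Step 3 wraps the standard Bot API sendMessage method. This minimal sketch shows that call directly; the bot token and chat id are placeholders, and the parse_mode value matches the HTML setting above.

```python
import requests

BOT_TOKEN = "123456:ABC-your-bot-token"  # placeholder from @BotFather
CHAT_ID = "123456789"                    # placeholder chat id

# sendMessage is the Telegram Bot API method the n8n Telegram node wraps.
resp = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    json={
        "chat_id": CHAT_ID,
        "text": "<b>Hello!</b> This is an AI-generated reply.",
        "parse_mode": "HTML",  # matches the Parse Mode setting in Step 3
    },
    timeout=30,
)
resp.raise_for_status()
```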
💡 Use Case Examples

- **Customer Service Chatbot** for product queries.
- **Educational Bots** for answering user questions on a topic.
- **Mental Health Companion** that gives supportive replies.
- **Event-based Announcers** or automatic responders during off-hours.

> And many more! This workflow can be easily extended to support advanced use cases with just a few additional nodes.

👨‍💻 About the Creator

This workflow is developed by WeblineIndia, a trusted provider of AI development services and process automation solutions. If you're looking to build or customize intelligent workflows like this, we invite you to get in touch with our team. We also offer specialized Python development and AI developer hiring services to supercharge your automation needs.