by bangank36
This workflow retrieves all Shopify Customers and saves them into a Google Sheets spreadsheet using the Shopify Admin REST API. It uses pagination to ensure all customers are collected efficiently. n8n does not have built-in actions for Customers, so I built the workflow using an HTTP Request node.

How It Works
This workflow uses the HTTP Request node to fetch paginated chunks manually. Shopify uses cursor-based pagination (page_info) instead of traditional page numbers. Pagination data is stored in the response headers, so we need to enable Include Response Headers and Status in the HTTP Request node (see the sketch after this section). The workflow processes customer data, saves it to Google Sheets, and formats a compatible CSV for Squarespace Contacts import. This workflow can be run on demand or scheduled to keep your data up to date.

Parameters
You can adjust these parameters in the HTTP Request node:
- **limit** – The number of customers per request (default: 50, max: 250).
- **fields** – Comma-separated list of fields to retrieve.
- **page_info** – Used for pagination.

Note: when you query paginated chunks with page_info, only the limit and fields parameters are allowed.

Credentials
- **Shopify API Key** – Required for authentication.
- **Google Sheets API credentials** – Needed to insert data into the spreadsheet.

Google Sheets Template
Clone this spreadsheet: Google Sheets Template
According to Squarespace documentation, your spreadsheet can have up to three columns and must be arranged in this order (no header row):
- Email Address
- First Name (optional)
- Last Name (optional)
- Shopify Customer ID (this field will be ignored)

Exporting a Compatible CSV for Squarespace Contacts
This workflow also generates a CSV file that can be imported into Squarespace Contacts. To import the CSV to Squarespace:
1. Open the Lists & Segments panel and click on your mailing list.
2. Click Add Subscribers, then select Upload a list.
3. Click Add a CSV file and select the file to import.
4. Toggle These subscribers accept marketing to confirm permission.
5. Preview your list, then click Import.

Who Is This For?
- **Shopify store owners** who need to export all customers to Google Sheets.
- Anyone looking for a **flexible and scalable** Shopify customer extraction solution.
- **Squarespace website owners** who want to bulk-create their Contacts using CSV.

Explore More Templates
Check out my other n8n templates.
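For readers who want to see how the cursor is pulled out of the response, here is a minimal Code-node sketch in JavaScript. It assumes the default output shape of the HTTP Request node with "Include Response Headers and Status" enabled ($json.headers, $json.body); the nextPageInfo field name is illustrative only.

```javascript
// Extract Shopify's next-page cursor from the Link response header.
// Shopify's Link header looks like:
// <https://SHOP.myshopify.com/admin/api/2024-01/customers.json?limit=50&page_info=abc>; rel="next"
const linkHeader = $json.headers?.link ?? '';
const match = linkHeader.match(/<[^>]*[?&]page_info=([^&>]+)[^>]*>;\s*rel="next"/);

return [{
  json: {
    customers: $json.body?.customers ?? [],
    nextPageInfo: match ? match[1] : null, // null means there are no more pages
  },
}];
```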
by Julian Reich
This n8n workflow automates the transformation of press releases into polished articles. It converts the content of an email and its attachments (PDF or Word documents) into an AI-written article or blog post.

What does it do?
This workflow assists editors and journalists in managing incoming press releases from governments, companies, NGOs, or individuals. The result is a draft article that the editor can easily review: they receive a reply email containing both the original input and the output, plus an AI-generated self-assessment. This self-assessment is an additional feedback loop in which the AI compares the input with the output to evaluate the quality and accuracy of its transformation.

How does it work?
Triggered by incoming emails in Gmail, it first filters attachments, retaining only Word and PDF files while removing other formats such as JPGs (see the sketch after this section). The workflow then follows one of three paths:
- If no attachments remain, it processes the inline email message directly.
- For PDF attachments, it uses an extractor to obtain the document content.
- For Word attachments, it extracts the text content via an HTTP request.
In each case, the extracted content is passed to an AI agent that converts the press release into a well-structured article according to predefined prompts. A separate AI evaluation step provides a self-assessment by comparing the output with the original input to ensure quality and accuracy. Finally, the workflow generates a reply email to the sender containing three components: the original input, the AI-generated article, and the self-assessment. This streamlined process helps editors and journalists efficiently manage incoming press releases, delivering draft articles that require minimal additional editing.

How to set it up
1. Configure Gmail Connection:
- Create or use an existing Gmail address
- Connect it through the n8n credentials manager
- Configure polling frequency according to your needs
- Set the trigger event to "Message Received"
- Optional: filter incoming emails by specifying authorized senders
- Enable the "Download Attachments" option
2. Set Up AI Integration:
- Create an OpenAI account if you don't have one
- Create a new AI assistant or use an existing one
- Customize the assistant with specific instructions, style guidelines, or response templates
- Configure your API credentials in n8n to enable the connection
3. Configure Google Drive Integration:
- Connect your Google Drive credentials in n8n
- Set the operation mode to "Upload"
- Configure the input data field name as "data"
- Set the file naming format to dynamic: {{ $json.fileName }}
4. Configure HTTP Request Node:
- Set the request method to "POST"
- Enter the appropriate Google API endpoint URL
- Include all required authorization headers
- Structure the request body according to the API specification
- Ensure proper error handling for API responses
5. Configure HTTP Request Node 2:
- Set the request method to "GET"
- Enter the appropriate Google API endpoint URL
- Include all required authorization headers
- Configure query parameters as needed
- Implement response validation and error handling
6. Configure Self-Assessment Node:
- Set the operation to "Message a Model"
- Select an appropriate AI model (e.g., GPT-4, Claude)
- Configure the following prompt in the Message field (for example):

Please analyze and compare the following input and output content:
Original Input: {{ $('HTTP Request3').item.json.data }} {{ $('Gmail Trigger').item.json.text }}
Generated Output: {{ $json.output }}
Provide a detailed self-assessment that evaluates:
- Content accuracy and completeness
- Structure and readability improvements
- Tone and style appropriateness
- Any information that may have been omitted or misrepresented
- Overall quality of the transformation

7. Configure Reply Email Node:
- Set the operation to "Send" and select your Gmail account
- Configure the "To" field to respond to the original sender: {{ $('Gmail Trigger').item.json.from }}
- Set an appropriate subject line: RE: {{ $('Gmail Trigger').item.json.subject }}
- Structure the email body with clear sections using the following template:

```handlebars
*EDITED ARTICLE*
{{ $('AI Article Writer 2').item.json.output }}

*SELF-ASSESSMENT* (Rating: 1 (poor) to 5 (excellent))
{{ $json.message.content }}

*ORIGINAL MESSAGE*
{{ $('Gmail Trigger').item.json.text }}

*ATTACHMENT CONTENT*
{{ $('HTTP Request3').item.json.data }}
```

Note: Adjust the template fields according to the input source (PDF, Word document, or inline message). For inline messages, you may not need the "ATTACHMENT CONTENT" section.
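As a reference for the attachment-filtering step, here is a hedged sketch of how it could look as an n8n Code node. It assumes the Gmail Trigger (with "Download Attachments" enabled) exposes each file as a binary property carrying a mimeType field; the exact property names depend on your trigger configuration.

```javascript
// Keep only PDF and Word attachments; drop JPGs and other formats.
const ALLOWED = new Set([
  'application/pdf',
  'application/msword', // .doc
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document', // .docx
]);

return $input.all().map((item) => {
  const binary = {};
  for (const [key, file] of Object.entries(item.binary ?? {})) {
    if (ALLOWED.has(file.mimeType)) binary[key] = file;
  }
  return { json: item.json, binary };
});
```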
by Jonathan
This workflow is part of an MSP collection, which is publicly available on GitHub. It archives or unarchives a Clockify project depending on the status of the corresponding Syncro ticket. Note that Syncro should be set up with a webhook via 'Notification Set for Ticket - Status was changed'. The workflow doesn't handle merging of tickets, as Syncro doesn't support a 'Notification Set' for merged tickets, so you should change a ticket to 'Resolved' before merging it.

Prerequisites
- A Clockify account and credentials

Nodes
- Webhook node triggers the workflow.
- IF node filters projects that don't have the status 'Resolved'.
- Clockify nodes get all projects that do or don't have the status 'Resolved', based on the IF route.
- HTTP Request nodes unarchive unresolved projects and archive resolved projects, respectively (a sketch of this call follows).
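For orientation, here is a hedged JavaScript sketch of the archive/unarchive call those HTTP Request nodes make. The endpoint and body follow Clockify's public REST API as I understand it; the workspace ID, project ID, and API key are placeholders you would supply from your own account.

```javascript
// Archive (or unarchive) a Clockify project by updating its "archived" flag.
const resp = await fetch(
  'https://api.clockify.me/api/v1/workspaces/WORKSPACE_ID/projects/PROJECT_ID',
  {
    method: 'PUT',
    headers: {
      'X-Api-Key': 'CLOCKIFY_API_KEY',
      'Content-Type': 'application/json',
    },
    // archived: true when the Syncro ticket is Resolved, false otherwise
    body: JSON.stringify({ archived: true }),
  },
);
console.log(resp.status); // 200 on success
```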
by Ghaith Alsirawan
This workflow is designed for one purpose only: to bulk-upload structured JSON articles from an FTP server into a Qdrant vector database for use in LLM-powered semantic search, RAG systems, or AI assistants. The JSON files are pre-cleaned and contain metadata and rich text chunks, ready for vectorization.

This workflow handles:
- Downloading from FTP
- Parsing & splitting
- Embedding with OpenAI embeddings
- Storing in Qdrant for future querying (a sketch of this step follows this section)

JSON structure format for blog articles:

```json
{
  "id": "article_001",
  "title": "reseguider",
  "language": "sv",
  "tags": ["london", "resa", "info"],
  "source": "alltomlondon.se",
  "url": "https://...",
  "embedded_at": "2025-04-08T15:27:00Z",
  "chunks": [
    {
      "chunk_id": "article_001_01",
      "section_title": "Introduktion",
      "text": "Välkommen till London..."
    },
    ...
  ]
}
```

Benefits
- **Automated Vector Loading**: handles FTP → JSON → Qdrant in a hands-free pipeline.
- **Clean Embedding Input**: supports pre-validated chunks with metadata: titles, tags, language, and article ID.
- **AI-Ready Format**: perfect for Retrieval-Augmented Generation (RAG), semantic search, or assistant memory.
- **Flexible Architecture**: modular and swappable. FTP can be replaced with GDrive/Notion/S3, and embeddings can switch to local models like Ollama.
- **Community Friendly**: this template helps others adopt best practices for vector DB feeding and LLM integration.
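To make the embed-and-store step concrete, here is a hedged standalone sketch (Node 18+, run as an ES module for top-level await) working from the article JSON shown above. The collection name, embedding model, and Qdrant URL are placeholders, and Qdrant point IDs must be integers or UUIDs, so chunk_id is kept in the payload instead.

```javascript
import { randomUUID } from 'node:crypto';

const article = JSON.parse(jsonFromFtp); // the file downloaded from FTP

// 1. Embed all chunk texts in a single OpenAI call.
const embedResp = await fetch('https://api.openai.com/v1/embeddings', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'text-embedding-3-small',
    input: article.chunks.map((c) => c.text),
  }),
});
const { data: embeddings } = await embedResp.json();

// 2. Upsert one Qdrant point per chunk; article metadata goes into the payload.
const points = article.chunks.map((chunk, i) => ({
  id: randomUUID(),
  vector: embeddings[i].embedding,
  payload: {
    chunk_id: chunk.chunk_id,
    article_id: article.id,
    title: article.title,
    section_title: chunk.section_title,
    language: article.language,
    tags: article.tags,
    text: chunk.text,
  },
}));

await fetch('http://localhost:6333/collections/blog_articles/points?wait=true', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ points }),
});
```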
by Tom
This workflow shows a no-code approach to creating Salesforce accounts and contacts based on data coming from Excel 365 (the online version of Microsoft Excel). For a version working with regular Excel files, check out this workflow instead.

To run the workflow:
1. Make sure you have both Excel 365 and Salesforce authenticated with n8n.
2. Have a Microsoft Excel workbook with contacts and their account names ready.
3. Select the workbook and sheet in the Microsoft Excel node of the workflow, then configure the range to read data from.
4. Hit the Execute Workflow button at the bottom of the n8n canvas.

Here is how it works: the workflow first searches for existing Salesforce accounts by name. It then branches out depending on whether the account already exists in Salesforce or not. If an account does not exist yet, it is created. The data is then normalised before both branches converge again. Finally, the contacts are created or updated as needed in Salesforce.
by n8n Team
This workflow performs several data integration and synchronization tasks between Google Sheets and a MySQL database. Here is a step-by-step description of what it does:

1. Manual Trigger: The workflow starts when the user clicks "Execute Workflow."
2. Schedule Trigger: This node schedules the workflow to run at specific intervals on weekdays (Monday to Friday) between 6 AM and 10 PM. It ensures regular data synchronization.
3. Google Sheet Data: This node connects to a specific Google Sheets document and retrieves data from the "Form Responses 1" sheet, filtering by the "DB Status" column.
4. SQL Get inquiries from Google: This node retrieves data from a MySQL database table named "ConcertInquiries" where the "source_name" is "GoogleForm."
5. Rename GSheet variables: This node renames the columns retrieved from Google Sheets and transforms the data into a format suitable for MySQL, assigning the value "GoogleForm" to "source_name."
6. Compare Datasets: This node compares the data retrieved from Google Sheets and the MySQL database based on the timestamp and source_name fields. It identifies changes and updates.
7. No reply too long?: This node checks if there has been no reply within the last four hours, using the "timestamp" field from the Google Sheets data (see the sketch after this list).
8. DB Status assigned?: This node checks if the "DB Status" field is not empty in the compared dataset.
9. Update GSheet status: If the conditions in the previous nodes are met, this node updates the "DB Status" field in Google Sheets with the corresponding value from the MySQL dataset.
10. DB Status in sync?: This node checks if the "source_name" field in Google Sheets is not empty.
11. Sync MySQL data: If the conditions in the previous nodes are met, this node updates the "source_name" field in the MySQL database to "GoogleFormSync."
12. Send Notifications: If the conditions in the "No reply too long?" node are met, this node sends notifications or performs actions as needed.
13. Sticky Notes: These nodes provide additional information and documentation links for users.
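As a minimal sketch of the "No reply too long?" condition, here is how the four-hour check could be written as an n8n Code node. It assumes each row carries its Google Sheets "timestamp" field in a format Date can parse; the field name follows the description above.

```javascript
// Keep only inquiries that have been waiting for more than four hours.
const FOUR_HOURS_MS = 4 * 60 * 60 * 1000;

return $input.all().filter((item) => {
  const age = Date.now() - new Date(item.json.timestamp).getTime();
  return age > FOUR_HOURS_MS;
});
```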
by PretenderX
This template automates sending a DingTalk message on new Azure DevOps Pull Request Created events. It uses a MySQL database to store mappings between Azure users and DingTalk users, so the right users get notified.

Set up instructions
1. Define your own path value for the ReceiveTfsPullRequestCreatedMessage Webhook node, then copy the webhook URL and create an Azure DevOps Service Hook that calls the webhook on the Pull Request Created event.
2. To configure the LoadDingTalkAccountMap node, create a MySQL table as below:

|Name|Type|Length|Key|
|-|-|-|-|
|TfsAccount|varchar|255||
|UserName|varchar|255||
|DingTalkMobile|varchar|255||

3. You can customize the DingTalk message content by editing the BuildDingTalkWebHookData node (a sketch follows).
4. Set the URL of the SendDingTalkMessageViaWebHook HTTP Request node to your DingTalk group chat robot webhook URL.
5. Send a test or production message from Azure DevOps to verify the setup.
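For orientation, here is a hedged sketch of what BuildDingTalkWebHookData could assemble. The msgtype/text/at payload shape follows DingTalk's custom-robot webhook API; the pull request fields and the DingTalkMobile column are assumptions based on the Azure DevOps service hook payload and the mapping table above.

```javascript
// Compose the DingTalk robot message and @-mention the mapped mobile numbers.
const pr = $('ReceiveTfsPullRequestCreatedMessage').first().json.body.resource;
const mobiles = $input.all().map((row) => row.json.DingTalkMobile);

return [{
  json: {
    msgtype: 'text',
    text: {
      content: `New pull request: "${pr.title}" by ${pr.createdBy.displayName}`,
    },
    at: { atMobiles: mobiles, isAtAll: false },
  },
}];
```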
by Harshil Agrawal
This workflow appends, looks up, updates, and reads data from a Google Sheets spreadsheet.

Set node: The Set node is used to generate the data that we want to add to Google Sheets. Depending on your use case, your data might come from a different source. For example, you might be fetching it from a webhook call. Add the node that fetches the data you want to add to the Google Sheet; you can then use the Set node to set the data that should be added to Google Sheets.

Google Sheets node: This node adds the data from the Set node in a new row to the Google Sheet. You will have to enter the Spreadsheet ID and the Range to specify which sheet you want to add the data to.

Google Sheets1 node: This node looks for a specific value in the Google Sheet and returns all the rows that contain the value. In this example, we are looking for the value Berlin in our Google Sheet. If you want to look for a different value, enter that value in the Lookup Value field, and specify the column in the Lookup Column field.

Set1 node: This Set node raises the rent by $100 for the houses in Berlin. We pass this new data to the next nodes in the workflow.

Google Sheets2 node: This node updates the rent for the houses in Berlin with the new rent set in the previous node. We are mapping the rows with their ID. Depending on your use case, you might want to map the values with a different column; to set this, enter the column name in the Key field.

Google Sheets3 node: This node returns the information from the Google Sheet. You can specify the columns that should be returned in the Range field. Currently, the node fetches the data for columns A to D. To fetch the data only for columns A to C, set the range to A:C.

This workflow can be broken down into different workflows, each with its own use case. For example, we could have one workflow that appends new data to a Google Sheet and another that looks up a certain value and returns it. You can learn to build this workflow on the documentation page of the Google Sheets node.
by Davide
This workflow dynamically chooses between two new powerful Anthropic models, Claude Opus 4 and Claude Sonnet 4, to handle user queries based on their complexity and nature, maintaining scalability and context awareness with Anthropic's web search function and a Think tool.

Key Advantages
- **Dynamic Model Selection**: automatically routes each user query to either Claude Sonnet 4 (for routine tasks) or Claude Opus 4 (for complex reasoning), ensuring optimal performance and cost-efficiency.
- **AI Agent with Tool Use**: the AI agent can use a web search tool to retrieve up-to-date information and a Think tool for complex reasoning processes, improving response quality.
- **Memory Integration**: uses session-based memory to maintain conversational context, making interactions more coherent and human-like.
- **Built-in Calculation Tool**: handles numeric queries using an integrated calculator tool, reducing the need for external processing.
- **Structured Output Parser**: ensures outputs are always well-structured and formatted as JSON, which improves consistency and downstream integrations.
- **Web Search Capability**: supports real-time information retrieval for current events, statistics, or details not available in the AI's base knowledge.

Components Overview
- **Trigger**: listens for new chat messages.
- **Routing Agent**: analyzes the message and returns the best model to use (an example of its output follows this section).
- **AI Agent**: handles the conversation and decides when to use tools.
- **Tools**: web_search for internet queries, Think for reasoning, Calculator for math tasks.
- **Models Used**:
  - claude-sonnet-4-20250514: optimized for general and business-logic tasks.
  - claude-opus-4-20250514: best for deep, strategic, and analytical queries.

How It Works
Dynamic Model Selection: the workflow begins when a chat message is received. The Anthropic Routing Agent analyzes the user's query to determine the most suitable model (either Claude Sonnet 4 or Claude Opus 4) based on the query's complexity and requirements. The routing agent uses predefined criteria to decide:
- Claude Sonnet 4: best for standard tasks like real-time workflow routing, data validation, and routine business logic.
- Claude Opus 4: reserved for complex scenarios requiring deep reasoning, advanced analysis, or high-impact decisions.

Query Processing and Response Generation: the selected model processes the query, leveraging tools like web_search for real-time information retrieval, Think for internal reasoning, and Calculator for numerical tasks. The AI Agent coordinates these tools, ensuring the response is accurate and context-aware. A Simple Memory node retains session context for coherent multi-turn conversations. The final response is formatted and returned to the user without intermediate steps or metadata.

Set Up Steps
1. Node Configuration
- Trigger: configure the "When chat message received" node to handle incoming user queries.
- Routing Agent: set up the "Anthropic Routing Agent" with the system message defining the model selection logic. Ensure it outputs a JSON object with prompt and model fields.
- AI Model Nodes: link the "Sonnet 4 or Opus 4" node to dynamically use the selected model. The "Sonnet 3.7" node powers the routing agent itself.
2. Tool Integration
- Attach the "web_search" HTTP tool to enable internet searches, ensuring the API endpoint and headers (e.g., anthropic-version) are correctly configured.
- Connect the auxiliary tools (Think, Calculator) to the "AI Agent" for extended functionality.
- Add the "Simple Memory" node to maintain conversation history.
3. Credentials
- Provide an Anthropic API key to all nodes requiring authentication (e.g., model nodes, web search).
4. Testing
- Activate the workflow and test with sample queries to verify:
  - Correct model selection (e.g., Sonnet for simple queries, Opus for complex ones).
  - Proper tool usage (e.g., web searches trigger when needed).
  - Memory retention across chat turns.
5. Deployment
- Once validated, set the workflow to active for live interactions.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
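As an illustration, the routing agent's structured output might look like the following. This is a hypothetical example: the actual fields are whatever your system message specifies, here the prompt and model fields described above.

```json
{
  "prompt": "Compare our Q3 and Q4 churn rates and recommend a retention strategy",
  "model": "claude-opus-4-20250514"
}
```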
by Mauricio Perera
Overview
This workflow exposes an HTTP endpoint (webhook) that accepts a JSON definition of an n8n workflow, validates it, and, if everything is correct, dynamically creates that workflow in the n8n instance via its internal API. If any validation fails or the API call encounters an error, an explanatory message with details is returned.

Workflow Diagram

```
Webhook
   │
   ▼
Validate JSON ── fails validation ──► Validation Error
   │
   └─ passes ──► Validation Successful?
        ├─ true ──► Create Workflow ──► API Successful? ──► Success Response
        │                                   └─ false ──► API Error
        └─ false ──► Validation Error
```

Step-by-Step Details

1. Webhook
- **Type**: Webhook (POST)
- **Path**: /webhook/create-workflow
- **Purpose**: Expose a URL to receive a JSON definition of a workflow.
- **Expected Input**: JSON containing the main workflow fields (name, nodes, connections, settings).

2. Validate JSON (a runnable sketch of this node appears after the example responses)
- **Type**: Code Node (JavaScript)
- **Validations Performed**:
  - Ensure that the payload exists and contains both name and nodes.
  - Verify that nodes is an array with at least one item.
  - Check that each node includes the required fields: id, name, type, position.
  - If missing, initialize connections, settings, parameters, and typeVersion.
- **Output if Error**:

```json
{ "success": false, "message": "<error description>" }
```

- **Output if Valid**:

```json
{
  "success": true,
  "apiWorkflow": {
    "name": payload.name,
    "nodes": payload.nodes,
    "connections": payload.connections,
    "settings": payload.settings
  }
}
```

3. Validation Successful?
- **Type**: IF Node
- **Condition**: $json.success === true
- **Branches**: true proceeds to Create Workflow; false routes to Validation Error.

4. Create Workflow
- **Type**: HTTP Request (POST)
- **URL**: http://127.0.0.1:5678/api/v1/workflows
- **Authentication**: Header Auth with internal credentials
- **Body**: the apiWorkflow object generated earlier
- **Options**: continueOnFail: true (to handle failures in the next IF)

5. API Successful?
- **Type**: IF Node
- **Condition**: $response.statusCode <= 299
- **Branches**: true proceeds to Success Response; false routes to API Error.

6. Success Response
- **Type**: SET Node
- **Output**:

```json
{
  "success": "true",
  "message": "Workflow created successfully",
  "workflowId": "{{ $json.data[0].id }}",
  "workflowName": "{{ $json.data[0].name }}",
  "createdAt": "{{ $json.data[0].createdAt }}",
  "url": "http://localhost:5678/workflow/{{ $json.data[0].id }}"
}
```

7. API Error
- **Type**: SET Node
- **Output**:

```json
{
  "success": "false",
  "message": "Error creating workflow",
  "error": "{{ JSON.stringify($json) }}",
  "statusCode": "{{ $response.statusCode }}"
}
```

8. Validation Error
- **Type**: SET Node
- **Output**:

```json
{ "success": false, "message": "{{ $json.message }}" }
```

Example Webhook Request

```bash
curl --location --request POST 'http://localhost:5678/webhook/create-workflow' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "name": "My Dynamic Workflow",
    "nodes": [
      {
        "id": "start-node",
        "name": "Start",
        "type": "n8n-nodes-base.manualTrigger",
        "typeVersion": 1,
        "position": [100, 100],
        "parameters": {}
      },
      {
        "id": "set-node",
        "name": "Set",
        "type": "n8n-nodes-base.set",
        "typeVersion": 1,
        "position": [300, 100],
        "parameters": {
          "values": {
            "string": [
              {
                "name": "message",
                "value": "Hello from a webhook-created workflow!"
              }
            ]
          }
        }
      }
    ],
    "connections": {
      "Start": {
        "main": [
          [
            { "node": "Set", "type": "main", "index": 0 }
          ]
        ]
      }
    },
    "settings": {}
  }'
```

Expected Success Response

```json
{
  "success": "true",
  "message": "Workflow created successfully",
  "workflowId": "abcdef1234567890",
  "workflowName": "My Dynamic Workflow",
  "createdAt": "2025-05-31T12:34:56.789Z",
  "url": "http://localhost:5678/workflow/abcdef1234567890"
}
```

Validation Error Response

```json
{
  "success": false,
  "message": "The 'name' field is required in the workflow"
}
```

API Error Response

```json
{
  "success": "false",
  "message": "Error creating workflow",
  "error": "{ ...full API response details... }",
  "statusCode": 401
}
```
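If you need to adapt the validation step, here is a minimal JavaScript sketch of the Validate JSON Code node described in step 2. It assumes the webhook payload arrives under $json.body; the fallback defaults for connections and settings are illustrative choices, not part of the original node.

```javascript
// Hedged sketch of the "Validate JSON" Code node logic (step 2 above).
const payload = $json.body ?? $json;

const fail = (message) => [{ json: { success: false, message } }];

if (!payload || !payload.name) {
  return fail("The 'name' field is required in the workflow");
}
if (!Array.isArray(payload.nodes) || payload.nodes.length === 0) {
  return fail("'nodes' must be a non-empty array");
}
for (const node of payload.nodes) {
  for (const field of ['id', 'name', 'type', 'position']) {
    if (node[field] === undefined) {
      return fail(`Node is missing required field '${field}'`);
    }
  }
  // Initialize optional fields, as described above
  node.parameters = node.parameters ?? {};
  node.typeVersion = node.typeVersion ?? 1;
}

return [{
  json: {
    success: true,
    apiWorkflow: {
      name: payload.name,
      nodes: payload.nodes,
      connections: payload.connections ?? {},
      settings: payload.settings ?? {},
    },
  },
}];
```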
by Nikan Noorafkan
Template: Extract Ad Creatives from Google's Ads Transparency Center

This n8n workflow pulls ad creatives from Google's Ads Transparency Center using SerpApi, filtered by a specific domain and region. It extracts, filters, categorizes, and exports ads into neatly formatted CSV files for easy analysis.

Who's it for?
- **Marketing Analysts** researching competitive PPC strategies
- **Ad Intelligence Teams** monitoring creatives from specific brands
- **Digital Marketers** gathering visual and copy trends
- **Journalists & Watchdogs** reviewing ad activity transparency

Features
- **Fetch creatives** using SerpApi's google_ads_transparency_center engine (a request sketch follows)
- **Filter results** to include only ads with an exact match to your target domain
- **Categorize** by ad format: text, image, or video
- **Export CSVs**: generates a downloadable file for each format under the /files/ directory

How to Use
1. Edit the "Set Domain & Region" node: set domain (e.g. example.com) and region (a SerpApi numeric region code; see the SerpApi documentation for the list of codes).
2. Add your SerpApi API key in the "Get Ads Page 1" node's credentials section.
3. Run the workflow: click "Test workflow" to initiate the process.
4. Download your results: navigate to /files/ to find text_{domain}_ads.csv, image_{domain}_ads.csv, and video_{domain}_ads.csv.

Notes
- Only the first page (up to 50 creatives) is fetched; pagination is not included.
- Sticky Notes inside the workflow offer helpful internal annotations.
- CSV files include creative-level details: ad copy, images, video links, etc.
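For reference, here is a hedged JavaScript sketch of the SerpApi call behind "Get Ads Page 1" (Node 18+). The parameter names (text, region) and the ad_creatives response key are assumptions based on SerpApi's Ads Transparency Center documentation; replace the placeholder values with your own domain, region code, and API key.

```javascript
// Fetch ad creatives for one domain/region from SerpApi.
const params = new URLSearchParams({
  engine: 'google_ads_transparency_center',
  text: 'example.com',  // target domain
  region: '2840',       // SerpApi numeric region code (assumed: 2840 = United States)
  api_key: process.env.SERPAPI_KEY,
});

const resp = await fetch(`https://serpapi.com/search.json?${params}`);
const { ad_creatives = [] } = await resp.json();

// Mirror the workflow's text/image/video split.
const byFormat = { text: [], image: [], video: [] };
for (const ad of ad_creatives) {
  if (byFormat[ad.format]) byFormat[ad.format].push(ad);
}
console.log(
  Object.fromEntries(Object.entries(byFormat).map(([k, v]) => [k, v.length])),
);
```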
by Solomon
The Stripe API does not provide custom fields in invoice or charge data, so you have to get them from the Checkout Sessions endpoint. But that endpoint is not easy for beginners: it has dictionary parameters and pagination settings.

This workflow solves that problem with a preconfigured GET request that retrieves all the checkout sessions from the last 7 days (see the sketch below). It then transforms the data to make it easier to work with and lets you filter by the custom_fields you want to get.

Want to generate Stripe invoices automatically? Open this workflow.

Check out my other templates: https://n8n.io/creators/solomon/
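To show what that preconfigured request does under the hood, here is a hedged standalone sketch (Node 18+, run as an ES module for top-level await). It lists Checkout Sessions created in the last 7 days with cursor pagination and flattens their custom_fields; the output row shape is an illustrative choice, not the workflow's exact format.

```javascript
// List Checkout Sessions from the last 7 days, following Stripe's
// starting_after cursor until has_more is false.
const since = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;
const base = `https://api.stripe.com/v1/checkout/sessions?limit=100&created[gte]=${since}`;
const sessions = [];

let url = base;
while (url) {
  const resp = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}` },
  });
  const page = await resp.json();
  sessions.push(...page.data);
  url = page.has_more
    ? `${base}&starting_after=${page.data.at(-1).id}`
    : null;
}

// Flatten custom_fields (each has a key, a type, and a typed value object)
// into plain key/value pairs for easier filtering downstream.
const rows = sessions.map((s) => ({
  id: s.id,
  email: s.customer_details?.email,
  custom_fields: Object.fromEntries(
    (s.custom_fields ?? []).map((f) => [f.key, f[f.type]?.value]),
  ),
}));
console.log(rows);
```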