by Harshil Agrawal
This workflow automatically creates an event in PostHog when a request is made to a webhook URL.
Prerequisites
- A PostHog account and credentials
Nodes
- Webhook node: triggers the workflow when a URL is accessed.
- PostHog node: creates a new event in PostHog.
by Harshil Agrawal
This workflow allows you to create an invoice with the information received via a Typeform submission.
- Typeform node: triggers the workflow whenever the form is submitted. The information received here is used to generate the invoice.
- APITemplate.io node: generates the invoice using the information from the previous node.
by Harshil Agrawal
This workflow allows you to receive a message on Mattermost when your n8n instance starts. n8n Trigger node: The n8n Trigger node will trigger the workflow whenever the instance starts. Mattermost node: This node will send a message on Mattermost, notifying you when n8n starts.
by Jan Oberhauser
A simple workflow that lets you receive data from a Google Sheet via a "REST" endpoint.
- Wait for webhook call
- Get data from Google Sheet
- Return data
Example sheet: https://docs.google.com/spreadsheets/d/17fzSFl1BZ1njldTfp5lvh8HtS0-pNXH66b7qGZIiGRU
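Once the workflow is active, any HTTP client can call the webhook to pull the sheet data. Here is a minimal sketch in JavaScript; the URL path is a placeholder (use the production URL shown on your Webhook node), and it assumes an environment with top-level await such as a Node 18+ ESM script:

```javascript
// Minimal sketch: fetch sheet data from the n8n webhook endpoint.
// The URL below is a placeholder, not part of the template.
const response = await fetch("https://your-n8n-instance.example/webhook/sheet-data");
const rows = await response.json();

// Each item corresponds to one row of the Google Sheet.
for (const row of rows) {
  console.log(row);
}
```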
by q
This workflow automatically notifies the team in a Slack channel when code in a GitHub repository gets a new release.
Prerequisites
- A GitHub account and credentials
- A Slack account and credentials
Nodes
- GitHub Trigger node: triggers the workflow when a release event takes place in the specified repository.
- Slack node: posts a message in a specified channel with the text "New release is available in {repository name}", along with further details and a link to the release.
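If you want to tweak the notification text, the message can also be assembled in a small Code node placed between the trigger and the Slack node. This is only a sketch; the field names follow the standard GitHub release webhook payload, so verify them against what your trigger actually outputs:

```javascript
// Build the Slack message text from the GitHub release event.
// Assumes the standard GitHub webhook payload with `repository` and `release` objects.
const payload = $input.first().json;

const text =
  `New release is available in ${payload.repository.name}: ` +
  `${payload.release.tag_name} (${payload.release.html_url})`;

return [{ json: { text } }];
```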
by David Olusola
Overview
This workflow regularly backs up a Google Sheet by exporting its data and saving it as a new file (CSV or XLSX) in a specified folder within your Google Drive. This ensures data redundancy and historical versions.
Use case: critical business data backup, audit trails, historical data snapshots.
How It Works
This workflow operates in three main steps:
1. Scheduled Trigger: a Cron node triggers the workflow at a set interval (e.g., daily, weekly).
2. Read Google Sheet Data: a Google Sheets node reads all data from the specified tab of your target Google Sheet.
3. Upload to Google Drive: a Google Drive node takes the data read from the sheet, converts it into a file (e.g., CSV or XLSX format), and uploads it to a pre-defined folder in your Google Drive, with a dynamic filename that includes the date for versioning.
Setup Steps
To get this workflow up and running, follow these instructions:
Step 1: Create Google Sheets and Google Drive credentials in n8n
- In your n8n instance, go to Credentials in the left sidebar.
- Ensure you have a "Google Sheets OAuth2 API" credential set up. If not, create one.
- Ensure you have a "Google Drive OAuth2 API" credential set up. If not, create one.
- Make note of their credential names.
Step 2: Prepare your Google Sheet and Drive folder
- Source Google Sheet: identify the Google Sheet you want to back up, copy its Document ID (from the URL), and note the Sheet Name (or GID) of the specific tab you want to back up.
- Destination Google Drive folder: go to your Google Drive (drive.google.com), create a new folder for your backups (e.g., Google Sheets Backups), and copy the Folder ID from its URL.
Step 3: Import the workflow JSON
Step 4: Configure the nodes
- Read Google Sheet Data node: select your Google Sheets credential, replace YOUR_SOURCE_GOOGLE_SHEET_ID with the ID of the Google Sheet you want to back up, and replace Sheet1 with the exact name of the tab you want to back up.
- Upload Backup to Google Drive node: select your Google Drive credential and replace YOUR_DESTINATION_GOOGLE_DRIVE_FOLDER_ID with the ID of the Google Drive folder where you want to store backups.
- File type: fileType is set to csv. You can change this to xlsx if you prefer an Excel format for the backup (though CSV is often simpler for raw data backups).
Step 5: Activate and test the workflow
- Click the "Activate" toggle button.
- To test immediately, click "Execute Workflow".
- Check your Google Drive backup folder. A new file named something like backup_Sheet1_2025-07-26.csv should appear (a sketch of the filename logic follows below).
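If you ever want to build the dated filename or the CSV content yourself, for example in a Code node instead of the built-in file conversion, a minimal sketch could look like this. It assumes each incoming item is one sheet row; the field handling is illustrative and not the template's exact implementation:

```javascript
// Build a dated backup filename and a simple CSV string from the sheet rows.
// Illustrative only: the template itself uses the Google Drive node's file conversion.
const rows = $input.all().map(item => item.json);

const date = new Date().toISOString().slice(0, 10); // e.g. 2025-07-26
const fileName = `backup_Sheet1_${date}.csv`;

const headers = Object.keys(rows[0] ?? {});
const csv = [
  headers.join(","),
  ...rows.map(row => headers.map(h => JSON.stringify(row[h] ?? "")).join(",")),
].join("\n");

return [{ json: { fileName, csv } }];
```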
by Baptiste Fort
Export Google Search Console Data to Airtable Automatically
If you've ever downloaded CSV files from Google Search Console, opened them in Excel, cleaned the weird formatting, and pasted them into a sheet just to get a simple report… this workflow is made for you.
Who Is This Workflow For?
This automation is perfect for:
- **SEO freelancers and consultants** who want to track client performance without wasting time on manual exports.
- **Marketing teams** who need fresh daily/weekly reports to check what keywords and pages are performing.
- **Website owners** who just want a clean way to see how their site is doing without logging into Google Search Console every day.
Basically, if you care about SEO but don't want to babysit CSV files, this workflow is your new best friend. If you need a professional n8n agency to build advanced data automation workflows like this, check out Vision IA's n8n automation services.
What Does It Do?
Here's the big picture:
- It runs on a schedule (every day, or whenever you want).
- It fetches data directly from the Google Search Console API.
- It pulls 3 types of reports: by Query (keywords people used), by Page (URLs that ranked), and by Date (daily performance).
- It splits and cleans the data so it's human-friendly.
- It saves everything into Airtable, organized in three tables.
End result: every time you open Airtable, you have a neat SEO database with clicks, impressions, CTR, and average position, with no manual work required.
Prerequisites
You'll need a few things to get started:
- Access to Google Search Console.
- A Google Cloud project with the Search Console API enabled.
- An Airtable account to store the data.
- An automation tool that can connect APIs (like the one we're using here).
That's it!
Step 1: Schedule the Workflow
The very first node in the workflow is the Schedule Trigger.
- **Why?** So you don't have to press "Run" every day.
- **What it does:** it starts the whole workflow at fixed times.
In the JSON, you can configure things like running every day at a specific hour (e.g., 8 AM), or every X hours/minutes if you want more frequent updates. This is the alarm clock of your automation ⏰.
Step 2: Set Your Domain and Time Range
Next, we define the site and the time window for the report. In the JSON, there's a Set node with two important parameters:
- domain → your website (example: https://www.vvv.fr/).
- days → how many days back you want the data (default: 30).
👉 Changing these two values updates the whole workflow. Super handy if you want 7-day reports instead of 30.
Step 3: Fetch Data from Google Search Console
This is where the workflow talks to the API. There are 3 HTTP Request nodes:
- Get Query Report: pulls data grouped by search queries (keywords). Parameters in the JSON: startDate = today - 30 days, endDate = today, dimensions = "query", rowLimit = 25000 (the maximum number of rows the API can return).
- Get Page Report: same idea, but grouped by page URLs. Parameters: dimensions = "page", with the same dates and row limit.
- Get Date Report: groups performance by date (dimensions = "date"), so you get a day-by-day performance view.
Each request returns rows like this:
{ "keys": ["example keyword"], "clicks": 42, "impressions": 1000, "ctr": 0.042, "position": 8.5 }
Step 4: Split the Data
The API sends results in one big array (rows), which is not very usable directly. So we add a Split Out node for each report. What it does: breaks the array into single items, meaning 1 item per keyword, per page, or per date. This way, each line can be saved individually into Airtable.
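For reference, the Split Out step is equivalent to a small Code node like the one below. This is only a sketch, assuming the HTTP Request node returns the Search Console response with a top-level rows array like the example above:

```javascript
// Turn the Search Console response's `rows` array into one n8n item per row.
// Equivalent to what the Split Out node does; shown here only for illustration.
const response = $input.first().json;

return (response.rows ?? []).map(row => ({
  json: {
    keys: row.keys,
    clicks: row.clicks,
    impressions: row.impressions,
    ctr: row.ctr,
    position: row.position,
  },
}));
```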
👉 Think of it like opening a bag of candy and laying each one neatly on the table 🍬.
Step 5: Clean and Rename Fields
After splitting, we use Edit Fields nodes to make the data human-friendly. For example:
- In the Query report, rename keys[0] to Keyword.
- In the Page report, rename keys[0] to page.
- In the Date report, rename keys[0] to date.
This is also where we keep only the useful fields: Keyword / page / date, clicks, impressions, ctr, and position.
Step 6: Save Everything into Airtable
Finally, the polished data is sent into Airtable. In the JSON, there are 3 Airtable nodes:
- **Queries table**: stores all the keywords.
- **Pages table**: stores all the URLs.
- **Dates table**: stores day-by-day metrics.
Each node is set to:
- **Operation** = Create (adds a new record).
- **Base** = Search Console Reports.
- **Table** = Queries, Pages, or Dates.
Field Mapping
For Queries:
- Keyword → {{ $json.Keyword }}
- clicks → {{ $json.clicks }}
- impressions → {{ $json.impressions }}
- ctr → {{ $json.ctr }}
- position → {{ $json.position }}
👉 The same logic applies to Pages and Dates; just replace Keyword with page or date.
Expected Output
Every time this workflow runs:
- The **Queries table** fills with fresh keyword performance data.
- The **Pages table** shows how your URLs performed.
- The **Dates table** tracks the evolution day by day.
In Airtable, you now have a complete SEO database with no manual exports.
Why This Is Awesome
- 🚫 No more messy CSV exports.
- 📈 Data is always up to date.
- 🎛 You can build Airtable dashboards, filters, and interfaces.
- ⚙️ Easy to adapt: just change domain or days to customize.
And the best part? You can spend the time you saved on actual SEO improvements instead of spreadsheet gymnastics 💃.
Need Help Automating Your Data Workflows?
This n8n workflow is perfect for automating SEO reporting and data collection. If you want to go further with document automation, file processing, and data synchronization across your tools, our agency specializes in building custom automation systems.
👉 Explore our document automation services: Vision IA – Document Automation Agency
We help businesses automate their data workflows, from collecting reports to organizing files and syncing information across CRMs, spreadsheets, and databases, all running automatically.
Questions about this workflow or other automation solutions? Visit Vision IA or reach out for a free consultation.
by Eugene
Who is this for
- Marketing teams tracking AI SEO performance
- Content strategists planning editorial calendars
- SEO teams doing competitive intelligence
What this workflow does
Identify content opportunities by analyzing where competitors outrank you in AI search and traditional SEO.
What you'll get
- AI visibility gaps across ChatGPT, Perplexity, and Gemini
- Keyword gaps with search volume and difficulty
- Competitor backlink authority metrics
- Prioritized opportunities with HIGH/MEDIUM/LOW scoring
- Actionable recommendations for each gap
How it works
1. Fetches AI search visibility for your domain and competitor
2. Compares metrics across ChatGPT, Perplexity, and Gemini
3. Extracts the competitor's top-performing prompts and keywords
4. Analyzes competitor backlink authority
5. Calculates opportunity scores and prioritizes gaps
6. Exports ranked opportunities to Google Sheets
Requirements
- Self-hosted n8n instance
- SE Ranking community node installed
- SE Ranking API token (Get one here)
- Google Sheets account (optional)
Setup
1. Install the SE Ranking community node
2. Add your SE Ranking API credentials
3. Update the Configuration node with your domain and competitor
4. Connect Google Sheets for export (optional)
Customization
- Change source for different regions (us, uk, de, fr, etc.)
- Adjust volume/difficulty thresholds in the Code nodes (a hedged scoring sketch follows below)
- Modify priority scoring weights
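To give an idea of what those threshold and weight tweaks look like, here is a rough sketch of a scoring helper you might write in one of the Code nodes. The weights, thresholds, and field names below are illustrative assumptions, not the template's exact logic:

```javascript
// Illustrative opportunity scoring: combine search volume and keyword difficulty
// into a HIGH/MEDIUM/LOW priority. Thresholds and weights are assumptions to adjust.
function scoreOpportunity(volume, difficulty) {
  const score = volume * 0.7 - difficulty * 10; // hypothetical weighting

  if (score > 500 && difficulty < 40) return "HIGH";
  if (score > 100) return "MEDIUM";
  return "LOW";
}

return $input.all().map(item => ({
  json: {
    ...item.json,
    priority: scoreOpportunity(item.json.volume ?? 0, item.json.difficulty ?? 0),
  },
}));
```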
by Robin Geuens
Overview
Turn your keyword research into a clear, fact-based content outline with this workflow. It splits your keyword into 5-6 subtopics, makes research questions for those subtopics, and uses Tavily to pull answers from real search results. This way your outline is based on real data, not just AI training data, so you can create accurate and reliable content.
How it works
1. Enter a keyword in the form to start the workflow.
2. The OpenAI node splits the keyword into 5-6 research subtopics and makes a research question for each one. These questions will be used to enrich the outline later on.
3. We split the research questions into separate items so we can process them one by one.
4. Each research question is sent to Tavily. Tavily searches the web for answers and returns a short summary.
5. Next, we add the answers to our JSON sections.
6. We take all the separate items and join them into one list again.
7. The JSON outline is converted into Markdown using a Code node. The code takes the JSON headers, turns them into Markdown headings (level 2), and puts the answers underneath (see the sketch after this section).
Setup steps
1. Get an OpenAI API key and set up your credentials inside n8n.
2. Sign up for a Tavily account and get an API key; you can use a free account for testing.
3. Install the Tavily community node. If you don't want to use a community node, you can call Tavily directly using an HTTP node; check their API reference for which endpoints to call.
4. Run the workflow and enter the keyword you want to target in the form.
5. Adjust the workflow to decide what to do with the Markdown outline.
Requirements
- An OpenAI API key
- A Tavily account
- The Tavily community node installed
- (Optional) If you don't want to use the Tavily community node, use a regular HTTP node and call the API directly. Check their API reference for which endpoints to call.
Workflow customizations
- Instead of using a form to enter your keyword, you can keep all your research in a Google Doc and go through it row by row.
- You can add another AI node at the end to turn the outline into a full article.
- You can put the outline in a Google Doc and send it to a writer using the Google Docs node and the Gmail node.
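The Markdown conversion step mentioned above only takes a few lines in a Code node. A minimal sketch, assuming each item carries its subtopic and Tavily answer in fields named header and answer (adjust the names to match your own data):

```javascript
// Convert the researched subtopics into a Markdown outline:
// each subtopic becomes a level-2 heading with its Tavily answer underneath.
const sections = $input.all().map(item => item.json);

const markdown = sections
  .map(section => `## ${section.header}\n\n${section.answer}`)
  .join("\n\n");

return [{ json: { markdown } }];
```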
by Ranjan Dailata
This workflow automates brand intelligence analysis across AI-powered search results by combining SE Ranking's AI Search data with structured processing in n8n. It retrieves real AI-generated prompts, answers, and cited sources where a brand appears, then normalizes and consolidates this data into a clean, structured format. The workflow eliminates manual review of AI SERPs and makes it easy to understand how AI search engines describe, reference, and position a brand.
Who this is for
This workflow is designed for:
- **SEO strategists and growth marketers** analyzing brand visibility in AI-powered search engines
- **Content strategists** identifying how brands are represented in AI answers
- **Competitive intelligence teams** tracking brand mentions and narratives
- **Agencies and consultants** building AI SERP reports for clients
- **Product and brand managers** monitoring AI-driven brand perception
What problem is this workflow solving?
Traditional SEO tools focus on rankings and keywords but do not capture how AI search engines talk about brands. Key challenges this workflow addresses:
- No visibility into AI-generated prompts and answers mentioning a brand
- Difficulty extracting linked sources and references from AI SERPs
- Manual effort required to normalize and structure AI search responses
- Lack of export-ready datasets for reporting or downstream automation
What this workflow does
At a high level, this workflow:
1. Accepts a brand name and AI search parameters
2. Fetches real AI search prompts, answers, and citations from SE Ranking
3. Extracts and normalizes: prompts with answers, supporting reference links, and the raw AI SERP JSON
4. Merges all outputs into a unified structured dataset (a rough sketch of this consolidated dataset appears at the end of this section)
5. Exports the final result as structured JSON ready for analysis, reporting, or storage
This enables brand-level AI SERP intelligence in a repeatable and automated way.
Setup
Prerequisites
- n8n (self-hosted or cloud)
- Active SE Ranking API access
- HTTP Header authentication configured in n8n
- Local or server file system access for JSON export
Setup steps
1. If you are new to SE Ranking, please sign up on seranking.com.
2. Configure credentials: SE Ranking uses HTTP Header Authentication. The header value should contain "Token" followed by a space and your SE Ranking API key.
3. Set input parameters: brand name, AI engine (e.g., Perplexity), source/region, sorting preferences, and result limits.
4. Configure output: update the file path in the "Write File to Disk" node and ensure write permissions are available.
5. Execute the workflow: click Execute Workflow, and the generated brand intelligence is saved as structured JSON.
How to customize this workflow
You can easily adapt this workflow to your needs:
- **Change brand focus**: modify the brand input to analyze competitors or product names.
- **Switch AI engines**: compare brand narratives across different AI search engines.
- **Add AI enrichment**: insert OpenAI or Gemini nodes to summarize brand sentiment or themes.
- **Classification & tagging**: categorize prompts into awareness, comparison, pricing, reviews, etc.
- **Replace file export**: send results to databases, Google Sheets, dashboards, webhooks, or APIs.
- **Scale for monitoring**: schedule runs to track brand perception changes over time.
Summary
This workflow delivers true AI SERP brand intelligence by combining SE Ranking's AI Search data with structured extraction and automation in n8n. It transforms opaque AI-generated brand mentions into actionable, exportable insights, enabling SEO, content, and brand teams to stay ahead in the era of AI-first search.
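To picture the consolidated output, here is a rough sketch of how the merged dataset might be assembled before it is written to disk. The field names are illustrative assumptions; the actual structure depends on the SE Ranking response and your Merge node configuration:

```javascript
// Illustrative consolidation of the extracted views into one structured object.
// Field names are assumptions; adapt them to the actual SE Ranking API response.
const items = $input.all().map(item => item.json);

const dataset = {
  brand: "your-brand",            // the brand name you configured
  engine: "perplexity",           // the AI engine selected in the input parameters
  promptsWithAnswers: items.map(i => ({ prompt: i.prompt, answer: i.answer })),
  referenceLinks: items.flatMap(i => i.links ?? []),
  rawSerp: items,                 // keep the raw AI SERP JSON for auditing
};

return [{ json: dataset }];
```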
by Rapiwa
Who Is This For?
This n8n workflow listens for order cancellations in Shopify, extracts the relevant customer and order data, checks whether the customer's phone number is registered on WhatsApp via the Rapiwa API, and sends a personalised apology message with a re-order link. It also logs successful and unsuccessful attempts in Google Sheets for tracking.
What This Workflow Does
- Listens for cancelled orders in your Shopify store
- Extracts customer details and order information
- Generates a personalised apology message including a reorder link
- Sends the message to customers via WhatsApp using a messaging API (e.g., Twilio or Rapiwa)
- Logs the communication results for tracking purposes
Key Features
- **Real-Time Cancellation Detection:** automatically triggers when an order is cancelled
- **Personalised Messaging:** includes customer name, order details, and a direct reorder link
- **WhatsApp Integration:** sends messages via WhatsApp for higher engagement
- **Error Handling:** logs successful and failed message deliveries
- **Reorder Link:** provides a convenient link for customers to reorder with one click
Requirements
- n8n instance with these nodes: Shopify Trigger, HTTP Request (for the WhatsApp API), Code, Google Sheets (optional)
- Shopify store with API access
- WhatsApp messaging provider account with API access
- Valid customer phone numbers stored in Shopify orders
How to Use — Step-by-Step Setup
1. Credentials setup
   - Shopify API: configure Shopify API credentials in n8n to listen for order cancellations.
   - WhatsApp API: set up WhatsApp messaging credentials (e.g., Twilio, Rapiwa, or any supported provider).
   - Google Sheets (optional): configure Google Sheets OAuth2 if you want to log communications.
2. Configure the trigger: set the workflow to trigger on Shopify order cancellation events.
3. Customize the message content: modify the apology message template to include your store branding and tone, and ensure the reorder link dynamically includes the customer's cancelled order info (a hedged sketch of this step appears at the end of this section).
4. Set up the WhatsApp node: connect your WhatsApp API credentials and ensure the phone numbers are formatted correctly for WhatsApp delivery.
Google Sheet Required Columns
You'll need two Google Sheets (or two tabs in one spreadsheet). A Google Sheet formatted like this ➤ sample. The workflow uses a Google Sheet with the following columns to track message delivery:

| Name | Number | Email | Address | Price | Title | Re-order Link | Validity | Status |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | Dhaka, Bangladesh | BDT 1955.00 | Pakistani Lawn | Link 🔗 | unverified | not sent |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | Dhaka, Bangladesh | BDT 1955.00 | Pakistani Lawn | Link 🔗 | verified | sent |

Important Notes
- **Phone Number Validation:** ensure customer phone numbers are WhatsApp-enabled and formatted properly
- **API Rate Limits:** respect your WhatsApp provider's API limits to avoid throttling
- **Data Privacy:** always comply with privacy laws when messaging customers
- **Error Handling:** monitor logs regularly to handle failed message deliveries
- **Testing:** test thoroughly with dummy data before activating the workflow live
Useful Links
- **Dashboard:** https://app.rapiwa.com
- **Official Website:** https://rapiwa.com
- **Documentation:** https://docs.rapiwa.com
Support & Help
- **WhatsApp**: Chat on WhatsApp
- **Discord**: SpaGreen Community
- **Facebook Group**: SpaGreen Support
- **Website**: https://spagreen.net
- **Developer Portfolio**: Codecanyon SpaGreen
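As a hedged illustration of the message-building step described above, a Code node between the Shopify Trigger and the WhatsApp request could assemble the apology text roughly like this. Field names follow the usual Shopify order payload, but verify them against your own trigger output; the reorder URL is a placeholder:

```javascript
// Build a personalised apology message with a reorder link from the cancelled order.
// Field names assume a standard Shopify order payload; the reorder URL is a placeholder.
const order = $input.first().json;

const name = order.customer?.first_name ?? "there";
const phone = (order.customer?.phone ?? order.phone ?? "").replace(/[^0-9]/g, "");
const reorderLink = `https://your-store.example/reorder?order=${order.id}`;

const message =
  `Hi ${name}, we're sorry to see your order ${order.name} was cancelled. ` +
  `If you'd like to try again, you can reorder in one click: ${reorderLink}`;

return [{ json: { phone, message } }];
```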
by Guillaume Duvernay
This template provides a straightforward technique to measure and raise awareness about the environmental impact of your AI automations. By adding a simple calculation step to your workflow, you can estimate the carbon footprint (in grams of CO₂ equivalent) generated by each call to a Large Language Model.
Based on the open methodology from Ecologits.ai, this workflow empowers you to build more responsible AI applications. You can use the calculated footprint to inform your users, track your organization's impact, or simply be more mindful of the resources your workflows consume.
Who is this for?
- **Environmentally-conscious developers:** build AI-powered applications with an awareness of their ecological impact.
- **Businesses and organizations:** track and report on the carbon footprint of your AI usage as part of your sustainability goals.
- **Any n8n user using AI:** a simple and powerful snippet that can be added to almost any AI workflow to make its invisible environmental costs visible.
- **Educators and advocates:** use this as a practical tool to demonstrate and discuss the real-world impact of AI technologies.
What problem does this solve?
- **Makes the abstract tangible:** the environmental cost of a single AI call is often overlooked. This workflow translates it into a concrete, measurable number (grams of CO₂e).
- **Promotes responsible AI development:** encourages builders to consider the efficiency of their prompts and models by showing the direct impact of the generated output.
- **Provides a standardized starting point:** offers a simple, transparent, and extensible method for carbon accounting in your AI workflows, based on a credible, open-source methodology.
- **Facilitates transparent communication:** gives you the data needed to transparently communicate the impact of your AI features to stakeholders and users.
How it works
This template demonstrates a simple calculation snippet that you can adapt and add to your own workflows.
1. Set conversion factor: a dedicated Conversion factor node at the beginning of the workflow holds the gCO₂e-per-token value. This makes it easy to configure.
2. AI generates output: an AI node (in this example, a Basic LLM Chain) runs and produces a text output.
3. Estimate token count: the Calculate gCO₂e node takes the character length of the AI's text output and divides it by 4. This provides a reasonable estimate of the number of tokens generated.
4. Calculate carbon footprint: the estimated token count is then multiplied by the conversion factor defined in the first node. The result is the carbon footprint for that single AI call.
Setup
1. Set your conversion factor (critical step): the default factor (0.0612) is for GPT-4o hosted in the US. Visit ecologits.ai/latest to find the specific conversion factor for your AI model and server region, then replace the default value in the Conversion factor node.
2. Integrate the snippet into your workflow: copy the Conversion factor and Calculate gCO₂e nodes from this template. Place the Conversion factor node near the start of your workflow (before your AI node) and the Calculate gCO₂e node after your AI node.
3. Link your AI output: click on the Calculate gCO₂e node. In the AI output field, replace the expression with the output from your AI node (e.g., {{ $('My OpenAI Node').item.json.choices[0].message.content }}). The carbon calculation will now work with your data.
4. Activate your workflow. The carbon footprint will now be calculated with each execution (a minimal sketch of the calculation follows below).
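In practice, the Calculate gCO₂e step boils down to a couple of lines. A minimal sketch of the same calculation in a Code node, assuming the AI text arrives in a field named text and using the template's default GPT-4o/US factor (replace both with your own values):

```javascript
// Estimate the carbon footprint of one LLM call from its text output.
// tokens ≈ characters / 4, then multiplied by the gCO2e-per-token conversion factor.
const aiText = $input.first().json.text ?? "";
const conversionFactor = 0.0612; // gCO2e per token, GPT-4o hosted in the US (see ecologits.ai)

const estimatedTokens = aiText.length / 4;
const gco2e = estimatedTokens * conversionFactor;

return [{ json: { estimatedTokens, gco2e } }];
```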
Taking it further
- **Improve accuracy with token counts:** if your AI node (like the native OpenAI node) directly provides the number of output tokens (e.g., completion_tokens), use that number instead of estimating from the text length. This will give you a more precise calculation.
- **Calculate total workflow footprint:** if you have multiple AI nodes, add a calculation step after each one. Then, add a final Set node at the end of your workflow to sum all the individual gCO₂e values.
- **Display the impact:** add the final AI output gCO₂e value to your workflow's results, whether it's a Slack message, an email, or a custom dashboard, to keep the environmental impact top-of-mind.
- **A note on AI agents:** this estimation method is difficult to apply accurately to AI Agents at this time, as the token usage of their intermediary "thinking" steps is not yet exposed in the workflow data.