by Ludovic Bablon
Who is this template for?
This workflow template is built for SEO specialists and digital marketers looking to uncover keyword opportunities effortlessly. It uses Google's autocomplete magic to help you spot what's trending.

How it works
Just give it a keyword. The workflow then queries Google and collects all autocomplete suggestions by appending every letter from A to Z to your keyword.

Output example with the keyword "n8n":
You can sort these keywords and give them to an LLM to produce entity-enriched text.

Setup instructions
It works right out of the box. 🛠️ However, you may want to tweak the output format to better fit your use case.

Exporting the Keywords
You can easily add a node to export the keywords in various ways:
- via a webhook
- by email
- as a file (e.g., saved to Google Drive)
- directly to a website

Adapting the Language
Autocomplete results depend on the selected language. You can change the &hl=en parameter in the Google Autocomplete node. Replace the "en" part with the language code of your choice. Examples:
- &hl=fr → French
- &hl=es → Spanish
- &hl=de → German
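If you want to prototype the same A-to-Z expansion outside n8n, here is a minimal sketch using the commonly used unofficial Google suggest endpoint. Treat the endpoint and response shape as assumptions; the Google Autocomplete node in this template may call a slightly different URL.

```typescript
// Sketch: collect autocomplete suggestions for "<keyword> a" ... "<keyword> z".
const keyword = "n8n";
const lang = "en"; // corresponds to the &hl= parameter described above

async function suggestionsFor(query: string): Promise<string[]> {
  const url =
    "https://suggestqueries.google.com/complete/search" +
    `?client=firefox&hl=${lang}&q=${encodeURIComponent(query)}`;
  const res = await fetch(url);
  const data = (await res.json()) as [string, string[]];
  return data[1]; // the second element holds the suggestion strings
}

async function collectAtoZ(): Promise<string[]> {
  const all = new Set<string>();
  for (let i = 0; i < 26; i++) {
    const letter = String.fromCharCode(97 + i); // 'a' .. 'z'
    for (const s of await suggestionsFor(`${keyword} ${letter}`)) all.add(s);
  }
  return [...all];
}

collectAtoZ().then((keywords) => console.log(keywords));
```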
by Martech Mafia
Problem
Monitoring SEO performance from Google Search Console (GSC) manually is repetitive and prone to human error. For marketers or analysts managing multiple domains, checking reports manually and copying data into spreadsheets or databases is time-consuming. There is a strong need for an automated solution that collects, stores, and updates SEO metrics regularly for easier analysis and dashboarding.

Solution
This workflow automatically pulls performance metrics from Google Search Console — including queries, pages, CTR, impressions, positions, and devices — and stores them in a structured format inside a NocoDB table. It's ideal for SEO specialists, marketing teams, or data analysts who need to automate SEO reporting and centralize data for analytics or dashboards (like Superset or Metabase).

Setup Instructions
1. Authorize your Google Search Console account: connect via OAuth2 (requires GSC API access).
2. Create a NocoDB table and define fields to match the GSC response:
   - query (text)
   - page (URL)
   - device (text)
   - clicks (number)
   - impressions (number)
   - ctr (percentage)
   - position (number)
3. Add credentials in n8n. Use credential nodes for both:
   - Google OAuth2
   - NocoDB API Token
4. Customize the schedule trigger: set the frequency (e.g., weekly) and adjust the domain/date range as needed.
5. Generalize domains: replace specific domains like martechmafia.net with your-domain.com before submission.

NocoDB Table Structure
The NocoDB table must match the fields coming from GSC's Search Analytics API. Here's a sample schema:

    {
      "query": "string",
      "page": "string",
      "device": "string",
      "clicks": "number",
      "impressions": "number",
      "ctr": "number",
      "position": "number"
    }
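For reference, here is a hedged sketch of the Search Analytics query this workflow issues and how each returned row maps onto the schema above. The site URL and access token are placeholders; in n8n, the HTTP Request node's OAuth2 credential handles authentication for you.

```typescript
// Sketch only: query GSC's Search Analytics API and reshape rows for the NocoDB table.
const siteUrl = "sc-domain:your-domain.com";   // placeholder property
const accessToken = "YOUR_OAUTH2_ACCESS_TOKEN"; // placeholder; n8n manages this via OAuth2

async function fetchGscRows() {
  const res = await fetch(
    `https://www.googleapis.com/webmasters/v3/sites/${encodeURIComponent(siteUrl)}/searchAnalytics/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        startDate: "2024-01-01",
        endDate: "2024-01-07",
        dimensions: ["query", "page", "device"],
        rowLimit: 1000,
      }),
    }
  );
  const { rows = [] } = await res.json();

  // Each GSC row becomes one NocoDB record matching the schema above.
  return rows.map((r: any) => ({
    query: r.keys[0],
    page: r.keys[1],
    device: r.keys[2],
    clicks: r.clicks,
    impressions: r.impressions,
    ctr: r.ctr, // returned as a fraction (0-1); multiply by 100 if your column expects a percentage
    position: r.position,
  }));
}
```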
by Lucas Peyrin
How it works
This workflow is a robust and forgiving JSON parser designed to handle malformed or "dirty" JSON strings often returned by AI models or scraped from web pages. It takes a text string as input and attempts to extract and parse a valid JSON object from it.

1. Cleans Input: It starts by trimming whitespace and removing common Markdown code fences (like ```json ... ```).
2. Applies Multiple Fixes: It systematically attempts to correct common JSON errors in a specific order:
   - Escapes unescaped control characters (like newlines) within strings.
   - Fixes invalid backslash escape sequences.
   - Removes trailing commas.
   - Intelligently attempts to fix unescaped double quotes inside string values.
3. Parses Strategically: If a direct parse fails, it tries to extract a potential JSON object from the text (e.g., finding a {...} block inside a larger sentence) and then re-applies the cleaning logic to that extracted portion.
4. Outputs Clean Data: If successful, it outputs the parsed JSON fields. By default, it removes the detailed parsing_status object, but you can deactivate the final "Set" node to keep it for debugging.

Set up steps
Setup time: ~1 minute

This workflow is designed to be used as a sub-workflow and requires no internal setup.

1. In your main workflow, add an Execute Sub-Workflow node where you need to parse a messy JSON string.
2. In the Workflow parameter, select this "Robust JSON Parser" workflow.
3. Ensure the data you send to the node is a JSON object containing a text field, where the value of text is the string you want to parse. For example: { "text": "{\\\"key\\\": \\\"some broken json...\\\"}" }
4. The workflow will return the successfully parsed data.

To see a detailed log of the cleaning process, simply deactivate the final "Remove parsing_status" node inside this workflow.
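To give a feel for the approach, here is a minimal sketch of this cleaning strategy. It is not the template's actual code, and it omits the backslash-escape and unescaped-quote fixes, which are the trickiest parts.

```typescript
// Minimal sketch: strip fences, neutralise control characters, drop trailing commas,
// and finally try to extract a {...} block from surrounding text.
function robustParse(text: string): unknown {
  let s = text.trim()
    .replace(/^`{3}[a-z]*\s*/i, "") // opening Markdown fence, e.g. a json code fence
    .replace(/\s*`{3}$/, "");       // closing fence

  const fixes: Array<(x: string) => string> = [
    // Raw control characters (newlines, tabs) inside strings break JSON.parse; a space is
    // valid both inside strings and between tokens, so this keeps the text parseable.
    (x) => x.replace(/[\u0000-\u001f]/g, " "),
    // Trailing commas before } or ]
    (x) => x.replace(/,\s*([}\]])/g, "$1"),
  ];

  try {
    return JSON.parse(s);
  } catch {
    /* fall through to the fixes */
  }

  for (const fix of fixes) {
    s = fix(s);
    try {
      return JSON.parse(s);
    } catch {
      /* apply the next fix */
    }
  }

  // Last resort: pull the first {...} block out of surrounding prose and retry.
  const block = s.match(/\{[\s\S]*\}/);
  if (block && block[0] !== s) return robustParse(block[0]);

  throw new Error("Could not recover a valid JSON object");
}
```

The real workflow applies the same idea with more careful, string-aware fixes and records each step in the parsing_status object.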
by Alex Kim
Automate Video Creation with Luma AI Dream Machine and Airtable (Part 2)

Description
This is the second part of the Luma AI Dream Machine automation. It captures the webhook response from Luma AI after video generation is complete, processes the data, and automatically updates Airtable with the video and thumbnail URLs. This completes the end-to-end automation for video creation and tracking.

👉 Airtable Base Template
👉 Tutorial Video

Setup
1. Luma AI Setup
   - Ensure you've created an account with Luma AI and generated an API key.
   - Confirm that the API key has permission to manage video requests.
2. Airtable Setup
   - Make sure your Airtable base includes the following fields (set up in Part 1). Use the Airtable Base Template linked above to simplify setup.
     - **Generation ID** – To match incoming webhook data.
     - **Status** – Workflow status (e.g., "Done").
     - **Video URL** – Stores the generated video URL.
     - **Thumbnail URL** – Stores the thumbnail URL.
3. n8n Setup
   - Ensure that the n8n workflow from Part 1 is set up and configured.
   - Import this workflow and connect it to the webhook callback from Luma AI.

How It Works
1. Webhook Trigger
   The Webhook node listens for a POST response from Luma AI once video generation is finished. The response includes:
   - Video URL – Direct link to the video.
   - Thumbnail URL – Link to the video thumbnail.
   - Generation ID – Used to match the record in Airtable.
2. Process Webhook Data
   - The Set node extracts the video data from the webhook response.
   - The If node checks if the video URL is valid before proceeding.
3. Store in Airtable
   The Airtable node updates the record with:
   - Video URL – Direct link to the video.
   - Thumbnail URL – Link to the video thumbnail.
   - Status – Marked as "Done."
   It uses the Generation ID to match and update the correct record.

Why This Workflow is Useful
✅ Automates the completion step for video creation
✅ Ensures accurate record-keeping by matching generation IDs
✅ Simplifies the process of managing and organizing video content
✅ Reduces manual effort by automating the update process

Next Steps
**Future Enhancements** – Adding more complex post-processing, video trimming, and multi-platform publishing.
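If you need to adapt the Set and If nodes, the extraction step amounts to something like the sketch below. The callback field names (id, state, assets.video, assets.thumbnail) are assumptions; inspect a real Luma AI callback in the Webhook node's output and adjust the paths to match.

```typescript
// Rough sketch of the extract-and-check step performed by the Set and If nodes.
interface LumaCallback {
  id: string;                      // generation ID created in Part 1
  state: string;                   // e.g. "completed"
  assets?: { video?: string; thumbnail?: string };
}

function toAirtableUpdate(body: LumaCallback) {
  const videoUrl = body.assets?.video;
  if (!videoUrl) return null;      // the If node stops here when no valid video URL exists

  return {
    generationId: body.id,         // used to find the matching Airtable record
    fields: {
      "Video URL": videoUrl,
      "Thumbnail URL": body.assets?.thumbnail ?? "",
      "Status": "Done",
    },
  };
}
```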
by Airtop
Automating LinkedIn Competitive Monitoring

Use Case
Automatically track and summarize LinkedIn posts from key executives at competitor companies. This agent provides structured insights into hiring trends, product announcements, strategic shifts, and thought leadership, helping teams stay informed and responsive without manual monitoring.

What This Automation Does
This automation monitors and summarizes LinkedIn posts from competitor profiles and shares the results on Slack. It uses the following input parameters:
- **Airtop Profile**: A browser profile authenticated to LinkedIn. Create one.
- **Google Sheet**: A document listing LinkedIn profile URLs of competitors; copy this one.
- **Slack Channel**: The destination for sharing summarized post insights.

How It Works
1. Trigger: The workflow is scheduled to run weekly at a specific time.
2. Data Collection: Retrieves the list of competitor LinkedIn URLs from a Google Sheet.
3. Browser Automation: Uses Airtop to navigate to each LinkedIn profile and analyze up to 5 recent posts.
4. Summarization: Summarizes the number of recent posts, main topics, and engagement levels using Airtop's AI.
5. Slack Notification: Posts a formatted summary to a predefined Slack channel.

Setup Requirements
- Airtop API Key — free to generate.
- An Airtop Profile authenticated to LinkedIn.
- Google Sheet with competitor profile URLs; copy this one.
- Slack Bot credentials with access to the target channel.

Next Steps
- **Expand Coverage**: Add more competitor profiles to the Google Sheet to scale monitoring.
- **Integrate with CRM**: Feed summarized insights into your CRM for competitor tracking.
- **Enhance Analysis**: Include post-level engagement metrics over time for trend analysis.

Read more about competitive analysis using LinkedIn.
by Fan Luo
Daily Company News Bot

This n8n template demonstrates how to use the free FinnHub API to retrieve company news for a list of stock tickers and post messages to a Slack channel at a pre-scheduled time.

How it works
1. First, define the list of stock tickers you are interested in.
2. Loop over the items and call the FinnHub API to get the latest company news for each ticker.
3. Format the company news as Markdown text that can be sent to Slack.
4. Post a new message in the Slack channel.
5. Wait for 5 seconds, then move on to the next ticker.

How to use
Simply set up a schedule trigger to run the workflow automatically.

Requirements
- FinnHub API Key
- Slack channel webhook

Need Help?
Contact me via My Blog or ask in the Forum!

Happy Hacking!
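For reference, the per-ticker request made by the HTTP Request node looks roughly like this; the dates and Slack formatting are illustrative.

```typescript
// Sketch of one FinnHub company-news call plus simple Markdown formatting for Slack.
const FINNHUB_TOKEN = "YOUR_FINNHUB_API_KEY";

async function companyNews(ticker: string, from: string, to: string): Promise<string> {
  const url =
    `https://finnhub.io/api/v1/company-news?symbol=${ticker}` +
    `&from=${from}&to=${to}&token=${FINNHUB_TOKEN}`;
  const articles: Array<{ headline: string; url: string; source: string }> =
    await (await fetch(url)).json();

  // Format as the Markdown text that gets posted to Slack.
  return articles
    .slice(0, 5)
    .map((a) => `• *${a.headline}* (${a.source})\n${a.url}`)
    .join("\n");
}

companyNews("AAPL", "2024-01-01", "2024-01-07").then(console.log);
```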
by Max Tkacz
This template demonstrates how to trigger an AI Agent with Siri and Apple Shortcuts, showing a simple pattern for voice-activated workflows in n8n. It's easy to customize—add app nodes before the AI Agent step to pass additional context, or modify the Apple Shortcut to send inputs like text, geolocation, images, or files.

Set Up
Basic instructions are in the template itself.

Requirements
- **n8n account** (cloud or self-hosted)
- **Apple Shortcuts app** on iOS or macOS. Dictation ("Siri") must be activated.
- Download the Shortcuts template here.

Key Features
- **Voice-Controlled AI:** Trigger the AI Agent via Siri for real-time voice replies.
- **Customizable Inputs:** Modify the Apple Shortcut to send text, images, geolocation, and more.
- **Flexible Outputs:** Siri can return the AI's response as text or files, or you can customize it to trigger CRUD actions in connected apps.
- **Context-Aware:** Automatically feeds the current date and time to the AI Agent, with easy options to pass in more data.

How It Works
1. Activate Siri and speak your request.
2. Siri sends the transcribed text to the n8n workflow via Apple Shortcuts.
3. The AI Agent processes the request and generates a response.
4. Siri reads the response, or the workflow can return geolocation, files, or even perform CRUD actions in apps.

Inspiration: Custom Use Cases
Tweak this template and make it your own.
- **Capture Business Cards:** Snap a photo of a business card and record a voice note. Have the AI Agent draft a follow-up email in Gmail, ready to send.
- **Voice-to-Task Automation:** Speak a new to-do item, and the workflow will add it to a Notion task board.
- **Business English on the Fly:** Convert casual speech into polished business language, and save the refined text directly to your pasteboard, ready to be pasted into any app. "It's late because of you" -> "There has been a delay, and I believe your input may have contributed to it."
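Under the hood, the Shortcut simply POSTs the dictated text to the workflow's Webhook URL and reads the reply back to you. The sketch below shows the idea; the webhook path and field names are placeholders, so match them to the Webhook node and the downloaded Shortcut template.

```typescript
// Sketch of the request/response pattern the Apple Shortcut performs.
const N8N_WEBHOOK_URL = "https://your-n8n-instance.com/webhook/siri-agent"; // placeholder path

async function askAgent(spokenText: string): Promise<string> {
  const res = await fetch(N8N_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: spokenText }), // Siri's transcription
  });
  const data = await res.json();
  return data.output ?? ""; // text handed back to Siri to read aloud
}

askAgent("Summarise my day and suggest one priority").then(console.log);
```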
by Yaron Been
Vcollos Trefilio AI Generator

Overview
This n8n workflow integrates with the Replicate API to use the vcollos/trefilio model. This powerful AI model can generate high-quality images based on your inputs.

Features
- Easy integration with the Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

Parameters

Required Parameters
- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

Optional Parameters
- **mask** (string, default: None): Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results are between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

How to Use
1. Set up your Replicate API key in the workflow.
2. Configure the required parameters for your use case.
3. Run the workflow to generate images.
4. Access the generated output from the final node.

API Reference
- Model: vcollos/trefilio
- API Endpoint: https://api.replicate.com/v1/predictions

Requirements
- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
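For context, the Replicate call behind this workflow looks roughly like the sketch below. The version ID is a placeholder (copy the real one from the model page), and the polling loop mirrors the workflow's automated status checking.

```typescript
// Hedged sketch of creating a prediction on Replicate and polling until it finishes.
const REPLICATE_API_TOKEN = "YOUR_REPLICATE_API_KEY";

async function generate(prompt: string) {
  const create = await fetch("https://api.replicate.com/v1/predictions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${REPLICATE_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      version: "VCOLLOS_TREFILIO_VERSION_ID", // placeholder; take it from the model page
      input: { prompt, model: "dev", lora_scale: 1, go_fast: false },
    }),
  });
  let prediction = await create.json();

  // Poll the prediction URL until the output is ready (the workflow's status-check loop).
  while (!["succeeded", "failed", "canceled"].includes(prediction.status)) {
    await new Promise((r) => setTimeout(r, 2000));
    const poll = await fetch(prediction.urls.get, {
      headers: { Authorization: `Bearer ${REPLICATE_API_TOKEN}` },
    });
    prediction = await poll.json();
  }
  return prediction.output; // image URL(s) on success
}
```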
by Yulia
This is an end-to-end workflow for creating a simple OpenAI Assistant. The whole process is done with n8n nodes and does not require any programming experience. The workflow is divided into three main steps, plus a fourth section with ideas for expanding the assistant.

Step 1: Get a Google Drive File and Upload to OpenAI
- The workflow starts by retrieving a file from Google Drive using the "Get File" node. The example file used is a Music Festival document.
- The retrieved file is then uploaded to OpenAI using the "Upload File to OpenAI" node.
- Run this section only once. The file is stored persistently on the OpenAI side.

Step 2: Set Up a New Assistant
- In this step, a new assistant is created using the "Create new Assistant" node.
- The assistant is given a name, description, and system prompt. The uploaded file from Step 1 is attached as a knowledge source for the assistant.
- Same as for Step 1, run this section only once.

Step 3: Chat with the Assistant
- The "Chat Trigger" node initiates the conversation with the assistant.
- The "OpenAI Assistant" node handles the conversation, using the assistant created in Step 2.

Step 4: Expand the Assistant
This step provides resources for ideas on how to expand the Assistant's capabilities:
- Create a WhatsApp bot
- Create a simple Telegram bot
- Create a Telegram AI bot (YouTube video)

By following this workflow, users can create their own AI-powered assistants using OpenAI's API and integrate them with various platforms like WhatsApp and Telegram.
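For context, Steps 1 and 2 boil down to two API calls that the n8n nodes make for you. The sketch below targets the v2 Assistants API; the exact fields for attaching the uploaded file depend on the API version, so treat that part as an assumption and check OpenAI's current documentation.

```typescript
// Rough equivalent of Steps 1-2 as raw API calls. File name and model are illustrative.
import { readFileSync } from "node:fs";

const OPENAI_API_KEY = "YOUR_OPENAI_API_KEY";
const headers = { Authorization: `Bearer ${OPENAI_API_KEY}` };

async function setUpAssistant() {
  // Step 1: upload the document (retrieved from Google Drive) for assistant use.
  const form = new FormData();
  form.append("purpose", "assistants");
  form.append("file", new Blob([readFileSync("music-festival.pdf")]), "music-festival.pdf");
  const file = await (await fetch("https://api.openai.com/v1/files", {
    method: "POST", headers, body: form,
  })).json();

  // Step 2: create the assistant once, referencing the uploaded file as knowledge.
  const assistant = await (await fetch("https://api.openai.com/v1/assistants", {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json", "OpenAI-Beta": "assistants=v2" },
    body: JSON.stringify({
      name: "Festival Helper",
      instructions: "Answer questions using the attached Music Festival document.",
      model: "gpt-4o-mini",
      tools: [{ type: "file_search" }],
      // v2 attaches files via a vector store; older versions used file_ids directly.
      tool_resources: { file_search: { vector_stores: [{ file_ids: [file.id] }] } },
    }),
  })).json();

  return assistant.id; // referenced by the "OpenAI Assistant" node in Step 3
}
```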
by Yang
Who is this for?
This workflow is perfect for lead generation experts, digital marketers, SEO professionals, and virtual assistants who need to quickly collect local business information based on specific search terms without manually navigating Google Places.

What problem is this workflow solving?
Manually searching Google Places for business leads is time-consuming and inconsistent. This workflow automates the entire process using Dumpling AI's Google Places search endpoint, helping users collect accurate and structured business data and log it into a Google Sheet automatically.

What this workflow does
This workflow runs daily at 1 PM. It starts by reading a list of business-related search terms from a Google Sheet (for example, "dentists in Dallas"). Each term is sent to Dumpling AI's search-places endpoint, which returns local business listings from Google Places. The data is split, structured, and logged row by row in a connected Google Sheet.

Nodes Overview
1. Run Every Day at 1 PM: A scheduled trigger that executes the workflow daily.
2. Google Sheets (Input) – Fetch Search Terms from Sheet: Pulls a list of search terms from a Google Sheet. Each term should describe a business category and location (e.g., "coffee shops in Atlanta").
3. HTTP Request – Scrape Google Places via Dumpling AI: Sends each search term to Dumpling AI's /search-places endpoint, returning data like business names, phone numbers, websites, ratings, and categories.
4. Split In Batches – Split Places Result: Breaks the list of businesses returned for each search term into individual items for processing.
5. Google Sheets (Output) – Save Each Business to Sheet: Saves the scraped data into a second Google Sheet. Each row contains:
   - title
   - address
   - rating
   - category
   - phoneNumber
   - website

📝 Notes
- You must set up Dumpling AI and generate your API key from: Dumpling AI.
- You can change the run schedule in the schedule node to fit your needs (e.g., weekly or hourly).
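Optionally, if you want to reshape the Dumpling AI response yourself before the Split In Batches step, a Code node (Run Once for All Items) could do the mapping as sketched below. The places property name is an assumption; check the actual output of the HTTP Request node and adjust.

```typescript
// Sketch for an n8n Code node: flatten each search term's business list into one item per row.
const rows = [];

for (const item of $input.all()) {
  const places = item.json.places ?? [];   // assumed: array of businesses for one search term
  for (const p of places) {
    rows.push({
      json: {
        title: p.title,
        address: p.address,
        rating: p.rating,
        category: p.category,
        phoneNumber: p.phoneNumber,
        website: p.website,
      },
    });
  }
}

return rows;
```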
by Joachim Hummel
This workflow automatically fetches PDF invoices from a Nextcloud folder (/Invoice/Incoming), sends them via email to a fixed recipient (invoice@example.com), sends a Telegram notification, and archives the file to /Invoice/2025/archive.

Key Steps:
1. Triggered daily at 8 AM
2. Lists files in /Invoice/Incoming
3. Filters for existing entries
4. Downloads the file
5. Sends the invoice via email
6. Sends a Telegram message with the filename
7. Moves the file to the archive

📦 Technologies used:
- Nextcloud
- SMTP Email
- Telegram Bot

⚙️ Use case:
Perfect for freelancers or small businesses to automate recurring invoice sending with minimal effort.
by nepomuc
This flow migrates all repositories of a GitLab group to a Gitea organization by triggering Gitea's integrated migration tool.

Set up steps:
1. Copy this workflow.
2. Create an empty Gitea organization you want to migrate to. (The flow will skip any project whose name matches a repository that already exists in the target Gitea organization.)
3. Create an access token in your Gitea instance (https://gitea.example.com/user/settings/applications), set it up as a Header Auth credential with the name "Authorization" and the value "token [your-gitea-token]", and select it for the "Gitea:"-named nodes.
4. Create a Personal access token in GitLab (https://gitlab.com/-/user_settings/personal_access_tokens), create a Header Auth credential with the name "PRIVATE-TOKEN" and the value "[your-gitlab-token]", and select it for the "Gitlab:"-named node. Also keep the value of your GitLab token available for step 5.
5. Edit the Set node right after the trigger node and paste your personal access token in there, as well as the names of the GitLab source group and the Gitea target organization. Use the URL-friendly version of their names by simply copy-pasting them from their URLs.
6. Run the flow and enjoy the show :)
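For context, the flow chains two public API calls: list the GitLab group's projects, then ask Gitea to migrate each one. Here is a standalone sketch of the same idea; the exact body fields the workflow sends may differ slightly.

```typescript
// Sketch: enumerate GitLab group projects and trigger Gitea's migration endpoint for each.
const GITLAB_TOKEN = "[your-gitlab-token]";
const GITEA_TOKEN = "[your-gitea-token]";
const GITLAB_GROUP = "my-group";   // URL-friendly group path
const GITEA_ORG = "my-org";        // target organization

async function migrateGroup() {
  // GitLab: list all projects in the source group.
  const projects = await (await fetch(
    `https://gitlab.com/api/v4/groups/${encodeURIComponent(GITLAB_GROUP)}/projects?per_page=100`,
    { headers: { "PRIVATE-TOKEN": GITLAB_TOKEN } }
  )).json();

  // Gitea: trigger the built-in migration for each repository.
  for (const p of projects) {
    await fetch("https://gitea.example.com/api/v1/repos/migrate", {
      method: "POST",
      headers: {
        Authorization: `token ${GITEA_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        clone_addr: p.http_url_to_repo,  // GitLab clone URL
        auth_token: GITLAB_TOKEN,        // lets Gitea pull private repos
        repo_name: p.path,               // URL-friendly project name
        repo_owner: GITEA_ORG,
        service: "gitlab",
        private: true,
      }),
    });
  }
}
```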