by Naveen Choudhary
**Description**
This workflow automates scraping Google Events data with SerpApi and organizing it in Google Sheets for analysis and tracking.

**Who's it for**
- **Event organizers** who need to monitor competitor events in their area
- **Marketing teams** tracking local events for partnership opportunities
- **Researchers** collecting event data for analysis
- **Business owners** monitoring industry events and conferences

**How it works**
The workflow searches Google Events using SerpApi's Google Events engine, processes the returned data, and saves it to a Google Sheets spreadsheet. It handles pagination automatically to collect multiple events and flattens the nested API response into a structured format.

**What it does**
1. **Configures search parameters** - sets the search query, total events to fetch, and pagination settings
2. **Fetches events via SerpApi** - makes paginated requests to the Google Events API with proper rate limiting
3. **Processes and flattens data** - transforms nested event data into a flat structure with all relevant fields
4. **Saves to Google Sheets** - appends the processed events to a Google Sheets document for easy analysis

**Requirements**
- **SerpApi account** with API key (get one here)
- **Google Sheets API access** (OAuth2 credentials)
- **Google Sheets document** - make a copy of this template sheet

**How to set up**
1. Configure SerpApi credentials in the HTTP Request node.
2. Set up Google Sheets OAuth2 authentication.
3. Update the Google Sheets document ID in the final node to point to your copy.
4. Modify search parameters in the "Set Search Parameters" node:
   - Change `query` to your desired search terms
   - Adjust `total_events` (10 events per page)
   - Set the `start` position for pagination
5. Run the workflow using the manual trigger.

**How to customize the workflow**
- **Search terms**: modify the query in the Set node (e.g., "conferences in New York", "music events Los Angeles")
- **Event count**: adjust `total_events` to fetch more or fewer events
- **Output format**: modify the Google Sheets column mapping to include/exclude specific fields
- **Rate limiting**: adjust the `requestInterval` in the HTTP Request node if needed
- **Scheduling**: replace the Manual Trigger with a Schedule Trigger for automated runs

**Output data includes**
- Event title, description, and direct link
- Start date and timing information
- Venue and address details
- Ticket information and pricing
- Event location map links
- Event images
- Original search query for tracking

Note: This workflow respects SerpApi rate limits with built-in delays between requests and processes up to 10 events per API call. A sketch of the flattening step is shown below.
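As an illustration of the "process and flatten" step, here is a minimal Code-node sketch. The event field names follow SerpApi's documented `events_results` shape and should be verified against a real response; the node name `Set Search Parameters` and the `query` field come from the setup steps above.

```javascript
// n8n Code node: flatten SerpApi Google Events results into one row per event.
// SerpApi field names (events_results, date.start_date, ticket_info, ...) are
// taken from SerpApi's docs; verify them against a live response.
const query = $('Set Search Parameters').first().json.query;

const rows = [];
for (const item of $input.all()) {
  const events = item.json.events_results ?? [];
  for (const ev of events) {
    rows.push({
      json: {
        title: ev.title ?? '',
        description: ev.description ?? '',
        link: ev.link ?? '',
        start_date: ev.date?.start_date ?? '',
        when: ev.date?.when ?? '',
        address: (ev.address ?? []).join(', '),
        venue: ev.venue?.name ?? '',
        ticket_sources: (ev.ticket_info ?? []).map(t => t.source).join(', '),
        map_link: ev.event_location_map?.link ?? '',
        image: ev.image ?? ev.thumbnail ?? '',
        search_query: query,
      },
    });
  }
}
return rows;
```

Each returned item maps directly onto one row of the Google Sheets column mapping described above.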
by ist00dent
This n8n template lets you instantly fetch a list of public holidays for any given year and country using the Nager.Date API. This is useful for scheduling, planning, or integrating holiday data into business and personal automation workflows.

🔧 **How it works**
- **Receive Holiday Request Webhook**: This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing the year (e.g., 2025) and countryCode (e.g., US for the United States, PH for the Philippines, DE for Germany) for which you want to retrieve public holidays.
- **Get Public Holidays**: This node makes an HTTP GET request to the Nager.Date API (date.nager.at), dynamically using the year and countryCode from your webhook request. The API responds with a JSON array in which each object represents a public holiday with details such as its date, name, and type.
- **Respond with Holiday Data**: This node sends the full list of public holidays received from Nager.Date back to the service that initiated the webhook.

👤 **Who is it for?**
This workflow is ideal for:
- **Businesses with international operations**: Automatically check holidays for different country branches to adjust production schedules, customer service hours, or delivery estimates.
- **HR & payroll departments**: Accurately calculate workdays, plan leave schedules, or process payroll with public holidays taken into account.
- **Event planners**: Avoid scheduling events on public holidays that could affect attendance or venue availability.
- **Travel agencies**: Inform clients about holidays in their destination country that might affect local business hours or attractions.
- **Content & social media schedulers**: Plan content around national holidays to maximize engagement or avoid insensitive posts.
- **Personal productivity & travel planning**: Integrate holiday data into your calendar or task management tools to plan trips or time off more effectively.
- **Developers**: Easily integrate a reliable source of public holiday data into custom applications, dashboards, or internal tools without managing complex datasets.

📑 **Data Structure**
To trigger the workflow, send a POST request to the webhook with a JSON body structured as follows (use any supported country code, e.g. "US", "DE", "GB"):

```json
{
  "year": 2025,
  "countryCode": "PH"
}
```

You can find the full list of supported country codes in the Nager.Date documentation: https://www.nager.at/Country

The workflow returns a JSON array in which each element is a holiday object, like this example for a single holiday:

```json
[
  {
    "date": "2025-01-01",
    "localName": "New Year's Day",
    "name": "New Year's Day",
    "countryCode": "PH",
    "fixed": true,
    "global": true,
    "counties": null,
    "launchYear": null,
    "types": ["Public"]
  }
]
```

⚙️ **Setup Instructions**
1. **Import the workflow**: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. **Configure the webhook path**: Double-click the Receive Holiday Request Webhook node and set a unique, descriptive path in the 'Path' field (e.g., /public-holidays).
3. **Activate the workflow**: Save and activate it.

📝 **Tips**
This workflow is a foundation for many powerful automations:
- **Conditional branching for specific holidays**: Add an IF node after "Get Public Holidays" to check for a specific holiday (e.g., "Christmas Day"). You can then trigger different actions (e.g., send a reminder, adjust a schedule) only for that particular holiday.
- **Filtering and aggregating data**: Use a Filter node to keep only holidays of a certain type (e.g., "Public").
- **Counting and simplifying**: Use a Code or Function node to count the number of public holidays, or extract just the names and dates into a simpler list (see the sketch below).
- **Storing holiday data**:
  - Google Sheets/Airtable: Automatically append new holidays to a spreadsheet for easy reference or further analysis.
  - Database: Store holiday data in a database (such as PostgreSQL or MySQL) to build a custom holiday calendar application.
- **Scheduling and reminders**: Connect this workflow to a Cron or Schedule node to run periodically (e.g., once a year at the start of the year). Use the retrieved holiday dates to set up reminders in your calendar (Google Calendar node) or send notifications (Slack, email, SMS) a few days before an upcoming holiday.
- **Integrate with business logic**:
  - Employee leave management: Cross-reference employee leave requests with public holidays to ensure accuracy.
  - Automated messages: Schedule automated "Happy Holiday" messages to customers or employees.
  - E-commerce shipping: Adjust estimated shipping times based on upcoming non-working days.
- **API key**: Not needed for the Nager.Date free tier. Basic public holiday lookups require no API key, which makes this template very easy to use out of the box.
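A minimal sketch of the Code-node tip above, assuming each holiday object arrives as a separate n8n item with the fields shown in the example payload (date, name, types); adjust if your HTTP node returns the whole array as one item.

```javascript
// n8n Code node: reduce the Nager.Date response to a count plus a simple
// name/date list, keeping only holidays whose type includes "Public".
const holidays = $input.all().map(item => item.json);

const publicHolidays = holidays.filter(h => (h.types ?? []).includes('Public'));

return [
  {
    json: {
      totalHolidays: holidays.length,
      publicHolidayCount: publicHolidays.length,
      holidays: publicHolidays.map(h => ({ date: h.date, name: h.name })),
    },
  },
];
```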
by n8n Team
This template shows how you can take any event from any service, transform its data, and send an alert to your desired app. Specifically, this example monitors a Linear project for new bug submissions, then sends a Slack notification to a channel only if a new bug is urgent. You can swap the Linear trigger for another task management app such as Jira or Asana, or adapt the workflow to an entirely different use case. Setup instructions are located inside the workflow template.
by Ludovic Bablon
**Who is this template for?**
This workflow template is built for SEO specialists and digital marketers looking to uncover keyword opportunities effortlessly. It uses Google's autocomplete magic to help you spot what's trending.

**How it works**
Just give it a keyword. The workflow then queries Google and collects all autocomplete suggestions by appending every letter from A to Z to your keyword. For example, with the keyword "n8n" it collects the suggestion lists for "n8n a", "n8n b", and so on through "n8n z". A sketch of the query-generation step is shown below.

You can sort these keywords and give them to an LLM to produce entity-enriched text.

**Setup instructions**
It works right out of the box. 🛠️ However, you may want to tweak the output format to better fit your use case.

**Exporting the keywords**
You can easily add a node to export the keywords in various ways:
- via a webhook
- by email
- as a file (e.g., saved to Google Drive)
- directly to a website

**Adapting the language**
Autocomplete results depend on the selected language. You can change the &hl=en parameter in the Google Autocomplete node: replace the "en" part with the language code of your choice. Examples:
- &hl=fr → French
- &hl=es → Spanish
- &hl=de → German
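A minimal sketch of the A-to-Z expansion described above, written as an n8n Code node. The seed field name (`keyword`) is an assumption; adapt it to however your trigger or Set node names the input.

```javascript
// n8n Code node: expand one seed keyword into 26 autocomplete queries (A-Z),
// one item per letter, ready for the Google Autocomplete request node.
const seed = $json.keyword ?? 'n8n'; // assumed input field name
const letters = 'abcdefghijklmnopqrstuvwxyz'.split('');

return letters.map(letter => ({
  json: {
    query: `${seed} ${letter}`,
    hl: 'en', // forwarded as the &hl=... language parameter
  },
}));
```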
by Martech Mafia
**Problem**
Monitoring SEO performance from Google Search Console (GSC) manually is repetitive and prone to human error. For marketers or analysts managing multiple domains, checking reports manually and copying data into spreadsheets or databases is time-consuming. There is a strong need for an automated solution that collects, stores, and updates SEO metrics regularly for easier analysis and dashboarding.

**Solution**
This workflow automatically pulls performance metrics from Google Search Console, including queries, pages, CTR, impressions, positions, and devices, and stores them in a structured format inside a NocoDB table. It's ideal for SEO specialists, marketing teams, or data analysts who need to automate SEO reporting and centralize data for analytics or dashboards (like Superset or Metabase).

**Setup Instructions**
1. **Authorize your Google Search Console account**: Connect via OAuth2 (requires GSC API access).
2. **Create a NocoDB table**: Define fields to match the GSC response:
   - query (text)
   - page (URL)
   - device (text)
   - clicks (number)
   - impressions (number)
   - ctr (percentage)
   - position (number)
3. **Add credentials in n8n**: Use credential nodes for both Google OAuth2 and the NocoDB API token.
4. **Customize the schedule trigger**: Set the frequency (e.g., weekly) and adjust the domain/date range as needed.
5. **Generalize domains**: Replace specific domains like martechmafia.net with your-domain.com before submission.

**NocoDB Table Structure**
The NocoDB table must match the fields coming from GSC's Search Analytics API. Here's a sample schema (a sketch of mapping the API response onto it follows below):

```json
{
  "query": "string",
  "page": "string",
  "device": "string",
  "clicks": "number",
  "impressions": "number",
  "ctr": "number",
  "position": "number"
}
```
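As a rough sketch of the transformation step, the Code node below maps Search Analytics API rows onto the schema above. It assumes the API request used the dimensions ["query", "page", "device"], so `row.keys` arrives in that order; adjust the indices if your request differs.

```javascript
// n8n Code node: map GSC Search Analytics rows to the NocoDB fields above.
const rows = $json.rows ?? [];

return rows.map(row => ({
  json: {
    query: row.keys?.[0] ?? '',
    page: row.keys?.[1] ?? '',
    device: row.keys?.[2] ?? '',
    clicks: row.clicks ?? 0,
    impressions: row.impressions ?? 0,
    ctr: row.ctr ?? 0,        // GSC returns CTR as a fraction (0-1), not a percentage
    position: row.position ?? 0,
  },
}));
```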
by JPres
A Discord bot that responds to mentions by sending messages to n8n workflows and returning the responses. It connects Discord conversations with custom automations, APIs, and AI services through n8n.

Full guide: https://github.com/JimPresting/AI-Discord-Bot/blob/main/README.md

**Discord Bot Summary**

**Overview**
The Discord bot listens for mentions, forwards questions to an n8n workflow, processes responses, and replies in Discord. This workflow is intended for all Discord users who want to offer AI interactions in their respective channels.

**What do you need?**
You need a Discord account as well as a Google Cloud Project key.

**Key Features**

1. **Listens for mentions**
   The bot monitors Discord channels for messages that mention it.
   - **Optional configuration**: Can be set to respond only in a specific channel.

2. **Forwards questions to n8n**
   When a user mentions the bot and asks a question:
   - The bot extracts the question.
   - It sends the question, along with channel and user information, to an n8n webhook URL.

3. **Processes data in n8n**
   The n8n workflow receives the question and can:
   - Interact with AI services (e.g., generating responses).
   - Access databases or external APIs.
   - Perform custom logic.
   n8n formats the response and sends it back to the bot.

4. **Replies to Discord with n8n's response**
   The bot receives the response from n8n and replies to the user's message in the Discord channel with the answer.
   - **Long responses**: Handles responses exceeding Discord's 2000-character limit by chunking them into multiple messages.

5. **Error handling**
   Includes error handling for issues with n8n communication and response-formatting problems, and manages cases where no question is asked or an invalid response is received from n8n.

6. **Typing indicator**
   While waiting for n8n's response, the bot sends a "typing..." indicator to the Discord channel.

7. **Status update**
   For lengthy n8n processes, the bot sends a message to the Discord channel to inform the user that it is still processing their request.

**Step-by-Step Setup Guide (as per the GitHub instructions)**

**Key takeaways**
- You'll configure an n8n webhook to receive Discord messages, process them with your workflow, and respond.
- You'll set up a Discord application and bot, grant the right permissions/intents, and invite it to your server.
- You'll prepare your server environment (Node.js), scaffold the project, and wire up environment variables.
- You'll implement message chunking, "typing..." indicators, and robust error handling in your bot code.
- You'll deploy with PM2 for persistence and know how to test and troubleshoot common issues.

**1. n8n: Create & Expose Your Webhook**
- **New workflow**: Log into your n8n instance, click Create Workflow (➕), and name it, e.g., Discord Bot Handler.
- **Webhook trigger**: Add a node (➕) → search Webhook. Set:
  - Authentication: None (or your choice)
  - HTTP Method: POST
  - Path: e.g., /discord-bot
  Click Execute Node to activate.
- **Copy the webhook URL**: After execution, copy the Production Webhook URL. You'll paste this into your bot's .env.
- **Build your logic**: Chain additional nodes (AI, database lookups, etc.) as required.
- **Format the JSON response**: Insert a Function node before the end:

  return { json: { answer: "Your processed reply" } };

- **Respond to Webhook**: Add Respond to Webhook as the final node and point it at your Function node's output (with the answer field).
- **Activate**: Toggle Active in the top right and Save.

**2. Discord Developer Portal: App & Bot**
- **New application**: Visit the Discord Developer Portal, click New Application, and name it. Go to Bot → Add Bot.
- **Enable intents & permissions**: Under Privileged Gateway Intents, toggle Message Content Intent. Under Bot Permissions, check:
  - Read Messages/View Channels
  - Send Messages
  - Read Message History
- **Grab your token**: In Bot → click Copy (or Reset Token). Store it securely.
- **Invite link (OAuth2 URL)**: Go to OAuth2 → URL Generator. Select scopes: bot, applications.commands. Under Bot Permissions, select the same permissions as above. Copy the generated URL, open it in your browser, and invite your bot.

**3. Server Prep: Node.js & Project Setup**
- **Install Node.js v20.x**:

  sudo apt purge nodejs npm
  sudo apt autoremove
  curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
  sudo apt install -y nodejs
  node -v   # Expect v20.x.x
  npm -v    # Expect 10.x.x

- **Project folder**:

  mkdir discord-bot
  cd discord-bot

- **Initialize & install dependencies**:

  npm init -y
  npm install discord.js axios dotenv

**4. Bot Code & Configuration**
- **Environment variables**: Create .env (nano .env) and populate:

  DISCORD_BOT_TOKEN=your_bot_token
  N8N_WEBHOOK_URL=https://your-n8n-instance.com/webhook/discord-bot
  # Optional: restrict to one channel
  TARGET_CHANNEL_ID=123456789012345678

- **Bot script**: Create index.js (nano index.js) and implement:
  - Import dotenv, discord.js, axios.
  - Set up the client with the MessageContent intent.
  - On messageCreate:
    - Ignore bots or non-mentions.
    - (Optional) Filter by channel ID.
    - Extract and validate the user's question.
    - Send "typing..." every 5 s; after 20 s, send a status update if still processing.
    - POST to your n8n webhook with question, channelId, userId, userName.
    - Parse the various response shapes to find answer.
    - If answer.length ≤ 2000, message.reply(answer). Otherwise, split into ~1900-character chunks at sentence/paragraph breaks and send them sequentially (see the chunking sketch below).
    - On errors, clear intervals, log details, and reply with an error message.
  - Login:

    client.login(process.env.DISCORD_BOT_TOKEN);

**5. Deployment: Keep It Alive with PM2**
- **Install PM2**:

  npm install -g pm2

- **Start & monitor**:

  pm2 start index.js --name discord-bot
  pm2 status
  pm2 logs discord-bot

- **Auto-start on boot**:

  pm2 startup
  # Follow the printed command, e.g.:
  # sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u your_user --hp /home/your_user
  pm2 save

**6. Test & Troubleshoot**
- **Functional test**: In your Discord server, send: @YourBot What's the weather like? Expect a reply from your n8n workflow.
- **Common pitfalls**:
  - No reply → check pm2 logs discord-bot.
  - Intent errors → verify Message Content Intent in the Developer Portal.
  - Webhook failures → ensure the workflow is active and the URL is correct.
  - Formatting issues → confirm your Function node returns json.answer.
- **Inspect raw data**: Search your logs for "Complete response from n8n:" to debug payload shapes.
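A minimal sketch of the chunking step referenced above. This is illustrative, not the exact code from the guide: the function name and the break heuristics (paragraph, then sentence, then space) are assumptions; only `message.reply` and `message.channel.send` are standard discord.js calls.

```javascript
// Split a long answer into <= 1900-character pieces, preferring paragraph and
// sentence boundaries so messages stay under Discord's 2000-character limit.
function chunkAnswer(answer, maxLen = 1900) {
  const chunks = [];
  let rest = answer;
  while (rest.length > maxLen) {
    const slice = rest.slice(0, maxLen);
    let cut = slice.lastIndexOf('\n\n');                 // paragraph break
    if (cut < maxLen * 0.5) cut = slice.lastIndexOf('. '); // sentence break
    if (cut < maxLen * 0.5) cut = slice.lastIndexOf(' ');  // word break
    if (cut <= 0) cut = maxLen;                            // hard cut as a last resort
    chunks.push(rest.slice(0, cut + 1).trim());
    rest = rest.slice(cut + 1);
  }
  if (rest.trim()) chunks.push(rest.trim());
  return chunks;
}

// Usage inside the messageCreate handler (sketch):
// if (answer.length <= 2000) await message.reply(answer);
// else for (const part of chunkAnswer(answer)) await message.channel.send(part);
```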
by Lucas Peyrin
**How it works**
This workflow is a robust and forgiving JSON parser designed to handle malformed or "dirty" JSON strings often returned by AI models or scraped from web pages. It takes a text string as input and attempts to extract and parse a valid JSON object from it.

- **Cleans input**: It starts by trimming whitespace and removing common Markdown code fences (such as ```json fences).
- **Applies multiple fixes**: It systematically attempts to correct common JSON errors in a specific order:
  - Escapes unescaped control characters (like newlines) within strings.
  - Fixes invalid backslash escape sequences.
  - Removes trailing commas.
  - Intelligently attempts to fix unescaped double quotes inside string values.
- **Parses strategically**: If a direct parse fails, it tries to extract a potential JSON object from the text (e.g., finding a {...} block inside a larger sentence) and then re-applies the cleaning logic to that extracted portion.
- **Outputs clean data**: If successful, it outputs the parsed JSON fields. By default, it removes the detailed parsing_status object, but you can deactivate the final "Set" node to keep it for debugging.

A simplified sketch of this cleaning-and-parsing strategy is shown below.

**Set up steps**
Setup time: ~1 minute. This workflow is designed to be used as a sub-workflow and requires no internal setup.

1. In your main workflow, add an Execute Sub-Workflow node where you need to parse a messy JSON string.
2. In the Workflow parameter, select this "Robust JSON Parser" workflow.
3. Ensure the data you send to the node is a JSON object containing a text field, where the value of text is the string you want to parse. For example: { "text": "{\\\"key\\\": \\\"some broken json...\\\"}" }
4. The workflow will return the successfully parsed data. To see a detailed log of the cleaning process, simply deactivate the final Remove parsing_status node inside this workflow.
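A much-simplified sketch of the strategy (not the template's actual code): strip fences and trailing commas, try a direct parse, then fall back to the first {...} block found in the surrounding text.

```javascript
// Forgiving JSON parse, illustrative only. The real template applies more
// repairs (control characters, bad escapes, unescaped quotes) than shown here.
function tryParseDirtyJson(text) {
  const clean = s => s
    .trim()
    .replace(/^```(?:json)?\s*/i, '')  // leading Markdown fence
    .replace(/```$/, '')               // trailing Markdown fence
    .replace(/,\s*([}\]])/g, '$1');    // trailing commas

  const attempts = [text, clean(text)];

  // Fallback: extract the outermost {...} block from surrounding prose.
  const match = text.match(/\{[\s\S]*\}/);
  if (match) attempts.push(clean(match[0]));

  for (const candidate of attempts) {
    try {
      return JSON.parse(candidate);
    } catch (e) {
      // try the next candidate
    }
  }
  return null; // nothing parseable found
}

// Example: returns { key: "value" }
// tryParseDirtyJson('Here you go: ```json\n{"key": "value",}\n```');
```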
by Alex Kim
**Automate Video Creation with Luma AI Dream Machine and Airtable (Part 2)**

**Description**
This is the second part of the Luma AI Dream Machine automation. It captures the webhook response from Luma AI after video generation is complete, processes the data, and automatically updates Airtable with the video and thumbnail URLs. This completes the end-to-end automation for video creation and tracking.

👉 Airtable Base Template
👉 Tutorial Video

**Setup**

1. **Luma AI setup**
   - Ensure you've created an account with Luma AI and generated an API key.
   - Confirm that the API key has permission to manage video requests.

2. **Airtable setup**
   Make sure your Airtable base includes the following fields (set up in Part 1); use the Airtable Base Template linked above to simplify setup:
   - **Generation ID**: to match incoming webhook data.
   - **Status**: workflow status (e.g., "Done").
   - **Video URL**: stores the generated video URL.
   - **Thumbnail URL**: stores the thumbnail URL.

3. **n8n setup**
   - Ensure that the n8n workflow from Part 1 is set up and configured.
   - Import this workflow and connect it to the webhook callback from Luma AI.

**How It Works**

1. **Webhook trigger**
   The Webhook node listens for a POST response from Luma AI once video generation is finished. The response includes:
   - Video URL: direct link to the video.
   - Thumbnail URL: link to the video thumbnail.
   - Generation ID: used to match the record in Airtable.

2. **Process webhook data**
   - The Set node extracts the video data from the webhook response (a sketch of this step is shown below).
   - The If node checks whether the video URL is valid before proceeding.

3. **Store in Airtable**
   The Airtable node updates the record with:
   - Video URL: direct link to the video.
   - Thumbnail URL: link to the video thumbnail.
   - Status: marked as "Done".
   It uses the Generation ID to match and update the correct record.

**Why This Workflow is Useful**
✅ Automates the completion step for video creation
✅ Ensures accurate record-keeping by matching generation IDs
✅ Simplifies the process of managing and organizing video content
✅ Reduces manual effort by automating the update process

**Next Steps**
- **Future enhancements**: adding more complex post-processing, video trimming, and multi-platform publishing.
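A rough sketch of the extract-and-validate step as a Code node. The payload paths used here (body.id, body.assets.video, body.assets.thumbnail) are assumptions made for illustration only; check them against a real Luma AI callback and adjust before relying on this.

```javascript
// n8n Code node sketch: pull the fields needed for the Airtable update out of
// the Luma AI webhook callback. Field paths are assumptions; verify them.
const body = $json.body ?? $json; // webhook payloads usually arrive under `body`

const generationId = body.id ?? '';
const videoUrl = body.assets?.video ?? '';
const thumbnailUrl = body.assets?.thumbnail ?? '';

return [
  {
    json: {
      generationId,
      videoUrl,
      thumbnailUrl,
      // Mirrors the If node: only mark "Done" when a video URL is present.
      status: videoUrl.startsWith('http') ? 'Done' : 'Failed',
    },
  },
];
```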
by Airtop
**Automating LinkedIn Competitive Monitoring**

**Use Case**
Automatically track and summarize LinkedIn posts from key executives at competitor companies. This agent provides structured insights into hiring trends, product announcements, strategic shifts, and thought leadership, helping teams stay informed and responsive without manual monitoring.

**What This Automation Does**
This automation monitors and summarizes LinkedIn posts from competitor profiles and shares the results on Slack. It uses the following input parameters:
- **Airtop Profile**: a browser profile authenticated to LinkedIn (create one).
- **Google Sheet**: a document listing the LinkedIn profile URLs of competitors (copy this one).
- **Slack Channel**: the destination for sharing summarized post insights.

**How It Works**
1. **Trigger**: The workflow is scheduled to run weekly at a specific time.
2. **Data collection**: Retrieves the list of competitor LinkedIn URLs from a Google Sheet.
3. **Browser automation**: Uses Airtop to navigate to each LinkedIn profile and analyze up to 5 recent posts.
4. **Summarization**: Summarizes the number of recent posts, main topics, and engagement levels using Airtop's AI.
5. **Slack notification**: Posts a formatted summary to a predefined Slack channel.

**Setup Requirements**
- Airtop API key (free to generate).
- An Airtop Profile authenticated to LinkedIn.
- Google Sheet with competitor profile URLs (copy this one).
- Slack bot credentials with access to the target channel.

**Next Steps**
- **Expand coverage**: Add more competitor profiles to the Google Sheet to scale monitoring.
- **Integrate with CRM**: Feed summarized insights into your CRM for competitor tracking.
- **Enhance analysis**: Include post-level engagement metrics over time for trend analysis.

Read more about competitive analysis using LinkedIn.
by Fan Luo
**Daily Company News Bot**

This n8n template demonstrates how to use the free FinnHub API to retrieve company news for a list of stock tickers and post messages to a Slack channel at a pre-scheduled time.

**How it works**
1. Define the list of stock tickers you are interested in.
2. Loop over the items, calling the FinnHub API to get the latest company news for each ticker.
3. Format the company news as Markdown text that can be sent to Slack (see the sketch below).
4. Post a new message in the Slack channel.
5. Wait for 5 seconds, then move on to the next ticker.

**How to use**
Simply set up a schedule trigger to run the workflow automatically.

**Requirements**
- FinnHub API key
- Slack channel webhook

**Need Help?**
Contact me via My Blog or ask in the Forum!

Happy Hacking!
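A small sketch of the formatting step as an n8n Code node. The article field names (headline, url, source, datetime) follow FinnHub's company-news response as documented, and the `ticker` input field is an assumption; verify both against your actual data.

```javascript
// n8n Code node: turn FinnHub company-news items into one Slack mrkdwn message
// per ticker. Assumes each article arrives as its own item; adjust if the
// HTTP node returns the whole array as a single item.
const ticker = $json.ticker ?? 'UNKNOWN'; // assumed field carrying the symbol
const articles = $input.all().map(item => item.json).slice(0, 5);

const lines = articles.map(a => {
  const date = new Date((a.datetime ?? 0) * 1000).toISOString().slice(0, 10);
  return `• <${a.url}|${a.headline}> (${a.source}, ${date})`;
});

return [
  {
    json: {
      text: `*Latest news for ${ticker}*\n${lines.join('\n')}`,
    },
  },
];
```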
by Dr. Firas
💥 **Create viral ads with NanoBanana & Seedance, publish on socials via upload-post**

**Who is this for?**
This workflow is designed for marketers, content creators, and small businesses who want to automate the creation of engaging social media ads without spending hours on manual design, video editing, or publishing.

**What problem is this workflow solving? / Use case**
Manually creating ads for multiple platforms is time-consuming and repetitive: you need to generate visuals, edit videos, add music, and then publish them across social channels. This workflow automates the end-to-end ad production pipeline, saving time while ensuring consistent, professional-quality output.

**What this workflow does**
1. Receives ad ideas via Telegram.
2. Uses NanoBanana to generate and edit realistic product images.
3. Transforms images into engaging short videos with Seedance.
4. Generates background music with Suno.
5. Merges video and audio into a polished final ad.
6. Reads brand info and generates ad copy with AI (OpenAI).
7. Publishes ads to Instagram, TikTok, YouTube, Facebook, and X via upload-post.
8. Stores media and campaign data in Google Drive and Google Sheets for tracking.
9. Sends notifications and previews back via Telegram.

**Setup**
1. **Connect your accounts**: Telegram, Google Drive, Google Sheets, OpenAI API, NanoBanana API, Seedance API, Suno API, upload-post.
2. **Prepare Google Sheets**:
   - Add a sheet for brand details (name, category, features, website).
   - Add another sheet for video logs (status, links, captions).
3. **Configure upload-post**: Ensure your social accounts (TikTok, Instagram, YouTube, Facebook, X) are linked to upload-post.

**How to customize this workflow to your needs**
- **Prompts**: Adjust the image/video/music prompts to better reflect your brand's tone and products.
- **Ad copy**: Modify the AI prompt inside the Ads Copywriter Generator to control wording, style, and structure.
- **Publishing scope**: Choose only the platforms you want (TikTok, Instagram, etc.) inside the upload-post node.
- **Storage**: Update the Google Drive folder IDs and Google Sheets document IDs to match your own workspace.

👉 With this template, you get a fully automated viral ad production system powered by AI visuals, video rendering, and auto-publishing across social platforms. Perfect for scaling your content strategy while saving time.

📄 Documentation: Notion Guide
🎥 Demo video: watch the full tutorial on YouTube.

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
by Lucas Peyrin
**How it works**
This template is a complete, hands-on tutorial for building a RAG (Retrieval-Augmented Generation) pipeline. In simple terms, you'll teach an AI to become an expert on a specific topic (in this case, the official n8n documentation) and then build a chatbot to ask it questions. Think of it like this: instead of a general-knowledge AI, you're building an expert librarian.

The workflow is split into two main parts:

**Part 1: Indexing the Knowledge (Building the Library)**
This is a one-time process you run manually. The workflow automatically scrapes all pages of the n8n documentation, breaks them down into small, digestible chunks, and uses an AI model to create a special numerical representation (an "embedding") for each chunk. These embeddings are then stored in n8n's built-in Simple Vector Store. This is like a librarian reading every book and creating a hyper-detailed index card for every paragraph.

Important: This in-memory knowledge base is temporary. It will be erased if you restart your n8n instance, and you will need to run the indexing process again.

**Part 2: The AI Agent (The Expert Librarian)**
This is the chat interface. When you ask a question, the AI agent doesn't guess the answer. Instead, it uses your question to find the most relevant "index cards" (chunks) from the knowledge base it just built. It then feeds these specific, relevant chunks to a powerful language model (Gemini) with a strict instruction: "Answer the user's question using ONLY this information." This ensures the answers are accurate, factual, and grounded in your provided documents. A tiny sketch of this retrieval idea is shown below.

**Set up steps**
Setup time: 2 minutes (plus 15-20 minutes for indexing). This template uses n8n's built-in tools, removing the need for an external database. Follow these simple steps to get started.

1. **Configure Google AI credentials**: You will need a Google AI API key for the Gemini models.
   - In your n8n workflow, go to any of the three Gemini nodes (e.g., Gemini 2.5 Flash).
   - Click the Credential dropdown and select + Create New Credential.
   - Enter your Gemini API key and save.
2. **Apply credentials to all nodes**: Your new Google AI credential is now saved. Go to the other two Gemini nodes (Gemini Chunk Embedding and Gemini Query Embedding) and select your newly created credential from the dropdown list.
3. **Build the knowledge base**:
   - Find the Start Indexing manual trigger node at the top left of the workflow.
   - Click its "Execute workflow" button to start the indexing process.
   - ⚠️ Be patient: this will take 15-20 minutes as it scrapes and processes the entire n8n documentation. You only need to do this once per n8n session; if you restart n8n, you must run this step again.
4. **Chat with your expert agent**:
   - Once indexing is complete, activate the entire workflow using the toggle at the top of the screen.
   - Open the RAG Chatbot chat trigger node (bottom left) and copy its Public URL.
   - Open the URL in a new tab and start asking questions about n8n, for example: "How does the IF node work?" or "What is a sub-workflow?"
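For intuition only, here is a conceptual sketch of what the vector store does during retrieval. This is not the template's code (the Simple Vector Store and Gemini nodes handle all of this for you); it simply illustrates ranking chunks by cosine similarity to the query embedding.

```javascript
// Conceptual illustration of RAG retrieval: score every stored chunk against
// the query embedding and keep the top-k matches as context for the LLM.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topChunks(queryEmbedding, indexedChunks, k = 4) {
  // indexedChunks: [{ text: string, embedding: number[] }, ...]
  return indexedChunks
    .map(c => ({ ...c, score: cosine(queryEmbedding, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// The agent's prompt is then built from topChunks(...).map(c => c.text),
// with the instruction to answer using ONLY that retrieved context.
```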