by Custom Workflows AI
Introduction The "High-Level Service Page SEO Blueprint Report" workflow is a powerful, AI-driven solution designed to generate comprehensive SEO content strategies for service-based businesses. By analyzing competitor websites and user intent, this workflow creates a detailed blueprint that outlines the optimal structure, content, and conversion elements for a service page. The workflow leverages the JINA Reader API to extract content from competitor websites and uses Google Gemini AI to perform deep analysis across multiple dimensions: competitor content structure, user intent, strategic opportunities, and conversion optimization. The final output is a professionally formatted Markdown document that provides actionable guidance for creating a high-performing service page that satisfies both user needs and search engine requirements. This workflow eliminates the time-consuming process of manually analyzing competitors and developing content strategies, providing a data-driven foundation for service page creation that would typically require hours of expert analysis. Who is this for? This workflow is designed for digital marketers, SEO specialists, content strategists, and web developers who need to create or optimize service pages for businesses. It's particularly valuable for marketing agencies and freelancers who regularly develop content strategies for clients across various industries. Users should have a basic understanding of SEO concepts, content marketing, and website structure. While technical SEO knowledge is beneficial, the workflow is designed to provide comprehensive guidance even for those with intermediate-level expertise. The ideal user is someone who wants to streamline their content planning process and ensure their service pages are built on data-driven insights rather than guesswork. What problem is this workflow solving? Creating effective service pages that rank well in search engines while converting visitors is a complex challenge that typically requires extensive competitive research, content planning, and conversion optimization expertise. This workflow addresses several key pain points: Time-consuming competitor analysis: Manually analyzing multiple competitor websites to identify content patterns, heading structures, and meta tag strategies can take hours. Difficulty identifying content gaps: Determining what topics competitors are missing that could provide a competitive advantage requires deep analysis and industry knowledge. Balancing SEO and conversion elements: Creating content that satisfies both search engines and user needs while driving conversions is a delicate balance that many struggle to achieve. Lack of structured approach: Many content creators work without a comprehensive blueprint, leading to inconsistent results and missed opportunities. Difficulty translating analysis into actionable recommendations: Even when analysis is performed, turning those insights into a concrete content plan can be challenging. This workflow automates these processes, providing a structured, data-driven approach to service page creation that saves hours of research and planning time. What this workflow does Overview The workflow takes a list of competitor URLs and a target keyword as input, then performs a multi-stage analysis to generate a comprehensive service page blueprint. 
### Process

1. **Data Collection:** A form collects essential information: competitor URLs, target keyword, services offered, brand name, and whether the page is a homepage.
2. **Competitor Content Extraction:** The workflow processes each competitor URL, using the JINA Reader API to extract the HTML content from each site.
3. **Content Structure Analysis:** For each competitor site, the workflow extracts and analyzes heading structures, meta tags, schema markup, and recurring phrases (n-grams).
4. **Competitor Analysis Report:** The AI synthesizes the competitive data to identify patterns in meta titles/descriptions, common outline sections, key heading concepts, and structural elements.
5. **User Intent Analysis:** The workflow analyzes the target keyword to determine primary and secondary user intents, user personas, and their position in the buyer's journey.
6. **Gap Analysis:** The AI identifies content overlaps ("table stakes"), content gaps (opportunities), SEO keyword priorities, and potential UX/conversion advantages.
7. **Page Outline Generation:** Based on the previous analyses, the workflow creates an optimal page structure with H1, H2s, H3s, and potentially H4s, with justifications for each section.
8. **UX & Conversion Recommendations:** The workflow adds detailed recommendations for calls-to-action, trust signals, copywriting tone, visual elements, and risk-reversal strategies.
9. **Final Blueprint Creation:** All analyses and recommendations are compiled into a comprehensive, well-structured Markdown document that serves as a complete service page blueprint.

## Setup

1. Download or import the "High-Level Service Page SEO Blueprint Report" workflow JSON file into your n8n instance.
2. Create a JINA Reader API key by visiting https://jina.ai/api-dashboard/key-manager. You can claim a free API key that allows up to 1 million tokens.
3. Set up Google Gemini (PaLM) credentials by following the guide at https://docs.n8n.io/integrations/builtin/credentials/googleai/#using-geminipalm-api-key.
4. Update the "Edit Fields" node with:
   - Your JINA Reader API key
   - A "Waiting Time" of 20 seconds if you use the free Google Gemini API tier (which is limited to 5 requests per minute)
   - Optionally, a different Gemini model
5. Activate the workflow and start the form trigger.
6. Complete the form with:
   - Competitors (up to 5 direct competitor URLs)
   - Target Keyword (the query related to your service)
   - Services Offered (details of your complete service offerings)
   - Brand Name (your company name)
   - Whether the page is a homepage
7. After processing, download the generated .txt file, which contains the blueprint in Markdown format.
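For orientation, the competitor-content extraction (step 2 above) boils down to one HTTP call per URL through the JINA Reader API. Below is a minimal sketch, assuming your key is available as an environment variable; the X-Return-Format header and response handling should be checked against JINA's current docs:

```javascript
// Fetch a competitor page through the JINA Reader API.
// Reader works by prefixing the target URL with https://r.jina.ai/
const targetUrl = 'https://www.example-competitor.com/service-page'; // hypothetical URL

const response = await fetch(`https://r.jina.ai/${targetUrl}`, {
  headers: {
    Authorization: `Bearer ${process.env.JINA_API_KEY}`, // your JINA Reader key
    'X-Return-Format': 'html', // ask for raw HTML so headings/meta tags survive
  },
});

const html = await response.text();
return [{ json: { url: targetUrl, html } }]; // n8n Code-node style output
```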
## How to customize this workflow to your needs

- **Adjust AI parameters:** Modify the temperature settings in the Google Gemini Chat Model nodes to control creativity vs. precision in the AI outputs.
- **Customize extraction logic:** Edit the "Extract HTML Elements" code node to focus on the HTML elements most relevant to your industry or content type (see the sketch at the end of this section).
- **Modify analysis prompts:** Customize the prompts in the various analysis nodes to focus on the aspects of SEO or content strategy that matter most for your use case.
- **Add industry-specific guidance:** Enhance the prompts with industry-specific instructions or examples to make the output more relevant to particular sectors.
- **Integrate with content management systems:** Extend the workflow to automatically send the blueprint to content management systems, project management tools, or document storage platforms.
- **Add competitor scoring:** Implement a scoring system to evaluate and rank competitors based on criteria relevant to your strategy.
- **Expand the analysis:** Add analysis nodes to evaluate other aspects of competitor websites, such as page speed, mobile-friendliness, or backlink profiles.
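For a sense of what an "Extract HTML Elements"-style node does, here is a rough sketch of a heading-and-meta extractor. This is an illustrative assumption, not the template's actual node, and regex parsing is a simplification of what a real HTML parser would do:

```javascript
// Rough sketch of an "Extract HTML Elements"-style Code node.
// Pulls headings and the meta description out of raw HTML with regexes;
// fine for a blueprint, though a real parser is more robust.
const html = $json.html || '';

const headings = [...html.matchAll(/<(h[1-4])[^>]*>(.*?)<\/\1>/gis)].map((m) => ({
  tag: m[1].toLowerCase(),
  text: m[2].replace(/<[^>]+>/g, '').trim(), // strip nested tags
}));

const metaDescription =
  html.match(/<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i)?.[1] ?? null;

return [{ json: { headings, metaDescription } }];
```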
by Airtop
## Use Case

Turn any web page into a compelling LinkedIn post — complete with an AI-generated image. This automation is ideal for sharing content like blog posts, case studies, or product updates in a polished and engaging format.

## What This Automation Does

Given a page URL and optional user instructions, this automation:

1. Scrapes the content of the webpage
2. Uses AI to write a clear, educational, and LinkedIn-optimized post
3. Generates a brand-aligned visual that matches the content
4. Sends both to Slack for review and approval
5. Handles feedback and revisions via Slack interactions

Input:

- **Page URL** — The link to the webpage (required)
- **Instructions** — Optional notes on tone, emphasis, or format

Output:

- LinkedIn post text
- AI-generated visual prompt and image
- Slack message with review/approval options

## How It Works

1. **Form Submission:** User inputs a web page URL and optional instructions.
2. **Web Scraping:** Uses Airtop to extract page content.
3. **Post Generation:** An AI agent writes a post based on the page and instructions.
4. **Visual Generation:** Another AI model creates an image prompt; this is sent to a sub-workflow for image rendering.
5. **Slack Review Flow:** The post and image are sent to Slack for feedback. The user can approve, request revisions, or decline; revisions trigger reprocessing steps automatically.
6. **Final Post Delivery:** The approved post and image are sent back to Slack, ready to publish.

## Setup Requirements

- Airtop API key
- OpenAI credentials for post and image prompt generation
- Slack OAuth integration with a review channel
- A sub-workflow for branded image generation

## Next Steps

- **Post Directly:** Add LinkedIn publishing to automate the full content workflow.
- **Template Variations:** Offer post style presets (e.g., technical, story-driven, short-form).
- **CRM Sync:** Save approved posts and stats in Airtable or Notion for team use.

Read more about content generation with Airtop.
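To illustrate the Slack review step, here is a hedged sketch of the kind of Block Kit message the workflow could post. The channel name, action IDs, and `postText` field are illustrative assumptions, not the template's actual values:

```javascript
// Illustrative Slack Block Kit payload for the review/approval message.
// Post it with the Slack node (or an HTTP Request to chat.postMessage).
const reviewMessage = {
  channel: '#content-review', // hypothetical review channel
  text: 'New LinkedIn post ready for review',
  blocks: [
    { type: 'section', text: { type: 'mrkdwn', text: $json.postText } },
    {
      type: 'actions',
      elements: [
        { type: 'button', text: { type: 'plain_text', text: 'Approve' }, style: 'primary', action_id: 'approve_post' },
        { type: 'button', text: { type: 'plain_text', text: 'Request revisions' }, action_id: 'revise_post' },
        { type: 'button', text: { type: 'plain_text', text: 'Decline' }, style: 'danger', action_id: 'decline_post' },
      ],
    },
  ],
};
return [{ json: reviewMessage }];
```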
by Zach @BrightWayAI
## Who's it for

Content creators, researchers, educators, and digital marketers who need to discover high-quality YouTube training videos on specific topics. Perfect for building curated learning resource lists, competitive research, or content inspiration.

## What it does

This workflow automatically searches YouTube using multiple search queries, filters for quality content, scores videos by relevance, and exports the top results to Google Sheets. It processes hundreds of videos and delivers only the most valuable educational content, ranked by custom relevance criteria.

The workflow searches for videos using 10 different AI-automation-related queries (easily customizable), filters out low-quality content like shorts and clickbait, then ranks results based on title keywords, view counts, and engagement metrics.

## How it works

1. **Multi-query search:** Searches YouTube with an array of related queries to get comprehensive coverage
2. **Content filtering:** Removes shorts, spam, and low-quality videos using regex patterns
3. **Quality assessment:** Filters videos based on view count, likes, and publication date
4. **Relevance scoring:** Assigns scores based on title keywords and engagement metrics
5. **Result ranking:** Sorts videos by relevance score and limits output to the top 50 results
6. **Export to Sheets:** Delivers clean, organized data to Google Sheets with all metadata

## Requirements

- YouTube Data API v3 credentials from Google Cloud Console
- Google Sheets credentials for your n8n workspace
- A Google Sheets document to receive the results

## How to set up

1. Enable the YouTube Data API v3 in your Google Cloud Console
2. Add YouTube OAuth2 credentials to your n8n workspace
3. Add Google Sheets credentials to your n8n workspace
4. Create a Google Sheet and update the Google Sheets node with your document ID
5. Customize the search queries in the "Set Query" node for your topic
6. Adjust the filtering criteria in the Filter nodes based on your quality requirements

## How to customize the workflow

- **Search topics:** Modify the query array in the "Set Query" node to research any topic:

  ```javascript
  [
    "Python tutorial",
    "JavaScript course",
    "React beginner guide",
    // Add your queries here
  ]
  ```

- **Quality thresholds:** Adjust minimum views, likes, and date ranges in the "Filter for Quality" node
- **Relevance scoring:** Customize keyword weightings in the "Relevance Score" node to match your priorities (see the sketch below)
- **Result limits:** Change the number of final results in the "Limit" node (default: 50)
- **Output format:** Modify the "Set Fields" node to include additional YouTube metadata like duration, thumbnails, or category information

The workflow is designed to be easily adaptable to any research topic while maintaining high content-quality standards.
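For reference, a minimal sketch of what a "Relevance Score"-style Code node could look like. The keyword list and weights are illustrative assumptions, not the template's actual values:

```javascript
// Hypothetical relevance scorer for YouTube search results.
// Weights and keywords are placeholders; tune them to your topic.
const KEYWORD_WEIGHTS = { tutorial: 3, course: 3, guide: 2, beginner: 1 };

const scored = $input.all().map((item) => {
  const { title = '', viewCount = 0, likeCount = 0 } = item.json;
  let score = 0;

  // Title keyword matches
  const lower = title.toLowerCase();
  for (const [keyword, weight] of Object.entries(KEYWORD_WEIGHTS)) {
    if (lower.includes(keyword)) score += weight;
  }

  // Engagement: log-scale views plus like ratio
  score += Math.log10(Number(viewCount) + 1);
  score += Number(viewCount) > 0 ? (Number(likeCount) / Number(viewCount)) * 10 : 0;

  return { json: { ...item.json, relevanceScore: Number(score.toFixed(2)) } };
});

// Highest score first
return scored.sort((a, b) => b.json.relevanceScore - a.json.relevanceScore);
```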
by InfyOm Technologies
## ✅ What problem does this workflow solve?

If you're using a self-hosted n8n instance, there's no built-in version history or undo for your workflows. If a workflow is accidentally modified or deleted, there's no way to roll back. This backup workflow solves that problem by automatically syncing your workflows to Google Drive, giving you version control and peace of mind.

## ⚙️ What does this workflow do?

- ⏱ Runs on a set schedule (e.g., daily or every 12 hours).
- 🔍 Fetches all workflows from your self-hosted n8n instance.
- 🧠 Detects changes to avoid duplicate backups.
- 📁 Creates a dedicated folder for each workflow in Google Drive.
- 💾 Uploads new or updated workflow files in JSON format.
- 🗃️ Keeps backup history organized by date.
- 🔄 Allows easy restores by importing backed-up JSON into n8n.

## 🔧 Setup Instructions

### 1. Google Drive Setup

- Connect your Google Drive account using the Google Drive node in n8n.
- Choose or create a root folder (e.g., n8n-workflow-backups) where backups will be stored.

### 2. n8n API Credentials

Generate a Personal Access Token from your self-hosted n8n instance:

- Go to Settings → API in your n8n dashboard.
- Copy the token and use it in the HTTP Request node headers as: Authorization: Bearer <your_token>

### 3. Schedule the Workflow

Use the Cron node to schedule this workflow to run at your desired frequency (e.g., once a day or every 12 hours).

## 🧠 How it Works

Step-by-step flow:

1. **Scheduled Trigger:** The workflow begins on a timed schedule using the Cron node.
2. **Fetch All Workflows:** Uses the n8n API (/workflows) to retrieve a list of all existing workflows.
3. **Loop Through Workflows:** For each workflow, a folder is created in Google Drive using the workflow name, and the workflow's last-updated timestamp is checked against the Google Drive backups.
4. **Smart Change Detection:** If the workflow has changed since the last backup, a new .json file is uploaded to the corresponding folder, named with the workflow's last-updated date (YYYY-MM-DD-HH-mm-ss.json) to maintain a versioned history. If no change is detected, the workflow is skipped.
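A minimal sketch of the fetch step (step 2 above), assuming the Bearer-token header from the setup section. Depending on your n8n version, the public API may live under /api/v1, so verify the exact path and response shape for your instance:

```javascript
// Minimal sketch: list all workflows from a self-hosted n8n instance.
// The base URL is a hypothetical placeholder.
const baseUrl = 'https://n8n.example.com';

const response = await fetch(`${baseUrl}/api/v1/workflows`, {
  headers: { Authorization: `Bearer ${process.env.N8N_API_TOKEN}` },
});

const { data: workflows } = await response.json();

// One item per workflow, keeping what the backup steps need.
return workflows.map((wf) => ({
  json: { id: wf.id, name: wf.name, updatedAt: wf.updatedAt, workflow: wf },
}));
```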
## 🗂 Google Drive Folder Organization

Backups are neatly organized by workflow and version:

    /n8n-workflow-backups/
    ├── google-drive-backup-KqhdMBHIyAaE7p7v/
    │   ├── 2025-07-15-13-03-32.json
    │   ├── 2025-07-14-03-08-12.json
    ├── resume-video-avatar-KqhdMBHIyAaE8p8vr/
    │   ├── 2025-07-15-23-05-52.json

Each folder is named after the workflow's name + id and contains timestamped versions.

## 🔧 Customization Options

- 📅 **Change backup frequency:** Adjust the Cron node to run backups daily, weekly, or even hourly based on your needs.
- 📤 **Use a different storage provider:** You can swap out Google Drive for Dropbox, S3, or another cloud provider with minimal changes.
- 🧪 **Add workflow filtering:** Only back up workflows that are active or match specific tags by filtering the results from the n8n API.

## ♻️ How to Restore a Workflow from Backup

1. Go to the Google Drive backup folder for the workflow you want to restore.
2. Download the desired .json file (based on the date).
3. Open your self-hosted n8n instance.
4. Click Import Workflow from the sidebar menu.
5. Upload the JSON file to restore the workflow.

> You can choose to overwrite an existing workflow or import it as a new one.

## 👤 Who can use this?

This template is ideal for:

- 🧑‍💻 Developers running self-hosted n8n
- 🏢 Teams managing large workflow libraries
- 🔐 Anyone needing workflow versioning, rollback, or disaster recovery
- 💾 Productivity enthusiasts looking for automated backups

## 📣 Tip

Consider enabling version history in Google Drive so you get even more fine-grained backup recovery options on top of what this workflow provides!

## 🚀 Ready to use?

Just plug in your n8n token, connect Google Drive, and schedule your backups. Your workflows are now protected!
by ARRE
**Good to know:** This workflow automatically transcribes your favorite podcasts or videos saved in a YouTube playlist and generates a comprehensive, AI-powered summary, so you can quickly understand the main topics and insights without having to watch or listen to the entire episode.

## 👤 Who is this for?

- Podcast fans who want to save time and get the key points from episodes
- Busy professionals who follow educational or industry videos and need quick takeaways
- Content creators or researchers who organize and review large amounts of video/audio material
- Anyone who wants to efficiently capture and summarize information from YouTube playlists

## ❓ What problem is this workflow solving?

This workflow solves the challenge of information overload from long-form podcasts and videos. It:

- Automatically transcribes each video or podcast episode in your chosen YouTube playlist
- Uses AI to create a clear, well-structured summary of the content
- Lets you learn and extract valuable information without watching or listening to the entire recording
- Organizes everything in a Google Sheets document for easy tracking and future reference

## ✅ What this workflow does:

- 📺 Fetches all videos from a specified YouTube playlist
- 🔗 Extracts video titles, URLs, and IDs
- 📝 Retrieves and combines transcripts for each video or podcast episode
- 📜 Processes transcript data for clarity
- 🤖 Uses AI to generate a detailed, sectioned summary that covers all main topics and insights
- 📊 Automatically logs video titles, transcripts, summaries, and row numbers to a Google Sheets spreadsheet

## ⚙️ How it works:

1. 🟢 **Trigger:** Start the workflow manually or on a schedule
2. 📺 Fetch videos from your chosen YouTube playlist
3. 🔗 Extract and organize video details (title, URL, ID)
4. 📝 Retrieve the transcript for each video or podcast episode
5. 📜 Combine transcript segments into a single script (see the sketch below)
6. ✂️ Extract the first sentences for focused summarization
7. 🤖 An AI agent creates a comprehensive summary of the episode or video
8. 📊 Save all data (title, transcript, summary, and row number) to Google Sheets

## 🛠️ How to use:

1. Set up YouTube OAuth2 credentials in n8n
2. Configure Google Sheets OAuth2 credentials
3. Set up API credentials for transcript and AI processing
4. Create and link your Google Sheets document
5. Input your playlist ID and adjust any filters as needed
6. Activate the workflow

## 📝 Requirements:

- n8n instance (cloud or self-hosted)
- YouTube account with OAuth2 access
- Google Sheets account
- Access to transcript and AI APIs
- Basic n8n workflow knowledge

## 🟢 Customizing this workflow:

- Change the YouTube playlist ID to target your preferred podcasts or video series
- Adjust the transcript retrieval process for other APIs or formats
- Customize the AI prompt for different summary styles or focus areas
- Add or remove fields in the Google Sheets output
- Change the workflow trigger or polling frequency
- Switch to a different AI model if desired

This workflow is designed to help you quickly learn from the podcasts and videos you care about, without spending hours consuming the full content.
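A minimal sketch of the transcript-combining step (step 5 above), assuming the transcript API returns an array of timed segments; the `segments` and `text` field names are illustrative:

```javascript
// Combine timed transcript segments into one continuous script.
// Assumes each incoming item carries an array of segments under json.segments.
const segments = $json.segments || [];

const script = segments
  .map((s) => (s.text || '').trim())
  .filter(Boolean)
  .join(' ');

// Keep the first few sentences as a focused lead for the summarizer.
const firstSentences = script.split(/(?<=[.!?])\s+/).slice(0, 5).join(' ');

return [{ json: { ...$json, script, firstSentences } }];
```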
by Gleb D
This n8n workflow template automates the process of collecting and analyzing Twitter (X) posts for any public profile, then generates a clean, AI-powered summary including key metrics, interests, and activity trends.

## 🚀 What It Does

- Accepts a user's full name and date range through a public form.
- Automatically finds the person's X (formerly Twitter) profile using a Google search.
- Uses Bright Data to retrieve full post data from the X.com profile.
- Extracts key post metrics like views, likes, reposts, hashtags, and mentions.
- Uses Google Gemini (PaLM) to generate a personalized summary: tone, themes, popularity, and sentiment.
- Stores both the raw data and the AI summary in a connected Google Sheet for further review or team collaboration.

## 🛠️ Step-by-Step Setup

1. Deploy the public form to collect a full name and date range.
2. Build a Google search query from the name to find the person's X profile.
3. Scrape the search results via Bright Data (Web Unlocker zone).
4. Parse the page content using the HTML node.
5. Use Gemini AI to extract the correct X profile URL.
6. Pull full post data via the Bright Data dataset snapshot API.
7. Transform the post data into clean structured fields: date_posted, description, hashtags, likes, views, quoted_post.date_posted, quoted_post.description, replies, reposts, quotes, and tagged_users.profile_name (see the sketch below).
8. Analyze all posts using Google Gemini for interest detection and persona generation.
9. Save the results to a Google Sheet: structured post data plus the AI-written summary.
10. Show success or fallback messages depending on profile detection or scraping status.

## 🧠 How It Works: Workflow Overview

- **Trigger:** When a user submits the form
- **Search & Match:** Google search → HTML parse → Gemini filters the matching X profile
- **Data Gathering:** Bright Data → poll for snapshot completion → fetch post data
- **Transformation:** Extract and restructure key fields via a Code node
- **AI Summary:** Use Gemini to analyze tone, interests, and trends
- **Export:** Save results to a Google Sheet
- **Fallback:** Display a custom error message if no X profile is found

## 📨 Final Output

A record in your Google Sheet with:

- Clean post-level data
- A profile-level engagement summary
- An AI-written overview including tone, common topics, and post popularity

## 🔐 Credentials Used

- **Bright Data account** (for search and post scraping)
- **Google Gemini (PaLM)** or Gemini Flash via the OpenAI/Google Vertex API
- **Google Sheets (OAuth2) account** (for result storage)

## ⚠️ Community Node Dependency

This workflow uses a custom community node: n8n-nodes-brightdata. Install it via the UI (Settings → Community Nodes → Install).
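A minimal sketch of the transformation step (step 7 above), assuming Bright Data returns one raw post object per item; the exact raw field names depend on your dataset snapshot and are assumptions here:

```javascript
// Restructure raw Bright Data post objects into the clean fields
// listed above. Raw field names are assumptions; check your snapshot.
return $input.all().map((item) => {
  const post = item.json;
  return {
    json: {
      date_posted: post.date_posted ?? null,
      description: post.description ?? '',
      hashtags: post.hashtags ?? [],
      likes: post.likes ?? 0,
      views: post.views ?? 0,
      'quoted_post.date_posted': post.quoted_post?.date_posted ?? null,
      'quoted_post.description': post.quoted_post?.description ?? null,
      replies: post.replies ?? 0,
      reposts: post.reposts ?? 0,
      quotes: post.quotes ?? 0,
      'tagged_users.profile_name': (post.tagged_users ?? [])
        .map((u) => u.profile_name)
        .join(', '),
    },
  };
});
```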
by Mario
## Purpose

This workflow creates a versioned backup of an entire Clockify workspace, split into monthly reports.

## How it works

- The backup routine runs daily by default.
- The Clockify reports API endpoint is used to get all data from the workspace based on time entries.
- A report file is retrieved for every month, starting with the current one and going back 3 months in total by default.
- If any report changed during a day, it is updated in GitHub.

## Prerequisites

- Create a private GitHub repository.
- Create credentials for both Clockify and GitHub (make sure to grant permissions for read and write operations).

## Setup

1. Clone the workflow and select the corresponding credentials.
2. Follow the instructions given in the yellow sticky notes.
3. Activate the workflow.
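For orientation, a hedged sketch of the kind of reports-API call involved. The endpoint and body shape below follow Clockify's detailed-report API as commonly documented, but verify them against the current docs before relying on this:

```javascript
// Fetch a detailed report for one month from Clockify's reports API.
// Workspace ID, date range, and page size are illustrative values.
const workspaceId = 'YOUR_WORKSPACE_ID';

const response = await fetch(
  `https://reports.api.clockify.me/v1/workspaces/${workspaceId}/reports/detailed`,
  {
    method: 'POST',
    headers: {
      'X-Api-Key': process.env.CLOCKIFY_API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      dateRangeStart: '2025-07-01T00:00:00Z',
      dateRangeEnd: '2025-07-31T23:59:59Z',
      detailedFilter: { page: 1, pageSize: 200 },
    }),
  }
);

return [{ json: await response.json() }];
```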
by AlexAy
## Who is this workflow template for?

This workflow template is perfect for freelancers, small business owners, accounting teams, or anyone responsible for managing and recording invoices regularly. If you deal with multiple invoices and spend considerable time manually entering invoice data into a database, this automation will significantly simplify your daily operations and reduce potential errors.

## What this workflow does

The workflow automates the entire invoice-logging process. It monitors a designated Google Drive folder every minute for new PDF invoice uploads. Once a new invoice is detected, it is automatically converted from PDF to an image using the ILovePDF API. After conversion, Google's Gemini AI analyzes the image, intelligently extracting essential details such as vendor name, item description, invoice amount, invoice date, payment date, and bank reference numbers. Finally, this structured data is automatically recorded in an Airtable database (or optionally in a Google Sheet), ensuring organized, accessible records.

## Detailed Workflow Explanation

- **Step 1: Invoice Detection.** Monitors Google Drive for newly uploaded PDF invoices.
- **Step 2: PDF to Image Conversion.** Converts PDFs into images using ILovePDF.
- **Step 3: Data Extraction via Gemini AI.** Uses Gemini AI to analyze the invoice image and extract data such as Vendor, Description, Amount, Invoice Date, Paid Date, and Bank Reference. Provides clear descriptions even when the original invoice descriptions are vague or missing by analyzing vendor context.
- **Step 4: Structured Data Storage.** Automatically sends the extracted data to Airtable or Google Sheets.
- **Step 5: File Management.** Moves processed PDF files into a separate "Done" folder to clearly differentiate between processed and unprocessed invoices.

## Step-by-Step Setup Instructions

1. **Set up Google Drive:** Log in to Google Drive and create two folders: one named Invoices (for incoming PDF files) and one named Processed (for processed files).
2. **Obtain API credentials:**
   - ILovePDF API: Sign up at ILovePDF Developers and retrieve your API key from your account dashboard.
   - Google Gemini AI API: Register at Google AI and generate an API key.
3. **Prepare the Airtable database:** Create an Airtable base with the following columns: Vendor (Text), Description (Text), Amount (Number or Text), Invoice Date (Date), Paid Date (Date), Bank Reference (Text).
4. **Import and configure the workflow in n8n:** Import the provided workflow JSON file into your n8n instance, then connect your Google Drive, ILovePDF, Google Gemini AI, and Airtable accounts by entering your credentials in their respective nodes.
5. **Adjust workflow settings:** In the Google Drive nodes, ensure your newly created Invoices and Processed folders are correctly selected. Update the ILovePDF public key in the appropriate HTTP Request node. Customize the Gemini AI prompt to refine or expand data extraction according to your specific needs.
6. **Test your setup:** Upload a sample PDF invoice into the Invoices folder, execute the workflow by clicking Test Workflow in n8n, and verify that data extraction and Airtable logging operate correctly.
## Airtable Column Specifications

Ensure your Airtable base includes the following structure:

- **Vendor:** Single Line Text
- **Description:** Single Line Text
- **Amount:** Currency or Single Line Text
- **Invoice Date:** Date (formatted as YYYY-MM-DD)
- **Paid Date:** Date (formatted as YYYY-MM-DD)
- **Bank Reference:** Single Line Text

## How to Customize the Workflow

- **System prompt:** Adjust the AI instructions by modifying the prompt text to focus on additional or fewer invoice details.
- **Structured Output Parser:** Modify the JSON schema in the parser node to match the structure and data points your project specifically requires (a hedged example follows below).

By following these instructions, you'll have a fully automated, reliable system for handling and logging invoice data, significantly enhancing your productivity.
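For reference, a minimal sketch of the kind of schema the Structured Output Parser node might use for the fields above. This mirrors the Airtable columns and is an assumption, not the template's exact parser configuration:

```javascript
// Hypothetical structured-output schema matching the Airtable columns.
// Paste an equivalent JSON schema into the Structured Output Parser node.
const invoiceSchema = {
  type: 'object',
  properties: {
    vendor: { type: 'string', description: 'Vendor or supplier name' },
    description: { type: 'string', description: 'What the invoice is for' },
    amount: { type: 'string', description: 'Total amount, including currency' },
    invoice_date: { type: 'string', description: 'Invoice date as YYYY-MM-DD' },
    paid_date: { type: 'string', description: 'Payment date as YYYY-MM-DD, if present' },
    bank_reference: { type: 'string', description: 'Bank or payment reference number' },
  },
  required: ['vendor', 'amount', 'invoice_date'],
};
```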
by Robert Breen
**Extract Local Business Contacts with Google Sheets, SerpAPI & GPT‑4o**

Status: Ready for Use ✅

Disclaimer: This workflow relies on community nodes that are not part of n8n's core package. Install the following from n8n → Community Nodes before running:

- **n8n-nodes-langchain**
- **n8n-nodes-openai** (Structured Output Parser)
- **n8n-nodes-apify**

## 📝 Description

This n8n workflow automates the discovery of local-business contact details by search term and location, then enriches the results with publicly listed email addresses using GPT‑4o AI.

## 🔑 Key Features

### 🔗 Google Sheets Integration

- Reads search terms and locations from a Google Sheet.
- Processes only rows that are not marked Complete, preventing duplicates.

### 🗺️ Google Maps Search via SerpAPI

- Queries Google Maps through SerpAPI for every search-term-and-location pair.
- Retrieves the following fields: business name, website, street address, and phone number.

### 🧠 Website Scraping & Email Extraction

- Scrapes the business homepage content with Apify's Fast Website Content Crawler.
- Sends the scraped HTML to a GPT‑4o AI Agent.
- Extracts any publicly listed email address.
- Returns a clean, structured JSON object for downstream use.

### 💾 Data Storage & Tracking

- Writes every result to a Results tab in the same Google Sheet.
- Marks the corresponding row in the Searches tab as Complete once finished.

### 🧱 Extensible Design

The workflow uses modular sub-workflows and AI agents. You can easily extend it to add:

- Phone-number verification with Twilio
- Social-media enrichment with Clearbit
- Exports to HubSpot, Salesforce, Airtable, PostgreSQL, or CSV files

## 📄 Google Sheet Setup

Create a Searches tab with these exact columns (one header row):

Search | Area | Area Name | Complete

Create a Results tab with these columns:

title | website | address | phone | Search | Search Name | Area | email (Manual Entry)

## ⚙️ Prerequisites

- Google Cloud Project with the Google Sheets API and Google Drive API enabled
- SerpAPI account (free trial or paid) and an API key
- Apify account (free trial or paid) with the Fast Website Content Crawler actor installed
- OpenAI account with an API key that can access GPT‑4o models

## 🚀 Setup Instructions

1. **Copy the Google Sheet.** Make a personal copy of the template sheet and ensure the tab names are Searches and Results: https://docs.google.com/spreadsheets/d/1QgcVMlXRlM_5ZFFUHr6bVK-93Tzia9XseTX03ZYnowI/edit?usp=sharing
2. **Configure the Google Sheets nodes in n8n.** Open the workflow, update the Extract Search Terms and Save Emails to Sheet nodes to point at your copied sheet, and authenticate using Google OAuth2 credentials that have access to the sheet.
3. **Add SerpAPI credentials.** Sign in at https://serpapi.com, copy your API key, and create a new credential with it in the Search Google Maps node.
4. **Set up Apify.** Sign up at https://apify.com, add the Fast Website Content Crawler actor to your account, and append ?token=YOUR_API_KEY to the actor URL in the Scrape Web Page HTTP node.
5. **Add your OpenAI API key.** Generate an API key at https://platform.openai.com and add it to the AI Agent and OpenAI Chat Model node credentials.

## ✅ Running the Workflow

Click Execute Workflow in n8n. For each unprocessed row in the Searches tab, the automation will:

1. Retrieve business information from Google Maps via SerpAPI.
2. Scrape the business website using Apify.
3. Use GPT‑4o to extract a public email address.
4. Write all collected data to the Results tab.
5. Mark the original row as Complete.
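A hedged sketch of the underlying SerpAPI call, in case you want to reproduce the Search Google Maps step with a plain HTTP Request or Code node. The parameters follow SerpAPI's Google Maps engine; the query string is an illustrative placeholder, and the response shape should be checked against the current docs:

```javascript
// Query SerpAPI's Google Maps engine for one search-term/area pair.
const params = new URLSearchParams({
  engine: 'google_maps',
  q: 'coffee shop in Austin, TX', // hypothetical "Search" + "Area" values
  type: 'search',
  api_key: process.env.SERPAPI_KEY,
});

const response = await fetch(`https://serpapi.com/search.json?${params}`);
const data = await response.json();

// SerpAPI returns matching businesses under local_results.
return (data.local_results || []).map((biz) => ({
  json: {
    title: biz.title,
    website: biz.website ?? null,
    address: biz.address ?? null,
    phone: biz.phone ?? null,
  },
}));
```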
## 🧩 Example Use Cases

- Build highly targeted lead lists for sales and marketing outreach.
- Compile local business directories for regional websites or apps.
- Automate contact-information collection for lead-generation campaigns and reduce manual data entry.

## 🤝 Connect with Me

I'm Robert Breen, founder of Ynteractive — a consulting firm that helps businesses automate operations using n8n, AI agents, and custom workflows. I've helped clients build everything from intelligent chatbots to complex sales automations, and I'm always excited to collaborate or support new projects. If you found this workflow helpful or want to talk through an idea, I'd love to hear from you.

- 🌐 Website: https://www.ynteractive.com
- 📺 YouTube: @ynteractivetraining
- 💼 LinkedIn: https://www.linkedin.com/in/robert-breen
- 📬 Email: rbreen@ynteractive.com
by Audun
## Who is this for?

- Security professionals
- Developers
- Individuals interested in data breach awareness

## Use Case

- Automated monitoring for new breaches
- Proactive identity protection
- Demonstration of a simple cache mechanism

## What this workflow does

- Checks the Have I Been Pwned API every 15 minutes for the latest breaches.
- Compares new breach data against previously notified breaches.
- Demonstrates a simple cache mechanism to track previously seen breaches.

## How the Cache Functionality Works

- **Read from Cache:** Retrieves the last known breach from cache.json to avoid redundant alerts for the same breach.
- **Compare Against Current Breach:** The workflow checks whether the latest fetched breach differs from the cached one.
- **Update the Cache:** If a new breach is detected, it updates cache.json with the latest breach data.

## Setup instructions

The endpoint used in this workflow does not require an API key. Add your desired alert mechanism in the red box attached to the New breach node.

## How to customize this workflow to your needs

- **Modify notification settings:** Tailor where alerts are sent (email, Slack, etc.) by adding the desired node after the New breach node. That node receives all the data from the breach, so it is easily available. You can choose from a variety of n8n nodes to send alerts when a new breach is detected; a few common options:
  - **Email node.** Sends an email notification to one or more recipients. Great for simple alerts to your inbox or a team distribution list. You can include breach details in the subject or body of the email, using data from the New breach node.
  - **Slack node.** Sends a message to a Slack channel or user. Perfect for real-time alerts to your team. You can post breach details directly in a channel or DM, and format the message (bold, code blocks, etc.).
  - **Microsoft Teams node.** Sends a message to a Teams channel, for organizations that use Microsoft Teams for communication. As with Slack, you can customize the message content and include all relevant breach information.
  - **Discord node.** Sends an alert message to a Discord channel. Useful for teams or communities that coordinate via Discord; add formatted messages with breach details for easy viewing.
  - **Telegram node.** Sends messages to a Telegram chat or group. Good for mobile notifications and fast alerts. You can include breach summaries or detailed information, and even use bots to automate this.
  - **Webhook node (as a sender).** Sends breach data to another service via a webhook. If you have an external system or app that handles alerts, you can push JSON payloads with detailed breach information directly to it to trigger actions in other systems.
  - **SMS nodes (like Twilio).** Sends an SMS notification to one or more phone numbers, for urgent alerts that need to be seen immediately. Keep messages concise, including key breach details like the time, type of breach, and affected system.
- **Adjust check frequency:** Change the interval in the Schedule Trigger node (e.g., hourly or daily).
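A minimal sketch of the check-and-cache logic, assuming HIBP's keyless latest-breach endpoint. The cache path is illustrative, and filesystem access from a Code node requires a self-hosted n8n with built-in modules allowed:

```javascript
// Sketch of the breach check with a simple file cache.
const fs = require('fs');
const CACHE_PATH = '/data/cache.json'; // hypothetical cache location

// HIBP's latest-breach endpoint is keyless; a User-Agent is expected.
const response = await fetch('https://haveibeenpwned.com/api/v3/latestbreach', {
  headers: { 'user-agent': 'n8n-breach-monitor' },
});
const latest = await response.json();

// Read the previously seen breach, if any.
let cached = null;
try {
  cached = JSON.parse(fs.readFileSync(CACHE_PATH, 'utf8'));
} catch (e) {
  // No cache yet; first run.
}

const isNew = !cached || cached.Name !== latest.Name;
if (isNew) {
  fs.writeFileSync(CACHE_PATH, JSON.stringify(latest)); // update the cache
}

return [{ json: { ...latest, isNew } }];
```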
by Jimleuk
If you have a shared or personal drive location with a high frequency of files created by humans, it can become difficult to organise. This may not matter... until you need to search for something! This n8n workflow works with the local filesystem to target the messy folder and automatically categorise and organise its files into subdirectories.

## Disclaimer

Due to the intended use case, this workflow will not work on n8n Cloud; a self-hosted version of n8n is required.

## How it works

- Uses the local file trigger to activate once a new file is introduced to the directory.
- The new file's filename and filetype are analysed using AI to determine the best location to move the file. The AI assesses the current subdirectories so as not to create duplicates; if no relevant subdirectory is found, a new one is suggested.
- Finally, an Execute Command node uses the AI's suggestions to move the new file into the correct location (a Code-node alternative is sketched below).

## Requirements

- Self-hosted version of n8n. The nodes used in this workflow only work in the self-hosted version. If you are using Docker, you must create a bind mount to a host directory.
- Mistral.ai account for the LLM model.

## Customise this workflow

- If files are created frequently enough, you may not want the trigger to activate on every new-file event. Switch to a timer to avoid concurrency issues.
- Want to go fully local? A version of this workflow that uses Ollama instead is available here: https://drive.google.com/file/d/1iqJ_zCGussXpfaUBYGrN5opziEFAEQMu/view?usp=sharing
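If you would rather keep the move inside a Code node than an Execute Command node, here is a minimal sketch (self-hosted, with built-in modules allowed). The `category` and `path` fields and the target root are illustrative assumptions:

```javascript
// Move the new file into the AI-suggested subdirectory.
// Assumes the AI step produced json.category and the trigger json.path.
const fs = require('fs');
const path = require('path');

const targetDir = path.join('/data/sorted', $json.category); // hypothetical root
fs.mkdirSync(targetDir, { recursive: true }); // create the subdirectory if needed

const destination = path.join(targetDir, path.basename($json.path));
fs.renameSync($json.path, destination); // move the file

return [{ json: { movedTo: destination } }];
```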
by Sarfaraz Muhammad Sajib
**📧 Email Validation Workflow Using the APILayer API**

This n8n workflow enables users to validate email addresses in real time using the APILayer Email Verification API. It's particularly useful for preventing invalid email submissions during lead generation, user registration, or newsletter sign-ups, ultimately improving data quality and reducing bounce rates.

## ⚙️ Step-by-Step Setup Instructions

1. **Trigger the workflow manually:** The workflow starts with the Manual Trigger node, allowing you to test it on demand from the n8n editor.
2. **Set required fields:** The Set Email & Access Key node lets you enter:
   - email: the target email address to validate
   - access_key: your personal API key from apilayer.net
3. **Make the API call:** The HTTP Request node dynamically constructs the URL https://apilayer.net/api/check?access_key={{ $json.access_key }}&email={{ $json.email }}. It sends a GET request to the APILayer endpoint and returns a detailed response about the email's validity.
4. **(Optional):** Add further nodes to filter, store, or react to the results depending on your needs.

## 🔧 How to Customize

- Replace the manual trigger with a webhook or schedule trigger to automate validations.
- Dynamically map the email and access_key values from previous nodes or external data sources.
- Add conditional logic to filter out invalid emails, log them to a database, or send alerts via Slack or email (see the sketch below).

## 💡 Use Case & Benefits

Email validation is crucial for maintaining a clean and functional mailing list. This workflow is especially valuable for:

- Sign-up forms, where real-time email checks prevent fake or disposable emails.
- CRM systems, to ensure user-entered emails are valid before saving them.
- Marketing pipelines, to minimize email bounce rates and increase campaign deliverability.

Using APILayer's trusted validation service, you can verify whether an email exists, check if it's a role-based address (like info@ or support@), and identify disposable email services — all with a simple workflow.

Keywords: email validation, n8n workflow, APILayer API, verify email, real-time email check, clean email list, reduce bounce rate, data accuracy, API integration, no-code automation
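To illustrate step 4, here is a minimal sketch of a Code node that interprets the response. The fields shown (format_valid, smtp_check, disposable) are typical of this endpoint, but confirm them against APILayer's docs:

```javascript
// Classify the APILayer /api/check response for downstream routing.
const result = $json;

const isValid =
  result.format_valid === true && // syntactically valid address
  result.smtp_check === true &&   // mailbox responded to an SMTP check
  result.disposable !== true;     // not a throwaway email service

return [{ json: { email: result.email, isValid, raw: result } }];
```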