by PollupAI
LinkedIn Profile Enrichment Workflow

**Who is this for?**
This workflow is ideal for recruiters, sales professionals, and marketing teams who need to enrich LinkedIn profiles with additional data for lead generation, talent sourcing, or market research.

**What problem is this workflow solving?**
Manually gathering detailed LinkedIn profile information can be time-consuming and prone to errors. This workflow automates the process of enriching profile data from LinkedIn, saving time and ensuring accuracy.

**What this workflow does**
- Input: Reads LinkedIn profile URLs from a Google Sheet.
- Validation: Filters out already enriched profiles to avoid redundant processing.
- Data Enrichment: Uses RapidAPI's Fresh LinkedIn Profile Data API to retrieve detailed profile information.
- Output: Updates the Google Sheet with enriched profile data, appending new information efficiently.

**Setup**
- Google Sheet: Create a sheet with a column named linkedin_url and populate it with the profile URLs to enrich.
- RapidAPI Account: Sign up at RapidAPI and subscribe to the Fresh LinkedIn Profile Data API.
- API Integration: Replace the x-rapidapi-key and x-rapidapi-host values with your credentials from RapidAPI.
- Run the Workflow: Trigger the workflow and monitor the updates to your Google Sheet.

**How to customize this workflow**
- **Filter Criteria**: Modify the filter step to include additional conditions for processing profiles.
- **API Configuration**: Adjust API parameters to retrieve specific fields or extend usage.
- **Output Format**: Customize how the enriched data is appended to the Google Sheet (e.g., format, column mappings).
- **Error Handling**: Add steps to handle API rate limits or missing data for smoother automation.

This workflow streamlines LinkedIn profile enrichment, making it faster and more effective for data-driven decision-making.
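The validation step can be done with a Filter node or a small Code node. Below is a minimal sketch of the Code-node variant, assuming the sheet rows arrive with a `linkedin_url` column and that enriched rows have a non-empty `full_name` column (the column names other than `linkedin_url` are assumptions; rename them to match your sheet):

```javascript
// n8n Code node (Run Once for All Items) — skip rows that already look enriched.
// Assumes one item per Google Sheets row with a `linkedin_url` column and a
// `full_name` column that is filled in once a row has been enriched (assumption).
const toEnrich = [];

for (const item of $input.all()) {
  const row = item.json;
  const hasUrl = typeof row.linkedin_url === 'string' && row.linkedin_url.trim() !== '';
  const alreadyEnriched = row.full_name && String(row.full_name).trim() !== '';

  if (hasUrl && !alreadyEnriched) {
    toEnrich.push(item); // only unprocessed profiles continue to the RapidAPI call
  }
}

return toEnrich;
```

Anything this node returns flows on to the HTTP Request node that calls the Fresh LinkedIn Profile Data API; rows that are filtered out are simply not re-processed.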
by Hubschrauber
A single workflow with two flows/paths that combine to handle the backup sequence for Zigbee device configuration from HomeAssistant / zigbee2mqtt. It provides a way to automate a periodic capture of Zigbee coordinator and device pairings, to speed up the recovery process when/if the HomeAssistant instance needs to be rebuilt. Setting up similar automation without n8n (e.g. shell scripts and system timers) is considerably more challenging. n8n makes it easy, and this template should remove any other excuse not to do it.

**Flow 1**
- Triggered by Cron/Timer, set to whatever interval you want for backups
- Sends an MQTT message to request a zigbee2mqtt backup (returned via a separate message)

**Flow 2**
- Triggered by the zigbee2mqtt backup message
- Extracts the zip file from the message and stores it somewhere, with a date-stamp in the filename, via SFTP

**Setup**
- Create an MQTT connection named "MQTT Account" with the appropriate protocol (mqtt), host, port (1883), username, and password
- Create an SFTP connection named "SFTP Zigbee Backups" with the appropriate host, port (22), username, and password or key.

**Reference**
This article describes the mqtt parts.
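As a rough illustration of Flow 2's extract-and-date-stamp step, the snippet below shows one way a Code node could turn the backup response into a binary file for the SFTP node. It assumes the MQTT trigger delivers the zigbee2mqtt response as JSON with a base64-encoded zip under `data.zip`, and uses the Code node's `prepareBinaryData` helper; check your actual payload shape and n8n version before relying on it:

```javascript
// n8n Code node — sketch of Flow 2's "extract zip and date-stamp filename" step.
// Assumes the zigbee2mqtt backup response arrives as JSON with a base64 zip at
// message.data.zip (verify against your broker's actual payload).
const message = $input.first().json;
const zipBase64 = message?.data?.zip;
if (!zipBase64) {
  throw new Error('No zip payload found in the zigbee2mqtt backup response');
}

const stamp = new Date().toISOString().slice(0, 10); // e.g. 2024-05-01
const fileName = `zigbee2mqtt-backup-${stamp}.zip`;

return [
  {
    json: { fileName },
    binary: {
      // The SFTP node can then upload this binary property as the date-stamped file.
      data: await this.helpers.prepareBinaryData(Buffer.from(zipBase64, 'base64'), fileName),
    },
  },
];
```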
by Thomas Janssen
Build an MCP Server which has access to a semantic database to perform Retrieval Augmented Generation (RAG).

**Tutorial**
Click here to watch the full tutorial on YouTube

**How it works**
This MCP Server has access to a local semantic database (Qdrant) and answers questions asked by the MCP Client.

**AI Agent Template**
Click here to navigate to the AI Agent n8n workflow which uses this MCP server.

**Warning**
This flow only runs locally and cannot be executed on the n8n cloud platform because of the MCP Client Community Node.

**Installation**
- Install n8n + Ollama + Qdrant using the Self-hosted AI starter kit
- Make sure to install Llama 3.2 and mxbai-embed-large as the embedding model
- Activate the n8n flow
- Run the "RAG Ingestion Pipeline" and upload some PDF documents

**How to use it**
Run the MCP Client workflow and ask a question. It will be answered either from the semantic database or via the search engine API.

**More detailed instructions**
Missed a step? Find more detailed instructions here: https://brightdata.com/blog/ai/news-feed-n8n-openai-bright-data
by n8n Team
This workflow adds a new product in Stripe whenever a new product has been added to Pipedrive.

**Prerequisites**
- Stripe account and Stripe credentials
- Pipedrive account and Pipedrive credentials

**How it works**
- The Pipedrive Trigger node starts the workflow when a new product is added.
- An HTTP Request node creates a new product in Stripe using the previous input.
- A Merge node combines the data of both the Pipedrive and Stripe inputs. The output contains the Pipedrive data merged with the Stripe data; the merge occurs based on the index of the items.
- The Item Lists node splits the prices into separate items.
- An HTTP Request node creates the price records in Stripe.
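To make the last two steps concrete, here is a hedged sketch of a Code node that could sit between the Item Lists step and the final HTTP Request, shaping each Pipedrive price into a Stripe price payload. The `stripeProductId` field and the `{ price, currency }` shape of the Pipedrive prices are assumptions; check the actual output of your trigger and merge nodes:

```javascript
// n8n Code node — sketch of turning one merged Pipedrive/Stripe item into Stripe
// price payloads. Field names on the input item are assumptions; adjust as needed.
const item = $input.first().json;

const pricePayloads = (item.prices || []).map((p) => ({
  product: item.stripeProductId,
  currency: String(p.currency || 'usd').toLowerCase(),
  // Stripe expects amounts in the smallest currency unit (e.g. cents).
  unit_amount: Math.round(Number(p.price) * 100),
}));

// One item per price, ready for an HTTP Request node that POSTs to
// https://api.stripe.com/v1/prices with form-encoded body parameters.
return pricePayloads.map((json) => ({ json }));
```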
by Yaron Been
🚀 Automated Investor Intelligence: CrunchBase to Google Sheets Data Harvester!

**Workflow Overview**
This n8n automation is an investor intelligence tool designed to transform market research into actionable insights. By connecting CrunchBase, data processing, and Google Sheets, this workflow:

- Discovers investor insights: automatically retrieves the latest investor data, tracks key investment organizations, and eliminates manual market research effort.
- Processes data intelligently: filters investor-specific organizations, extracts critical investment metrics, and ensures comprehensive market intelligence.
- Logs data seamlessly: automatically updates Google Sheets, builds a real-time investor database, and enables rapid market trend analysis.
- Gathers intelligence on a schedule: daily automated tracking and consistent investor insight updates with zero manual intervention.

**Key Benefits**
- 🤖 Full Automation: zero-touch investor research
- 💡 Smart Filtering: targeted investment insights
- 📊 Comprehensive Tracking: detailed investor intelligence
- 🌐 Multi-Source Synchronization: seamless data flow

**Workflow Architecture**
- 🔹 Stage 1: Investor Discovery — scheduled trigger for daily market scanning, CrunchBase API integration, and intelligent filtering (investor-specific organizations, key investment metrics, most recent data).
- 🔹 Stage 2: Data Extraction — comprehensive metadata parsing, key information retrieval, structured data preparation.
- 🔹 Stage 3: Data Logging — Google Sheets integration, automatic row appending, real-time database updates.

**Potential Use Cases**
- Venture Capitalists: investment ecosystem mapping
- Startup Scouts: investor trend analysis
- Market Researchers: comprehensive investment insights
- Business Development: strategic partnership identification
- Investment Analysts: market intelligence gathering

**Setup Requirements**
- CrunchBase API: API credentials, configured access permissions, investor organization tracking setup
- Google Sheets: connected Google account, prepared tracking spreadsheet, appropriate sharing settings
- n8n Installation: cloud or self-hosted instance, workflow configuration, API credential management

**Future Enhancement Suggestions**
- 🤖 Advanced investment trend analysis
- 📊 Multi-source investor aggregation
- 🔔 Customizable alert mechanisms
- 🌐 Expanded investment stage tracking
- 🧠 Machine learning insights generation

**Technical Considerations**
- Implement robust error handling
- Use secure API authentication
- Maintain flexible data processing
- Ensure compliance with API usage guidelines

**Ethical Guidelines**
- Respect business privacy
- Use data for legitimate research
- Maintain transparent information gathering
- Provide proper attribution

**Hashtags**
#InvestorIntelligence #VentureCapital #MarketResearch #AIWorkflow #DataAutomation #StartupEcosystem #InvestmentTracking #BusinessIntelligence #TechInnovation #StartupFunding

**Workflow Visualization**
[Daily Trigger] ⬇️ [Fetch Investor Data] ⬇️ [Extract Investor Fields] ⬇️ [Log to Google Sheets]

**Connect With Me**
Ready to revolutionize your investor research?
📧 Email: Yaron@nofluff.online
🎥 YouTube: @YaronBeen
💼 LinkedIn: Yaron Been

Transform your market intelligence with intelligent, automated workflows!
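For the "Extract Investor Fields" step, a small Code node between the CrunchBase request and the Google Sheets append could look roughly like this. The response structure (an `entities` array with a `properties` object per entry) and every field path are assumptions about the CrunchBase endpoint you use; inspect the actual HTTP Request output and adjust the paths:

```javascript
// n8n Code node — sketch of filtering investor organizations and extracting fields.
// All field paths below are assumptions; adapt them to your CrunchBase response.
const body = $input.first().json;
const entities = body.entities || [];

const rows = entities
  // Keep only organizations that look like investors.
  .filter((e) => {
    const categories = e.properties?.categories || [];
    return categories.some((c) => /investor|venture|capital/i.test(String(c.value ?? c)));
  })
  .map((e) => ({
    name: e.properties?.name ?? '',
    description: e.properties?.short_description ?? '',
    website: e.properties?.website_url ?? '',
    retrievedAt: new Date().toISOString(),
  }));

// One item per row, ready for the Google Sheets "Append Row" node.
return rows.map((json) => ({ json }));
```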
by Emmanuel Bernard
🎥 AI Video Generator with HeyGen

🚀 Create AI-Powered Videos in n8n with HeyGen
This workflow enables you to generate realistic AI videos using HeyGen, an advanced AI platform for video automation. Simply input your text, choose an AI avatar and voice, and let HeyGen generate a high-quality video for you – all within n8n!

✅ Ideal for:
- Content creators & marketers 🏆
- Automating personalized video messages 📩
- AI-powered video tutorials & training materials 🎓

🔧 How It Works
1️⃣ Provide a text script – this will be spoken in the AI-generated video.
2️⃣ Select an avatar & voice – choose from a variety of AI-generated avatars and voices.
3️⃣ Run the workflow – HeyGen processes your request and generates a video.
4️⃣ Download your video – get the direct link to your AI-powered video!

⚡ Setup Instructions
1️⃣ Get Your HeyGen API Key: Sign up for a HeyGen account, go to your account settings, and retrieve your API key.
2️⃣ Configure n8n Credentials: In n8n, create new credentials and select "Custom Auth" as the authentication type. In the Name field enter X-Api-Key, and in the Value field paste your API key from HeyGen. Update the two HTTP Request nodes with these credentials.
3️⃣ Select an AI Avatar & Voice: Browse the available avatars & voices in your HeyGen account and copy the Avatar ID and Voice ID for your video.
4️⃣ Run the Workflow: Enter your text, avatar ID, and voice ID, then execute the workflow – your video will be generated automatically!

🎯 Why Use This Workflow?
✔️ Fully Automated – no manual editing required!
✔️ Realistic AI Avatars – choose from a variety of digital avatars.
✔️ Seamless Integration – works directly within your n8n workflow.
✔️ Scalable & Fast – generate multiple videos in minutes.

🔗 Start automating AI-powered video creation today with n8n & HeyGen!
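As a rough idea of what the first HTTP node sends, here is a sketch of a Code node that assembles a request body from the text, avatar ID, and voice ID. The payload shape follows HeyGen's v2 video-generate endpoint as I understand it, and the input field names (`text`, `avatarId`, `voiceId`) are hypothetical; verify both against the current HeyGen API documentation and your own workflow fields:

```javascript
// n8n Code node — sketch of building the body for the HeyGen video-generation
// HTTP Request node. Payload shape and input field names are assumptions.
const { text, avatarId, voiceId } = $input.first().json; // hypothetical input fields

return [
  {
    json: {
      video_inputs: [
        {
          character: { type: 'avatar', avatar_id: avatarId, avatar_style: 'normal' },
          voice: { type: 'text', input_text: text, voice_id: voiceId },
        },
      ],
      dimension: { width: 1280, height: 720 },
    },
  },
];
```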
by Baptiste Fort
**Who is it for?**
This workflow is for marketers, sales teams, and local businesses who want to quickly collect leads (business name, phone, website, and email) from Google Maps and store them in Airtable. You can use it for real estate agents, restaurants, therapists, or any local niche.

**How it works**
- Scrape Google Maps with the Apify Google Maps Extractor.
- Clean and structure the data (name, address, phone, website).
- Visit each website and retrieve the raw HTML.
- Use GPT to extract the most relevant email from the site content.
- Save everything to Airtable for easy filtering and future outreach.

It works for any location or keyword – just adapt the input in Apify.

**Requirements**
Before running this workflow, you’ll need:
- ✅ Apify account (to use the Google Maps Extractor)
- ✅ OpenAI API key (for GPT email extraction)
- ✅ Airtable account & base with the following fields: Business Name, Address, Website, Phone Number, Email, Google Maps URL

**Airtable Structure**
Your Airtable base should contain these columns:

| Title | Street | Website | Phone Number | Email | URL |
|---|---|---|---|---|---|
| Paris Real Estate Agency | 10 Rue de Rivoli, Paris | https://agency.fr | +33 1 23 45 67 | contact@agency.fr | maps.google.com/... |
| Example Business 2 | 25 Avenue de l’Opéra | https://example.fr | +33 1 98 76 54 | info@example.fr | maps.google.com/... |
| Example Business 3 | 8 Boulevard Haussmann | https://demo.fr | +33 1 11 22 33 | contact@demo.fr | maps.google.com/... |

**Error Handling**
- **Missing websites:** If a business has no website, the flow skips the scraping step.
- **No email found:** GPT returns Null if no email is detected.
- **API rate limits:** Add a Wait node between requests to avoid Apify/OpenAI throttling.

Now let’s take a detailed look at how to set up this automation, using real estate agencies in Paris as an example.

**Step 1 – Launch the Google Maps Scraper**
Start with a When clicking Execute workflow trigger to launch the flow manually. Then, add an HTTP Request node with the method set to POST.

👉 Head over to Apify: Google Maps Extractor (https://apify.com/compass/google-maps-extractor). On the page:
- Enter your business keyword (e.g., real estate agency, hairdresser, restaurant)
- Set the location you want to target (e.g., Paris, France)
- Choose how many results to fetch (e.g., 50)
- Optionally, use filters (only places with a website, by category, etc.)

⚠️ No matter your industry, this works — just adapt the keyword and location.

Once everything is filled in:
- Click Run to test.
- Then, go to the top right → click on API.
- Select the API endpoints tab.
- Choose Run Actor synchronously and get dataset items.
- Copy the URL and paste it into your HTTP Request (in the URL field).

Then enable:
- ✅ Body Content Type → JSON
- ✅ Specify Body Using JSON

Go back to Apify, click on the JSON tab, copy the entire code, and paste it into the JSON body field of your HTTP Request. At this point, if you run your workflow, you should see a structured output with fields similar to: title, subTitle, price, categoryName, address, neighborhood, street, city, postalCode, …

**Step 2 – Clean and structure the data**
Once the raw data is fetched from Apify, we clean it up using the Edit Fields node.
In this step, we manually select and rename the fields we want to keep:
- Title → {{ $json.title }}
- Address → {{ $json.address }}
- Website → {{ $json.website }}
- Phone → {{ $json.phone }}
- URL → {{ $json.url }}

This node lets us keep only the essentials in a clean format, ready for the next steps. On the right: a clear and usable table, easy to work with.

**Step 3 – Loop Over Items**
Now that our data is clean (see Step 2), we’ll go through it item by item to handle each contact individually. The Loop Over Items node does exactly that: it takes each row from the table (each contact pulled from Apify) and runs the next steps on them, one by one.

👉 Just set a Batch Size of 20 (or more, depending on your needs). Nothing tricky here, but this step is essential to keep the flow dynamic and scalable.

**Step 4 – Edit Fields (again)**
After looping through each contact one by one (thanks to Loop Over Items), we're refining the data a bit more. This time, we only want to keep the website. We use the Edit Fields node again, in Manual Mapping mode, with just:
- Website → {{ $json.website }}

The result on the right? A clean list with only the URLs extracted from Google Maps. 🔧 This simple step helps isolate the websites so we can scrape them one by one in the next part of the flow.

**Step 5 – Scrape Each Website with an HTTP Request**
Let’s continue the flow: in the previous step, we isolated the websites into a clean list. Now, we’re going to send a request to each URL to fetch the content of the site. ➡️ To do this, we add an HTTP Request node, using the GET method, and set the URL to {{ $json.website }} (this value comes from the previous Edit Fields node).

This node simply “visits” each website automatically and returns the raw HTML code (as shown on the right). 📄 That’s the material we’ll use in the next step to extract email addresses (and any other useful info). We’re not reading this code manually — we’ll scan through it to detect the patterns that matter to us. This is a technical but crucial step: it’s how we turn a URL into real, usable data.

**Step 6 – Extract the Email with GPT**
Now that we've retrieved all the raw HTML from the websites using the HTTP Request node, it's time to analyze it. 💡 Goal: detect the most relevant email address on each site (ideally the main contact or owner). 👉 To do that, we’ll use an OpenAI node (Message a Model). Here’s how to configure it:

⚙️ Key parameters:
- Model: GPT-4.1-mini (or any GPT-4+ model available)
- Operation: Message a Model
- Resource: Text
- Simplify Output: ON
- Prompt (message you provide):

Look at this website content and extract only the email I can contact this business. In your output, provide only the email and nothing else. Ideally, this email should be of the business owner, so if you have 2 or more options, try for most authoritative one. If you don't find any email, output 'Null'. Exemplary output of yours: name@examplewebsite.com {{ $json.data }}

(A small regex-based fallback for this extraction step is sketched in the code example after Step 7.)

**Step 7 – Save the Data in Airtable**
Once we’ve collected everything — the business name, address, phone number, website… and most importantly the email extracted via ChatGPT — we need to store all of this somewhere clean and organized. 👉 The best place in this workflow is Airtable.

📦 Why Airtable? Because it allows you to:
- Easily view and sort the leads you've scraped
- Filter, tag, or enrich them later
- And most importantly… reuse them in future automations

⚙️ What we're doing here: we add an Airtable → Create Record node to insert each lead into our database.
Inside this node, we manually map each field with the data collected in the previous steps:

| Airtable Field | Description | Value from n8n |
| -------------- | ------------------------ | ------------------------------------------ |
| Title | Business name | {{ $('Edit Fields').item.json.Title }} |
| Street | Full address | {{ $('Edit Fields').item.json.Address }} |
| Website | Website URL | {{ $('Edit Fields').item.json.Website }} |
| Phone Number | Business phone number | {{ $('Edit Fields').item.json.Phone }} |
| Email | Email found by ChatGPT | {{ $json.message.content }} |
| URL | Google Maps listing link | {{ $('Edit Fields').item.json.URL }} |

🧠 Reminder: we’re keeping only clean, usable data — ready to be exported, analyzed, or used in cold outreach campaigns (email, CRM, enrichment, etc.). ➡️ And the best part? You can rerun this workflow automatically every week or month to keep collecting fresh leads 🔁.
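As a complement (or fallback) to the GPT extraction in Step 6, a simple Code node can pull candidate emails from the raw HTML with a regular expression. A minimal sketch, assuming the scraped page body arrives in $json.data (the same field referenced in the GPT prompt):

```javascript
// n8n Code node — regex fallback for Step 6, run on the raw HTML from the HTTP
// Request node. Assumes the page body is in $json.data.
const html = $input.first().json.data || '';

// Find email-looking strings and drop obvious false positives (image names, example domains).
const matches = html.match(/[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}/gi) || [];
const emails = [...new Set(matches)].filter(
  (e) => !/\.(png|jpe?g|gif|svg|webp)$/i.test(e) && !/example\.com$/i.test(e)
);

// Return the first candidate, or 'Null' to mirror the GPT prompt's convention.
return [{ json: { email: emails[0] || 'Null' } }];
```

This is cheaper than an LLM call, but it cannot judge which address is the "most authoritative" one, which is why the template uses GPT as the primary extractor.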
by Oneclick AI Squad
AI-Powered Email Draft Automation Workflow

In this guide, we’ll walk you through setting up an AI-driven workflow that automatically processes incoming emails using a custom AI model (e.g., Llama), prepares email content, and saves it as a Gmail draft. Ready to automate your email drafting process? Let’s dive in!

**What’s the Goal?**
- Automatically detect and process new emails via IMAP.
- Use a custom AI model to analyze and generate email content.
- Prepare structured and relevant email responses.
- Save the generated content as a Gmail draft for review or sending.
- Enable 24/7 email automation with seamless integration.

By the end, you’ll have a self-running email assistant that drafts responses effortlessly.

**Why Does It Matter?**
Manual email drafting is time-consuming and prone to delays. Here’s why this workflow is a game changer:
- **Zero Human Error:** AI ensures consistent and accurate drafts.
- **Time-Saving Automation:** Instantly process and draft emails, boosting efficiency.
- **24/7 Availability:** Handle emails anytime without manual intervention.
- **Focus on Strategy:** Free your team from repetitive drafting tasks.

Think of it as your tireless email drafting assistant that never misses a beat.

**How It Works**
Here’s the step-by-step magic behind the automation:

Step 1: Trigger the Workflow
- Detect new emails using IMAP via the Check New Email (IMAP) node.
- Capture incoming email content for processing.

Step 2: Process Email with AI
- Send the email text to a custom AI model (e.g., Llama) for analysis.
- Use the Custom AI Model node to generate a context-aware response or draft content.

Step 3: Prepare Email Content
- Format the AI-generated content into a polished email structure using the Prepare Email Content node.
- Ensure the content is ready for drafting with proper salutations and structure.

Step 4: Save as Gmail Draft
- Route the prepared email content to the Save as Gmail Draft node.
- Save the draft in Gmail for review or manual sending.

Step 5: Log & Optimize
- Log all processed emails and drafts in a database (e.g., Airtable, Google Sheets).
- Continuously improve the AI model based on feedback or new email patterns.

**How to Use the Workflow**
Importing a workflow in n8n is a straightforward process that allows you to use pre-built or shared workflows to save time. Below is a step-by-step guide to importing the Smart Email Draft Generator workflow in n8n, based on the official documentation and community resources.

Steps to Import a Workflow in n8n

1. Obtain the Workflow JSON
- **Source the Workflow:** Workflows are typically shared as JSON files or code snippets. You might receive them from the n8n community (e.g., the n8n.io workflows page), a colleague or tutorial (e.g., a .json file or copied JSON code), or an export from another n8n instance.
- **Format:** Ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or as text copied to your clipboard.

2. Access the n8n Workflow Editor
- **Log in to n8n:** Open your n8n instance (via n8n Cloud or your self-hosted instance) and navigate to the Workflows tab in the n8n dashboard.
- **Open a New Workflow:** Click Add Workflow to create a blank workflow, or open an existing workflow if you want to merge the imported workflow.

3. Import the Workflow
Option 1: Import via JSON Code (Clipboard)
- In the n8n editor, click the three dots (⋯) in the top-right corner to open the menu.
- Select Import from Clipboard.
- Paste the JSON code of the workflow into the provided text box.
- Click Import to load the workflow into the editor.
Option 2: Import via JSON File
- In the n8n editor, click the three dots (⋯) in the top-right corner.
- Select Import from File.
- Choose the .json file from your computer.
- Click Open to import the workflow.

**Setup Notes**
- **IMAP Credentials:** Configure IMAP settings in the Check New Email (IMAP) node with your email account credentials (e.g., Gmail IMAP settings).
- **Custom AI Model:** Set up the Custom AI Model node with your AI model credentials (e.g., Llama API key or endpoint).
- **Gmail Integration:** Authorize the Save as Gmail Draft node with Gmail API credentials to save drafts.
- **Content Customization:** Adjust the Prepare Email Content node to tailor the email structure or tone as needed.
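For orientation, here is a hedged sketch of what the Prepare Email Content step could look like if implemented as a Code node. It assumes the item carries the original email fields from the IMAP trigger (`from`, `subject`) and the AI model's reply under `aiReply`; those names are assumptions, so map them to your actual node outputs:

```javascript
// n8n Code node — sketch of the "Prepare Email Content" step.
// Assumes `from`, `subject` come from the IMAP trigger and `aiReply` from the AI
// model node (field names are assumptions; adjust to your workflow).
const item = $input.first().json;

const originalSubject = item.subject || '';
const senderName = (item.from || '').split('<')[0].trim() || 'there';
const aiReply = item.aiReply || item.text || '';

return [
  {
    json: {
      // The Gmail "Create Draft" node can map these fields directly.
      to: item.from,
      subject: originalSubject.startsWith('Re:') ? originalSubject : `Re: ${originalSubject}`,
      message: `Hi ${senderName},\n\n${aiReply}\n\nBest regards,`,
    },
  },
];
```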
by Hostinger
This n8n workflow template is designed to help system administrators and DevOps professionals monitor key resource usage metrics — CPU, RAM, and Disk — on a VPS (Virtual Private Server). The workflow automatically checks these resources every 15 minutes and sends an email alert if any resource usage exceeds the 80% threshold. This proactive monitoring helps maintain optimal server performance and prevents resource-related downtime.

**Who This Workflow Is For**
- System Administrators managing Linux-based servers who need to ensure their systems are running smoothly without manual monitoring.
- DevOps Professionals who manage multiple environments and need automated tools to alert them to potential issues before they affect operations.
- IT Support Teams who require an easy way to keep tabs on server health across an organization’s infrastructure.

**How It Works**
- Schedule Trigger: The workflow is triggered every 15 minutes by a Cron node.
- Resource Checks: Separate SSH Command nodes are configured to execute specific commands that check the current usage of RAM, Disk, and CPU.
- Data Aggregation: The results from each check are merged using a Merge node, which combines the data into a single payload for analysis.
- Threshold Analysis: A Function node evaluates whether any resource’s usage exceeds the predefined 80% threshold (see the sketch below).
- Alerts: If any metric exceeds the threshold, an email alert is sent through an Email node, ensuring that administrators can react promptly to potential issues.

**Setup Steps**
- Configure SSH Nodes: Update each SSH node with the appropriate credentials and target server details where the resource checks will be performed.
- Set Thresholds: If different sensitivity levels are required, review and adjust the resource usage thresholds within the Function node.
- Email Configuration: Enter the correct email addresses in the Email node for where alerts should be sent. Ensure that your email-sending credentials and server details are correctly configured.
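A minimal sketch of the Threshold Analysis step, assuming the Merge node hands over three numeric percentages (the field names `cpuUsagePercent`, `ramUsagePercent`, `diskUsagePercent` are illustrative, as are the shell commands mentioned in the comment; rename everything to match how your SSH nodes parse their output):

```javascript
// n8n Function/Code node — sketch of the "Threshold Analysis" step.
// Assumes the merged payload carries three percentage values, e.g. parsed from
// commands such as `free`, `df -h /`, and `top -bn1` in the SSH nodes.
const THRESHOLD = 80;
const metrics = $input.first().json;

const usage = {
  cpu: Number(metrics.cpuUsagePercent),
  ram: Number(metrics.ramUsagePercent),
  disk: Number(metrics.diskUsagePercent),
};

const breaches = Object.entries(usage)
  .filter(([, value]) => value > THRESHOLD)
  .map(([name, value]) => `${name.toUpperCase()} at ${value}%`);

return [
  {
    json: {
      alert: breaches.length > 0,
      // The Email node can use this as the message body when `alert` is true.
      message: breaches.length
        ? `Resources above ${THRESHOLD}%: ${breaches.join(', ')}`
        : 'All resources within limits',
    },
  },
];
```

An IF node (or the Function node itself) can then branch on `alert` so that the email is only sent when a threshold is actually breached.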
by Zacharia Kimotho
This workflow takes over the job of regularly backing up your workflows, using Google Drive rather than GitHub as the tool to host the backups. It is a good way to keep track of your workflows so that you never lose any of them in case your n8n instance goes down.

**How does it work**
- Creates a new folder within a specified folder, named with the time it's backed up
- Loops over all workflows, converts each one to a JSON file and uploads it to the created folder
- Gets the previous backups and deletes them

This keeps things clean and simple: the backup is straightforward, and no stale cache of old workflows accumulates on your drive.

**Setup**
- Create a new folder
- Create new service account credentials
- Share the folder with the service account email
- Upload this workflow to your canvas and map the credentials
- Set the schedule you need your backups to run on
- Activate the workflow

Happy Productivity!
@Imperol
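The "convert each workflow to a JSON file" step could be done with a Code node along these lines. It assumes the previous node (the n8n API node) outputs one item per workflow with at least `id` and `name`, and it uses the Code node's `prepareBinaryData` helper; treat it as a sketch and adapt it to your n8n version:

```javascript
// n8n Code node — sketch of converting each workflow into a date-stamped JSON file.
// Assumes one input item per workflow with `id` and `name` fields.
const stamp = new Date().toISOString().slice(0, 10);
const results = [];

for (const item of $input.all()) {
  const wf = item.json;
  const safeName = String(wf.name || wf.id).replace(/[^a-z0-9_-]+/gi, '_');
  const fileName = `${safeName}_${stamp}.json`;

  results.push({
    json: { fileName },
    binary: {
      // The Google Drive "Upload" node can take this binary property as the file.
      data: await this.helpers.prepareBinaryData(
        Buffer.from(JSON.stringify(wf, null, 2)),
        fileName,
        'application/json'
      ),
    },
  });
}

return results;
```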
by Roger Filomeno
**Introduction**
This workflow template helps you determine whether a Twitch user's stream is currently live or offline.

**Setup Instructions**
The Document node holds the sample Twitch username you wish to check. You may adapt it in your workflow by replacing it with a chain that contains the Twitch username you want to check. This value is passed to the GraphQL node query as $('Document').item.json.twitch, so make sure to change this based on your workflow.

**How it Works**
The important nodes here are the GraphQL and IF nodes. The GraphQL node queries the Twitch API, and the output returns a document with the stream property. The IF node then checks whether this property has a value: if it is null, the user is offline; otherwise the user is online/live.

**Common Use Cases**
You can use this with other workflow templates to post live stream alerts to Twitter/X, Bluesky, and Discord via webhooks, etc., to notify your community to join your stream. You may also use an LLM node to write a custom alert based on the value of the title property.

**How to adjust this template**
If you want to check a list of Twitch channels, you can simply exchange the Document set node at the beginning with your list of channels.

For more information on the GraphQL output please see the official Twitch API documentation: Get Streams
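If you prefer to do the live/offline check in code rather than with the IF node, a Code node placed after the GraphQL node could derive a boolean like this. The response shape (`data.user.stream` being null when offline) is an assumption about the query used; adjust the paths to whatever your GraphQL node actually returns:

```javascript
// n8n Code node — sketch of the live/offline check the IF node performs.
// Assumes a response like { data: { user: { stream: { title } | null } } };
// adjust the paths to your actual GraphQL query.
const response = $input.first().json;
const stream = response?.data?.user?.stream ?? null;

return [
  {
    json: {
      twitch: $('Document').item.json.twitch, // username set earlier in the flow
      isLive: stream !== null,
      title: stream?.title ?? null, // handy for an LLM-written custom alert
    },
  },
];
```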
by Pat
**Who is this for?**
This workflow template is perfect for content creators, researchers, students, or anyone who regularly works with audio files and needs to transcribe and summarize them for easy reference and organization.

**What problem does this workflow solve?**
Transcribing audio files and summarizing their content can be time-consuming and tedious when done manually. This workflow automates the process, saving users valuable time and effort while ensuring accurate transcriptions and concise summaries.

**What this workflow does**
This template automates the following steps:
- Monitors a specified Google Drive folder for new audio files
- Sends the audio file to OpenAI's Whisper API for transcription
- Passes the transcribed text to GPT-4 for summarization
- Creates a new page in Notion with the summary

**Setup**
To set up this workflow:
- Connect your Google Drive, OpenAI, and Notion accounts to n8n
- Configure the Google Drive node with the folder you want to monitor for new audio files
- Set up the OpenAI node with your API key and desired parameters for Whisper and GPT-4
- Specify the Notion database where you want the summaries to be stored

**How to customize this workflow**
- Adjust the Google Drive folder being monitored
- Modify the OpenAI node parameters to fine-tune the transcription and summarization process
- Change the Notion database or page properties to match your preferred structure

With this AI-powered workflow, you can effortlessly transcribe audio files, generate concise summaries, and store them in a structured manner within Notion. Streamline your audio content processing and organization with this automated template.
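To fine-tune the summarization step, a small Code node between the Whisper and GPT-4 nodes can build the prompt explicitly. A minimal sketch, assuming the transcription node returns the transcript in $json.text (check the actual output field of your OpenAI node):

```javascript
// n8n Code node — sketch of building the GPT-4 summarization prompt from the
// Whisper output. Assumes the transcript arrives in $json.text.
const transcript = $input.first().json.text || '';

const prompt = [
  'Summarize the following audio transcript in 5-8 concise bullet points,',
  'then add a one-sentence takeaway. Keep names, dates, and action items.',
  '',
  transcript,
].join('\n');

// Pass `prompt` to the GPT-4 node, and keep the transcript for the Notion page body.
return [{ json: { prompt, transcript } }];
```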