by Andrew
Who is this for?
This workflow is ideal for n8n self-hosted users, DevOps engineers, and automation developers who want to automatically back up their n8n workflows to GitHub on a regular basis.

What problem is this workflow solving?
Manually backing up n8n workflows is time-consuming and prone to human error. This workflow automates the backup process, ensuring that all workflows are safely stored in a version-controlled GitHub repository every 24 hours.

What this workflow does
This automation runs daily to back up all workflows from your n8n instance to a specified GitHub repository. Each workflow is saved as a .json file named after its unique ID and organized into the folder path defined by repo_path (see the sketch after the setup list). The workflow manages memory usage efficiently by recursively calling itself. Once the backup is complete, it optionally sends a Slack notification to confirm success.

Setup
- Configure the Config node in the subworkflow to set: GitHub Repo Owner, GitHub Repo Name, and the main folder path (repo_path).
- Connect your GitHub and (optionally) Slack credentials.
- Set the workflow to run on a daily cron schedule.
- Test the workflow manually to confirm the GitHub integration works.

Sign up for a free consultation and find out how n8n can help you.
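Below is a minimal Code-node sketch of the path-building idea described above. It is not the template's exact code: it assumes the Config node exposes repo_path and that each incoming item is a workflow object returned by the n8n API, so adjust the names to your own setup.

```javascript
// Minimal sketch: turn each workflow export into a GitHub file path.
// Assumes the Config node provides repo_path and each item is a workflow
// returned by the n8n API (id, name, nodes, connections, ...).
const { repo_path } = $('Config').first().json;

return $input.all().map(item => {
  const wf = item.json;
  return {
    json: {
      filePath: `${repo_path}/${wf.id}.json`,           // file named after the workflow ID
      content: JSON.stringify(wf, null, 2),             // pretty-printed workflow JSON
      commitMessage: `Backup workflow ${wf.name} (${wf.id})`,
    },
  };
});
```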
by Oskar
This workflow uses an OpenAI Assistant to compose draft replies for labeled email messages and automatically attaches the drafts to their Gmail threads.

💡 You can add a knowledge base to your OpenAI Assistant to make the reply drafts highly customized (e.g. composing a response with product information in reply to a customer inquiry).

🎬 See this workflow in action in my YouTube video about automating Gmail.

How it works
The workflow is triggered at regular intervals (default: every 1 minute – you can change this value) to check for messages with a specific label (e.g., "AI"). The content of the retrieved email message is forwarded to the OpenAI Assistant node, which generates a reply draft. Next, the Assistant's response is converted to HTML, and a raw message in the RFC 2822 standard is composed (a minimal encoding sketch is shown at the end of this description). 💡 You can learn more about composing drafts with the Gmail API in the official Google documentation. The raw email message (reply draft) is encoded and attached to the original thread ID. Finally, the trigger label (in this case: "AI") is removed to prevent the workflow from looping.

Set up steps
- Set credentials for Gmail and OpenAI.
- Add a new label in your Gmail account for messages that should be handled by the workflow (e.g. name it "AI").
- Select this label in the first and last Gmail nodes in the workflow.
- Create and configure your OpenAI Assistant, then select it in the "OpenAI Assistant" node.
- Optionally: change the trigger interval (by default the interval is 1 minute).

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
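For reference, here is an illustrative sketch of the compose-and-encode step described above. It is not the template's exact code: field names such as replyHtml and originalMessage are assumptions.

```javascript
// Illustrative sketch: compose an RFC 2822 reply and base64url-encode it
// for the Gmail drafts API. Field names below are assumptions.
const { replyHtml, originalMessage } = $json;

const raw = [
  `To: ${originalMessage.from}`,
  `Subject: Re: ${originalMessage.subject}`,
  `In-Reply-To: ${originalMessage.messageId}`,
  `References: ${originalMessage.messageId}`,
  'Content-Type: text/html; charset=UTF-8',
  '',
  replyHtml,
].join('\r\n');

// The Gmail API expects URL-safe base64 without padding
const encoded = Buffer.from(raw)
  .toString('base64')
  .replace(/\+/g, '-')
  .replace(/\//g, '_')
  .replace(/=+$/, '');

return [{ json: { raw: encoded, threadId: originalMessage.threadId } }];
```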
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
The Automated Resume Job Matching Engine is an intelligent workflow designed for career platforms, HR tech startups, recruiting firms, and AI developers who want to streamline job-resume matching using real-time data from LinkedIn and job boards.

This workflow is tailored for:
- **HR Tech Founders** - Building next-gen recruiting products
- **Recruiters & Talent Sourcers** - Seeking automated candidate-job fit evaluation
- **Job Boards & Portals** - Enriching user experience with AI-driven job recommendations
- **Career Coaches & Resume Writers** - Offering personalized job fit analysis
- **AI Developers** - Automating large-scale matching tasks using LinkedIn and job data

What problem is this workflow solving?
Manually matching a resume to a job description is time-consuming, biased, and inefficient. Additionally, accessing live job postings and candidate profiles requires overcoming web scraping limitations. This workflow solves:
- Automated LinkedIn profile and job post data extraction using Bright Data MCP infrastructure
- Semantic matching between job requirements and the candidate resume using OpenAI 4o mini
- Pagination handling for high-volume job data
- End-to-end automation from scraping to delivery via webhook, persisting the matched job response to disk

What this workflow does
Bright Data MCP for Job Data Extraction
- Uses Bright Data MCP clients to extract multiple job listings (supports pagination)
- Pulls job data from LinkedIn with the pre-defined filtering criteria

OpenAI 4o mini LLM Matching Engine
- Extracts paginated job data from the Bright Data MCP results via the MCP scrape_as_html tool
- Extracts textual job description information from the scraped job pages, again using the scrape_as_html tool
- The AI Job Matching node compares the job description and the candidate resume to generate match scores with insights

Data Delivery
- Sends the final match report to a Webhook Notification endpoint (see the shaping sketch after the customization section)
- Persists the AI-matched job response to disk

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API Key. Visit Google AI Studio
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup
- Make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
- Make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Name the zone mcp_unlocker on the Bright Data control panel.
- In n8n, configure the OpenAI account credentials.
- In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox above as API_TOKEN=<your-token>.
- Update the Set input fields for the candidate resume, keywords, and other filtering criteria.
- Update the Webhook HTTP Request node with the Webhook endpoint of your choice.
- Update the file name and path to persist on disk.
How to customize this workflow to your needs
Target Different Job Boards
- Set the input fields with sites like Indeed, ZipRecruiter, or Monster
Customize Matching Criteria
- Adjust the prompt inside the AI Job Match node
- Include scoring metrics like skills match %, experience relevance, or cultural fit
Automate Scheduling
- Use a Cron node to periodically check for new jobs matching a profile
- Set triggers based on webhook or input form submissions
Output Customization
- Add Markdown/PDF formatting for report summaries
- Extend with Google Sheets export for internal analytics
Enhance Data Security
- Mask personal info before sending to external endpoints
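For context on the Data Delivery step above, here is a hedged sketch of how the AI match output could be shaped before the webhook notification and the write-to-disk step. The field names are assumptions rather than the template's exact schema.

```javascript
// Hedged sketch: normalize the AI match result before sending it onward.
// Field names (matchScore, matchedSkills, gaps) are illustrative assumptions.
const aiResult = $json; // parsed output of the AI Job Matching node

return [{
  json: {
    job: {
      title: aiResult.jobTitle,
      company: aiResult.company,
      url: aiResult.jobUrl,
    },
    matchScore: aiResult.matchScore,       // e.g. a 0-100 fit score
    matchedSkills: aiResult.matchedSkills, // skills present in both resume and posting
    gaps: aiResult.gaps,                   // requirements missing from the resume
    generatedAt: new Date().toISOString(),
  },
}];
```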
by Javier Hita
Who is this for?
This workflow is perfect for sales teams, business development professionals, recruitment agencies, and fractional CFO service providers who need to identify and qualify companies actively hiring. Whether you're prospecting for new clients, building a database of potential customers, or researching market opportunities, this automated solution saves hours of manual research while delivering high-quality, AI-analyzed leads.

What problem is this workflow solving?
Finding qualified prospects in the finance sector is time-consuming and often inefficient. Traditional methods involve:
- Manually browsing LinkedIn job postings for hours
- Difficulty distinguishing between genuine opportunities and recruitment spam
- Inconsistent lead categorization and qualification
- Risk of contacting the same companies multiple times
- Lack of structured data for sales team follow-up
This workflow automates the entire lead generation process, from data collection to AI-powered qualification, ensuring you focus only on the most promising opportunities.

What this workflow does
This comprehensive lead generation system performs six key functions:
1. Automated LinkedIn Job Scraping: Uses Apify's reliable LinkedIn Jobs Scraper to extract detailed job postings for finance positions, including company information, job descriptions, and contact details.
2. Smart Data Processing: Removes duplicates, filters companies by size, and structures data for consistent analysis across all leads (see the deduplication sketch after this list).
3. Intelligent Lead Categorization: Compares new leads against your existing database to optimize processing and avoid duplicate work.
4. AI-Powered Qualification: Leverages OpenAI's GPT-4 Mini to analyze each lead and determine:
   - Company Category: Consumer companies, Fractional CFO services, Recruiting agencies, or Other
   - Finance Role Validation: Confirms the position is genuinely finance-related
   - Seniority Level: Entry, Mid, Senior, Director, or C-Level classification
   - Job Summary: Concise description for quick sales team review
5. Automated Database Management: Stores qualified leads in Airtable with comprehensive profiles, preventing duplicates while maintaining data integrity.
6. Lead Scoring & Routing: Prioritizes leads based on processing status and qualification results for efficient sales team follow-up.
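Here is a hypothetical sketch of the deduplication idea from the Smart Data Processing step. The "Get Existing Leads" node and "Job Link" field come from this template's setup; the jobUrl field on the scraped items is an assumption to adapt to your scraper output.

```javascript
// Hypothetical sketch: drop postings whose URL already exists in Airtable.
// "Get Existing Leads" and "Job Link" come from this template; jobUrl is assumed.
const existing = new Set(
  $('Get Existing Leads').all().map(item => item.json['Job Link'])
);

return $input.all()
  .filter(item => !existing.has(item.json.jobUrl)) // keep only unseen postings
  .map(item => ({ json: item.json }));
```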
Setup
Prerequisites
You'll need accounts for three services:
- **Airtable** (Free tier supported) - For lead storage and management
- **Apify** (14-day free trial available) - For LinkedIn job scraping
- **OpenAI** (Pay-per-use) - For AI-powered lead analysis

Step 1: Create Required Credentials
Apify API Credential
- Sign up for an Apify account at apify.com
- Navigate to Settings → Integrations → API tokens
- Create a new API token
- In n8n, create a new Apify API credential with your token
OpenAI API Credential
- Create an account at platform.openai.com
- Generate an API key in the API section
- In n8n, create a new OpenAI credential with your key
Airtable Personal Access Token
- Go to airtable.com/create/tokens
- Create a personal access token with the following scopes: data.records:read, data.records:write, schema.bases:read
- In n8n, create a new Airtable Personal Access Token credential

Step 2: Set Up Airtable Base
Create a new Airtable base with the following structure:
Table Name: Qualified Leads
Required Fields:
- Company Name (Single line text)
- Job Title (Single line text)
- Is Finance Job (Checkbox)
- Seniority Level (Single select: Entry, Mid, Senior, Director, C-Level)
- Company Category (Single select: Consumer, Recruiting, Fractional CFO, Other)
- Job Summary (Long text)
- Company LinkedIn (URL)
- Job Link (URL)
- Posted Date (Date)
- Location (Single line text)
- Industry (Single line text)
- Company Employees (Number)

Step 3: Configure the Workflow
- Import the Workflow: Copy the JSON and import it into your n8n instance
- Update Credentials: Replace placeholder credential IDs with your actual credential IDs in the "Scrape LinkedIn Jobs" node (Apify credential), the "OpenAI GPT-4 Mini" node (OpenAI credential), and the "Save to Airtable" and "Get Existing Leads" nodes (Airtable credential)
- Configure Airtable Connection: Update the base ID and table ID in both Airtable nodes
- Set Search Parameters: In the "Edit Variables" node, configure (an example configuration sketch follows at the end of this section):
  - linkedinUrls: Your target LinkedIn job search URLs
  - maxEmployees: Maximum company size filter (default: 200)
  - batchSize: Processing batch size for API efficiency (default: 5)

Step 4: Test the Workflow
- Start with a small test by setting count: 50 in the HTTP Request node
- Use a specific LinkedIn job search URL (e.g., "CFO jobs in New York")
- Execute the workflow manually and verify results in your Airtable base
- Review the AI categorization accuracy and adjust prompts if needed

How to customize this workflow to your needs
Targeting Different Roles
Modify the LinkedIn search URLs in the "Edit Variables" node to target different positions:
- "https://www.linkedin.com/jobs/search/?keywords=Controller"
- "https://www.linkedin.com/jobs/search/?keywords=Finance%20Director"
- "https://www.linkedin.com/jobs/search/?keywords=VP%20Finance"
Adjusting Company Size Filters
Change the maxEmployees parameter to focus on different company segments:
- Startups: 1-50 employees
- SMBs: 51-500 employees
- Enterprise: 500+ employees
Customizing AI Analysis
Enhance the GPT-4 prompt in the "AI Lead Analyzer" node to include:
- Industry-specific criteria
- Geographic preferences
- Technology stack requirements
- Company growth stage indicators
Integration Options
Extend the workflow by adding:
- **Slack notifications** for new qualified leads
- **Email alerts** for high-priority prospects
- **CRM integration** (Salesforce, HubSpot, Pipedrive)
- **Lead enrichment** with additional data sources
Scheduling Automation
Set up the workflow to run automatically:
- **Daily**: For active prospecting campaigns
- **Weekly**: For ongoing market research
- **Monthly**: For periodic database updates
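As referenced in the Set Search Parameters step, here is an example configuration sketch for the "Edit Variables" node. The values are placeholders to replace with your own searches.

```javascript
// Example values for the "Edit Variables" node; a sketch only.
return [{
  json: {
    linkedinUrls: [
      'https://www.linkedin.com/jobs/search/?keywords=CFO&location=New%20York',
      'https://www.linkedin.com/jobs/search/?keywords=Finance%20Director',
    ],
    maxEmployees: 200, // skip companies larger than this
    batchSize: 5,      // leads processed per OpenAI batch
  },
}];
```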
Performance & Cost Optimization
- **API Efficiency**: The workflow processes leads in batches to optimize API usage
- **Smart Deduplication**: Avoids re-processing existing leads to reduce costs
- **Configurable Limits**: Adjust batch sizes and employee count filters based on your needs
- **Expected Costs**: Approximately $0.05-$0.20 per 100 analyzed leads (OpenAI costs)

Troubleshooting
Common Issues:
- **Rate Limiting**: Increase delays between API calls if you encounter rate limits
- **Data Quality**: Review LinkedIn search URLs for relevance to your target market
- **AI Accuracy**: Adjust prompts if categorization doesn't match your criteria
- **Airtable Errors**: Verify field names match exactly between workflow and base structure

Support Resources:
- Apify LinkedIn Scraper Documentation
- OpenAI API Documentation
- Airtable API Reference

Transform your lead generation process with this powerful, AI-driven workflow that delivers qualified prospects ready for immediate outreach.
by InfraNodus
This template can be used to find the content gaps in your competitors' discourse: identifying the topics they are not yet connecting and giving you an opportunity to fill in this gap with your content and product ideas. It will also generate research questions that will help bridge the gaps and generate new ideas.

The template showcases the use of multiple n8n nodes and processes:
- enriching a Google Sheets file with new data
- data extraction
- content enhancement using the GraphRAG approach
- content gap / research question generation

This approach can be very useful for research, marketing, and SEO applications, as you can quickly get an overview of the main topics that are available online for a certain niche and understand what is missing.

What are Content Gaps in Marketing and SEO?
In the context of SEO, content gaps are usually understood as the topics that your competitors rank for but you do not. However, it's hard to rank for these topics because there's very high competition. A much more effective way is to identify the gaps between the topics your competitors are talking about that are not yet bridged in their discourse. If you address these gaps in your content, you will increase the informational gain that your content offers and provide a novel perspective while touching upon the topics that are relevant in your field.

For example, if we analyze the top websites for "body and physical practices, fitness, etc." we will see that most of them are talking about the health and fitness aspects, and another big topic is the community aspect. However, there is a gap between the two topics: most of the websites (companies) that talk about this topic don't mention the two in the same context. This might be an opportunity: bridging the gap between health and fitness while also emphasizing the community aspect that comes with a collective practice.

How it works
This template consists of two stages:
1) Data enrichment of a Google Sheets file with a list of your competitors, using InfraNodus' GraphRAG to generate topical summaries and graph summaries for every URL you're analyzing.
2) Insight generation, using InfraNodus to identify the main topical clusters and gaps in those summaries; these insights are then added to the Google Sheets file.

Additionally, it contains a sub-workflow that you can activate and launch to ask the Perplexity model to conduct market research, find the companies that operate in your field, and populate the original Google Sheets file.

Here's a step-by-step description:
- Step 0: Populate the Google Sheets file with the company data (either manually, using the sub-workflow provided, or with Manus AI / Deep Research)
- Steps 1-2: Trigger and launch the workflow, extracting the company URL from the Google Sheets row
- Step 3: Scrape the URL content from the companies' websites and clean the data (a minimal cleanup sketch is shown at the end of this template description)
- Steps 5-7: Use the InfraNodus GraphRAG Content Enhancer to get a topical summary and graph summary
- Steps 8-10: Use InfraNodus AI to generate insight advice and research questions based on the content gaps

How to use
You need an InfraNodus GraphRAG API account and key to use this workflow.
- Create an InfraNodus account
- Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
- Create a separate knowledge graph for each expert (using PDF / content import options) in InfraNodus
- For each graph, go to the workflow and paste the name of the graph into the body name field.
- Keep the other settings intact, or learn more about them on the InfraNodus access points page.
- Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.

Requirements
- An InfraNodus account and API key
- A Google Sheets account and an authorization key
Note: an OpenAI key is not required, but you might want to get a Perplexity AI key if you'd like to use the sub-workflow that populates the Google Sheet with your competitors' website addresses (if you don't have this list yet).

Customizing this workflow
You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).

Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/
Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20234254556828-Find-Content-Gaps-in-Websites-Market-Research-and-SEO-n8n-Workflow
Also check the full tutorial with a conceptual explanation at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow
Also check out the video tutorial with a demo.
For support and help with this workflow, please contact us at https://support.noduslabs.com
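As a side note on Step 3 of the description above (scraping and cleaning the website content), here is a minimal, hypothetical cleanup sketch. The field names ($json.data, $json.url) are assumptions about where the HTTP Request node puts the page HTML.

```javascript
// Hypothetical cleanup for Step 3: strip tags and collapse whitespace
// before sending the text to InfraNodus. Field names are assumptions.
const html = $json.data || '';
const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, ' ') // drop scripts
  .replace(/<style[\s\S]*?<\/style>/gi, ' ')   // drop styles
  .replace(/<[^>]+>/g, ' ')                    // strip remaining tags
  .replace(/\s+/g, ' ')                        // collapse whitespace
  .trim();

return [{ json: { url: $json.url, cleanedText: text } }];
```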
by Billy Christi
Who is this for?
This workflow is ideal for:
- **Finance teams** that need to process incoming invoices faster with minimal errors
- **Small to mid-sized businesses** that want to automate invoice intake, review, and storage
- **Operations managers** who require approval workflows and centralized record-keeping

What problem is this workflow solving?
Manually processing invoices is time-consuming, error-prone, and often lacks structure. This workflow solves those challenges by:
- **Automating the intake of invoices** from multiple sources (email, Google Drive, web form)
- **Extracting invoice data using AI**, eliminating manual data entry
- **Implementing an email-based approval system** to add human oversight
- **Automatically storing approved invoice data** in Google Sheets for easy access and reporting
- **Notifying stakeholders** when invoices are approved or rejected

What this workflow does
This end-to-end invoice processing workflow includes:
- Three invoice input methods: Google Drive folder monitor, Gmail attachments, and web form uploads
- PDF to text extraction for each input method using native PDF parsing
- AI-powered invoice analysis with GPT-4 to extract structured fields such as vendor, total, and due date
- Dynamic categorization of invoice type (e.g., Travel, Software, Utilities) via AI
- Email-based approval workflow with embedded forms to collect decisions and notes
- Automated Google Sheets logging of all invoice data, approval status, and reviewer feedback
- Rejection notifications sent automatically to your finance team for transparency and follow-up

Setup
- Copy the Google Sheet template here: 👉 PDF Invoice Parser with Approval Workflow – Google Sheet Template
- Connect your Google Drive account and specify the invoice folder ID
- Set up Gmail to monitor incoming invoices with PDF attachments
- Enable your form trigger to accept direct uploads from your internal or external users
- Enter your OpenAI API key in the AI processing node for data extraction
- Configure Google Sheets with a target spreadsheet to store invoice data
- Set recipient email addresses for invoice approvals and rejection notifications
- Test with a sample invoice to ensure the end-to-end flow is working

How to customize this workflow to your needs
- **Change input sources**: Replace Gmail with Outlook or use Slack uploads instead
- **Add validation steps**: Include regex or keyword checks before AI analysis
- **Customize the AI schema**: Modify the expected JSON structure based on your internal finance system (an example is sketched at the end of this description)
- **Integrate with accounting tools**: Add Xero, QuickBooks, or custom API nodes to push data
- **Route based on category**: Add conditional logic to handle invoices differently based on vendor or category
- **Multi-level approvals**: Add additional email steps if higher-level signoff is needed
- **Audit logging**: Use a database or Google Sheets to maintain a historical log of approvals and rejections
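If you customize the AI schema, it helps to see the kind of structured output the extraction step can be prompted to return. The example below is illustrative, not the template's exact schema.

```javascript
// Illustrative invoice structure for the AI extraction step; field names
// and values are placeholders, not the template's exact schema.
return [{
  json: {
    vendor: 'Acme Cloud Services',
    invoiceNumber: 'INV-2024-0042',
    invoiceDate: '2024-05-01',
    dueDate: '2024-05-31',
    currency: 'USD',
    total: 1250.0,
    category: 'Software', // assigned dynamically by the AI (e.g. Travel, Utilities)
    lineItems: [
      { description: 'Monthly subscription', quantity: 1, amount: 1250.0 },
    ],
  },
}];
```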
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
The Brave Search Structured Data Extractor workflow is designed for professionals and teams that need high-quality, structured insights from Brave search results in real time. Whether you're performing market research, tracking competitors, training AI models, or powering content engines, this workflow offers a robust and automated solution.

This workflow is tailored for:
- Market Researchers - Who analyze trends across multimedia channels
- AI Developers - Who require clean, structured datasets for model fine-tuning
- SEO & Content Analysts - Looking to monitor visibility across news, images, and videos
- Media Researchers - Curating timely and relevant information across formats
- Automation Engineers - Integrating search insights into downstream workflows

What problem is this workflow solving?
Traditional web scraping and search result parsing is fragmented, inconsistent, and prone to errors, especially when dealing with multimedia (images, videos, news) data from search engines. This workflow provides:
- Centralized Brave search data extraction across all content types
- Switching of the search execution based on the search type that is set (e.g. news, images, videos, all)
- Automated structured data transformation using Google Gemini
- Unified output persistence and notification across disk, webhook, and Google Sheets

What this workflow does
Input Configuration
- Define your Brave search query
- Set the search type: videos, images, news, or all
- Configure your Bright Data MCP zone
Bright Data MCP Search Execution
- Initiates a Brave search via Bright Data MCP using the correct URL pattern for each search type (a URL sketch is shown at the end of this description)
- Returns the raw HTML of the search results
Google Gemini LLM Structured Data Extraction
- Transforms raw results into structured data (e.g., title, URL, source, snippet)
Output Handling
- Save to disk (e.g., JSON or CSV file)
- Send a webhook notification with the structured data (e.g., Slack, internal dashboards)
- Store in Google Sheets for team-wide access or dashboarding

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API Key. Visit Google AI Studio
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup
- Make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
- Make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Name the zone mcp_unlocker on the Bright Data control panel.
- In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
- In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox above as API_TOKEN=<your-token>

How to customize this workflow to your needs
Enhance Output Analysis
- Add additional LLM prompts for topic classification, sentiment scoring, or trend forecasting.
Output Format Options
- Choose to output CSV, Markdown, or HTML reports based on your integration target.
Schedule Automation
- Trigger the workflow on a schedule (daily/weekly) to keep monitoring topical content.
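As referenced in the Bright Data MCP Search Execution step, here is a minimal sketch of the URL-per-search-type idea. The Brave endpoints shown are illustrative and may differ from the patterns the template actually uses.

```javascript
// Minimal sketch: pick a Brave search URL based on the configured search type.
// The URL patterns are illustrative; verify against the template's own settings.
const { query, searchType } = $json; // e.g. query: "EV market 2025", searchType: "news"
const q = encodeURIComponent(query);

const urls = {
  all:    `https://search.brave.com/search?q=${q}`,
  news:   `https://search.brave.com/news?q=${q}`,
  images: `https://search.brave.com/images?q=${q}`,
  videos: `https://search.brave.com/videos?q=${q}`,
};

return [{ json: { searchUrl: urls[searchType] || urls.all } }];
```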
by Nasser
For Who?
- Content Creators
- YouTube Automation
- Marketing Teams

How it works
1 - Enter your content idea in the Edit Fields node in a "raw" format. Ex: Boil Eggs Perfectly
2 - An LLM creates 3 keyword requests based on the idea, and Apify scrapes the YTB (YouTube) Search
3 - Wait until the dataset is completed in Apify
4 - Retrieve the dataset from Apify, calculate an approximation of CTR, and filter the top-performing videos (a scoring sketch is shown at the end of this description)
5 - An LLM analyzes the patterns of the best-performing titles and creates a prompt based on them. Another LLM creates 5 titles based on these criteria
6 - An LLM analyzes the patterns of the best-performing thumbnails and creates a prompt based on them. Another LLM creates 1 thumbnail based on these criteria
7 - Return the titles and thumbnail in an HTML page

📺 YouTube Video Tutorial:

SETUP
Setup Input
- Content Idea: Enter a keyword related to the niche you want. The trigger can be replaced with anything as long as you retrieve a content idea. For example: form submission, database entry, etc.
- If you want to change the number of keywords, update the data accordingly in the "Create Keywords" LLM Chain node ➡️ Structured Output Parser AND in the "YTB Search Scrape" HTTP Request node in Body ➡️ JSON ➡️ searchQueries.
- If you want to change the number of scraped videos for each keyword, update the data accordingly in the "Create Videos Dataset" HTTP Request node in Body ➡️ JSON ➡️ maxResults.
- If you want to adjust the CTR calculation, feel free to update it in the Code node ➡️ follow the comments (after "//") to find what you're looking for.
- If you want to adjust the level of virality of the videos kept for analysis, go to the Filter node ➡️ Value.

Setup Output
- HTML Page: You can also replace this part with any type of storage. For example: Airtable database, Google Drive/Google Sheets, send to an email, etc.

APIs
For the following third-party integrations, replace ==[YOUR_API_TOKEN]== with your API token or connect your account via Client ID / Secret to your n8n instance:
- Apify: https://docs.apify.com/api/v2/getting-started
- OpenAI: https://platform.openai.com/docs/overview (base URL: https://api.openai.com/v1) OR OpenRouter: https://openrouter.ai/docs/quickstart (base URL: https://openrouter.ai/api/v1)
- HuggingFace (FLUX.1): https://huggingface.co/docs

👨💻 More Workflows: https://n8n.io/creators/nasser/
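As referenced in step 4 above, here is a hedged sketch of a CTR-style virality score. The template's actual formula lives in its Code node and will differ; treat this only as a starting point, and note that the field names (publishedAt, viewCount, subscriberCount) are assumptions about the Apify dataset.

```javascript
// Hedged sketch of a CTR-style score: reward views relative to video age
// and channel size so outliers surface. Field names are assumptions.
return $input.all().map(item => {
  const v = item.json;
  const ageDays = Math.max(
    1,
    (Date.now() - new Date(v.publishedAt).getTime()) / 86400000
  );
  const viewsPerDay = (v.viewCount || 0) / ageDays;
  const subscriberFactor = v.subscriberCount > 0 ? v.viewCount / v.subscriberCount : 0;

  return {
    json: {
      ...v,
      ctrApprox: viewsPerDay * (1 + subscriberFactor), // higher = more "viral" for its size
    },
  };
});
```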
by Billy Christi
Who is this for?
This workflow is perfect for:
- **HR professionals** seeking to automate employee and department management
- **Startups and SMBs** that want an AI-powered HR assistant on Telegram
- **Internal operations teams** that want to simplify onboarding and employee data tracking

What problem is this workflow solving?
Managing employee databases manually is error-prone and inefficient—especially for growing teams. This workflow solves that by:
- Enabling natural language-based HR operations directly through Telegram
- Automating the creation, retrieval, and deletion of employee records in Airtable
- Dynamically managing related data such as departments and job titles
- Handling data consistency and linking across relational tables automatically
- Providing a conversational interface backed by OpenAI for smart decision-making

What this workflow does
Using Telegram as the interface and Airtable as the backend database, this intelligent HR workflow allows users to:
- Chat in natural language (e.g. “Show me all employees” or “Create employee: Sarah, Marketing…”)
- Interpret and route requests via an AI Agent that acts as the orchestrator
- Query employee, department, and job title data from Airtable
- Create or update records as needed:
  - Add new departments and job titles automatically if they don’t exist
  - Create new employees and link them to the correct department and job title
- Delete employees based on ID
- Respond directly in Telegram, providing user-friendly feedback

Setup
- View & copy the Airtable base here: 👉 Employee Database Management – Airtable Base Template
- Telegram Bot: Set up a Telegram bot and connect it to the Telegram Trigger node
- Airtable: Prepare three Airtable tables:
  - Employees with links to Departments and Job Titles
  - Departments with Name & Description
  - Job Titles with Title & Description
- Connect your Airtable API key and base/table IDs into the appropriate Airtable nodes
- Add your OpenAI API key to the AI Agent nodes
- Deploy both workflows: the main chatbot workflow and the employee creation sub-workflow
- Test with sample messages like the following (a payload sketch for the sub-workflow is shown at the end of this description):
  - “Create employee: John Doe, john@company.com, Engineering, Software Engineer”
  - “Remove employee ID rec123xyz”

How to customize this workflow to your needs
- **Switch databases**: Replace Airtable with Notion, PostgreSQL, or Google Sheets if desired
- **Enhance security**: Add authentication and validation before allowing deletion
- **Add approval flows**: Integrate Telegram button-based approvals for sensitive actions
- **Multi-language support**: Expand system prompts to support multiple languages
- **Add logging**: Store every user action in a log table for auditability
- **Expand capabilities**: Integrate payroll, time tracking, or Slack notifications

Extra Tips
- This is a two-workflow setup. Make sure the sub-workflow is deployed and accessible from the main agent.
- Use Simple Memory per chat ID to preserve context across user queries.
- You can expand the orchestration logic by adding more tools to the main agent—such as “Get active employees only” or “List employees by job title.”
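For orientation, here is an illustrative sketch of the payload the employee-creation sub-workflow might receive after the AI Agent parses a message like the sample above. The field names are assumptions; align them with your Airtable columns.

```javascript
// Illustrative payload shape for the employee-creation sub-workflow; field
// names are assumptions to match against your Airtable columns.
return [{
  json: {
    name: 'John Doe',
    email: 'john@company.com',
    department: 'Engineering',     // created in Departments if it doesn't exist yet
    jobTitle: 'Software Engineer', // created in Job Titles if it doesn't exist yet
  },
}];
```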
by Samir Saci
Tags: Scraping, Events, European Union, Networking

Context
Hey! I’m Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable and data-driven operations across all types of organizations. This workflow is part of our networking strategy (as a business) to track official EU events that may relate to topics we cover.

> Want to stay ahead of critical EU meetings and events without checking the website every day?

This n8n workflow automatically scrapes the EU’s official event portal and logs the latest entries with clean metadata including date, location, category, and link.

📬 For collaborations, feel free to connect with me on LinkedIn

Who is this template for?
This workflow is useful for:
- **Policy & public affairs teams** following institutional activities
- **Sustainability teams** watching for relevant climate-related summits
- **NGOs and researchers** interested in event calendars
- **Data teams** building dashboards on public event trends

What does it do?
This n8n workflow:
- 🌐 Scrapes the EU events portal for new meetings and conferences
- 📅 Extracts event metadata (title, date, location, type, and link)
- 🔁 Handles pagination across multiple pages
- 🚫 Checks for duplicates already stored
- 📊 Saves new records into a connected Google Sheet

How it works
- Triggered daily via cron
- An HTTP node loads the event listing HTML
- Extracts the HTML blocks for each event article
- Parses the event name, link, type, location, and full date
- Concatenates and cleans dates for easy tracking
- Stores non-duplicate entries in Google Sheets

The workflow uses static data to track pagination and ensure only new events are stored, making it ideal for building up a clean dataset over time (see the sketch at the end of this description).

What do I need to get started?
You’ll need:
- A Google Sheet connected to your n8n instance
- No code or AI tools needed — just n8n and this template

Follow the Guide!
Sticky notes are included directly inside the workflow to guide you step-by-step through setup and customisation.

🎥 Watch My Tutorial

Notes
- This is ideal for analysts and consultants who want clean, structured data from the EU portal
- You can add filtering, email alerts, or AI classifiers later
- This workflow was built using n8n version 1.93.0

Submitted: June 1, 2025
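As referenced above, here is a minimal sketch of the static-data deduplication idea (not the template's exact code). getWorkflowStaticData persists small values between executions of an active workflow, which is what makes the "only new events" behavior possible.

```javascript
// Minimal sketch: remember event links between runs and keep only new ones.
// Assumes each incoming item has a "link" field for the event URL.
const staticData = $getWorkflowStaticData('global');
staticData.seenEvents = staticData.seenEvents || [];

const seen = new Set(staticData.seenEvents);
const fresh = $input.all().filter(item => !seen.has(item.json.link));

// remember the newly stored event links for the next run
fresh.forEach(item => staticData.seenEvents.push(item.json.link));

return fresh;
```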
by Ranjan Dailata
Who this is for?
The LinkedIn Profile Extract and JSON Resume Builder is a powerful workflow that scrapes professional profile data from LinkedIn using Bright Data's infrastructure, then transforms that data into a clean, structured JSON resume using Google Gemini. The workflow is ideal for automating resume parsing, candidate profiling, or integration into recruiting platforms.

This workflow is tailored for:
- HR professionals & recruiters automating resume screening
- Talent acquisition platforms enriching candidate profiles
- Developers & AI builders creating resume-parsing AI pipelines
- Data scientists working on labor market analytics
- Growth hackers profiling prospects via public data

What problem is this workflow solving?
Parsing resumes or LinkedIn profiles into machine-readable formats is often a manual, error-prone process. Most scraping tools either fail due to anti-bot protections or return unstructured HTML that's hard to work with. This workflow solves that by:
- Using Bright Data's Web Unlocker for reliable, CAPTCHA-free LinkedIn scraping
- Extracting clean text and structured profile data via the Google Gemini LLM
- Automatically generating a standards-compliant JSON Resume and skills list
- Sending the resume to webhooks or storing it for downstream usage

What this workflow does
- Accepts a LinkedIn profile URL and the required metadata (Bright Data zone, webhook)
- Scrapes the LinkedIn profile using the Bright Data Web Unlocker
- Extracts clean content and skills using the Google Gemini LLM
- Builds a JSON-formatted resume following the JSON Resume schema (an example is shown at the end of this description)
- Sends the JSON resume via a Webhook Notification
- Persists the output by saving the file to disk

Setup
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
- In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
- Update the Set URL and Bright Data Zone node with the LinkedIn profile, the Bright Data zone, and the webhook notification URL. For testing purposes, you can obtain a webhook URL using https://webhook.site/

How to customize this workflow to your needs
Add Language Translation
- Insert a translation LLM node to support multilingual profiles.
Generate PDF Resumes
- Convert the JSON to formatted PDF resumes using an HTML-to-PDF module.
Push to ATS or CRM
- Add integration nodes to pipe data into applicant tracking systems (ATS), CRMs, or databases.
Use Alternative LLMs
- Swap Gemini with OpenAI or Anthropic Claude if preferred.
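As referenced above, here is a trimmed example of the JSON Resume structure (https://jsonresume.org/schema) that the Gemini step is asked to produce. The values are illustrative.

```javascript
// Trimmed JSON Resume example (basics, work, skills); values are placeholders.
return [{
  json: {
    basics: {
      name: 'Jane Doe',
      label: 'Senior Data Engineer',
      location: { city: 'Berlin', countryCode: 'DE' },
      profiles: [{ network: 'LinkedIn', url: 'https://www.linkedin.com/in/janedoe' }],
    },
    work: [
      { name: 'Acme GmbH', position: 'Data Engineer', startDate: '2021-03' },
    ],
    skills: [
      { name: 'Data Engineering', keywords: ['Python', 'Airflow', 'SQL'] },
    ],
  },
}];
```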
by Sobek
📝 DESCRIPTION OF THE WORKFLOW
This workflow connects Salesforce and Geotab to streamline fleet tracking for field service jobs (Work Orders). When a new Work Order is created in Salesforce (with a 'New' status and valid coordinates), it creates a circular geofence zone in Geotab and updates the Work Order with the zone ID (a geometry sketch is shown at the end of this description). If geolocation is missing, an alert email is sent to a dedicated address. The workflow uses a Salesforce Outbound Message to trigger an n8n webhook. It includes robust credential handling and optional logic to skip or notify on bad data.

Use Cases:
- Automating vehicle geofence setup for service visits
- Enhancing CRM-to-fleet system synchronisation
- Enforcing Work Order data quality via alerts

Integrations Used:
- Salesforce
- Geotab API
- Microsoft Outlook (or any SMTP-compatible service)

Tags: geotab, salesforce, fleet management, gps tracking, field service, crm, automation, webhook, integration

ADDITIONAL RESOURCES
🔗 Salesforce
- Salesforce Login
- Salesforce Setup (Admin Console): https://login.salesforce.com/ → click “Setup” gear icon
- Outbound Messages Documentation
- Salesforce Developer Documentation
- Salesforce Workbench (API Testing Tool)
🔗 Geotab
- Geotab Login (MyGeotab)
- Geotab Developer Portal
- Geotab API Explorer
- Geotab SDK (JavaScript Samples)
- Geotab Support Centre
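As a rough illustration of the circular geofence idea mentioned in the description, here is a hedged sketch that approximates a circle as a polygon of points around the Work Order coordinates. The Salesforce field names (Latitude__c, Longitude__c) and the radius are assumptions; match them to your Outbound Message fields and zone requirements.

```javascript
// Hedged sketch: approximate a circular geofence as a 12-point polygon.
// Geotab zone points use x = longitude, y = latitude. Field names are assumptions.
const lat = Number($json.Latitude__c);
const lon = Number($json.Longitude__c);
const radiusMeters = 150;  // assumed radius for the service visit
const numPoints = 12;
const points = [];

for (let i = 0; i < numPoints; i++) {
  const angle = (2 * Math.PI * i) / numPoints;
  points.push({
    x: lon + (radiusMeters / (111320 * Math.cos((lat * Math.PI) / 180))) * Math.cos(angle),
    y: lat + (radiusMeters / 110540) * Math.sin(angle),
  });
}

return [{ json: { name: 'Work Order geofence', points } }];
```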