by Sleak
**Who is this template for?**
This workflow template is designed for anyone who wants to be alerted when specific changes are made to a web page. Leveraging agentic AI, it analyzes the page every day and autonomously decides whether to send you an e-mail notification.

**Example use cases**
- Track price changes on [competitor's website]. Notify me when the price drops below €50.
- Monitor new blog posts on [industry leader's website] and summarize key insights.
- Check [competitor's job page] for new job postings related to software development.
- Watch for new product launches on [e-commerce site] and send me a summary.
- Detect any changes in the terms and conditions of [specific website].
- Track customer reviews for [specific product] on [review site] and extract key themes.

**How it works**
When you click 'Test workflow' in the editor, a new browser tab opens where you can fill in the details of your espionage assignment. Be as concise as possible when instructing the AI: give specific, to-the-point instructions (see the examples above). After submission, the flow starts by extracting both the relevant website URL and an optimized prompt. OpenAI's structured outputs feature is used, followed by a Code node that parses the results for further use (see the sketch at the end of this section). From there, the endless loop of daily checks begins:
1. Initial scrape
2. 1-day delay
3. Second scrape
4. AI agent decides whether or not to notify you
5. Back to step 1

You can cancel an espionage assignment at any time in the executions tab.

**Set up steps**
- Insert your OpenAI API key in the structured outputs node (the second one)
- Create a Firecrawl account and connect your Firecrawl API key in both 'Scrape page' nodes
- Connect your OpenAI account in the AI agent's model node
- Connect your Gmail account in the AI agent's Gmail tool node
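As a rough illustration of the parsing step, here is a minimal Code node sketch. The field names (`url`, `prompt`) and the response shape are assumptions about the structured-output schema, not the template's exact ones:

```javascript
// Hypothetical sketch of the parsing Code node; field names are assumptions.
// The structured-outputs node returns the extracted fields as JSON (sometimes
// as a string), which we normalize and expose for the scrape nodes downstream.
const raw = $input.first().json.message?.content ?? $input.first().json;
const parsed = typeof raw === 'string' ? JSON.parse(raw) : raw;

return [{
  json: {
    url: parsed.url,       // the page to scrape daily
    prompt: parsed.prompt, // the optimized instruction for the AI agent
  },
}];
```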
by Artur
**Overview**
This automated workflow fetches Upwork job postings using Apify, removes duplicate job listings via Airtable, and sends new job opportunities to Slack.

**Key Features:**
- **Automated job retrieval** from Upwork via the Apify API
- **Duplicate filtering** using Airtable to store only unique jobs
- **Slack notifications** for new job postings
- **Runs every 20 minutes** during working hours (9 AM–5 PM)

This workflow requires an active Apify subscription to function, as it uses the Apify Upwork API to fetch job listings.

**Who is This For?**
This workflow is ideal for:
- Freelancers looking to track Upwork jobs in real time
- Recruiters automating job collection for analytics
- Developers who want to integrate Upwork job data into their applications

**What Problem Does This Solve?**
Manually checking Upwork for jobs is time-consuming and inefficient. This workflow:
- Automates job discovery based on your keywords
- Filters out duplicate listings, ensuring only new jobs are stored
- Notifies you on Slack when new jobs appear

**How the Workflow Works**
1. **Schedule Trigger (Every 20 Minutes)**: triggers the workflow at 20-minute intervals and ensures job searches are only executed during working hours (9 AM–5 PM).
2. **Query Upwork for Jobs**: uses the Apify API to scrape Upwork job posts for specific keywords (e.g., "n8n", "Python").
3. **Find Existing Jobs in Airtable**: searches Airtable to check if a job (based on title and link) already exists.
4. **Filter Out Duplicate Jobs**: the Merge node compares Upwork jobs with Airtable data, and the IF node filters out jobs that are already stored in the database (see the sketch at the end of this section).
5. **Save Only New Jobs in Airtable**: the Insert node adds only new job listings to the Airtable collection.
6. **Send a Slack Notification**: if a new job is found, a Slack message is sent with the job details.

**Setup Guide**
Required API keys:
- **Upwork Scraper (Apify token)** – get your token from Apify
- **Airtable credentials**
- **Slack API token** – connect Slack to n8n and set the channel ID (default: #general)

Configuration steps:
1. Modify the search keywords in the 'Assign Parameters' node (startUrls).
2. Adjust the working hours in the 'If Working Hours' node to match your timezone and hours, or remove it altogether to receive notifications and updates around the clock.
3. Set your Slack channel in the Slack node.
4. Ensure Airtable is connected properly — you'll need to create a table with 'title' and 'link' columns.

**How to Customize the Workflow**
- **Change keywords**: update the startUrls in the 'Assign Parameters' node to track different job categories.
- **Change 'If Working Hours'**: modify the conditions in the IF node to filter times based on your needs.
- **Modify Slack notifications**: adjust the Slack message format to include additional job details.

**Why Use This Workflow?**
- Automated job tracking without manual searches
- Prevents duplicate entries in Airtable
- Instant Slack notifications for new job opportunities
- Customizable — adapt the workflow to different job categories

**Next Steps**
- Run the workflow and test with a small set of keywords.
- Expand job categories for better coverage.
- Enhance notifications by integrating Telegram, Email, or a dashboard.

This workflow ensures real-time job tracking, prevents duplicates, and keeps you updated effortlessly.
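As referenced in step 4, here is the same duplicate-filter logic expressed as a single n8n Code node — an illustrative alternative to the Merge + IF pair. The node name and the `title`/`link` fields follow the description above, but treat the exact shapes as assumptions:

```javascript
// Illustrative duplicate filter: drop any Upwork job whose title+link pair
// already exists in Airtable. Node and field names are assumptions.
const upworkJobs = $input.all().map(item => item.json);
const existing = new Set(
  $('Find Existing Jobs in Airtable').all()
    .map(item => `${item.json.title}|${item.json.link}`)
);

const newJobs = upworkJobs.filter(
  job => !existing.has(`${job.title}|${job.link}`)
);

return newJobs.map(job => ({ json: job }));
```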
by ömer
**Generate and Publish AI Content to LinkedIn and X (Twitter) with n8n**

**Overview**
This n8n workflow automates the generation and publishing of AI-powered social media content across LinkedIn and X (formerly Twitter). By leveraging AI, this workflow helps social media managers, marketers, and content creators streamline their posting process.

**Who is this for?**
- Social media managers
- Content creators
- Digital marketers
- Businesses looking to automate content generation

**Features**
- **AI-powered content creation** tailored for LinkedIn and X (Twitter)
- **Automated publishing** to both platforms
- **Structured output parsing** to ensure consistency
- **OAuth2 authentication** for secure posting
- **Merge and confirmation steps** to track successful postings

**Setup Instructions**
Prerequisites — before using this workflow, ensure you have:
- An n8n instance set up
- API credentials for: Google Gemini AI (for content generation), an X Developer Account with OAuth2 authentication, and a LinkedIn Developer Account with OAuth2 authentication
- A form submission service integrated with n8n

**Workflow Breakdown**
1. **Trigger: Form Submission** — A user submits a form containing the post title. The form is secured with Basic Authentication. The submitted title is passed to the AI Agent.
2. **AI Content Generation** — The Google Gemini Chat Model processes the title and generates: LinkedIn post content, Twitter (X) post content, hashtags, a call-to-action (LinkedIn), and a character-limit check (Twitter).
3. **Parsing AI Output** — A structured output parser converts the AI-generated content into JSON and ensures correct formatting for LinkedIn and Twitter (X); see the illustrative example at the end of this section.
4. **Publishing to Social Media** — X (Twitter) posting: extracts the Twitter post from the AI output and publishes it via an OAuth2-authenticated X (Twitter) account. LinkedIn posting: extracts the LinkedIn post from the AI output and publishes it via an OAuth2-authenticated LinkedIn account.
5. **Merging Post Results** — Merges the response data from both LinkedIn and Twitter after publishing.
6. **Confirmation Step** — Displays a final confirmation form once the posts are successfully published.

**Benefits**
- **Save time** by automating content creation and publishing.
- **Ensure consistency** across platforms with structured AI-generated posts.
- **Secure authentication** using OAuth2 for LinkedIn and Twitter.
- **Increase engagement** with AI-optimized hashtags and CTAs.

This workflow enables seamless social media automation, helping professionals post engaging AI-powered content effortlessly. 🚀
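For reference, this is roughly what the parsed output of step 3 could look like. The field names are illustrative assumptions, not the template's exact schema:

```javascript
// Hypothetical example of the structured parser's output (names are assumed).
const parsedPost = {
  linkedinPost: "🚀 We just automated our entire posting pipeline with n8n...\n\n👉 What would you automate first?",
  twitterPost: "We automated LinkedIn + X publishing with n8n and Gemini 🚀",
  hashtags: ["#n8n", "#automation", "#AI"],
  callToAction: "Follow for more automation deep dives", // LinkedIn only
  twitterCharCount: 58, // validated against the 280-character limit
};
```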
by Dave Long
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Using the serial numbers of assets, this workflow creates a ticket with the subject "Found duplicate Serial Numbers" and a list of all of the duplicate assets for a technician to review and merge. Duplicate assets cause incorrect billing (if customers are billed based on asset counts) and add overhead when reviewing asset history, since that history is spread across multiple instances.

Note: Due to limitations of the Syncro API, automatically merging duplicate assets is not possible.

**How it works:**
1. Get a list of all assets in Syncro and summarize the list based on the Customer ID, Asset Type, and Asset Serial (see the sketch below).
2. Create a new ticket listing all of the duplicate assets.

**Set up steps:**
1. Install the Syncro RMM community node.
2. Connect a Syncro RMM account.*
3. Open the "Create a ticket" node and update the customer ID.

*See the Syncro RMM community node documentation for details about how to get a Syncro API key and what permissions it needs.
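As referenced in step 1, here is a minimal sketch of the summarize step as an n8n Code node. The field names are assumptions about the Syncro asset schema:

```javascript
// Group assets by Customer ID + Asset Type + Serial, then keep only groups
// with more than one entry (i.e., duplicates). Field names are assumed.
const assets = $input.all().map(item => item.json);

const groups = {};
for (const asset of assets) {
  const key = `${asset.customer_id}|${asset.asset_type}|${asset.asset_serial}`;
  (groups[key] ??= []).push(asset);
}

// One item per duplicate serial, ready to format into the ticket body.
const duplicates = Object.values(groups).filter(group => group.length > 1);

return duplicates.map(group => ({
  json: { serial: group[0].asset_serial, count: group.length, assets: group },
}));
```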
by FORK SOFTWARE TECHNOLOGIES INC.
**Overview**
This n8n workflow is designed to monitor the USDT ERC-20 balance of a specific wallet. It uses the Etherscan API to periodically check and process transaction data, making it ideal for users who need an automated solution to track ERC-20 wallet activity.

**Features**
- **Automatic monitoring**: executes every 5 minutes to capture new transactions.
- **Customizable filters**: tailor tracking based on parameters like transaction duration and wallet addresses.
- **Data aggregation**: compiles transaction data into a single, structured list.
- **Formatted outputs**: presents processed data in an organized format.
- **Telegram tracking**: reports wallet balances via Telegram notifications using a bot.

**Requirements**
- **n8n setup**: a self-hosted or cloud-based n8n instance.
- **Basic understanding**: basic knowledge of n8n workflows and nodes.

**Installation and Configuration**
1. **Import the workflow**: load the provided JSON workflow into your n8n instance.
2. **Configure the User Data node**:
   - Enter your ERC-20 wallet address in the 'Your Wallet Address' field.
   - Enter your Etherscan API key in the 'Your Etherscan API Key' field.
   - Enter the USDT ERC-20 contract address in the 'Your ERC-20 USDT Contract Address' field (0xdAC17F958D2ee523a2206206994597C13D831ec7). You can also monitor another token by entering a different contract address.
3. **Configure the Telegram node**:
   - In Telegram, search for "BotFather" and select /newbot to create your bot; save the API key BotFather provides.
   - In Telegram, search for "Get My ChatID", start the conversation, and note your ChatID.
   - Use this information to configure the Telegram node.
4. **Schedule Trigger node**: by default, the workflow is triggered every 5 minutes; adjust this to your needs.
5. **Test the workflow**: execute it manually to ensure everything works as expected.

**How It Works**
1. **Schedule Trigger**: starts the workflow at predetermined intervals.
2. **Edit Fields**: sets the wallet address, Etherscan API key, and USDT ERC-20 token address.
3. **Edit Telegram Settings**: uses the bot API key and chat ID you configured via BotFather.
4. **Etherscan Data Import**: collects transaction data for the wallet from the Etherscan API (see the sketch below).
5. **Final Results**: organizes and formats the transaction data for review.
6. **Telegram Bot Message Sending**: if the balance changed, it sends a formatted message about the change; if not, it sends a message that your balance has not changed. You can configure it to skip the message when there is no change.
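As referenced in the "Etherscan Data Import" step, here is a minimal sketch of the balance check in n8n Code-node form. The placeholder values stand in for the Edit Fields settings; note that USDT reports balances with 6 decimals:

```javascript
// Minimal sketch of the balance check (placeholders come from Edit Fields).
const wallet = "0xYourWalletAddress";
const contract = "0xdAC17F958D2ee523a2206206994597C13D831ec7"; // USDT ERC-20
const apiKey = "YourEtherscanApiKey";

const url =
  "https://api.etherscan.io/api?module=account&action=tokenbalance" +
  `&contractaddress=${contract}&address=${wallet}&tag=latest&apikey=${apiKey}`;

const res = await this.helpers.httpRequest({ url, json: true });

// Etherscan returns the raw integer balance; USDT uses 6 decimal places.
const balance = Number(res.result) / 1e6;

return [{ json: { wallet, balance } }];
```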
by Luke
Built this for a dedicated Slack outage-notifications channel — works well on both desktop and mobile.

**This is for:**
- IT administrators & small MSPs looking to streamline M365 alerts from one or multiple mailboxes into a single or specific Slack channels
- IT admins who prefer ChatOps over management-by-email

**What does it do**
- Scans for M365 outage alert emails (every 1 min)
- Checks if an alert impacts a specific user region (if the alert calls it out; countries have to be set manually)
- Summarizes the incident using OpenAI gpt-4o-mini (a cheap model — or you can swap in local Ollama)
- Sends a Slack Block to your outage channel with the incident link (can be extended)
- Deletes the original alert email after successful delivery

**Credentials**
- **Outlook**: create an Outlook credential (OAuth2.0) pointing to the mailbox (regular or shared) where M365 service alerts are received
- **Slack**: create a Slack bot credential with access to the Slack channel you want updates posted to
- **OpenAI**: create an OpenAI credential with access to the gpt-4o-mini model. Recommend using projects in OpenAI so you can set a per-project budget without impacting other projects; review the OpenAI documentation for more info on managing projects in the API portal. Expect this to consume no more than 1–2 cents per month on average.

**Setup**
1. Download & import the workflow
2. Modify the first Outlook block (Check for 365 Service Alert) to use the Outlook credential
3. Modify the OpenAI block's system prompt to call out the countries your users reside in, e.g., "- Assume the organization has users primarily in the U.S. and Australia. If those regions are affected, state: "Your users may have been affected." Otherwise, add: "No impact expected for your user base."" ← swap U.S. & Australia for your desired countries
4. Modify the Slack block (Post outage to Slack) to specify the channel updates will be posted to

**Sample Slack Output**

**Workflow Diagram**
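As a rough illustration of the Slack output, here is a minimal Block Kit payload of the kind the workflow posts. The exact fields and wording are assumptions, not the template's format:

```javascript
// Illustrative Slack Block payload (incident values are made up).
const blocks = [
  {
    type: "header",
    text: { type: "plain_text", text: "⚠️ M365 Service Alert" },
  },
  {
    type: "section",
    text: {
      type: "mrkdwn",
      text:
        "*Exchange Online — EX123456*\n" +
        "Some users may be unable to send mail.\n" +
        "_Your users may have been affected._",
    },
  },
  {
    type: "section",
    text: {
      type: "mrkdwn",
      text: "<https://admin.microsoft.com/servicehealth|View incident in the admin center>",
    },
  },
];

return [{ json: { blocks } }];
```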
by JPres
A Discord bot that responds to mentions by sending messages to n8n workflows and returning the responses. It connects Discord conversations with custom automations, APIs, and AI services through n8n. Full guide at: https://github.com/JimPresting/AI-Discord-Bot/blob/main/README.md

**Discord Bot Summary**

**Overview**
The Discord bot listens for mentions, forwards questions to an n8n workflow, processes responses, and replies in Discord. This workflow is intended for all Discord users who want to offer AI interactions in their channels.

**What do you need?**
You need a Discord account as well as a Google Cloud Project.

**Key Features**
1. **Listens for Mentions**: The bot monitors Discord channels for messages that mention it. Optional configuration: it can be set to respond only in a specific channel.
2. **Forwards Questions to n8n**: When a user mentions the bot and asks a question, the bot extracts the question and sends it, along with channel and user information, to an n8n webhook URL.
3. **Processes Data in n8n**: The n8n workflow receives the question and can interact with AI services (e.g., generating responses), access databases or external APIs, and perform custom logic. n8n formats the response and sends it back to the bot.
4. **Replies to Discord with n8n's Response**: The bot receives the response from n8n and replies to the user's message in the Discord channel. Long responses exceeding Discord's 2000-character limit are chunked into multiple messages.
5. **Error Handling**: Includes error handling for issues with n8n communication and response formatting, and manages cases where no question is asked or an invalid response is received from n8n.
6. **Typing Indicator**: While waiting for n8n's response, the bot sends a "typing..." indicator to the Discord channel.
7. **Status Update**: For lengthy n8n processes, the bot sends a message to the Discord channel to inform the user that it is still processing their request.

**Step-by-Step Setup Guide (as per the GitHub instructions)**

**Key Takeaways**
- You'll configure an n8n webhook to receive Discord messages, process them with your workflow, and respond.
- You'll set up a Discord application and bot, grant the right permissions/intents, and invite it to your server.
- You'll prepare your server environment (Node.js), scaffold the project, and wire up environment variables.
- You'll implement message-chunking, "typing..." indicators, and robust error handling in your bot code.
- You'll deploy with PM2 for persistence and know how to test and troubleshoot common issues.

**1. n8n: Create & Expose Your Webhook**
- **New Workflow**: Log into your n8n instance. Click Create Workflow (➕) and name it, e.g., Discord Bot Handler.
- **Webhook Trigger**: Add a node (➕) → search Webhook. Set Authentication: None (or your choice), HTTP Method: POST, Path: e.g. /discord-bot. Click Execute Node to activate.
- **Copy Webhook URL**: After execution, copy the Production Webhook URL. You'll paste this into your bot's .env.
- **Build Your Logic**: Chain additional nodes (AI, database lookups, etc.) as required.
- **Format the JSON Response**: Insert a Function node before the end:

```javascript
return { json: { answer: "Your processed reply" } };
```

- **Respond to Webhook**: Add Respond to Webhook as the final node and point it at your Function node's output (with the answer field).
- **Activate**: Toggle Active in the top-right and Save.

**2. Discord Developer Portal: App & Bot**
- **New Application**: Visit the Discord Developer Portal. Click New Application and name it. Go to Bot → Add Bot.
- **Enable Intents & Permissions**: Under Privileged Gateway Intents, toggle Message Content Intent. Under Bot Permissions, check: Read Messages/View Channels, Send Messages, Read Message History.
- **Grab Your Token**: In Bot → click Copy (or Reset Token). Store it securely.
- **Invite Link (OAuth2 URL)**: Go to OAuth2 → URL Generator. Select scopes: bot, applications.commands. Under Bot Permissions, select the same permissions as above. Copy the generated URL, open it in your browser, and invite your bot.

**3. Server Prep: Node.js & Project Setup**
- **Install Node.js v20.x**:

```bash
sudo apt purge nodejs npm
sudo apt autoremove
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
node -v   # Expect v20.x.x
npm -v    # Expect 10.x.x
```

- **Project Folder**:

```bash
mkdir discord-bot
cd discord-bot
```

- **Initialize & Dependencies**:

```bash
npm init -y
npm install discord.js axios dotenv
```

**4. Bot Code & Configuration**
- **Environment Variables**: Create .env (nano .env) and populate:

```bash
DISCORD_BOT_TOKEN=your_bot_token
N8N_WEBHOOK_URL=https://your-n8n-instance.com/webhook/discord-bot
# Optional: restrict to one channel
TARGET_CHANNEL_ID=123456789012345678
```

- **Bot Script**: Create index.js (nano index.js) and implement:
  - Import dotenv, discord.js, axios.
  - Set up the client with the MessageContent intent.
  - On messageCreate:
    - Ignore bots and non-mentions.
    - (Optional) Filter by channel ID.
    - Extract and validate the user's question.
    - Send "typing..." every 5 s; after 20 s, send a status update if still processing.
    - POST to your n8n webhook with question, channelId, userId, userName.
    - Parse various response shapes to find answer.
    - If answer.length ≤ 2000, message.reply(answer). Else, split into ~1900-character chunks at sentence/paragraph breaks and send sequentially (a sketch of this helper appears after section 6).
    - On errors, clear intervals, log details, and reply with an error message.
  - Login: client.login(process.env.DISCORD_BOT_TOKEN);

**5. Deployment: Keep It Alive with PM2**
- **Install PM2**: npm install -g pm2
- **Start & Monitor**:

```bash
pm2 start index.js --name discord-bot
pm2 status
pm2 logs discord-bot
```

- **Auto-Start on Boot**: Run pm2 startup, follow the printed command (e.g., sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u your_user --hp /home/your_user), then run pm2 save.

**6. Test & Troubleshoot**
- **Functional Test**: In your Discord server, send: @YourBot What's the weather like? Expect a reply from your n8n workflow.
- **Common Pitfalls**:
  - No reply → check pm2 logs discord-bot.
  - Intent errors → verify Message Content Intent in the Portal.
  - Webhook failures → ensure the workflow is active and the URL is correct.
  - Formatting issues → confirm your Function node returns json.answer.
- **Inspect Raw Data**: Search your logs for "Complete response from n8n:" to debug payload shapes.
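As referenced in section 4, here is a minimal sketch of the chunking helper. The limit and break heuristics follow the description above; treat the details as one possible implementation:

```javascript
// Splits a long answer into <= 1900-character chunks, preferring paragraph
// breaks, then sentence ends, before falling back to a hard cut.
function chunkMessage(text, limit = 1900) {
  const chunks = [];
  let rest = text;
  while (rest.length > limit) {
    const slice = rest.slice(0, limit);
    let cut = slice.lastIndexOf("\n\n");                     // paragraph break
    if (cut < limit / 2) cut = slice.lastIndexOf(". ") + 1;  // sentence end
    if (cut < limit / 2) cut = limit;                        // hard cut
    chunks.push(rest.slice(0, cut).trim());
    rest = rest.slice(cut);
  }
  if (rest.trim()) chunks.push(rest.trim());
  return chunks;
}

// Usage inside the messageCreate handler:
// for (const chunk of chunkMessage(answer)) await message.reply(chunk);
```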
by Ranjan Dailata
**Who is this for?**
The Capture Website Screenshots with Bright Data Web Unlocker and Save to Disk workflow is built for automation professionals and developers who need reliable, high-quality screenshots from any website — even those protected by anti-bot technologies. It is ideal for:
- **Compliance teams** — capturing visual records of web content for legal or audit purposes.
- **Product managers** — tracking visual changes across competitor landing pages.
- **Digital marketers** — archiving campaign pages and offer variations.
- **Developers and QA teams** — validating UI deployments or rendering issues.
- **Growth hackers and scrapers** — who need to bypass bot protection and capture visual snapshots of restricted content.

**What problem is this workflow solving?**
Websites today are heavily protected with anti-bot tools like Cloudflare, bot-detection scripts, and geo-restrictions. These protections often break traditional screenshot tools or prevent headless browsers from accessing content. This workflow solves the following problems:
- Bypasses anti-bot defenses using Bright Data Web Unlocker.
- Automatically captures screenshots without manual browser steps.
- Stores images locally for easy access or reporting.
- Operates headlessly and at scale — perfect for automations or scheduled jobs.

**What this workflow does**
1. Sets the target URL, file name, and Bright Data zone name in the Set URL, Filename and Bright Data Zone node.
2. Sends an HTTP POST request to the Bright Data Web Unlocker API to capture a screenshot (see the sketch at the end of this section).
3. Saves the screenshot image (.png) to a specified disk location using the Write a file to disk node.

**Pre-conditions**
You need a Bright Data account and the setup described in the "Setup" section below.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Ensure the URL, file name, and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
5. Set the desired local path in the Write a file to disk node to save the screenshot.

**How to customize this workflow to your needs**
- **Change the target URL**: modify the value in the Set URL, Filename and Bright Data Zone node to capture different websites.
- **Set dynamic filenames**: use expressions in n8n to generate filenames based on date/time or URL.
- **Specify custom save paths**: adjust the path in the Write a file to disk node to store screenshots in your preferred directory.
- **Enhance with notifications**: add nodes to send alerts or log activity after each screenshot is taken.
- **Integrate with external systems**: send screenshots to cloud storage (e.g., AWS S3, Google Drive) or link into monitoring/reporting tools.
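As referenced in the "What this workflow does" section, here is a sketch of the screenshot request in n8n Code-node form. The endpoint and body fields are reproduced from memory of the Web Unlocker API, so verify them against Bright Data's current documentation before relying on this:

```javascript
// Sketch of the Web Unlocker screenshot request (field names assumed;
// confirm against Bright Data's docs). Returns binary PNG data.
const res = await this.helpers.httpRequest({
  method: "POST",
  url: "https://api.brightdata.com/request",
  headers: { Authorization: "Bearer <WEB_UNLOCKER_TOKEN>" },
  body: {
    zone: "web_unlocker1",        // your Web Unlocker zone name
    url: "https://example.com",   // target page to capture
    format: "raw",
    data_format: "screenshot",    // ask for an image instead of HTML
  },
  json: true,
  encoding: "arraybuffer",
});

// Pass the image along as n8n binary data for the Write a file to disk node.
const binary = await this.helpers.prepareBinaryData(Buffer.from(res), "screenshot.png");
return [{ json: {}, binary: { data: binary } }];
```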
by Batu Öztürk
🚀 Transform LinkedIn Post Reactions into Content Ideas with Airtable

📝 **Description**
This workflow helps you turn your LinkedIn activity into a powerful content-ideation engine. It automatically captures your most recent post reactions on LinkedIn, filters them by recency, and structures the content in Airtable — ready for brainstorming, inspiration, or publication planning.

⚙️ **What It Does**
- **Fetches** the latest liked posts from LinkedIn via a public API (rapidapi.com / Real-Time LinkedIn Scraper).
- **Filters** posts to include only those marked with your chosen reaction and posted in the last 7 days (see the sketch at the end of this section).
- **Extracts** the post text, author, links, and more.
- **Formats** the data into a database-friendly structure.
- **Saves** the output in Airtable for easy tracking, tagging, or team collaboration.

💡 **Use Cases**
- Build a content-idea vault from posts you admire.
- Capture inspiration from thought leaders.
- Identify trends based on what you find insightful.
- Supercharge your personal brand or newsletter by turning likes into learning.

🛠 **Prerequisites**
Before using this template, make sure you have:
- ✅ A RapidAPI account and access to the linkedin-api8 endpoint.
- ✅ Your RapidAPI key and the target LinkedIn username.
- ✅ An Airtable account with a base/table set up.

🧰 **Setup Instructions**
1. Clone this template into your n8n instance.
2. Open the Fetch LinkedIn Likes node and enter your LinkedIn username and your RapidAPI key in the headers.
3. Open the Save to Airtable node, connect your Airtable account, and link the correct base (Content Hub) and table (Ideas).
4. Set your desired schedule in the Trigger node.
5. Activate the workflow and you're done!

📋 **Airtable Setup**
Create a base called Content Hub and a table named Ideas with the following columns:

| Column Name | Type | Required | Notes |
|-------------|------------------|----------|----------------------------|
| Title | Single line text | ✅ | Generated from author info |
| Description | Long text | ✅ | Contains post content |
| Source | URL | ✅ | Link to the original post |
| Type | Single select | ✅ | Value: LinkedIn |
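As referenced in the "What It Does" list, here is an illustrative sketch of the 7-day recency filter as an n8n Code node. The `postedDate` field name is an assumption about the scraper's response shape:

```javascript
// Keep only reactions on posts from the last 7 days. `postedDate` is an
// assumed field name from the RapidAPI scraper response.
const sevenDaysAgo = Date.now() - 7 * 24 * 60 * 60 * 1000;

return $input.all().filter(item => {
  const posted = new Date(item.json.postedDate).getTime();
  return !Number.isNaN(posted) && posted >= sevenDaysAgo;
});
```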
by Juan Carlos Cavero Gracia
**Image Carousel Publisher for Instagram and TikTok**

**Description**
This automation template is designed for content creators, digital marketers, and social media managers looking to streamline their image-carousel posting workflow. It automates the process of uploading multiple images as carousels to Instagram and slideshows to TikTok, making your visual content management more efficient across platforms.

**Who Is This For?**
- **Content creators & influencers:** simplify posting image collections and focus more on creating visual content.
- **Digital marketers:** ensure consistent carousel posts across multiple platforms with minimal manual effort.
- **Social media managers:** automate repetitive image-uploading tasks and maintain visual engagement.

**What Problem Does This Workflow Solve?**
Manually uploading image carousels to different platforms can be time-consuming and inconsistent. This workflow addresses these challenges by:
- **Automating multi-image uploads:** processes multiple images and prepares them for platform-specific formats.
- **Supporting cross-platform publishing:** simultaneously posts your image carousels to Instagram and TikTok slideshows.
- **Maintaining visual consistency:** ensures your visual stories remain consistent across platforms.
- **Streamlining batch processing:** handles the technical complexity of multi-image uploads with a single workflow trigger.

**How It Works**
1. **Image selection:** trigger the workflow with your selected images.
2. **Image processing:** the workflow automatically processes and prepares your images for both platforms.
3. **Content distribution:** uploads the images as a carousel to Instagram and as a slideshow to TikTok.
4. **Platform optimization:** formats the uploads according to each platform's requirements.

**Setup**
1. **API token generation:**
   - Visit upload-post.com and create an account.
   - Navigate to the API settings section.
   - Generate a new API token and copy it for the next steps.
2. **Platform configuration:**
   - In the "Upload to Instagram" node: paste your API token in the designated field, configure your Instagram account settings, and set your preferred posting parameters.
   - In the "Upload to TikTok" node: add the same API token, set up your TikTok account credentials, and adjust posting preferences.
3. **Content parameters setup:**
   - Rename the "HTTP Request" node to "Social Media Upload Request".
   - Configure your account information: username, account ID, content title format, and posting schedule (if applicable).
4. **Image source configuration:**
   - Set up your image source directory.
   - Configure image format requirements.
   - Test with sample images before going live.

**About upload-post.com**
Upload-post.com is a third-party service that acts as a bridge between your workflow and social media platforms. It provides:
- Secure API endpoints for multi-platform posting
- Image format validation and optimization
- Queue management for scheduled posts
- Analytics and posting-status tracking
- Cross-platform compatibility handling

**Requirements**
- **Accounts:** an upload-post.com account with access to Instagram and TikTok publishing.
- **API keys:** an upload-post.com API token.
- **Images:** properly formatted images that meet Instagram and TikTok specifications:
  - Instagram: up to 10 images per carousel, 1:1 to 4:5 aspect ratio
  - TikTok: compatible with the slideshow format, 9:16 aspect ratio recommended

Use this template to enhance your visual storytelling, maintain consistency across social platforms, and engage your audience with compelling image carousels and slideshows.
by Aditya Sharma
**Description**
This intelligent n8n automation streamlines the process of collecting, extracting, and scoring resumes sent to a Gmail inbox — making it an ideal solution for recruiters who regularly receive hundreds of applications. The workflow scans incoming emails with attachments, extracts relevant candidate information from resumes using AI, evaluates each candidate based on customizable criteria, and logs their scores alongside contact details in a connected Google Sheet.

**Who Is This For?**
- **Recruiters & hiring managers**: automate the resume-screening process and save hours of manual work.
- **HR teams at startups & SMBs**: quickly evaluate talent without needing large HR-ops infrastructure.
- **Agencies & talent acquisition firms**: screen large volumes of resumes efficiently and with consistent criteria.
- **Solo founders hiring for roles**: use AI to help score and shortlist top candidates from email applications.

**What Problem Does This Workflow Solve?**
Manually reviewing resumes is time-consuming, error-prone, and inconsistent. This workflow solves these challenges by:
- Automatically detecting and extracting resumes from Gmail attachments.
- Using OpenAI to intelligently extract candidate info from unstructured PDFs.
- Scoring resumes using customizable evaluation criteria (e.g., relevant experience, skills, education).
- Logging all candidate data (name, email, LinkedIn, score) in a centralized, filterable Google Sheet.
- Enabling faster, fairer, and more efficient candidate screening.

**How It Works**
1. **Gmail Trigger**: runs on a scheduled interval (e.g., every 6 or 24 hours) and scans a connected Gmail inbox (using OAuth credentials) for unread emails that contain PDF attachments.
2. **Extract Attachments**: downloads the attached resumes from matching emails.
3. **Parse Resume Text**: sends the PDF to OpenAI's API (via GPT-4 or GPT-3.5 with file support, or via base64 + a PDF-to-text tool) and prompts GPT with a structured format to extract fields like name, email, LinkedIn, skills, and education.
4. **Score Resume**: evaluates the resume using AI or predefined scoring logic inside the workflow (e.g., "has X skill = +10 points"); a sketch of rule-based scoring appears at the end of this section.
5. **Log to Google Sheets**: appends a new row to a connected Google Sheet with the candidate name, email address, LinkedIn URL, and resume score.

**Setup**
Accounts & API keys — you'll need accounts and credentials for:
- **n8n** (hosted or self-hosted)
- **Google Cloud Platform** (for the Gmail, Drive, and Sheets APIs)
- **OpenAI** (for GPT model access)

Google Sheet — create a Google Sheet and connect it via the Google Sheets node in n8n. Columns should include: Name, Email, LinkedIn, Score.

Configuration:
1. **Google Cloud**: enable the Gmail API and Google Sheets API, set up OAuth 2.0 credentials in the Google Console, and connect the n8n Gmail, Drive, and Sheets nodes to these credentials.
2. **OpenAI**: generate an API key and use the HTTP Request node or the official OpenAI node to send prompt requests.
3. **n8n workflow**: add the Gmail Trigger, add extraction logic (e.g., filter PDFs), add the OpenAI prompt for resume parsing and scoring, and connect the structured output to a Google Sheets node.

**Requirements**
- Accounts: **n8n**, **Google** (Gmail, Sheets, Drive, Cloud Console), **OpenAI**
- API keys & credentials: OpenAI API key, Google Cloud OAuth credentials, Gmail access scopes (for reading attachments), a configured Google Sheet
- Costs: OpenAI usage (after the free tier) and Google Cloud API usage (if exceeding the free quota)
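As referenced in the "Score Resume" step, here is a minimal rule-based scoring sketch as an n8n Code node. The criteria, weights, and field names are illustrative assumptions; the template can also delegate scoring to the AI prompt instead:

```javascript
// Minimal rule-based scoring sketch (criteria and field names assumed).
const resume = $input.first().json; // parsed fields from the OpenAI step

let score = 0;
const skills = (resume.skills ?? []).map(s => s.toLowerCase());

if (skills.includes("python")) score += 10;  // "has X skill = +10 points"
if (skills.includes("sql")) score += 5;
if (/bachelor|master|phd/i.test(resume.education ?? "")) score += 10;
if (resume.linkedin) score += 5;             // complete profiles rank higher

return [{
  json: {
    Name: resume.name,
    Email: resume.email,
    LinkedIn: resume.linkedin,
    Score: score,
  },
}];
```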
by Pavel Duchovny
Building agentic AI workflows often requires multiple moving parts: memory management, document retrieval, vector similarity, and orchestration. Until now, these pieces had to be custom-wired. With the new native n8n nodes for MongoDB Atlas, that overhead drops dramatically. With just a few clicks you can:

- Store and recall long-term memory from MongoDB
- Query vector embeddings stored in Atlas Vector Search
- Use these results in your LLM chains and automation logic

In this example we present an ingestion flow and an AI Agent flow focused on travel planning. The points of interest that we want the agent to know about are ingested into the vector store, and the AI Agent uses the vector store tool to fetch relevant context about those points of interest when it needs to.

**Prerequisites**
- MongoDB Atlas project and cluster
- Valid OpenAI API key for embeddings (can be another provider)
- Gemini API key for the LLM (can be another provider)

**How it works:**
There are two main flows.

**Ingestion flow**: receives a document from a webhook and uses the MongoDB Atlas Vector Store node to embed the document title and description into the points_of_interest collection (an illustrative payload appears at the end of this section). Embeddings are stored in a field named embedding. The embeddings used here are OpenAI's, but any supported embedder can be used.

**AI Agent flow**: an AI Agent node with chat memory stored in MongoDB Atlas and a Vector Search node as a tool:
- **Chat Message Trigger**: chatting with the AI Agent triggers the conversation store in the MongoDB Chat Memory node.
- **Vector Search tool**: when data is needed, such as a location search or details, the agent calls the Vector Search tool, which uses an Atlas Vector Search index created on the points_of_interest collection:

```json
// index name: "vector_index"
// If you change the embedding provider, make sure numDimensions matches the model.
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```

**Additional Resources**
- MongoDB Atlas Vector Search
- n8n Atlas Vector Search docs
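As referenced in the ingestion flow, here is a hypothetical example of the webhook payload for one point of interest. The field names are assumptions; the title and description are what get embedded:

```javascript
// Hypothetical webhook payload for ingesting one point of interest.
const pointOfInterest = {
  title: "Sagrada Família",
  description: "Gaudí's landmark basilica in Barcelona; book timed tickets ahead.",
  city: "Barcelona",
  country: "Spain",
};

// The title and description are embedded (1536 dimensions, matching the
// index above) and stored in the `embedding` field of points_of_interest.
```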