by Lucas Peyrin
How it works

This template provides a complete, ready-to-use web application for generating high-quality AI prompts. It features a user-friendly web form where you describe your goal, and it leverages an AI model (Google Gemini) to create a structured, reusable prompt for you.

The workflow is a full-stack application built entirely within n8n:

- **Frontend (The Form):** A Form Trigger node creates a beautiful, public-facing web form. Here, a user describes the prompt they need and selects which structural components to include (like system instructions, examples, or input variables).
- **Backend (The AI Logic):** A LangChain Chain node takes the user's request and constructs a "meta-prompt": a set of instructions for the AI on how to generate the final prompt. The Google Gemini node executes this meta-prompt, creating a well-structured output with clear sections and tags.
- **The Result (The Webpage):** After generation, the user is automatically redirected to a new URL. This URL is handled by another Webhook node, which serves a custom-coded HTML page. This beautiful, dark-themed webpage displays the generated prompt and includes a one-click "Copy" button, making it easy to use the result immediately (a small sketch of this behaviour appears at the end of this section).

This template is a perfect example of how to build interactive web tools with n8n, combining a user interface, backend logic, and a dynamic web response in a single workflow.

Set up steps

Setup time: ~1-3 minutes

This workflow requires a Google AI credential to function.

1. **Configure Google AI Credentials:**
   - This workflow uses a Google Gemini model, so you will need a Google AI API key.
   - In n8n, go to Credentials and click Add credential.
   - Search for Google Gemini and enter your API key.
   - Back in the workflow, open the Gemini 2.5 Flash node and select your newly created credential from the dropdown.
2. **Activate the Workflow:** Click the Active toggle in the top-right corner to turn the workflow on.
3. **Access Your Prompt Maker:**
   - Open the Prompt Request (Form Trigger) node and copy the Public URL. This is the link to your new web application!
   - Open the link in your browser, fill out the form, and see the magic happen.

Note: This workflow uses environment variables like {{ $env.WEBHOOK_URL }} to build the redirect URL. These are typically set automatically by n8n and should work out of the box on most standard n8n setups.
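The one-click "Copy" button on the result page boils down to a few lines of browser JavaScript. A minimal sketch, assuming hypothetical element IDs (the template's actual markup may differ):

```javascript
// Copy the generated prompt to the clipboard on click.
// "copy-btn" and "prompt-output" are illustrative IDs, not the template's real ones.
document.getElementById("copy-btn").addEventListener("click", async () => {
  const prompt = document.getElementById("prompt-output").innerText;
  await navigator.clipboard.writeText(prompt); // Clipboard API, secure contexts only
});
```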
by Belgacem Dhiflaoui
Description

What Problem Does This Solve? 🛠️

This workflow automates the process of extracting key information from resumes received as email attachments and storing that data in a structured format within a Supabase database. It eliminates the manual effort of reviewing each resume, identifying relevant details, and entering them into a database. This streamlines the hiring process, making it faster and more efficient for recruiters and HR professionals.

Target audience: Recruiters, HR departments, and talent acquisition teams.

What Does It Do? 🌟

- Monitors a designated email inbox for new messages with resume attachments.
- Extracts key information such as name, contact details, education, work experience, and skills from the attached resumes.
- Cleans and formats the extracted data.
- Stores the processed data securely in a Supabase database.

Key Features 📋

- Automatic email monitoring for resume attachments.
- Intelligent data extraction from various resume formats (e.g., PDF, DOC, DOCX).
- Customizable data fields to capture specific information.
- Seamless integration with Supabase for data storage.
- Uses OpenRouter to streamline API key management for services such as AI-powered parsing.

Setup Instructions

Prerequisites ⚙️

- **n8n Instance**: Self-hosted or cloud instance of n8n.
- **Email Account**: Gmail account with Gmail API access for receiving resumes.
- **Supabase Account**: A Supabase project with a database/table ready to store extracted resume data. You'll need the Supabase URL and API key.
- **OpenRouter Account**: For managing AI model API keys centrally when using LLM-based resume parsing.

Installation Steps 📦

1. **Import the Workflow:** Copy the exported workflow JSON and import it into your n8n instance via "Import from File" or "Import from URL".
2. **Configure Credentials:** In n8n > Credentials, add credentials for:
   - Email account (Gmail API): Provide the Client ID and Client Secret from the Google Cloud Platform.
   - Supabase: Provide the Supabase URL and the anon public API key.
   - OpenRouter (optional): Add your OpenRouter API key for use with any AI-powered resume parsing nodes.
   Assign these credentials to their respective nodes: Gmail Trigger → Email credentials, Supabase Insert → Supabase credentials, AI Parsing Node → OpenRouter credentials.
3. **Set Up Supabase Table:** Create a table in Supabase with columns such as name, email, phone, education, experience, skills, received_date, etc. Make sure the field names align with the structure used in your workflow.
4. **Customize Nodes:** Optionally modify the workflow to use an OpenAI model directly for field extraction, replacing the Basic LLM Chain node that utilizes OpenRouter.
5. **Test the Workflow:** Send a test email with a resume attachment. Check n8n's execution log to confirm the workflow triggered, parsed the data, and inserted it into Supabase. Verify data integrity in your Supabase table.

How It Works

High-Level Workflow 🔍

1. Email Monitoring: Triggered when a new email with an attachment is received (via the Gmail API).
2. Attachment Check: Verifies the email contains at least one attachment.
3. Prepare Data: Extracts the attachment and prepares it for analysis.
4. Data Extraction: Uses an OpenRouter-powered LLM (if configured) to extract structured information from the resume.
5. Data Storage: The structured information is saved into the Supabase database.

Node Names and Actions (Example)

- **Gmail Trigger:** Triggers when a new email is received.
- **IF:** Checks whether the received email includes any attachments.
- **Get Attachments:** Retrieves attachments from the triggering email.
- **Prepare Data:** Prepares the attachment content for processing.
- **Basic LLM Chain:** Uses an AI model via OpenRouter to extract relevant resume data and returns it as structured fields.
- **Supabase-Insert:** Inserts the structured resume data into your Supabase database.
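To make the table setup concrete, here is a sketch of the insert the Supabase-Insert node performs, shown with supabase-js for illustration. The table name and field names are the example columns from step 3; they must match whatever your workflow actually extracts:

```javascript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);

// Example row shape; "resumes" and these columns are illustrative names.
const { error } = await supabase.from("resumes").insert({
  name: "Jane Doe",
  email: "jane@example.com",
  phone: "+1 555 0100",
  education: "BSc Computer Science",
  experience: "5 years backend development",
  skills: "Python, SQL, n8n",
  received_date: new Date().toISOString(),
});
if (error) console.error(error);
```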
by Akhil Varma Gadiraju
n8n Workflow: Sync Workflows with GitLab

How It Works

This workflow ensures that your self-hosted n8n workflows are version-controlled in a GitLab repository. It compares each current workflow from n8n with its stored counterpart in GitLab. If any differences are detected, the GitLab file is updated with the latest version.

Core Logic:

1. Retrieve Workflows – Fetch all workflows from the n8n REST API.
2. Compare with GitLab – For each workflow, fetch the corresponding file from GitLab and compare the JSON.
3. Update if Changed – If differences exist, commit the updated workflow to GitLab using its API (see the commit sketch at the end of this section).

Setup

Before using the workflow, ensure the following:

Prerequisites:

- **n8n**: Self-hosted instance with access to the /rest/workflows API.
- **GitLab**: A repository where workflows will be stored, and a Personal Access Token (PAT) with api and write_repository permissions.
- **n8n Nodes Required**:
  - HTTP Request (to call the n8n and GitLab APIs)
  - Code or Function nodes (for diffing and formatting)
  - Looping (SplitInBatches or similar)

Configuration: Set environment variables or workflow credentials for:

- GITLAB_TOKEN
- GITLAB_REPO
- GITLAB_BRANCH (e.g., main)
- GITLAB_FILE_PATH_PREFIX (e.g., n8n-workflows/)

How to Use

1. Import the workflow into your n8n instance.
2. Configure GitLab API credentials: set the GitLab PAT as a header in the HTTP Request node: Private-Token: {{ $env.GITLAB_TOKEN }}
3. Map workflows to GitLab paths: use the workflow name or ID to create the file path, e.g. n8n-workflows/workflow-name.json
4. Trigger the workflow: it can be triggered manually, or scheduled to run at intervals (e.g., daily).
5. Review commits in GitLab: each updated workflow will be committed with a message like "Update workflow: Sample Workflow".

Disclaimer

- This workflow does not handle merge conflicts or manual edits made directly in GitLab. Always ensure proper coordination if multiple sources are modifying workflows.
- Only structural changes are tracked. Non-functional metadata (like timestamps or IDs) may trigger false positives unless filtered.
- Use at your own risk. Test in a safe environment before applying to production workflows.
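For reference, the "Update if Changed" step maps onto GitLab's REST API v4 update-file endpoint. A sketch of the call the HTTP Request node makes, assuming the environment variables listed above and a placeholder workflow:

```javascript
// GITLAB_REPO is the project path, e.g. "group/repo"; both it and the file
// path must be URL-encoded for the API.
const project = encodeURIComponent(process.env.GITLAB_REPO);
const filePath = encodeURIComponent(
  `${process.env.GITLAB_FILE_PATH_PREFIX}sample-workflow.json`
);
const workflowJson = { name: "Sample Workflow", nodes: [], connections: {} }; // placeholder

await fetch(
  `https://gitlab.com/api/v4/projects/${project}/repository/files/${filePath}`,
  {
    method: "PUT", // update an existing file (use POST to create one)
    headers: {
      "Private-Token": process.env.GITLAB_TOKEN,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      branch: process.env.GITLAB_BRANCH || "main",
      content: JSON.stringify(workflowJson, null, 2),
      commit_message: `Update workflow: ${workflowJson.name}`,
    }),
  }
);
```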
by Ranjan Dailata
Disclaimer

This template is only available on n8n self-hosted as it makes use of the community node for MCP Client.

Who this is for?

The Chat Conversations with Bright Data MCP Search Engines & Google Gemini workflow is designed for users who need real-time, AI-enhanced conversations powered by live search engine results. This workflow is tailored for:

- Data Analysts - who want live, search-based data fused with AI reasoning.
- Marketing Researchers - seeking up-to-the-minute market or competitor insights via conversational AI.
- Product Managers - exploring user needs, market trends, and competitor analysis in real time.
- AI Developers - building dynamic applications that combine live search data with intelligent conversation agents.
- Growth Hackers - who need fast, conversational research tools for campaign ideation, outreach, or content creation.

What problem is this workflow solving?

Traditional chatbots and AI systems often rely on static, outdated data. This workflow enables AI agents to fetch live search engine data and converse intelligently about it, making interactions dynamic, accurate, and highly contextual. It solves the major gaps of:

- Outdated Knowledge: Regular chatbots lack up-to-date information from live web searches.
- Manual Search Fatigue: Manually searching for information and interpreting it is time-consuming.
- Context Bridging: Connecting search results into meaningful, conversational replies requires human-level reasoning.

What this workflow does?

1. Accepts a user's conversational query input.
2. Triggers a search request to Bright Data's MCP Search Engines API (Google, Bing, etc.) based on the query.
3. Waits for the search task to complete.
4. Retrieves real-time search results.
5. Feeds the search results and the original question into Google Gemini.
6. Generates a human-like, contextually accurate AI response combining live information and conversational flow.
7. Outputs the response back into a chat app.

Pre-conditions

- Knowledge of Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine. Also, complete the "Account Setup" as mentioned in the @brightdata/mcp URL.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data Web Unlocker API Token into the Environments textbox as API_TOKEN=<your-token> (see the credential sketch after this section).
7. Update the HTTP Request for Webhook Notification node to send the Webhook notification for chat responses.

How to customize this workflow to your needs

- Change Search Engines: Add or remove the Search Engine MCP tools based upon the Bright Data MCP Server updates.
- Expand Outputs: Send AI chat responses to Slack, Discord, custom chat UIs, WhatsApp, or CRM systems.
- Store conversation logs in a database (PostgreSQL, MongoDB, etc.) for future audits or training.
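For setup step 6, the MCP Client (STDIO) credential essentially describes how to launch the Bright Data MCP server as a subprocess. An illustrative shape of the values involved; the exact field labels in n8n-nodes-mcp may differ, so treat this purely as a guide:

```javascript
// Illustration only: the values an MCP Client (STDIO) credential carries.
const mcpClientStdioCredential = {
  command: "npx",                                 // launches the MCP server
  args: "-y @brightdata/mcp",                     // the Bright Data MCP package
  environments: "API_TOKEN=<your-web-unlocker-token>", // goes in the Environments textbox
};
```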
by Alfonso Corretti
Who is this for?

Everyone! Did you ever dream of asking an AI "what hotel did I stay in for holidays last summer?" or "what were my marks last semester like?" Dream no more, as vector similarity searches and this workflow are the foundations to make it possible (as long as the information appears in your e-mails 😅).

100% Local and Open Source!

This workflow is designed to use locally-hosted open source software: Ollama as the LLM provider, nomic-embed-text as the embeddings model, and pgvector as the vector database engine, on top of Postgres.

Structured AND Vectorized

This workflow combines structured and semantic search on your e-mail (see the query sketch at the end of this section). No need for enterprise setups! Leverage the convenience of n8n and open source to get a bleeding-edge solution.

Setup

1. You will need a PGVector database with embeddings for all your email. Use my other template, Gmail to Vector Embeddings with PGVector and Ollama, to set it up in a breeze!
2. Make a copy of my Email Assistant: Convert Natural Language to SQL Queries with Phi4-mini and PostgreSQL; you will need it for structured searches.
3. Install this template and modify the Call the SQL composer Workflow step to point at your copy of the SQL workflow.
4. Adjust the rest of the necessary steps: Telegram Trigger, AI Chat model, AI Embeddings...
5. Activate the workflow and chat away!
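A minimal sketch of what the combined structured + semantic query looks like at the Postgres level. The table and column names (emails, embedding, subject, sent_at) are assumptions based on the companion template, not guaranteed to match it exactly:

```javascript
import pg from "pg";

const client = new pg.Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

// The query embedding would come from Ollama's nomic-embed-text model
// (768 dimensions), serialized into pgvector's '[x,y,...]' text form.
const queryEmbedding = JSON.stringify([/* 768 floats */]);

const { rows } = await client.query(
  `SELECT subject, sent_at
     FROM emails
    WHERE sent_at >= $2                 -- structured filter
 ORDER BY embedding <=> $1::vector      -- semantic: pgvector cosine distance
    LIMIT 5`,
  [queryEmbedding, "2024-06-01"]
);
await client.end();
```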
by Yaron Been
🔍 Competitor Review Scraper & Ad Copy Generator (Trustpilot + Bright Data + GPT-4o-mini)

📌 Who It's For

Marketers, business owners, and agencies looking to:

- Analyze competitor pain points
- Generate high-impact Facebook ad copy
- Automate manual data processing

🧩 How It Works

This n8n-based workflow combines Bright Data, Google Sheets, and OpenAI to scrape, process, and transform Trustpilot reviews into ready-to-use ad copy.

🔹 Step-by-Step Breakdown

1. Trigger (Manual Form Submission) - Input required: the competitor's Trustpilot URL and a review timeframe (30d, 3m, 6m, 12m).
2. Fetch Reviews - Calls Bright Data's Dataset API with the URL & timeframe, then polls until the snapshot is ready (see the polling sketch at the end of this section).
3. Retrieve & Store - Extracts all reviews and saves them into a structured Google Sheet.
4. Filter & Aggregate - Filters to only 1-2 star reviews and summarizes common negative feedback.
5. Generate Ad Copy - Sends the summary to OpenAI GPT-4o-mini, which produces 3 variations of ad copy targeting pain points.
6. Distribute Insights - Sends the ad copy + summary via email to the marketing team.

✅ Requirements

- LLM account
- Google Sheets - copy this sheet: https://docs.google.com/spreadsheets/d/1Zi758ds2_aWzvbDYqwuGiQNaurLgs-leS9wjLWWlbUU/edit?gid=0#gid=0
- Bright Data account

⚙️ Setup Instructions

**Step 1: Google Sheets**
- Copy the Google Sheets template above.
- Do not change the column headers.

**Step 2: n8n Credential Setup**
- Google Sheets: OAuth2
- Bright Data: Authorization header
- OpenAI: API key for GPT-4o-mini

**Step 3: Import Workflow**
- Import the .json file into n8n.
- Configure your sheet + dataset ID.
- Adjust the GPT prompts as needed.

**Step 4: Run the Workflow**
- Trigger via the form.
- Receive ad copy + review insights via email.

🧠 Tips & Best Practices

- Bright Data snapshots may take time; polling is handled for you.
- Focusing on 1-2 star reviews yields the most actionable pain points.
- You can customize the GPT-4o-mini prompts for tone or vertical.

💬 Support & Feedback

Need help or customization?
📧 Email: Yaron@nofluff.online
📺 YouTube: @YaronBeen
🔗 LinkedIn: linkedin.com/in/yaronbeen
📚 Bright Data Docs: docs.brightdata.com/introduction
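A hedged sketch of the trigger-and-poll pattern behind the "Fetch Reviews" step. The endpoint paths follow Bright Data's Dataset API docs, but the dataset ID and input field names below are placeholders you should verify against your own dataset:

```javascript
const DATASET_ID = "gd_XXXXXXXX"; // your Trustpilot dataset ID (placeholder)
const headers = { Authorization: `Bearer ${process.env.BRIGHTDATA_TOKEN}` };

// Trigger a snapshot for the competitor's Trustpilot page.
const trigger = await fetch(
  `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}`,
  {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify([{ url: "https://www.trustpilot.com/review/example.com" }]),
  }
).then((r) => r.json());

// Poll until the snapshot is ready (HTTP 202 = still building), then download.
let res;
do {
  await new Promise((wait) => setTimeout(wait, 15000));
  res = await fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${trigger.snapshot_id}?format=json`,
    { headers }
  );
} while (res.status === 202);
const reviews = await res.json();
```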
by Ranjan Dailata
Who this is for?

Indeed Data Scraper & Summarization with Airtable, Bright Data and Google Gemini is an automated workflow that extracts company profile information from Indeed using Bright Data Web Unlocker, transforms the data using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:

- Recruiters and HR teams who want quick summaries of companies listed on Indeed.
- Market researchers and analysts needing structured insights into businesses.
- Founders, investors, and consultants scouting potential competitors, partners, or clients.
- No-code enthusiasts looking to automate data extraction and enrichment pipelines without manual scraping or parsing.

What problem is this workflow solving?

Manually gathering structured information about companies on Indeed is time-consuming and inconsistent. Pages vary in structure, and extracting clean, digestible summaries can require technical scraping expertise. This workflow automates:

- Extracting company data from Indeed reliably using Bright Data Web Unlocker.
- Cleaning and summarizing the extracted content using the Google Gemini LLM.
- Storing structured insights directly into Airtable for easy access and further workflows.

It eliminates manual research, saves hours, and produces AI-enhanced, easily searchable records.

What this workflow does

1. Triggers on-demand.
2. Pulls company page URLs from Airtable.
3. Scrapes content from each Indeed company profile using Bright Data Web Unlocker (see the request sketch at the end of this section).
4. Sends the raw HTML to Google Gemini for extraction and summarization.
5. Sends the summarized data to other platforms via a Webhook notification mechanism.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials for Bright Data. The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
5. In n8n, configure the Airtable Personal Access Token account under Credentials.
6. Update the Webhook Notifier with the Webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible, whether you're a company, a market researcher, an entrepreneur, or a data analyst. Here's how you can adapt it to fit your specific use case:

- **Extend the scraper**: Modify the Bright Data targets to pull job listings, salaries, or employee reviews via the Airtable data source.
- **Customize the summary prompt**: Ask Gemini to extract different attributes, such as hiring trends, practices, etc.
- **Route the output to different destinations**: Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
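A sketch of the scraping call behind step 3, based on Bright Data's Web Unlocker /request endpoint. The zone name and Indeed URL are placeholders; the zone must match the one you created in setup:

```javascript
const html = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.BRIGHTDATA_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "web_unlocker1",                             // your Web Unlocker zone name
    url: "https://www.indeed.com/cmp/example-company", // pulled from Airtable in the workflow
    format: "raw",                                     // return the raw HTML
  }),
}).then((r) => r.text());
```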
by Jimleuk
This n8n template demonstrates how to build your own Qdrant MCP server to extend its functionality beyond that of the official implementation. This n8n implementation exposes other cool API features from Qdrant, such as the facet search, grouped search and recommendations APIs. With this, we can build an easily customisable and maintainable Qdrant MCP server for business intelligence.

This MCP example is based off an official MCP reference implementation which can be found here - https://github.com/qdrant/mcp-server-qdrant

How it works

- An MCP server trigger is used and connected to 5 custom workflow tools. We're using custom workflow tools as there are quite a few nodes required for each task.
- We use a mix of n8n-supported Qdrant nodes for simple operations such as inserting documents and similarity search, and the HTTP node to hit the Qdrant API directly for facet search, group search and recommendations (see the grouped-search sketch at the end of this section).
- We use "Edit Field" and "Aggregate" nodes to return suitable responses to the MCP client.

How to use

This Qdrant MCP server allows any compatible MCP client to manage a Qdrant collection by supporting select and create operations. You will need to have a collection available before you can use this server; use the prerequisite manual steps to get started!

Connect your MCP client by following the n8n guidelines here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop

Try the following queries in your MCP client:

- "Can you help me list the available companies in the collection?"
- "What do customers say about product deliveries from company X?"
- "What do customers of company X and company Y say about product ease of use?"

Requirements

- Qdrant for the vector store. This can be a cloud-hosted instance or one you self-host internally.
- An MCP client or agent, such as Claude Desktop - https://claude.ai/download

Customising this workflow

- Depending on what queries you'll receive, adjust the tool inputs to make it easier for the agent to set the right parameters.
- Not interested in reviews? The techniques shared in this template can be used for other types of collections.
- Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
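For the grouped-search tool, the HTTP node talks straight to Qdrant's REST API. A sketch of that request; the collection name "reviews" and the "company" payload field are examples for this template's use case, not part of the official MCP implementation:

```javascript
const queryEmbedding = [/* embedding of the user's question */];

const groups = await fetch(
  `${process.env.QDRANT_URL}/collections/reviews/points/search/groups`,
  {
    method: "POST",
    headers: {
      "api-key": process.env.QDRANT_API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      vector: queryEmbedding,
      group_by: "company",   // one group of hits per distinct company
      limit: 3,              // number of groups to return
      group_size: 5,         // hits per group
      with_payload: true,
    }),
  }
).then((r) => r.json());
```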
by Ranjan Dailata
Disclaimer

This template is only available on n8n self-hosted as it makes use of the community node for MCP Client.

Who this is for?

The Extract, Transform LinkedIn Data with Bright Data MCP Server & Google Gemini workflow is an automated solution that scrapes LinkedIn content via the Bright Data MCP Server, then transforms the response using a Gemini LLM. The final output is sent via webhook notification and also persisted on disk.

This workflow is tailored for:

- Data Analysts: who require structured LinkedIn datasets for analytics and reporting.
- Marketing and Sales Teams: looking to enrich lead databases, track company updates, and identify market trends.
- Recruiters and Talent Acquisition Specialists: who want to automate candidate sourcing and company research.
- AI Developers: integrating real-time professional data into intelligent applications.
- Business Intelligence Teams: needing current and comprehensive LinkedIn data to drive strategic decisions.

What problem is this workflow solving?

Gathering structured and meaningful information from the web is traditionally slow, manual, and error-prone. This workflow solves:

- Reliable web scraping using the Bright Data MCP Server LinkedIn tools.
- LinkedIn person and company web scraping with AI Agents set up with the Bright Data MCP Server tools.
- Data extraction and transformation with the Google Gemini LLM.
- Persisting the LinkedIn person and company info to disk.
- Sending a Webhook notification with the LinkedIn person and company info.

What this workflow does?

This n8n workflow performs the following steps:

1. Trigger: Start manually.
2. Input URL(s): Specify the LinkedIn person and company URLs.
3. Web Scraping (Bright Data): Use Bright Data's MCP Server LinkedIn tools to extract the person and company data.
4. Data Transformation & Aggregation: Use the Google LLM to handle the data transformation.
5. Store / Output: Save the results to disk and send a Webhook notification.

Pre-conditions

- Knowledge of Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel.
6. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
7. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.
8. Update the LinkedIn person and company URLs in the workflow.
9. Update the Webhook HTTP Request node with the Webhook endpoint of your choice (see the payload sketch after this section).
10. Update the file name and path to persist on disk.

How to customize this workflow to your needs

- Different inputs: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- Data extraction: Modify the LinkedIn Data Extractor node with a suitable prompt to format the data as you wish.
- Outputs: Update the Webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
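An illustrative shape for the webhook notification from setup step 9. The actual fields depend entirely on the prompt you give the LinkedIn Data Extractor node, so treat this as a sketch rather than the template's real payload:

```javascript
// Hypothetical payload shape for the Webhook HTTP Request node.
const payload = {
  person: { name: "...", headline: "...", location: "...", summary: "..." },
  company: { name: "...", industry: "...", employees: "...", about: "..." },
};

await fetch(process.env.NOTIFY_WEBHOOK_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});
```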
by Lucien
Overview

Automated LinkedIn content generator that:

- Fetches trending AI news using NewsAPI
- Enhances content with Qdrant vector store context
- Generates professional LinkedIn posts using GPT-4o-mini
- Tracks email interactions in Google Sheets

🛠️ Prerequisites

- API Keys: NewsAPI, OpenAI (GPT-4o-mini), Qdrant
- Accounts: Gmail OAuth, Google Sheets, LinkedIn developer API
- Environment Variables: OPENAI_API_KEY, NEWSAPI_KEY, QDRANT_URL/QDRANT_API_KEY

📁 Google Sheets Setup

Create a spreadsheet with these columns:

- ISO date
- Email address
- Unique ID
- "Approve" or "Reject"

⚙️ Setup Instructions

1. Pre-populate Qdrant:
   - Create a collection named "posts" with LinkedIn post examples.
   - Add 10+ example posts for style reference.
2. Node Configuration:
   - Update the Gmail credentials (OAuth2).
   - Set fromEmail/toEmail in the email nodes.
   - Configure the Google Sheets document IDs.
3. Test Workflow:
   - Run the Schedule Trigger manually first.
   - Verify that email notifications work.
   - Check Qdrant vector store connectivity.

🎨 Customization Options

- Tone adjustment: Modify the system message in "AI Agent".
- Post style: Update the prompt in "Generate LinkedIn Post".
- Filter criteria: Edit the NewsAPI URL parameters (see the sketch below).
- Scheduling: Change the interval in the Schedule Trigger.
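A sketch of the NewsAPI request behind "Fetches trending AI news". The query parameters shown are examples of the filter criteria you can edit; swap in your own search terms:

```javascript
const url = new URL("https://newsapi.org/v2/everything");
url.search = new URLSearchParams({
  q: '"artificial intelligence" OR "generative AI"', // example query
  language: "en",
  sortBy: "publishedAt", // newest first
  pageSize: "10",
  apiKey: process.env.NEWSAPI_KEY,
}).toString();

const { articles } = await fetch(url).then((r) => r.json());
```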
by Joseph
📄 Google Script Workflow: Upload File from URL to Google Drive (via n8n)

🔧 Purpose:

This lightweight Google Apps Script acts as a server endpoint that receives a file URL (from n8n), downloads the file, uploads it to your specified Google Drive folder, and responds with the uploaded file's Drive URL. This is useful for large video/audio files that n8n cannot handle directly via HTTP Download nodes.

🚀 Setup Steps:

1. Create a New Script Project

- Go to https://script.google.com
- Click "New Project"
- Rename the project to something like: DriveUploader

2. Paste the Script Code

Replace the default Code.gs content with the following (your custom script). An optional JSON-returning variant is shown at the end of this guide.

```javascript
function doPost(e) {
  const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here

  try {
    const data = JSON.parse(e.postData.contents);

    // 🔒 Check for correct secret key
    if (!data.secret || data.secret !== SECRET_KEY) {
      return ContentService.createTextOutput("Unauthorized")
        .setMimeType(ContentService.MimeType.TEXT);
    }

    const videoUrl = data.videoUrl;
    const folderId = 'YOUR_FOLDER_ID_HERE'; // Replace with your target folder ID
    const folder = DriveApp.getFolderById(folderId);

    const response = UrlFetchApp.fetch(videoUrl);
    const blob = response.getBlob();
    const file = folder.createFile(blob);
    file.setName('uploaded_video.mp4'); // You can customize the name

    return ContentService.createTextOutput(file.getUrl())
      .setMimeType(ContentService.MimeType.TEXT);
  } catch (err) {
    return ContentService.createTextOutput("Error: " + err.message)
      .setMimeType(ContentService.MimeType.TEXT);
  }
}
```

3. Generate & Set Up Secret Key

To allow only authorized POST requests to your script, generate a secret key from any reliable key generator. You can head over to acte, click generate, and copy the "Encryption key 256". Paste it into the 'your-strong-secret-here' placeholder in your script, then click save:

```javascript
const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here
```

4. Replace Folder ID in Code

- Open the target Drive folder in your browser.
- The folder ID is the part of the URL after /folders/
- Example: https://drive.google.com/drive/u/0/folders/1Xabc12345678defGHIJklmn
- Paste that ID into the script: const folderId = "1Xabc12345678defGHIJklmn";

5. Set Up Deployment as Web App

- Click "Deploy" > "Manage Deployments" > "New Deployment"
- Under Select type, choose Web app
- Description: Upload from URL to Drive
- Execute as: Me
- Who has access: Anyone
- Click Deploy, then authorize the script when prompted
- Copy the Web App URL

📤 How to Use in n8n

1. HTTP Request Node

- Method: POST
- URL: (your web app URL)
- Body Content Type: JSON
- Body:

```json
{
  "videoUrl": "https://example.com/path/to/your.mp4",
  "secret": "your-strong-secret-here"
}
```

- videoUrl: the file download URL
- secret: the secret key you set up in the script

2. Rename Node

A simple Drive update node that renames the file, using the Drive file URL returned by the script.
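Optional variant (a sketch, not part of the original script): if your n8n flow also needs the Drive file ID, you can replace the success return statement inside doPost with a JSON response:

```javascript
// Drop-in replacement for the success return inside doPost:
// responds with { id, url } instead of a bare URL string.
return ContentService
  .createTextOutput(JSON.stringify({ id: file.getId(), url: file.getUrl() }))
  .setMimeType(ContentService.MimeType.JSON);
```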
by Jimleuk
This n8n template uses existing emails from customers as context to customise and "finetune" outreach emails to them using AI.

By now, it should be common knowledge that we can leverage AI to generate unique emails, but in a way they can remain generic, as the AI lacks the customer context to be truly personalised. One way to solve this is by pulling in a source of customer data - and what better way than by using existing email correspondence?

How it works

- Customers to target are pulled from Hubspot and each customer is then run in a loop. We're using a loop as the retrieved emails for each customer become separate items, and a loop helps with item references.
- We connect to our Gmail account to pull all emails received from the customer. The contents of the emails are suitable for building a short persona of the customer.
- We use the Information Extractor to get our AI model to pull out the key attributes of this persona, such as decision-making style and communication preferences (see the sketch below).
- With this persona, we can now pass this to our AI model to generate a personalised outreach email specifically for our customer.
- Finally, a draft email is created for human review before sending. If you would rather send the email straight away, this is also possible.

How to use

- Define the topic of the outreach email in the "variables" node. This directs the AI on what outreach email to generate.
- Ensure the emails are pulled from the right account. If emails may contain sensitive data, adjust the filters and text parsing to ensure these are not leaked to the AI (which might then leak into the generated email).

Requirements

- Hubspot for the contacts list
- OpenAI for the LLM
- Gmail for existing emails and sending emails

Customising this workflow

- Not using Hubspot? Any CRM would work just as well, or even a simple CSV file!
- If you have customers' past deals or engagements in your CRM, consider using this as additional context for the AI to use.
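An illustrative output shape for the Information Extractor step. The attribute names are examples, not the template's actual schema; adjust them to whichever persona traits matter for your outreach:

```javascript
// Hypothetical persona attributes the Information Extractor could be
// configured to pull from a customer's past emails.
const personaSchema = {
  decision_making_style: "analytical | intuitive | consensus-driven",
  communication_preference: "formal | casual | data-heavy",
  key_interests: ["pricing", "integrations"],
  tone_of_past_emails: "friendly, concise",
};
```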