by Arnaud MARIE
# Monthly Spotify Track Archiving and Playlist Classification

This n8n workflow automatically archives your monthly Spotify liked tracks in a Google Sheet, along with playlist details and descriptions. Based on this data, Claude 3.5 is used to classify each track into multiple playlists and add them in bulk.

## Who is this template for?

This workflow template is perfect for Spotify users who want to systematically archive their listening history and organize their tracks into custom playlists.

## What problem does this workflow solve?

It automates the monthly process of tracking, storing, and categorizing Spotify tracks into relevant playlists, helping users maintain well-organized music collections and keep a historical record of their listening habits.

## Workflow Overview

- **Trigger Options**: Can be initiated manually or on a set schedule.
- **Spotify Playlists Retrieval**: Fetches the current playlists and filters them by owner.
- **Track Details Collection**: Retrieves information such as track ID and popularity from the user's library.
- **Audio Features Fetching**: Uses Spotify's API to get audio features for each track (see the sketch after the table below).
- **Data Merging**: Combines track information with its audio features.
- **Duplicate Checking**: Filters out tracks that have already been logged in Google Sheets.
- **Data Logging**: Archives new tracks into a Google Sheet.
- **AI Classification**: Uses an AI model to classify tracks into suitable playlists.
- **Playlist Updates**: Adds classified tracks to the corresponding playlists.

## Setup Instructions

1. **Credentials Setup**: Make sure you have valid Spotify OAuth2 and Google Sheets access credentials.
2. **Trigger Configuration**: Choose between manual or scheduled triggers to start the workflow.
3. **Google Sheets Preparation**: Set up a Google Sheet with the necessary structure for logging track details.
4. **Spotify Playlists Setup**: Have a diverse range of playlists with exhaustive descriptions (see the examples below) ready to accommodate different music genres and moods.

## Customization Options

- **Adjust Playlist Conditions**: Modify the AI model's classification criteria to align with your personal music preferences.
- **Enhance Track Analysis**: Incorporate additional audio features or external data sources for more refined track categorization.
- **Personalize Data Logging**: Customize which track attributes to log in Google Sheets based on your archival preferences.
- **Configure Scheduling**: Set a preferred schedule for periodic track archiving, e.g., monthly or weekly.

## Cost Estimate

For 300 tracks, the token usage amounts to approximately 60,000 tokens (58,000 for input and 2,000 for completion), costing around 20 cents with Claude 3.5 Sonnet (as of October 2024: 58,000 input tokens at $3 per million is about $0.17, plus 2,000 completion tokens at $15 per million is about $0.03).

## Playlist Description Examples

| Playlist Name | Playlist Description |
|---|---|
| Classique | Indulge in the timeless beauty of classical music with this refined playlist. From baroque to romantic periods, this collection showcases renowned compositions. |
| Poi | Find your flow with this dynamic playlist tailored for poi, staff, and ball juggling. Featuring rhythmic tracks that complement your movements. |
| Pro Sound | Boost your productivity and focus with this carefully selected mix of concentration-enhancing music. Ideal for work or study sessions. |
| ChillySleep | Drift off to dreamland with this soothing playlist of sleep-inducing tracks. Gentle melodies and ambient sounds create a peaceful atmosphere for restful sleep. |
| To Sing | Warm up your vocal cords and sing your heart out with karaoke-friendly tracks. Featuring popular songs, perfect for solo performances or group sing-alongs. |
| 1990s | Relive the diverse musical landscape of the 90s with this eclectic mix. From grunge to pop, hip-hop to electronic, this playlist showcases defining genres. |
| 1980s | Take a nostalgic trip back to the era of big hair and neon with this 80s playlist. Packed with iconic hits and forgotten gems, capturing the energy of the decade. |
| Groove Up | Elevate your mood and energy with this upbeat playlist. Featuring a mix of feel-good tracks across various genres to lift your spirits and get you moving. |
| Reggae & Dub | Relax and unwind with the laid-back vibes of reggae and dub. This playlist combines classic reggae tunes with deep, spacious dub tracks for a chilled-out vibe. |
| Psytrance | Embark on a mind-bending journey with this collection of psychedelic trance tracks. Ideal for late-night dance sessions or intense focus. |
| Cumbia | Sway to the infectious rhythms of Cumbia with this lively playlist. Blending traditional Latin American sounds with modern interpretations for a danceable mix. |
| Funky Groove | Get your body moving with this collection of funk and disco tracks. Featuring irresistible basslines and catchy rhythms, perfect for dance parties. |
| French Chanson | Experience the romance and charm of France with this mix of classic and modern French songs, capturing the essence of French musical culture. |
| Workout Motivation | Push your limits and power through your exercise routine with this high-energy playlist. From warm-up to cool-down, these tracks will keep you motivated. |
| Cinematic Instrumentals | Immerse yourself in a world of atmospheric sounds with this collection of cinematic instrumental tracks, perfect for focus, relaxation, or contemplation. |
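The "Audio Features Fetching" step amounts to batched calls to Spotify's Web API. A minimal TypeScript sketch, assuming a valid OAuth2 access token; the endpoint accepts up to 100 comma-separated track IDs per request:

```typescript
// Fetch audio features for a list of track IDs in batches of 100.
// accessToken is a placeholder for the Spotify OAuth2 token n8n manages.
async function fetchAudioFeatures(trackIds: string[], accessToken: string) {
  const features: unknown[] = [];
  for (let i = 0; i < trackIds.length; i += 100) {
    const batch = trackIds.slice(i, i + 100).join(",");
    const res = await fetch(
      `https://api.spotify.com/v1/audio-features?ids=${batch}`,
      { headers: { Authorization: `Bearer ${accessToken}` } },
    );
    const body = await res.json();
    features.push(...body.audio_features); // one feature object per track
  }
  return features;
}
```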
by Yulia
This workflow is a modification of the previous template on how to create an SQL agent with LangChain and SQLite. The key difference: the agent has access only to the database schema, not to the actual data. To achieve this, SQL queries are made outside the AI Agent node, and the results are never passed back to the agent.

This approach allows the agent to generate SQL queries based on the structure of tables and their relationships, without having to access the actual data. This makes the process more secure and efficient, especially in cases where data confidentiality is crucial.

## 🚀 Setup

To get started with this workflow, you'll need to set up a free MySQL server and import your database (check Steps 1 and 2 in this tutorial). Of course, you can switch MySQL for another SQL database such as PostgreSQL; the principle remains the same. The key is to download the schema once and save it locally to avoid repeated remote connections.

Run the top part of the workflow once to download and store the MySQL chinook database schema file on the server (a minimal sketch of this one-time download follows at the end of this description). With this approach, we avoid the need to repeatedly connect to a remote db4free database and fetch the schema every time. As a result, we achieve greater processing speed and efficiency.

## 🗣️ Chat with your data

1. Start a chat: send a message in the chat window.
2. The workflow loads the locally saved MySQL database schema, without having the ability to touch the actual data. The file contains the full structure of your MySQL database for analysis.
3. The LangChain AI Agent receives the schema and your input, and begins to work. The AI Agent generates SQL queries and brief comments based solely on the schema and the user's message.
4. An IF node checks whether the AI Agent has generated a query:
   - **Yes**: the AI Agent passes the SQL query to the next MySQL node for execution.
   - **No**: you get a direct answer from the Agent without further action.
5. The workflow formats the results of the SQL query, ensuring they are convenient to read and easy to understand.
6. Once formatted, you get both the Agent answer and the query result in the chat window.

## 🌟 Example queries

Try these sample queries to see the schema-driven AI Agent in action:

- Would you please list me all customers from Germany?
- What are the music genres in the database?
- What tables are available in the database?
- Please describe the relationships between tables. (In this example, the AI Agent does not need to create an SQL query.)

And if you prefer to keep the data private, you can manually execute the generated SQL query in your own environment using any database client or tool you trust 🗄️

💭 The AI Agent memory node does not store the actual data, as we run SQL queries outside the agent. It contains the database schema, user questions, and the initial Agent reply. Actual SQL query results are passed to the chat window, but the values are not stored in the Agent memory.
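To make the schema-only setup concrete, here is a minimal TypeScript sketch of the one-time schema download, assuming the mysql2 client and the chinook sample database; the host and credentials are placeholders. Only structure is read from information_schema, never row data:

```typescript
import mysql from "mysql2/promise";
import { writeFile } from "node:fs/promises";

// Dump table/column structure once and save it locally, so later chat runs
// never touch the remote database.
async function dumpSchema() {
  const conn = await mysql.createConnection({
    host: "db4free.net", // assumed remote host from the tutorial
    user: "user",        // placeholder credentials
    password: "password",
    database: "chinook",
  });
  const [rows] = await conn.query(
    `SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, COLUMN_KEY
       FROM information_schema.COLUMNS
      WHERE TABLE_SCHEMA = DATABASE()
      ORDER BY TABLE_NAME, ORDINAL_POSITION`,
  );
  await writeFile("chinook_schema.json", JSON.stringify(rows, null, 2));
  await conn.end();
}
```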
by Jimleuk
This n8n template scrapes a list of AI grants from grants.gov and qualifies them using AI, determining interest and eligibility for the business. It then sends an email alert of interesting items to team members. The template also shows how you can use the "Remove Duplicates" node to simplify deduplication of external listings without the need to manage this yourself.

Not particularly interested in AI grants? This template works for other tender websites as long as you're able to scrape them.

## How it works

- A scheduled trigger is set to fetch the list of AI grants posted on the grants.gov website in the past day.
- A Remove Duplicates node is used to track Grant IDs and filter out those already processed by the workflow (the idea is sketched in code at the end of this description).
- New grants are summarized and analysed by AI nodes to determine eligibility and interest, which is then saved to an Airtable database.
- Another scheduled trigger starts a little later than the first to collect and summarize the new grants.
- The results are then compiled into an email template using the HTML node, in the form of a newsletter designed to alert and brief team members on new AI grants.
- This email is then sent to a list of subscribers using the Gmail node.

## How to use

- Make a copy of the sample Airtable here: https://airtable.com/appiNoPRvhJxz9crl/shrRdP6zstgsxjDKL
- The filters for fetching the grants are currently set to the "AI" category. Feel free to change this to include more categories.
- Not interested in grants? This template can work for other sources of leads; just change the endpoint and how you're defining the item ID to track.

## Requirements

- Airtable for database
- OpenAI for LLM

Note: these are not hard requirements and can be exchanged for services available to you.

## Customising the workflow

"Eligibility" criteria at this stage may be better served by identifying hard blockers instead, i.e. certifications, geographical considerations, or certain legal checks. Be sure to mention any hard blockers in the Eligibility prompt.
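Conceptually, the Remove Duplicates node remembers the Grant IDs it has seen across runs and only passes new items on. An illustrative TypeScript sketch; n8n manages this state internally, and the JSON file here merely stands in for that persistence:

```typescript
import { readFileSync, writeFileSync, existsSync } from "node:fs";

interface Grant { id: string; title: string }

// Keep only grants whose IDs have not been seen in previous runs,
// then record the newly seen IDs for the next run.
function filterNewGrants(grants: Grant[], storePath = "seen-grant-ids.json"): Grant[] {
  const seen = new Set<string>(
    existsSync(storePath) ? JSON.parse(readFileSync(storePath, "utf8")) : [],
  );
  const fresh = grants.filter((g) => !seen.has(g.id));
  fresh.forEach((g) => seen.add(g.id));
  writeFileSync(storePath, JSON.stringify([...seen]));
  return fresh;
}
```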
by Mark Shcherbakov
## Video Guide

I prepared a comprehensive guide detailing how to automate the parsing of invoices using n8n and LlamaParse, seamlessly capturing and storing vital billing information.

Youtube Link

## Who is this for?

This workflow is ideal for finance teams, accountants, and business operations managers who need to streamline invoice processing. It is particularly helpful for organizations seeking to reduce manual entry errors and improve efficiency in managing billing information.

## What problem does this workflow solve?

Manually processing invoices can be time-consuming and error-prone. This automation eliminates the need for manual data entry by capturing invoice details directly from uploaded documents and storing structured data efficiently. This enhances productivity and accuracy across financial operations.

## What this workflow does

The workflow leverages n8n and LlamaParse to automatically detect new invoices in a designated Google Drive folder, parse essential billing details, and store the extracted data in a structured format. The key functionalities include:

- Real-time detection of new invoices via Google Drive triggers.
- Automated HTTP requests to initiate parsing through LlamaCloud.
- Structured storage of invoice details and line items in a database for future reference.

In more detail:

- **Google Drive Integration**: Monitors a specific folder in Google Drive for new invoice uploads.
- **Parsing with LlamaParse**: Automatically sends invoices for parsing and processes results through webhooks.
- **Data Storage in Airtable**: Creates records for invoices and their associated line items, allowing for detailed tracking.

## Setup

1. **Google Drive Trigger**: Set up a trigger to detect new files in a specified folder dedicated to invoices.
2. **File Upload to LlamaParse**: Create an HTTP request that sends the invoice file to LlamaParse for parsing, including relevant header settings and the webhook URL (a hedged sketch of this request follows below).
3. **Webhook Processing**: Establish a webhook node to handle parsed results from LlamaParse, extracting needed invoice details effectively.
4. **Invoice Record Creation**: Create initial records for invoices in your database using the parsed details received from the webhook.
5. **Line Item Processing**: Transform string data into structured line item arrays and create individual records for each item linked to the main invoice.
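The "File Upload to LlamaParse" request can be sketched as follows, assuming LlamaCloud's parsing upload endpoint and a webhook_url parameter for result delivery; check the LlamaParse docs for the current API shape:

```typescript
// Send an invoice file to LlamaParse and ask for the parsed result to be
// POSTed back to an n8n webhook. The endpoint and parameter names are
// assumptions based on LlamaCloud's documented upload API.
async function sendInvoiceForParsing(file: Blob, apiKey: string, webhookUrl: string) {
  const form = new FormData();
  form.append("file", file, "invoice.pdf");
  form.append("webhook_url", webhookUrl); // parsed result is delivered here
  const res = await fetch("https://api.cloud.llamaindex.ai/api/parsing/upload", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  return res.json(); // contains a job id to correlate with the webhook call
}
```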
by Angel Menendez
# Analyze & Sort Suspicious Email Contents with ChatGPT and Jira

## Who is this for?

This workflow is tailored for IT security teams, managed service providers (MSPs), and organizations aiming to streamline the detection and reporting of phishing emails. It's especially useful for teams handling high email volumes and requiring quick, automated analysis.

## What problem is this workflow solving?

Phishing emails pose a significant cybersecurity threat, and manual review processes are time-consuming and prone to human error. This workflow automates the identification of malicious emails, provides AI-driven insights, and generates structured reports, enabling faster and more efficient responses to email-based threats.

## What this workflow does

This workflow integrates Gmail or Microsoft Outlook to monitor and capture incoming emails. It processes the email content and headers, converts the email's body to a visual screenshot for clarity (this call is sketched at the end of this description), and uses ChatGPT's advanced AI to analyze the email for phishing indicators. Based on the analysis, it categorizes emails as potentially malicious or benign, creating detailed Jira tickets for each case. Attachments, including the email body and screenshots, are automatically uploaded for comprehensive reporting.

Key steps include:

- **Email Integration**: Captures emails from Gmail or Microsoft Outlook.
- **Content Processing**: Extracts and organizes email content and metadata.
- **AI Analysis**: Uses ChatGPT to evaluate email content and headers.
- **Classification**: Categorizes emails as malicious or benign.
- **Automated Reporting**: Creates Jira tickets with detailed analysis and attachments.

## Setup

1. **Authentication**: Configure Gmail or Microsoft Outlook credentials in n8n.
2. **API Keys**: Add credentials for the HTML screenshot service (hcti.io) and OpenAI.
3. **Jira Configuration**: Set up project and issue types in the Jira nodes.
4. **Customization**: Update sticky notes and nodes to fit your organizational requirements, such as modifying the AI prompt or Jira ticket fields.

## How to customize this workflow to your needs

- Adjust email triggers to include or exclude specific senders or subjects.
- Refine the AI prompt in the ChatGPT node to tailor phishing detection criteria.
- Modify Jira ticket content to include additional fields or match specific workflows.

This workflow is ideal for automating email threat detection, reducing response times, and enhancing overall cybersecurity processes. By leveraging AI-powered insights, it helps organizations stay ahead of phishing attacks.
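The screenshot step is a single call to hcti.io's HTML-to-image API: Basic auth with your user ID and API key, the email HTML in the body, and a hosted image URL back. A minimal sketch:

```typescript
// Render the email body HTML to an image via hcti.io and return the
// hosted image URL, ready to be attached to a Jira ticket.
async function screenshotEmailBody(html: string, userId: string, apiKey: string) {
  const res = await fetch("https://hcti.io/v1/image", {
    method: "POST",
    headers: {
      Authorization: "Basic " + Buffer.from(`${userId}:${apiKey}`).toString("base64"),
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ html }),
  });
  const { url } = await res.json();
  return url;
}
```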
by Mikal Hayden-Gates
## Overview

Automates your complete social media content pipeline: sources articles from Wallabag RSS, generates platform-specific posts with AI, creates contextual images, and publishes via the GetLate API. Built with 63 nodes across two workflows to handle LinkedIn, Instagram, and Bluesky, with easy expansion to more platforms.

Ideal for: content marketers, solo creators, agencies, and community managers maintaining a consistent multi-platform presence with minimal manual effort.

## How It Works

Two-workflow architecture:

**Content Aggregation Workflow**
- Monitors Wallabag RSS feeds for tagged articles (#to-share-linkedin, #to-share-instagram, etc.)
- Extracts and converts content from HTML to Markdown
- Stores structured data in Airtable with platform assignment

**AI Generation & Publishing Workflow**
- Scheduled trigger queries Airtable for unpublished content
- Routes to platform-specific sub-workflows (LinkedIn, Instagram, Bluesky)
- LLM generates optimized post text and image prompts based on custom brand parameters
- Optionally generates AI images and hosts them on the Imgbb CDN
- Publishes via the GetLate API (immediate or draft mode)
- Updates Airtable with publication status and metadata

**Key Features**
- Tag-based content routing using Wallabag's native tag system (illustrated in the sketch after the setup steps)
- Swappable AI providers (Groq, OpenAI, Anthropic)
- Platform-specific optimization (tone, length, hashtags, CTAs)
- Modular design: duplicate sub-workflows to add new platforms in ~30 minutes
- Centralized Airtable tracking with 17 data points per post

## Set Up Steps

Setup time: ~45-60 minutes for initial configuration.

1. **Create accounts and get API keys (~15 min)**
   - Wallabag (with RSS feeds enabled)
   - GetLate (social media publishing)
   - Airtable (create base with provided schema; see sticky notes)
   - LLM provider (Groq, OpenAI, or Anthropic)
   - Image service (Hugging Face, Fal.ai, or Stability AI)
   - Imgbb (image hosting)
2. **Configure n8n credentials (~10 min)**
   - Add all API keys in n8n's credential manager
   - Detailed credential setup instructions are in the workflow sticky notes
3. **Set up Airtable database (~10 min)**
   - Create the "RSS Feed - Content Store" base
   - Add the 19 required fields (schema provided in workflow sticky notes)
   - Get the Airtable base ID and API key
4. **Customize brand prompts (~15 min)**
   - Edit the "Set Custom SMCG Prompt" node for each platform
   - Define brand voice, tone, goals, audience, and image preferences
   - Platform-specific examples are provided in the sticky notes
5. **Configure platform settings (~10 min)**
   - Set GetLate account IDs for each platform
   - Enable/disable image generation per platform
   - Choose immediate publish vs. draft mode
   - Adjust the schedule trigger frequency
6. **Test and deploy**
   - Tag test articles in Wallabag
   - Monitor the first few executions in draft mode
   - Activate the workflows when satisfied with the output

Important: this is a proof-of-concept template. Test thoroughly in draft mode before production use. Detailed setup instructions, troubleshooting tips, and customization guidance are in the workflow's sticky notes.
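To make the tag-based routing concrete, here is an illustrative TypeScript sketch; the tag names follow the template's #to-share-* convention, and the function is hypothetical rather than a node lifted from the workflow:

```typescript
// Map Wallabag article tags to the platform sub-workflows they should reach.
type Platform = "linkedin" | "instagram" | "bluesky";

function platformsForTags(tags: string[]): Platform[] {
  const map: Record<string, Platform> = {
    "to-share-linkedin": "linkedin",
    "to-share-instagram": "instagram",
    "to-share-bluesky": "bluesky",
  };
  return tags
    .map((t) => map[t.replace(/^#/, "")]) // tolerate a leading "#"
    .filter((p): p is Platform => p !== undefined);
}

// Example: platformsForTags(["#to-share-linkedin", "tech"]) -> ["linkedin"]
```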
## Technical Details

- **63 nodes**: 9 Airtable operations, 8 HTTP requests, 7 code nodes, 3 LangChain LLM chains, 3 RSS triggers, 3 GetLate publishers
- **Supports**: multiple LLM providers, multiple image generation services, unlimited platforms via the modular architecture
- **Tracking**: 17 metadata fields per post, including publish status, applied parameters, character counts, hashtags, and image URLs

## Prerequisites

- n8n instance (self-hosted or cloud)
- Accounts: Wallabag, GetLate, Airtable, LLM provider, image generation service, Imgbb
- Basic understanding of n8n workflows and credential configuration
- Time to customize prompts for your brand voice

Detailed documentation, the Airtable schema, prompt examples, and troubleshooting guides are in the workflow's sticky notes.

## Category Tags

#social-media-automation, #ai-content-generation, #rss-to-social, #multi-platform-posting, #getlate-api, #airtable-database, #langchain, #workflow-automation, #content-marketing
by Mark Shcherbakov
## Video Guide

I prepared a comprehensive guide detailing how to create a Smart Agent that automates meeting task management by analyzing transcripts, generating tasks in Airtable, and scheduling follow-ups when necessary.

Youtube Link

## Who is this for?

This workflow is ideal for project managers, team leaders, and business owners looking to enhance productivity during meetings. It is particularly helpful for those who need to convert discussions into actionable items swiftly and effectively.

## What problem does this workflow solve?

Managing action items from meetings can often lead to missed tasks and poor follow-up. This automation alleviates that issue by automatically generating tasks from meeting transcripts, keeping everyone informed about their responsibilities and streamlining communication.

## What this workflow does

The workflow leverages n8n to create a Smart Agent that listens for completed meeting transcripts, processes them using AI, and generates tasks in Airtable. Key functionalities include:

- Capturing completed meeting events through webhooks.
- Extracting relevant meeting details such as transcripts and participants using API calls.
- Generating structured tasks from meeting discussions and sending notifications to clients.

In more detail:

- **Webhook Integration**: Listens for meeting completion events to trigger subsequent actions.
- **API Requests for Data**: Pulls necessary details like transcripts and participant information from Fireflies.
- **Task and Notification Generation**: Automatically creates tasks in Airtable and notifies clients of their responsibilities.

## Setup

1. **Configure the Webhook**: Set up a webhook to capture meeting completion events and integrate it with Fireflies.
2. **Retrieve Meeting Content**: Use GraphQL API requests to extract meeting details and transcripts, ensuring appropriate authentication through Bearer tokens (a hedged sketch of this request follows below).
3. **AI Processing Setup**: Define system messages for AI tasks and configure connections to the AI chat model (e.g., OpenAI's GPT) to process transcripts.
4. **Task Creation Logic**: Create structured tasks based on AI output, ensuring necessary details are captured and records are created in Airtable.
5. **Client Notifications**: Use an email node to notify clients about their tasks, ensuring communications are client-specific.
6. **Scheduling Follow-Up Calls**: Set up Google Calendar events if follow-up meetings are required, populating details from the original meeting context.
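The "Retrieve Meeting Content" step boils down to one authenticated GraphQL request. A hedged sketch, assuming Fireflies' public GraphQL endpoint; the exact fields available may differ from your plan's schema, so treat the query shape as illustrative:

```typescript
// Fetch a meeting transcript from Fireflies by ID using a Bearer token,
// mirroring the workflow's HTTP Request node.
async function fetchTranscript(meetingId: string, apiKey: string) {
  const query = `
    query Transcript($id: String!) {
      transcript(id: $id) {
        title
        participants
        sentences { speaker_name text }
      }
    }`;
  const res = await fetch("https://api.fireflies.ai/graphql", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, variables: { id: meetingId } }),
  });
  return (await res.json()).data.transcript;
}
```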
by Yaron Been
# 🔍 Scrape Glassdoor with Bright Data

Designed for sales teams, recruiters, and marketers aiming to automate job discovery and prospecting. This workflow scrapes Glassdoor job listings using Bright Data and automatically generates targeted pitches using AI, streamlining lead identification and outreach.

## 🧩 How It Works

This automation leverages n8n, Bright Data, Google Sheets, and OpenAI:

1. **Trigger**: starts with a custom form input (Location, Keyword, Country).
2. **Bright Data Job Scrape** (the trigger-poll-retrieve loop is sketched after the tips section below):
   - Triggers a Bright Data dataset snapshot via HTTP Request.
   - Polls snapshot progress using a Wait node, ensuring data readiness.
   - Retrieves the full job listings dataset once ready.
3. **Google Sheets Integration**:
   - Writes detailed job data (company, role, location, overview, metrics) into a Google Sheet.
   - Uses a pre-built template for organized data storage.
4. **Automated Pitch Generation (AI)**:
   - Splits listings into actionable parts: company name, title, and description.
   - Sends data to OpenAI (via LangChain) to generate relevant pitches or icebreakers.
   - Saves generated content back into the same sheet for easy access.

## ✅ Requirements

Ensure you have the following:

- **Google Sheets**: a Google account and the template sheet with columns for job details and AI-generated pitches
- **Bright Data**: an active account with Dataset API access, plus an API key and dataset ID
- **OpenAI**: a valid OpenAI API key for GPT models
- **n8n Environment**:
  - Nodes: HTTP Request, Wait, If, Google Sheets, Split Out, LangChain (OpenAI)
  - Credentials: Google Sheets OAuth2, Bright Data API credentials, OpenAI API key

## ⚙️ Setup Instructions

1. **Prepare Google Sheets**: copy the provided Google Sheets template; do not change the headers.
2. **Import & Configure Workflow in n8n**: import the workflow JSON file, then set the Google Sheets node to link to your copied sheet and confirm the correct tab name.
3. **Configure Bright Data**: replace `<YOUR_BRIGHT_DATA_API_KEY>` with your real key and set your dataset ID in all HTTP Request nodes.
4. **Configure OpenAI (LangChain)**: connect your OpenAI API key to the LangChain node and customize the prompt to match your tone and outreach style.
5. **Testing & Scheduling**: test via the manual form trigger, then schedule runs or leave the form enabled for on-demand use.

## 🧠 Tips & Best Practices

- Use specific keywords and locations for better results
- Adjust polling intervals based on dataset size
- Refine AI prompts regularly to improve pitch quality
- Clean unused columns from your sheet to boost performance

## 💬 Support & Feedback

For help or customization:
- 📧 Email: Yaron@nofluff.online
- 📺 YouTube: @YaronBeen
- 🔗 LinkedIn: linkedin.com/in/yaronbeen
- 📚 Bright Data Docs: docs.brightdata.com/introduction
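The heart of the workflow is the trigger-poll-retrieve loop against Bright Data's Dataset API. A hedged TypeScript sketch, assuming the v3 endpoints described at docs.brightdata.com; verify the paths and response fields for your account:

```typescript
const BASE = "https://api.brightdata.com/datasets/v3";

// Trigger a snapshot, poll until ready, then download the results:
// the code equivalent of the HTTP Request + Wait + If nodes.
async function scrapeGlassdoor(datasetId: string, apiKey: string, input: object[]) {
  const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" };

  // 1. Trigger a snapshot for the search input (location, keyword, country).
  const trigger = await fetch(`${BASE}/trigger?dataset_id=${datasetId}`, {
    method: "POST", headers, body: JSON.stringify(input),
  });
  const { snapshot_id } = await trigger.json();

  // 2. Poll until the snapshot is ready (the Wait node's job).
  for (;;) {
    const progress = await fetch(`${BASE}/progress/${snapshot_id}`, { headers });
    const { status } = await progress.json();
    if (status === "ready") break;
    if (status === "failed") throw new Error("Bright Data snapshot failed");
    await new Promise((r) => setTimeout(r, 30_000)); // poll every 30s
  }

  // 3. Retrieve the full job-listings dataset.
  const snapshot = await fetch(`${BASE}/snapshot/${snapshot_id}?format=json`, { headers });
  return snapshot.json();
}
```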
by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredible for certain types of data extraction tasks where attributes are unknown or need to remain flexible.

The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template and so can be changed at any time, without needing to edit the template. This seriously cuts down on maintenance requirements and is reusable for any number of tables at little cost.

Check out the video demo I did for n8n Studio here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Check out the example Airtable here: https://airtable.com/appAyH3GCBJ56cfXl/shrXzR1Tj99kuQbyL
Looking for the Baserow version? https://n8n.io/workflows/2780-ai-data-extraction-with-dynamic-prompts-and-baserow/

## How it works

- Given we have an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacting to any changes to either the "input" or the fields and automatically updating the rows accordingly.
- The key is that Airtable fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. Our n8n template reads these column descriptions, aka "prompts", to use as instructions for tasks to perform on the "input" (a sketch of reading them via the API follows below).
- In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it, such as full name, address, last position, years of experience, etc.

## How to use

- First publish this template and ensure it's accessible via its webhook URL.
- You then have to run the "create airtable webhooks" mini-flow to configure your Airtable to send change events to the n8n template. This mini-flow exists in the template, but you'll have to update the IDs. Check the template for more instructions.

## Requirements

- Airtable for tables/database
- OpenAI for LLM and extraction. Feel free to choose another LLM if preferred.

## Customising this workflow

If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single-line text.
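The pattern's key trick, reading each column's field description as a prompt, can be sketched against Airtable's Metadata API; the base ID, table name, and token here are placeholders:

```typescript
// Return a { columnName: prompt } map built from the field descriptions
// of one Airtable table.
async function getFieldPrompts(baseId: string, tableName: string, token: string) {
  const res = await fetch(`https://api.airtable.com/v0/meta/bases/${baseId}/tables`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const { tables } = await res.json();
  const table = tables.find((t: { name: string }) => t.name === tableName);
  return Object.fromEntries(
    table.fields
      .filter((f: { description?: string }) => f.description)
      .map((f: { name: string; description: string }) => [f.name, f.description]),
  );
}
```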
by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredible for certain types of data extraction tasks where attributes are unknown or need to remain flexible.

The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template and so can be changed at any time, without needing to edit the template. This seriously cuts down on maintenance requirements and is reusable for any number of tables at little cost.

Check out the n8n Studio episode here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Community post here: https://community.n8n.io/t/dynamic-prompts-with-n8n-baserow-and-airtable/72052
Looking for the Airtable version? https://n8n.io/workflows/2771-ai-data-extraction-with-dynamic-prompts-and-airtable/

## How it works

- Given we have an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacting to any changes to either the "input" or the fields and automatically updating the rows accordingly.
- The key is that Baserow fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. Our n8n template reads these column descriptions, aka "prompts", to use as instructions for tasks to perform on the "input" (a short Baserow-flavoured sketch follows below).
- In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it, such as full name, address, last position, years of experience, etc.

## How to use

- First publish this template and ensure it's accessible via its webhook URL.
- You then have to complete the "create Baserow webhooks" steps to configure your Baserow to send change events to the n8n template. Baserow webhooks are created in the Baserow web interface. Check the template for more instructions.

## Requirements

- Baserow for tables/database
- OpenAI for LLM and extraction. Feel free to choose another LLM if preferred.

## Customising this workflow

If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single-line text.
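For Baserow, the same prompt-reading trick would go through its REST API, which lists a table's fields together with their descriptions. A hedged sketch; the endpoint shape and Token auth scheme should be verified against the Baserow docs for your version:

```typescript
interface BaserowField { name: string; description?: string | null }

// Return a { columnName: prompt } map from a Baserow table's field
// descriptions. tableId and token are placeholders.
async function getBaserowFieldPrompts(tableId: number, token: string) {
  const res = await fetch(
    `https://api.baserow.io/api/database/fields/table/${tableId}/`,
    { headers: { Authorization: `Token ${token}` } },
  );
  const fields: BaserowField[] = await res.json();
  return Object.fromEntries(
    fields.filter((f) => f.description).map((f) => [f.name, f.description]),
  );
}
```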
by Joseph LePage
This n8n workflow template is designed to integrate a DeepSeek AI agent with Telegram, incorporating long-term memory capabilities for personalized and context-aware responses. Here's a detailed breakdown:

## Core Features

**Telegram Integration**
- Uses a webhook to receive messages from Telegram users.
- Validates user identity and message content before processing.

**AI-Powered Responses**
- Employs DeepSeek's AI models for conversational interactions.
- Includes memory capabilities to personalize responses based on past interactions.

**Error Handling**
- Sends an error message if the input cannot be processed.

## Model Options 🧠

- **DeepSeek-V3 Chat**: Handles general conversational tasks.
- **DeepSeek-R1 Reasoning**: Provides advanced reasoning capabilities for complex queries.
- **Memory Buffer Window**: Maintains session context for ongoing conversations.

## Quick Setup 🛠️

**Telegram Webhook Configuration**
- Set up a webhook using the Telegram Bot API:
  https://api.telegram.org/bot{my_bot_token}/setWebhook?url={url_to_send_updates_to}
  Replace {my_bot_token} with your bot's token and {url_to_send_updates_to} with your n8n webhook URL.
- Verify the webhook setup using:
  https://api.telegram.org/bot{my_bot_token}/getWebhookInfo
- Both calls are sketched in code at the end of this description.

**DeepSeek API Configuration**
- Base URL: https://api.deepseek.com
- Obtain your API key from the DeepSeek platform.

## Implementation Details 🔧

**User Validation**
- The workflow validates the user's first name, last name, and ID using data from incoming Telegram messages.
- Only authorized users proceed to the next steps.

**Message Routing**
- Routes messages based on their type (text, audio, or image) using a Switch node.
- Ensures appropriate handling for each message format.

**AI Agent Interaction**
- Processes text input using the DeepSeek-V3 or DeepSeek-R1 models.
- Customizable system prompts define the AI's behavior and rules, ensuring user-centric and context-aware responses.

**Memory Management**
- Retrieves long-term memories stored in Google Docs to enhance personalization.
- Saves new memories based on user interactions, ensuring continuity across sessions.
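The two setup URLs above can be wrapped in a small one-time registration script. A minimal sketch; botToken and n8nWebhookUrl stand for your own values:

```typescript
// Register the n8n webhook with Telegram, then verify what Telegram
// has on record. Both calls come straight from the Bot API URLs above.
async function registerTelegramWebhook(botToken: string, n8nWebhookUrl: string) {
  const set = await fetch(
    `https://api.telegram.org/bot${botToken}/setWebhook?url=${encodeURIComponent(n8nWebhookUrl)}`,
  );
  console.log(await set.json()); // expect { ok: true, result: true, ... }

  const info = await fetch(`https://api.telegram.org/bot${botToken}/getWebhookInfo`);
  console.log(await info.json()); // shows the URL Telegram will deliver updates to
}
```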
by Yaron Been
# LinkedIn Enrichment & Ice Breaker Generator

For SDRs, growth marketers, and founders looking to scale personalized outreach. This workflow enriches LinkedIn profile data using Bright Data and generates AI-powered ice breakers using Claude (Anthropic). It automates research and messaging to help you connect smarter and faster, without manual effort.

## 🧩 How It Works

This workflow combines Google Sheets, Bright Data, and Claude (Anthropic) to fully automate your outreach research:

1. **Trigger**: manually trigger the workflow or run it on a schedule (via Manual Trigger or Schedule Trigger).
2. **Read Input Sheet**: fetches rows from a Google Sheet. Each row must contain at least a Linkedin_URL_Person and row_number.
3. **Prepare Input**: formats each row for Bright Data's API using Set and SplitInBatches nodes.
4. **Enrich Profile (Bright Data API)**:
   - Sends LinkedIn URLs to Bright Data's Dataset API via HTTP Request.
   - Waits for the snapshot to be ready using polling logic with Wait, If, and Snapshot Progress nodes.
   - Once ready, retrieves the enriched profile data, including: name, city, current company, about section, and recent posts.
5. **Update Sheet with Profile Data**: writes the retrieved enrichment data into the corresponding row in Google Sheets (via row_number).
6. **Generate Ice Breaker (Claude AI)**: sends enriched profile content to Claude (Anthropic) using a custom prompt, focusing on recent posts to craft relevant, respectful, 1–4-line ice breakers (a hedged sketch of this call appears at the end of this description).
7. **Update Sheet with Ice Breaker**: writes the generated ice breaker to the Ice Breaker 1 column in the original row.

## ✅ Requirements

To use this workflow, you must have the following:

**Google Sheets**
- A Google account
- A Google Sheet with at least one sheet/tab containing:
  - Column: Linkedin_URL_Person
  - Column: row_number (used for mapping input and output rows)

**Bright Data**
- A Bright Data account with access to the Dataset API
- An active dataset that accepts LinkedIn URLs
- An API key with Dataset API access

**Anthropic Claude**
- An Anthropic API key (for Claude 3.5 Haiku or other Claude models)

**n8n Environment**
- Access to HTTP Request, Set, Wait, SplitInBatches, If, and Google Sheets nodes
- Access to the Claude integration (via LangChain nodes: @n8n/n8n-nodes-langchain)
- Credential manager properly configured with:
  - Google Sheets OAuth2 credentials
  - Bright Data API key
  - Anthropic API key

## ⚙️ Setup Instructions

**Step 1: Copy the Google Sheets Template**
> 📄 Click here to make a copy
- Fill the Linkedin_URL_Person column with the LinkedIn profile URLs you want to enrich
- Do not modify headers or add filters to the sheet
- Leave the other columns (name, city, about, posts, ice breaker) blank; the workflow fills them

**Step 2: Connect Your Accounts in n8n**
- Google Sheets: create a credential under Google Sheets OAuth2 API
- Bright Data: add your API key as a credential under HTTP Request (Authorization header)
- Anthropic: create a credential for the Anthropic API with your Claude key

**Step 3: Import and Configure the Workflow**
- Import the workflow into your n8n instance
- In each Google Sheets node, select the copied Google Sheet and the correct tab (usually input or Sheet1)
- In the HTTP Request node to Bright Data, paste your Bright Data dataset ID
- In the Claude prompt node, optionally adjust the tone and length of the ice breaker prompt

**Step 4: Run the Workflow**
- Test it using the Manual Trigger node
- For daily automation, enable the Schedule Trigger and configure the interval settings
- Watch your Google Sheet populate with enriched data and tailored ice breakers

## 🧠 Tips & Best Practices

- **Bright Data Delay**: snapshots may take time; the workflow polls the status until complete.
- **Retry Protection**: If and Wait nodes avoid infinite loops by checking snapshot status.
- **Mapping via row_number**: critical to ensure data is updated in the right row.
- **Prompt Engineering**: you can fine-tune Claude's behavior by editing the text prompt.

## 🧾 Output Example

Once complete, each row in your Google Sheet will contain:

| Linkedin_URL_Person | Name | City | Company | Recent Post | Ice Breaker |
|---|---|---|---|---|---|
| linkedin.com/... | Jane Doe | NYC | ACME Corp | "Why AI should replace meetings" | "Loved your post about AI and meetings — finally someone said it!" |

## 💬 Support & Feedback

Questions? Want to tweak the prompt or expand the enrichment?
- 📧 Email: Yaron@nofluff.online
- 📺 YouTube: @YaronBeen
- 🔗 LinkedIn: linkedin.com/in/yaronbeen
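The ice-breaker step is a single Messages API call. A hedged sketch using Anthropic's official SDK; the prompt wording and model alias are illustrative:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Generate a short, respectful ice breaker from enriched profile data,
// anchored on the prospect's most recent post.
async function generateIceBreaker(profile: { name: string; recentPost: string }) {
  const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
  const msg = await client.messages.create({
    model: "claude-3-5-haiku-latest",
    max_tokens: 200,
    messages: [{
      role: "user",
      content:
        `Write a respectful 1-4 line ice breaker for ${profile.name}, ` +
        `referencing their recent LinkedIn post: "${profile.recentPost}"`,
    }],
  });
  return msg.content[0].type === "text" ? msg.content[0].text : "";
}
```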