by Ranjan Dailata
**Notice**: Community nodes can only be installed on self-hosted instances of n8n.

## Who this is for

This workflow automates the real-time extraction of job descriptions and salary information from job listing pages using Bright Data MCP and analyzes the content using OpenAI GPT-4o mini. It is ideal for:

- **Recruiters & HR Tech Startups**: Automate job data collection from public listings
- **Market Intelligence Teams**: Analyze compensation trends across companies or geographies
- **Job Boards & Aggregators**: Power search results with structured, enriched listings
- **AI Workflow Builders**: Extend to other career platforms or automate resume-job match analysis
- **Analysts & Researchers**: Track hiring signals and salary benchmarks in real time

## What problem is this workflow solving?

Traditional scraping of job portals is challenging due to cluttered content, anti-scraping measures, and inconsistent formatting. Manually analyzing salary ranges and job descriptions is tedious and error-prone. This workflow solves the problem by:

- Simulating user behavior with the Bright Data MCP Client to bypass anti-scraping systems
- Extracting structured, clean job data in Markdown format
- Using OpenAI GPT-4o mini to analyze and extract precise salary details and refined job descriptions
- Merging and formatting the result for easy consumption
- Delivering the final output via webhook, Google Sheets, or the file system

## What this workflow does

### Components & Flow

**Input Nodes**
- `job_search_url`: The job listing or search result URL
- `job_role`: The title or role being searched for (used in logging/formatting)

**MCP Client Operations**
- **MCP Salary Data Extractor**: Simulates browser behavior and scrapes salary-related content (if available)
- **MCP Job Description Extractor**: Extracts the full job description as structured Markdown content

**OpenAI GPT-4o mini Nodes**
- **Salary Information Extractor**: Uses GPT-4o mini to detect, clean, and standardize salary range data (if any)
- **Job Description Refiner**: Extracts role responsibilities, qualifications, and benefits from unstructured text
- **Company Information Extractor**: Uses Bright Data MCP and GPT-4o mini to extract the company information

**Merge Node**
- Combines the refined job description and extracted salary information into a unified JSON response object (see the sketch at the end of this template)

**Aggregate Node**
- Aggregates the job description and salary information into a single JSON response object

**Final Output Handling**

The output is handled in three different formats depending on your downstream needs:
- **Save to Disk**: Output stored with a filename that includes the timestamp and job role
- **Google Sheet Update**: Adds a new row with job role, salary, summary, and link
- **Webhook Notification**: Pushes the merged response to an external system

## Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post - model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need an OpenAI API key, since the analysis nodes use GPT-4o mini.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

## Setup

1. Set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions; name the zone mcp_unlocker.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the MCP Client (STDIO) credentials to connect with the Bright Data MCP Server. In the Environments textbox, set your Bright Data API token as API_TOKEN=<your-token>.

## How to customize this workflow to your needs

**Modify Input Source**
- Change `job_search_url` to point to any job board or aggregator
- Customize `job_role` to reflect the type of jobs being analyzed

**Tweak LLM Prompts (Optional)**
- Refine the GPT-4o mini prompts to extract additional fields like benefits, tech stacks, or remote eligibility

**Change Output Format**
- Customize the merged object to output JSON, CSV, or Markdown based on downstream needs
- Add additional destinations (e.g., Slack, Airtable, Notion) via n8n nodes
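If you prefer a single Code node over the Merge/Aggregate pair, here is a minimal sketch of the merge step. The node and field names are assumptions; align them with the actual output of your LLM nodes.

```javascript
// Hypothetical n8n Code node: combine the refined job description and the
// extracted salary into one unified response object.
const description = $('Job Description Refiner').first().json;
const salary = $('Salary Information Extractor').first().json;

return [
  {
    json: {
      job_role: $json.job_role,            // assumed to come from the input fields
      source_url: $json.job_search_url,
      salary: salary.salary_range ?? 'Not disclosed',
      responsibilities: description.responsibilities ?? [],
      qualifications: description.qualifications ?? [],
      benefits: description.benefits ?? [],
      extracted_at: new Date().toISOString(),
    },
  },
];
```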
by Ranjan Dailata
**Notice**: Community nodes can only be installed on self-hosted instances of n8n.

## Who this is for

The DNB Company Search & Extract workflow is designed for professionals who need to gather structured business intelligence from Dun & Bradstreet (DNB). It is ideal for:

- Market Researchers
- B2B Sales & Lead Generation Experts
- Business Analysts
- Investment Analysts
- AI Developers Building Financial Knowledge Graphs

## What problem is this workflow solving?

Gathering business information from the DNB website usually involves manual browsing, copying company details, and organizing them in spreadsheets. This workflow automates the entire data collection pipeline: searching DNB via Google, scraping the relevant pages, structuring the data, and saving it in usable formats.

## What this workflow does

This workflow performs automated search, scraping, and structured extraction of DNB company profiles using Bright Data's MCP search agents and OpenAI's GPT-4o mini model. Here's what it includes:

1. **Set Input Fields**: Provide `search_query` and `webhook_notification_url`.
2. **Bright Data MCP Client (Search)**: Performs a Google search for the DNB company URL.
3. **Markdown Scrape from DNB**: Scrapes the company page using Bright Data and returns it as Markdown.
4. **OpenAI LLM Extraction**: Transforms the Markdown into clean structured data and extracts business information (company name, size, address, industry, etc.); a validation sketch follows this template.
5. **Webhook Notification**: Sends the structured response to your provided webhook.
6. **Save to Disk**: Persists the structured data locally for logging or auditing.

## Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post - model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need an OpenAI API key, since the extraction step uses GPT-4o mini.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

## Setup

1. Set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions; name the zone mcp_unlocker.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the MCP Client (STDIO) credentials to connect with the Bright Data MCP Server. In the Environments textbox, set your Bright Data API token as API_TOKEN=<your-token>.
7. Update the Set input fields for search_query and webhook_notification_url.
8. Update the file name and path to persist on disk.

## How to customize this workflow to your needs

- **Search Engine**: Default is Google, but you can change the MCP client engine to Bing or Yandex if needed.
- **Company Scope**: Modify the search query logic for niche filtering, e.g., "biotech startups site:dnb.com".
- **Structured Fields**: Customize the LLM prompt to extract additional fields like CEO name, revenue, or ratings.
- **Integrations**: Push output to Notion, Airtable, or CRMs like HubSpot using additional n8n nodes.
- **Formatting**: Convert output to PDF or CSV using built-in File and Spreadsheet nodes.
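As a guardrail on step 4, here is a minimal sketch of a validation Code node placed after the LLM extraction: it fails fast when the model returns an incomplete record instead of silently pushing bad data to the webhook. The field names are assumptions, so align them with your actual prompt.

```javascript
// Hypothetical Code node: verify the structured record before delivery.
const record = $json;
const required = ['company_name', 'address', 'industry'];
const missing = required.filter((key) => !record[key]);

if (missing.length) {
  throw new Error(`LLM extraction is missing fields: ${missing.join(', ')}`);
}

return [{ json: record }];
```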
by InfyOm Technologies
## ✅ What problem does this workflow solve?

Automatically monitor multiple websites every 5 minutes, log downtime, notify your team instantly via multiple channels, and track uptime/downtime in a Google Sheet, without relying on expensive monitoring tools.

## ⚙️ What does this workflow do?

- Triggers every 5 minutes to monitor website health.
- Fetches a list of website URLs from a Google Sheet.
- Checks the status of each website one by one.
- Sends instant alerts if a website is down (Email, Slack, Telegram, Voice Call).
- Logs downtime events in Google Sheets.
- Tracks when websites are back up and updates the log.
- Sends recovery notifications when a site is live again (Email, Slack, Telegram).

## 🔧 Setup

### 📄 Google Sheets Setup

- Sheet 1: List of website URLs to monitor.
- Sheet 2: Log to store uptime/downtime records.
- Sample format: https://docs.google.com/spreadsheets/d/1_VVpkIvpYQigw5q0KmPXUAC2aV2rk1nRQLQZ7YK2KwY/edit?usp=sharing

### ✉️ Gmail, Slack & Telegram Setup

- Connect Gmail, Slack, and Telegram to n8n.
- Configure each service with proper credentials or OAuth.

### 📞 Vapi (Voice Call) Setup

- Create a Vapi account.
- Generate an API key.
- Configure the API parameters (vapi_api_key, assistant_id, number, phone_number_id) on the VAPI node.
- Insert the First Message specified in the workflow.

## 🧠 How it Works

### ⏱ 1. Scheduled Monitoring

A Schedule Trigger runs the workflow every 5 minutes. It reads the list of URLs from the Google Sheet and loops through each one.

### 🌍 2. Website Health Check

Each website is pinged to check if it's online (see the classification sketch at the end of this template).

### 🔴 3. If a Website is Down

It verifies whether a downtime record already exists. If not, it:
- Adds a new row in the Google Sheet with the timestamp.
- Sends notifications via: 📧 Email, 💬 Slack, 📲 Telegram, 📞 Voice Call via Vapi

### 🟢 4. If a Website is Back Up

It fetches the matching downtime record and updates the sheet with:
- ✅ Uptime timestamp
- ⏱ Total downtime duration

Then it sends recovery notifications via 📧 Email, 💬 Slack, and 📲 Telegram. (No phone call is made for uptime.)

## 👤 Who can use it?

This is perfect for:
- 🚀 Startups
- 👨‍💻 Freelance Developers
- 🛠 SaaS Product Owners
- 🖥 IT/DevOps Teams

If you're looking to replace tools like UptimeRobot, Pingdom, or StatusCake, this no-code solution gives you full control, customization, and cost-efficiency.
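For reference, the up/down decision in step 2 can be expressed as a small Code node placed after an HTTP Request node configured to never error. The `statusCode` and `url` field names are assumptions; check your HTTP Request node's actual output shape.

```javascript
// Hypothetical Code node: classify a site as up or down from the response.
const status = $json.statusCode ?? 0;
const isUp = status >= 200 && status < 400; // treat redirects as healthy

return [
  {
    json: {
      url: $json.url,
      status,
      isUp,
      checkedAt: new Date().toISOString(),
    },
  },
];
```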
by Praveena
## Why

Teachers now spend 3-4 hours per lesson creating materials and resources from scratch. For students with additional/special needs, creating the extra materials required is even more difficult. This is unsustainable and takes their time away from teaching.

Tailored for UK teachers, but it can be expanded globally with prompt and form enhancements.

## How it works

I built a system with three specialized AI agents that create complete lesson packages, automatically upload a document to Google Drive, and put an appointment in the calendar to review the document.

## Features

- A research agent pulls specific information, including special education needs and curriculums.
- A scoring and assessment agent generates tailored assessment plans, assignments, and grading mechanisms based on the chosen requirements.
- An integration agent provides ideas for expanding to other tools. In future, there is an opportunity to add Kahoot or other tools to create quizzes.
- Finally, the enriched document is emailed and a calendar invite is sent for review.

## What you need

- n8n
- Any LLM API key (I used OpenAI)
- Google Drive integration
- Google Calendar integration
- Change the email ID from XXX@gmail.com to your email ID in the email component.

## Support

Watch this video for an intro on how it works. Contact me on info@pankstr.com for any queries.
by Dr. Firas
AI-powered WhatsApp booking system with instant SMS confirmations

## Who is this for?

This workflow is designed for solo entrepreneurs, consultants, coaches, clinics, or any business that handles client appointments and wants to automate the entire scheduling experience via WhatsApp, without the need for live agents.

## What problem is this workflow solving?

Responding to inbound messages, collecting booking details, suggesting available times, and sending reminders can be a huge time drain. This workflow eliminates manual handling by:

- Automating WhatsApp conversations with an AI assistant
- Booking appointments directly into Cal.com
- Sending timely SMS reminders before appointments

It ensures you never miss a lead or a follow-up, even while you sleep.

## What this workflow does

From a single WhatsApp message, the workflow:

1. Triggers via a WhatsApp webhook
2. Uses GPT-4 to handle the conversation flow and qualify the prospect
3. Collects name, email, and selected service
4. Calls the Cal.com API to fetch available time slots
5. Books the appointment and stores it in Google Sheets
6. Sends a confirmation message via WhatsApp
7. Periodically scans for upcoming appointments (see the reminder sketch at the end of this template)
8. Sends SMS reminders to clients 2 hours before their session

## Setup

1. Connect your Webhook node to a WhatsApp API (e.g., 360dialog, Twilio, or Ultramsg)
2. Add your OpenAI API key for the GPT-4 nodes
3. Configure your Cal.com API key and set your calendar ID
4. Link your Google Sheets with fields like: name, email, date, time, status, reminder_sent
5. Connect your SMS service (e.g., sms77) with API credentials
6. Adjust the schedule in the reminder node as needed

## How to customize this workflow to your needs

- **Change the language or tone of the AI assistant** by editing the system prompt in the GPT node
- **Filter available time slots** by service, team member, or duration
- **Modify the reminder timing** (e.g., 1 hour before, 24h before, etc.)
- **Add conditional logic** to route users to different booking flows based on their responses
- **Integrate additional CRMs** or notification channels like email or Slack

📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
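Here is a minimal sketch of the reminder scan in step 7, written as an n8n Code node that filters rows read from the Google Sheet. The column names mirror the fields suggested in Setup, and the date/time format is assumed to be ISO-like; adjust both to match your sheet.

```javascript
// Hypothetical Code node: keep only appointments starting within the next
// 2 hours that have not been reminded yet.
const now = Date.now();
const twoHours = 2 * 60 * 60 * 1000;

const due = $input.all().filter((item) => {
  const { date, time, reminder_sent } = item.json;
  const startsAt = new Date(`${date}T${time}`).getTime(); // assumes ISO date/time
  const untilStart = startsAt - now;
  return reminder_sent !== 'yes' && untilStart > 0 && untilStart <= twoHours;
});

return due; // each remaining item then flows into the SMS node
```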
by Jimleuk
This n8n template demonstrates one approach to more natural and less frustrating conversations with AI agents: reducing interruptions by predicting the end of user utterances.

When we text or chat casually, it's not uncommon to break our sentences over multiple messages, or, when it comes to voice, to break our speech with the odd pause or umms and ahhs. If an agent replies to every message, it's likely to interrupt us before we finish our thoughts, and that can get very annoying! Previously, I demonstrated a simple technique of buffering each incoming message by 5 seconds, but that approach still suffers in scenarios where more time is needed. This technique has no arbitrary time limit and instead uses AI to figure out when it's the agent's turn based on the user's messages, allowing the user to take all the time they need.

## How it works

1. Telegram messages are received, but no reply is generated for them by default. Instead, they are sent to the prediction subworkflow to determine whether a reply should be generated.
2. The prediction subworkflow begins by checking Redis for the current user's prediction session state.
3. If this is a new "utterance", it kicks off the "predict end of utterance" loop, the purpose of which is to buffer messages in a smart way (see the sketch at the end of this template).
4. New user messages can continue to be accepted by the workflow until enough is collected for the prediction classifier to determine that the end of the utterance has been reached.
5. The loop is then broken, and the buffered chat messages are combined and sent to the AI agent to generate a response, which is sent to the user via the Telegram node.
6. The prediction session state is then deleted to signal that the workflow is ready to start again with a new message.

## How to use

- This system sits between your preferred chat platform and the AI agent, so all you need to do is replace the Telegram nodes as required.
- Where LLM-only prediction isn't working well enough, consider more traditional code-based checking of heuristics to improve the detection.
- Ideally you'll want a fast but accurate LLM so your user isn't waiting longer than they have to. At the time of writing, Gemini-2.5-flash-lite was the fastest in testing, but keep a lookout for smaller and more powerful LLMs in the future.

## Requirements

- Gemini for the LLM
- Redis for session management
- Telegram for the chat platform
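A minimal sketch of the buffering decision in step 3, assuming a Redis GET node has just fetched the user's session state. The key and field names are illustrative, not the template's actual ones.

```javascript
// Hypothetical Code node: start or extend the utterance buffer.
const chatId = $json.message.chat.id;
const sessionKey = `utterance:${chatId}`;
const state = $json.sessionState ? JSON.parse($json.sessionState) : null;

if (!state) {
  // First message of a new utterance: open a fresh buffer.
  return [{ json: { sessionKey, action: 'start', buffer: [$json.message.text] } }];
}

// Follow-up message: append it and let the prediction classifier decide
// whether the utterance now looks complete.
state.buffer.push($json.message.text);
return [{ json: { sessionKey, action: 'continue', buffer: state.buffer } }];
```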
by Anurag Srivastava
# 🧠 AI Prompt Generator Workflow – n8n Documentation

## Who is this for?

This workflow is for AI builders, prompt engineers, developers, marketers, and no-code creators who want to convert rough user input into structured, high-quality prompts for LLMs. It's especially useful for tools that rely on precision prompting and want to automate the discovery of intent and constraints.

## What problem is this workflow solving? / Use case

Many users struggle to write effective prompts due to vague ideas or unclear formatting needs. This workflow:

- Collects structured user input.
- Dynamically generates clarifying questions.
- Returns a well-formatted AI prompt based on the user's intent and context.

This ensures the generated prompt is useful for downstream AI agents without requiring technical understanding from the end user.

## What this workflow does

1. **Start with a branded form UI**: The user is shown a styled form with questions like: What do you want to build? What tools can you access? What input can be expected? What output do you expect?
2. **Analyze and generate relevant follow-up questions**: The workflow sends the user's answers to Google Gemini (via LangChain), which outputs 1-3 clarifying questions. These questions are parsed into a dynamic form (see the parsing sketch at the end of this template).
3. **Loop through and collect follow-up answers**: Each follow-up question is shown in a form, one at a time, to capture additional context.
4. **Merge all inputs**: The base intent and follow-up responses are merged into a single context block.
5. **Generate a final AI-ready prompt**: The prompt generator node formats everything into a clean, six-section structure: <constraints>, <role>, <inputs>, <tools>, <instructions>, <conclusions>.
6. **Display the final result**: The finished prompt is shown in a clean UI where users can easily copy and reuse it.

## Setup

- **Credentials Required**: Google Gemini (PaLM) API credentials (already integrated as "Google Gemini(PaLM) Api account 2").
- **Form Trigger**: Ensure the On form submission trigger is exposed via a webhook or public endpoint (e.g., using ngrok or a deployed server).
- **Styling**: Custom CSS is included in all form nodes for a beautiful UI. You can modify this to match your branding.
- **Environment**: This workflow is compatible with self-hosted n8n or n8n.cloud. Webhooks must be accessible to the users who will fill out the form.

## How to customize this workflow to your needs

- **Change the base questions**: Update the BaseQuestions form node to add or remove fields depending on your use case.
- **Modify Gemini prompts**: Edit the system prompt inside PromptGenerator to change the tone, output structure, or AI instructions.
- **Change prompt formatting**: If you use a different AI agent (like GPT, Claude, or Mistral), adjust the section labels and formatting to suit that agent's expected input.
- **Send results elsewhere**: Add integration nodes after PromptGenerator, such as Google Docs / Notion (to log prompts), Gmail / Slack (to notify your team), or Zapier / Make (to push to other automation flows).
- **Skip follow-up questions (optional)**: If your base form collects all needed info, you can bypass the RelevantQuestions form section by modifying the conditional logic.

## Example Output Prompt (Structure)

```
<role>
You are an AI assistant that converts videos into LinkedIn posts with a witty tone.
</role>
<inputs>
- A short video (max 5 minutes)
- Desired tone: witty
- Style: both summary and quotes
- Audience: general network
</inputs>
<tools>
You do not have access to APIs or web search.
</tools>
<instructions>
1. Parse transcript.
2. Extract insights and quotes.
3. Write an engaging, witty LinkedIn post under 3000 characters.
</instructions>
<constraints>
Avoid technical jargon. No generic intros. Make it platform-native.
</constraints>
<conclusions>
Return a LinkedIn-ready post that starts with a hook and ends with hashtags.
</conclusions>
```
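Here is a minimal sketch of the parsing step from stage 2, assuming you prompted Gemini to return one clarifying question per line; if you asked for JSON instead, parse accordingly. The output field name on the LLM node is an assumption.

```javascript
// Hypothetical Code node between the Gemini call and the follow-up form:
// turn the model's reply into at most three question items.
const text = $json.output ?? $json.text ?? '';

const questions = text
  .split('\n')
  .map((line) => line.replace(/^\d+[.)]\s*/, '').trim()) // strip "1." / "2)" prefixes
  .filter((line) => line.length > 0)
  .slice(0, 3);

return questions.map((question) => ({ json: { question } }));
```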
by InfyOm Technologies
## ✅ What problem does this workflow solve?

Shopify and e-commerce store owners often struggle to create engaging 3D videos from static product images. This workflow automates that entire process, from image upload to video delivery, so store owners can get professional-looking 3D videos without any manual editing or follow-up.

## ⚙️ What does this workflow do?

- Accepts a 2D product image and name via a public n8n form.
- Generates a unique slug and folder in Google Drive for the product.
- Uploads the original image to Google Drive and logs data in a spreadsheet.
- Removes the background from the image using the remove.bg API.
- Uploads the cleaned image to Google Drive and updates the spreadsheet.
- Creates a 3D product video from the cleaned image via the Fal.ai API.
- Periodically checks the video creation status.
- Once completed, downloads the video, uploads it to Google Drive, and logs the link.
- Notifies the store owner via email with the video download link.

## 🔧 Setup

### 🟢 Google Services

- **Google Drive**: Create and connect a folder where all product assets will be stored.
- **Google Spreadsheet**: A spreadsheet to log the product name, original image link, cleaned image link, and final video URL.
- **Gmail**: Connect Gmail to send the final notification email to the store owner.

### 🔑 API Keys Required

- **Remove.bg**: Get an API key from remove.bg.
- **Fal.ai**: Sign up at fal.ai and obtain your API key to use the image-to-video generation service.

## 🧠 How it Works

### 📝 1. Product Form Submission

A store owner submits the product name and 2D image via a public n8n form.

### 🗂 2. Organize in Google Drive

A unique slug is generated for the product (see the sketch at the end of this template). A new folder is created inside Google Drive using that slug, and the original image is uploaded into the folder.

### 📊 3. Record in a Spreadsheet

The product name and original image URL are stored in a Google Sheet.

### 🧹 4. Background Removal

The uploaded image is processed through the remove.bg API to eliminate noisy or cluttered backgrounds. The cleaned image is uploaded back into the product's Drive folder, and its link is updated in the spreadsheet.

### 🎥 5. Create 3D Video (via Fal.ai)

The cleaned image is passed to the Fal.ai video generation API. The workflow periodically checks the status until the video is ready.

### ☁️ 6. Store Final Video

Once the video is ready, the file is downloaded, uploaded into the same Google Drive folder, and its link is saved in the spreadsheet next to the respective product entry.

### 📧 7. Notify the Store Owner

An automated email is sent to the store owner with the video link, letting them know it's ready for use. No waiting, no manual follow-up needed.

## 👤 Who can use it?

This workflow is ideal for:
- 🛍 Shopify Sellers
- 🧺 E-commerce Store Owners
- 📸 Product Photographers
- 🎬 Marketing Teams
- 🤖 Automation Enthusiasts

If you want to automate 3D product video creation using AI, this is the no-code workflow you've been waiting for!
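For step 2, the slug generation can be a one-liner Code node like the sketch below. The form field name is an assumption; match it to your form node's actual field.

```javascript
// Hypothetical Code node: derive a URL-safe slug from the submitted
// product name, used as the Google Drive folder name.
const name = $json['Product Name'] ?? ''; // assumed form field name
const slug = name
  .toLowerCase()
  .trim()
  .replace(/[^a-z0-9]+/g, '-')  // collapse anything non-alphanumeric to a dash
  .replace(/^-+|-+$/g, '');     // trim leading/trailing dashes

return [{ json: { ...$json, slug } }];
```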
by Mihai Farcas
## Who is this for?

This workflow is for everyone who wants easier access to their Odoo sales data without writing complex queries.

## Use case

To get a clear overview of your sales data in Odoo, you typically need to extract the data manually in order to analyze it. This workflow uses OpenAI's language models to create an intelligent chatbot that provides conversational access to your Odoo sales opportunity data.

## How it works

1. Creates a summary of all Odoo sales opportunities using OpenAI
2. Uses that summary as context for the OpenAI chat model
3. Keeps the summary up to date using a schedule trigger

## Set up steps

1. Configure the Odoo credentials
2. Configure the OpenAI credentials
3. Toggle "Make Chat Publicly Available" in the Chat Trigger node
by Femi Ad
"Ade Technical Analyst" is a dual-workflow AI system combining conversational intelligence with visual chart analysis through Telegram. The system features 11 primary nodes for conversation management and 8 secondary nodes for chart generation and analysis. Core Components: Telegram Integration: Message handling with dynamic typing indicators AI Personality: "Ade" - a financial analyst with 50+ years NYSE/LSE experience using Claude 3.5 Sonnet Chart Generation: TradingView integration via Chart-IMG API with MACD and volume indicators Visual Analysis: GPT-4O vision for technical pattern recognition Memory System: Session-based conversation context retention Target Users Individual traders seeking professional-grade analysis without subscription costs Financial advisors wanting 24/7 AI-powered client support Investment educators needing interactive learning tools Fintech companies requiring white-label analysis solutions Setup Requirements Critical Security Fix Needed: Remove hardcoded API key from Chart-IMG node immediately Store all credentials securely in n8n credential manager Required APIs: OpenRouter (Claude 3.5 Sonnet) OpenAI (GPT-4O vision) Chart-IMG API Telegram Bot Token Technical Prerequisites: n8n version 1.7+ with Langchain nodes Webhook configuration for Telegram Dual-workflow setup with proper ID referencing Workflow Requirements Security Compliance: Never hardcode API keys in workflow JSON files Use n8n credential manager for all sensitive data Implement proper session isolation for user data Include mandatory financial disclaimers Performance Specifications: Model temperature: 0.8 for balanced responses Token limit: 500 for optimized performance Dark theme charts with professional indicators Session-based memory management Need help customizing? Contact me for consulting and support or add me on LinkedIn
by Oneclick AI Squad
Transform your meetings into actionable insights automatically! This workflow captures meeting audio, transcribes conversations, generates AI summaries, and emails the results to participants, all without manual intervention.

## What's the Goal?

- **Auto-record meetings** when they start, and stop when they end
- **Transcribe audio** to text using the Vexa Bot integration
- **Generate intelligent summaries** with AI-powered analysis
- **Email summaries** to meeting participants automatically
- **Eliminate manual note-taking** and post-meeting admin work
- **Never miss important discussions** or action items again

## Why Does It Matter?

- **Save 90% of Post-Meeting Time**: No more manual transcription or summary writing
- **Never Lose Key Information**: Automatic capture ensures nothing falls through the cracks
- **Improve Team Productivity**: Focus on discussions, not note-taking
- **Perfect Meeting Records**: Searchable transcripts and summaries for future reference
- **Instant Distribution**: Summaries reach all participants immediately after meetings

## How It Works

### Step 1: Meeting Detection & Recording

- **Start Meeting Trigger**: Detects when a meeting begins via a Google Meet webhook
- **Launch Vexa Bot**: Automatically joins the meeting and starts recording
- **End Meeting Trigger**: Detects the meeting end and stops recording

### Step 2: Audio Processing & Transcription

- **Stop Vexa Bot**: Ends the recording and retrieves the audio file
- **Fetch Meeting Audio**: Downloads the recorded audio from Vexa Bot
- **Transcribe Audio**: Converts speech to text using AI transcription

### Step 3: AI Summary Generation

- **Prepare Transcript**: Formats the transcribed text for AI processing (see the prompt sketch at the end of this template)
- **Generate Summary**: The AI model creates a concise meeting summary with key discussion points, decisions made, action items assigned, and next steps identified

### Step 4: Distribution

- **Send Email**: Automatically emails the summary to all meeting participants

## Setup Requirements

**Google Meet Integration:**
- Configure the Google Meet webhook and API credentials
- Set up meeting detection triggers
- Test with a sample meeting

**Vexa Bot Configuration:**
- Add Vexa Bot API credentials for recording
- Configure audio file retrieval settings
- Set recording quality and format preferences

**AI Model Setup:**
- Configure an AI transcription service (e.g., OpenAI Whisper, Google Speech-to-Text)
- Set up AI summary generation with custom prompts
- Define summary format and length preferences

**Email Configuration:**
- Set up SMTP credentials for email distribution
- Create email templates for meeting summaries
- Configure participant list extraction from meeting metadata

## Import Instructions

1. **Get Workflow JSON**: Copy the workflow JSON code
2. **Open n8n Editor**: Navigate to your n8n dashboard
3. **Import Workflow**: Click the menu (⋯) → "Import from Clipboard" → paste the JSON → import
4. **Configure Credentials**: Add API keys for Google Meet, Vexa Bot, AI services, and SMTP
5. **Test Workflow**: Run a test meeting to verify end-to-end functionality

Your meetings will now automatically transform into actionable summaries delivered to your inbox!
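A minimal sketch of the "Prepare Transcript" step: a Code node that assembles the custom summary prompt before the AI call. The transcript field name is an assumption; tune the requested sections to the summary format you want.

```javascript
// Hypothetical Code node: build the meeting-summary prompt.
const transcript = $json.transcript; // assumed field from the transcription step

const prompt = [
  'Summarize the following meeting transcript.',
  'Include: key discussion points, decisions made, action items with owners,',
  'and next steps. Keep it under 300 words.',
  '',
  transcript,
].join('\n');

return [{ json: { prompt } }];
```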
by PollupAI
# Social Media Analysis and Automated Email Generation

> by Thomas Vie (Thomas@pollup.net)

## Who is this for?

This template is ideal for marketers, lead generation specialists, and business professionals seeking to analyze the social media profiles of potential leads and automate personalized email outreach efficiently.

## What problem is this workflow solving?

Manually analyzing social media profiles and crafting personalized emails is time-consuming and prone to errors. This workflow streamlines the process by integrating social media APIs with AI to generate tailored communication, saving time and increasing outreach effectiveness.

## What this workflow does

1. **Google Sheets Integration**: Start with a Google Sheet containing lead information such as LinkedIn URL, Twitter handle, name, and email.
2. **Social Media Data Extraction**: Automatically fetch profile and activity data from Twitter and LinkedIn using RapidAPI integrations.
3. **AI-Powered Content Generation**: Use OpenAI's chat model to analyze the extracted data and generate personalized email subject lines and cover letters (see the sketch at the end of this template).
4. **Automated Email Dispatch**: Send the generated email directly to the lead, with a copy sent to yourself for tracking purposes.
5. **Progress Tracking**: Update the Google Sheet to indicate completed actions.

## Setup

1. **Google Sheets**: Create a sheet with the columns LinkedIn URL, name, Twitter handle, email, and a "done" column for tracking. Populate the sheet with your leads.
2. **RapidAPI Accounts**: Sign up for RapidAPI and subscribe to the Twitter and LinkedIn API plans. Configure the API authentication keys in the workflow.
3. **AI Configuration**: Connect the OpenAI chat model with your API key for text generation.
4. **Email Integration**: Add your email credentials or service (SMTP or a third-party service like Gmail) for sending automated emails.

## How to customize this workflow to your needs

- **Modify the AI Prompt**: Adapt the prompt in the AI node to better align with your tone, style, or specific messaging framework.
- **Expand Data Fields**: Add additional data fields in Google Sheets if you require further personalization.
- **API Limits**: Adjust the API configurations to fit your usage limits, or upgrade to higher tiers for increased data scraping capabilities.
- **Personalize Email Templates**: Tweak the email formats to suit different audiences or use cases.
- **Extend Functionality**: Integrate additional social media platforms or CRM tools as needed.

By implementing this workflow, you'll save time on repetitive tasks and create more effective lead generation strategies.
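For step 3, a minimal sketch of shaping the lead data into a single context string before the OpenAI call. The field names from the RapidAPI responses are assumptions; map them to the actual fields your API plans return.

```javascript
// Hypothetical Code node: assemble a per-lead prompt for the AI node.
const lead = $json;

const context = [
  `Name: ${lead.name}`,
  `LinkedIn headline: ${lead.linkedin_headline ?? 'n/a'}`,          // assumed field
  `Recent tweets: ${(lead.recent_tweets ?? []).slice(0, 5).join(' | ')}`, // assumed field
].join('\n');

return [
  {
    json: {
      email: lead.email,
      prompt: `Write a personalized outreach email subject and body for this lead:\n${context}`,
    },
  },
];
```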