by Belgacem Dhiflaoui
## What Problem Does This Solve?

This workflow automates the end-to-end process of capturing company information from Google Drive, storing it semantically in Pinecone, and interacting with users via an intelligent AI chatbot. It eliminates the need for manual customer service, lead tracking, and company information retrieval—offering a fully automated, intelligent engagement system.

Perfect for teams that need to:
- Maintain accurate, AI-readable company knowledge bases
- Answer customer inquiries 24/7 using AI
- Automatically collect and log lead information
- Embed a chatbot into their website to assist potential customers

**Target Audience:** Sales teams, business owners, marketing departments, customer support reps, startup founders, or anyone looking to automate AI-powered lead generation and customer engagement.

## What Does It Do?

### Part One – Knowledge Ingestion
- **Monitors** a Google Drive folder for new .txt or document uploads.
- **Downloads** the document and splits the content into manageable chunks using a recursive character splitter.
- **Generates** embeddings via OpenAI.
- **Stores** the embeddings in a Pinecone vector database under the Q&A namespace.
- **Purpose:** This knowledge base is later used to answer business-related questions through AI.

### Part Two – AI Chatbot Engagement
- **Listens** for incoming chat messages using n8n’s chatTrigger node.
- **Activates an AI agent** (powered by GPT-4o) to respond to inquiries regarding business hours, services, products, or general company info.
- **Retrieves knowledge** using a vector search tool connected to Pinecone (newCompany_q).
- **Captures leads:** If a user shows interest, the AI collects the name, email, phone number, and specific interest, and stores them in a connected Google Sheet automatically.

## Key Features
- 🔄 Google Drive integration for real-time file processing
- 🧠 OpenAI embedding + Pinecone vector store for semantic memory
- 🤖 LangChain agent with tool-based reasoning
- 🗃️ Google Sheets integration for dynamic lead storage
- 💬 GPT-4o model for accurate, human-like conversation
- ⚙️ Modular design to expand into CRM, Notion, or email workflows
- 🌐 Website-ready chatbot endpoint

## 🧰 Setup Instructions

**Prerequisites:**
- n8n instance (cloud or self-hosted)
- Google Drive account (for uploading company data)
- Pinecone account (for vector storage)
- OpenAI API key
- Google Sheets access with OAuth2 credentials

### 📦 Installation Steps
1. **Import the Workflow:** Upload the JSON files into your n8n instance.
2. **Configure Credentials:** In n8n > Credentials, connect Google Drive, OpenAI, Pinecone, and Google Sheets.
3. **Set Pinecone Index & Namespace.** Example: Index: companyName, Namespace: Q&A
4. **Test the Flow:** Upload a sample .txt or .pdf file to the monitored Drive folder. Send a message to the chatbot (e.g., "What are your opening hours?"). Check the Google Sheet for collected user info.

## How It Works (Behind the Scenes)

**Part 1 – Data Preparation:**
1. Company files are uploaded to Google Drive.
2. The file is detected, downloaded, and chunked.
3. Embeddings are created using OpenAI.
4. Data is stored in Pinecone for semantic retrieval.

**Part 2 – Chat Interaction:**
1. A chat message triggers the workflow via webhook.
2. The AI agent interprets the intent and accesses company data via newCompany_q.
3. If lead data is gathered, it is appended to a Google Sheet using the AI-parsed values.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
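For readers who want to see what Part One boils down to outside of n8n, here is a minimal Python sketch of the same chunk-embed-upsert pipeline. The index and namespace follow the setup example above; the embedding model and chunk sizes are illustrative assumptions, not values taken from the workflow.

```python
# Minimal sketch of the Part One pipeline: split a document into chunks,
# embed each chunk with OpenAI, and upsert the vectors into Pinecone.
# Model name and chunk sizes below are assumptions for illustration.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                      # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_PINECONE_KEY")
index = pc.Index("companyName")               # index name from the setup example

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

def ingest(doc_id: str, text: str) -> None:
    chunks = splitter.split_text(text)
    # One embedding call covers all chunks of the document
    response = openai_client.embeddings.create(
        model="text-embedding-3-small",       # assumed embedding model
        input=chunks,
    )
    vectors = [
        {"id": f"{doc_id}-{i}", "values": e.embedding, "metadata": {"text": chunk}}
        for i, (e, chunk) in enumerate(zip(response.data, chunks))
    ]
    index.upsert(vectors=vectors, namespace="Q&A")  # namespace from the setup example
```

The n8n version accomplishes the same thing with the recursive character splitter, OpenAI Embeddings, and Pinecone nodes, so no code is actually required to run the workflow.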
by VipinW
# Apply to jobs automatically from Google Sheets with status tracking

## Who's it for
Job seekers who want to streamline their application process, save time on repetitive tasks, and never miss following up on applications. Perfect for anyone managing multiple job applications across different platforms.

## What it does
This workflow automatically applies to jobs from a Google Sheet, tracks application status, and keeps you updated with notifications. It handles the entire application lifecycle from submission to status monitoring.

**Key features:**
- Reads job listings from Google Sheets with filtering by priority and status
- Automatically applies to jobs on LinkedIn, Indeed, and other platforms
- Updates application status in real-time
- Checks application status every 2 days and notifies you of changes
- Sends email notifications for successful applications and status updates
- Prevents duplicate applications and manages rate limiting

## How it works
The workflow runs on two main schedules:

**Daily Application Process (9 AM, weekdays):**
1. Reads your job list from Google Sheets
2. Filters for jobs marked as "Not Applied" with Medium/High priority
3. Processes each job individually to prevent rate limiting
4. Applies to jobs using platform-specific APIs (LinkedIn, Indeed, etc.)
5. Updates the sheet with application status and reference ID
6. Sends a confirmation email for each application

**Status Monitoring (every 2 days at 10 AM):**
1. Checks all jobs with "Applied" status
2. Queries job platforms for application status updates
3. Updates the sheet if the status has changed
4. Sends notification emails for status changes (interviews, rejections, etc.)

## Requirements
- Google account with Google Sheets access
- Gmail account for notifications
- Resume stored online (Google Drive, Dropbox, etc.)
- API access to job platforms (LinkedIn, Indeed) - optional for the basic version
- n8n instance (self-hosted or cloud)

## How to set up

### Step 1: Create Your Job Tracking Sheet
Create a Google Sheet with these exact column headers:

| Job_ID | Company | Position | Status | Applied_Date | Last_Checked | Application_ID | Notes | Job_URL | Priority |
|--------|---------|----------|--------|--------------|--------------|----------------|-------|---------|----------|
| JOB001 | Google | Software Engineer | Not Applied | | | | | https://careers.google.com/jobs/123 | High |
| JOB002 | Microsoft | Product Manager | Not Applied | | | | | https://careers.microsoft.com/jobs/456 | Medium |

**Column explanations:**
- **Job_ID**: Unique identifier (JOB001, JOB002, etc.)
- **Company**: Company name
- **Position**: Job title
- **Status**: Not Applied, Applied, Under Review, Interview Scheduled, Rejected, Offer
- **Applied_Date**: Auto-filled when the application is submitted
- **Last_Checked**: Auto-updated during status checks
- **Application_ID**: Platform reference ID (auto-generated)
- **Notes**: Additional information or application notes
- **Job_URL**: Direct link to the job posting
- **Priority**: High, Medium, Low (Low-priority jobs are skipped)

### Step 2: Configure Google Sheets Access
1. In n8n, go to Credentials → Add Credential
2. Select Google Sheets OAuth2 API
3. Follow the OAuth setup process to authorize n8n
4. Test the connection with your job tracking sheet

### Step 3: Set Up Gmail Notifications
1. Add another credential for Gmail OAuth2 API
2. Authorize n8n to send emails from your Gmail account
3. Test by sending a sample email

### Step 4: Update Workflow Configuration
In the "Set Configuration" node, update these values (an example payload is sketched below):
- **spreadsheetId**: Your Google Sheet ID (found in the URL)
- **resumeUrl**: Direct link to your resume (make sure it's publicly accessible)
- **yourEmail**: Your email address for notifications
- **coverLetterTemplate**: Customize your cover letter template

### Step 5: Customize Application Logic
**For the basic version (no API access):** The workflow includes placeholder HTTP requests that you can replace with actual job platform integrations.

**For the advanced version (with API access):**
- Replace the LinkedIn/Indeed HTTP nodes with actual API calls
- Add your API credentials to n8n's credential store
- Update the platform detection logic for additional job boards

### Step 6: Test and Activate
1. Add 1-2 test jobs to your sheet with "Not Applied" status
2. Run the workflow manually to test
3. Check that the sheet gets updated and you receive notifications
4. Activate the workflow to run automatically

## How to customize the workflow

### Adding New Job Platforms
1. **Update Platform Detection:** Modify the "Check Platform Type" node to recognize new job board URLs
2. **Add New Application Node:** Create HTTP request nodes for new platforms
3. **Update Status Checking:** Add status check logic for the new platform

### Customizing Application Strategy
- **Rate Limiting**: Add "Wait" nodes between applications (recommended: 5-10 minutes)
- **Application Timing**: Modify the cron schedule to apply during optimal hours
- **Priority Filtering**: Adjust the filter conditions to match your criteria
- **Multiple Resumes**: Use conditional logic to select different resumes based on job type

### Enhanced Notifications
- **Slack Integration**: Replace Gmail nodes with Slack for team notifications
- **Discord Webhooks**: Send updates to Discord channels
- **SMS Notifications**: Use Twilio for urgent status updates
- **Dashboard Updates**: Connect to Notion, Airtable, or other productivity tools

### Advanced Features
- **AI-Powered Personalization**: Use OpenAI to generate custom cover letters
- **Job Scoring**: Implement scoring logic based on job requirements vs. your skills
- **Interview Scheduling**: Auto-schedule interviews when status changes
- **Follow-up Automation**: Send follow-up emails after specific time periods

## Important Notes

### Platform Compliance
- Always respect rate limits to avoid being blocked
- Follow each platform's Terms of Service
- Use official APIs when available instead of web scraping
- Don't spam job boards with excessive applications

### Data Privacy
- Store credentials securely using n8n's credential store
- Don't hardcode API keys or personal information in nodes
- Regularly review and clean up old application data
- Ensure your resume link is secure but accessible

### Quality Control
- Start with a small number of jobs to test the workflow
- Review application success rates and adjust strategy
- Monitor for errors and set up proper error handling
- Keep your job list updated and remove expired postings

This workflow transforms job searching from a manual, time-consuming process into an automated system that maximizes your application efficiency while maintaining quality and compliance.
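As referenced in Step 4, here is a sketch of what the "Set Configuration" payload might look like. Every value is a placeholder, and the {{company}} and {{position}} tokens are illustrative; match them to whatever templating the workflow's cover-letter logic actually uses.

```json
{
  "spreadsheetId": "1aBcD3FgHiJkLmNoPqRsTuVwXyZ",
  "resumeUrl": "https://drive.google.com/uc?id=YOUR_RESUME_FILE_ID&export=download",
  "yourEmail": "you@example.com",
  "coverLetterTemplate": "Dear {{company}} team, I am excited to apply for the {{position}} role because..."
}
```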
by algopi.io
## Who is this template for?
This workflow template is designed for Marketing and pre-Sales teams that want to capture prospects from a form tool like Tally, record the data in the well-known open-source CRM SuiteCRM, synchronize the contact in Brevo (linking it to the id from the CRM), and then send a notification in Nextcloud.

Bonus: the email is validated with CaptainVerify, and a Nextcloud notification is sent depending on the response.

## How it works
1. For each submission in the form, a webhook is triggered.
2. The email is checked with CaptainVerify.
3. Depending on the response, and if it is OK, a Lead is created in SuiteCRM. Otherwise, a message is sent to your selected discussion.
4. Once the lead has been created, a contact is created in Brevo (for future campaigns), and this contact is linked to the lead_id from the CRM in a dedicated field.
5. Finally, a message in your selected Nextcloud discussion informs you about the lead.

## Set up instructions
- Complete the Set up credentials step when you first open the workflow. You'll need a CaptainVerify account (API key), a dedicated SuiteCRM user with OAuth, a Brevo account (API key) and a Nextcloud account.
- Set up the webhook in the form tool of your choice (why not Tally?).
- Set each node with the explanations in the sticky notes.
- Enjoy!

Template was created in n8n v1.44.1
by Marvin Wu
## Who is this for?
This workflow is designed for n8n users and developers who need to automate the documentation process of their n8n workflows. It's particularly useful for teams looking to streamline their documentation efforts and ensure consistency across their workflow documentation.

## What problem is this workflow solving? / Use case
The primary problem this workflow addresses is the manual and time-consuming process of creating documentation for n8n workflows. It automates the generation of concise, clear, and comprehensive documentation directly from the workflow's JSON, making it easier for both technical and non-technical users to understand what the workflow does and how it operates.

## What this workflow does
Upon receiving a form submission with the workflow title and JSON, this workflow automatically generates documentation that includes:
- A brief introduction to the workflow.
- The trigger mechanism (webhook URLs for test and production environments, or cron schedules).
- Setup requirements, including necessary credentials and external dependencies.

## Setup
- **Credentials Setup**: Ensure you have OpenAI API credentials configured in n8n to use the GPT model for generating documentation text.
- **Form Submission**: Users must submit the form with the workflow title and JSON. The form is accessible via:
  - Test URL: domain/form-test/{webhookId}
  - Production URL: domain/form/{webhookId}

## How to customize this workflow to your needs
- **Modify Trigger URLs**: Adjust the webhook or form URLs based on your domain and specific n8n setup.
- **Customize Documentation Template**: Edit the OpenAI node's prompt to change the structure or details of the generated documentation (an illustrative prompt is sketched below).
- **Extend Functionality**: Add nodes to integrate with other systems (e.g., automatically publishing the documentation to a wiki or sending it via email).

This workflow simplifies the documentation process, making it accessible and manageable for teams of all sizes and technical abilities. By automating documentation, it ensures that all workflows are properly documented, enhancing understanding and efficiency within teams.
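To give a sense of the Customize Documentation Template step, the OpenAI node's prompt could be shaped roughly like this. The wording is an illustrative sketch, not the template's actual prompt; {{title}} and {{workflow_json}} stand in for the submitted form fields:

```text
You are a technical writer. Given an n8n workflow title and its JSON export,
produce concise documentation with exactly three sections:

1. Introduction - one short paragraph on what the workflow does.
2. Trigger - how it starts: webhook URLs (test and production) or cron schedule.
3. Setup - required credentials and external dependencies.

Workflow title: {{title}}
Workflow JSON: {{workflow_json}}
```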
by Ficky
# Build a Redis-Powered CRUD App with HTML Frontend

This workflow demonstrates how to use n8n to build a complete, self-contained CRUD (Create, Read, Update, Delete) application without relying on any external server or hosting. It not only acts as the backend, handling all CRUD operations through Webhook endpoints, but also serves a fully functional HTML Single Page Application (SPA) directly via a webhook response.

Redis is used as a lightweight data store, providing fast and simple key-value storage with auto-incremented IDs. Because both the frontend (HTML app) and backend (API endpoints) are managed entirely within a single n8n workflow, you can quickly prototype or deploy small tools without additional infrastructure.

This approach is ideal for:
- Rapidly creating no-code or low-code applications
- Running fully browser-based tools served directly from n8n
- Teaching or demonstrating n8n + Redis integration in a single workflow

## Features
- Add new item with auto-incremented ID
- Edit existing item
- Delete specific item
- Reset all data (clear storage and reset the auto-increment ID)
- Single HTML frontend for demonstration (no framework required)

## Setup Instructions

### 1. Prerequisites
Before importing and running the workflow, make sure you have:
- A running n8n instance (self-hosted or cloud)
- A running Redis server (local or remote)

### 2. API Path Setup
For the REST API, use a consistent path. For example, if you choose `items` as the path:
- **2a. Get All Items**: Method: GET, Endpoint: `items`
- **2b. Add Item**: Method: POST, Endpoint: `items`
- **2c. Edit Item**: Method: PUT, Endpoint: `items`
- **2d. Delete Item**: Method: DELETE, Endpoint: `items`
- **2e. Reset Items**: Method: POST, Endpoint: `items-reset`

### 3. Configure the API URL
Set the API URL in the SET API URL node. Use your n8n webhook URL, for example: https://yourn8n.com/webhook/items

### 4. Run the HTML App
Once everything is set:
- Open the webhook URL for the HTML app in a browser.
- The CRUD interface will load and connect to the API endpoints automatically.
- You can now add, edit, delete, or reset items directly from the web interface.

## Workflows

### 1. Render the HTML CRUD App
This webhook serves a self-contained HTML Single Page Application (SPA) for basic CRUD operations. The HTML content is returned directly in the webhook response. This setup is ideal for lightweight, browser-based tools without external hosting.

**How to Use**
- Open the webhook URL in a browser.
- The CRUD interface will load and connect to the data source via API calls.
- Before using, make sure to edit the api_url in the SET API URL node to match your webhook endpoint.

### 2a. REST API: Get All Items
This webhook handles retrieving all saved items from Redis. Each item is returned with its corresponding ID and associated data (e.g., name). This endpoint is used by the HTML CRUD App to display the full list of items.
- **Method**: GET
- **Function**: Fetches all items stored in Redis and returns them as a JSON array

### 2b. REST API: Add Item
This webhook handles the Add Item functionality. This endpoint is typically called by the HTML CRUD App when adding a new item.
- **Method**: POST
- **Request Body**: `{ "name": "item name" }`
- **Function**: Generates an auto-incremented ID using Redis and saves the data under that ID

### 2c. REST API: Edit Item
This webhook handles updating an existing item in Redis.
- **Method**: PUT
- **Request Body**: `{ "id": 1, "name": "Updated Item Name" }`
- **Function**: Finds the item by the given id and updates its data in Redis

### 2d. REST API: Delete Item
This webhook handles deleting a specific item from Redis.
- **Method**: DELETE
- **Request Body**: `{ "id": 1 }`
- **Function**: Removes the item with the given id from Redis

### 2e. REST API: Reset Items
This webhook handles resetting all data in the application.
- **Method**: POST
- **Function**: Deletes all stored items from Redis and resets the auto-increment ID by deleting the counter data in Redis
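To make the storage model concrete, here is a minimal Python sketch (using redis-py) of the Redis operations the five endpoints imply. The key names (an `items:id` counter and an `items` hash) are illustrative assumptions; the workflow's own Redis nodes may use a different layout.

```python
# Sketch of the Redis operations behind the CRUD endpoints above.
# Assumed key layout: "items:id" holds the auto-increment counter,
# "items" is a hash mapping item id -> JSON-encoded item data.
import json
import redis

r = redis.Redis(decode_responses=True)

def add_item(name: str) -> dict:
    new_id = r.incr("items:id")              # auto-incremented ID
    r.hset("items", str(new_id), json.dumps({"name": name}))
    return {"id": new_id, "name": name}

def get_all_items() -> list:
    return [
        {"id": int(item_id), **json.loads(data)}
        for item_id, data in r.hgetall("items").items()
    ]

def edit_item(item_id: int, name: str) -> None:
    r.hset("items", str(item_id), json.dumps({"name": name}))

def delete_item(item_id: int) -> None:
    r.hdel("items", str(item_id))

def reset_items() -> None:
    r.delete("items", "items:id")            # clears storage and the counter
```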
by Don Jayamaha Jr
📊 This AI sub-agent aggregates Tesla (TSLA) trading signals across multiple timeframes using real-time technical indicators and candlestick behavior. It is a core component of the Tesla Quant Trading AI system. Powered by GPT-4.1, it consolidates 15-minute, 1-hour, and 1-day indicators, adds candlestick pattern data, and produces a unified JSON signal for downstream use by the master agent.

⚠️ This agent is not standalone. It is triggered by the Tesla Quant Trading AI Agent via Execute Workflow.
🧠 Requires: 4 connected sub-agents and an Alpha Vantage Premium API Key

## 🔌 Required Sub-Workflows
To use this workflow, you must install:
- Tesla 15min Indicators Tool
- Tesla 1hour Indicators Tool
- Tesla 1day Indicators Tool
- Tesla 1hour and 1day Klines Tool
- Tesla Quant Technical Indicators Webhooks Tool (provides Alpha Vantage data)

## 🧠 What This Agent Does
- Fetches pre-cleaned 20-point JSON outputs from the 4 sub-agents listed above
- Analyzes each timeframe individually:
  - 15m: momentum and short-term setups
  - 1h: confirmation of emerging trends
  - 1d: macro positioning and trend alignment
  - Klines: candlestick reversal patterns and volume divergence
- Generates a structured final signal in JSON with:
  - Trading stance: Buy, Sell, Hold, or Cautious
  - Confidence score (0.0–1.0)
  - Multi-timeframe indicator breakdown
  - Candlestick and volume divergence annotations

## 📋 Sample Output
```json
{
  "summary": "TSLA momentum is weakening short-term. 1h MACD shows bearish crossover, RSI declining. 1d candles confirm potential reversal setup.",
  "signal": "Cautious Sell",
  "confidence": 0.81,
  "multiTimeframeInsights": {
    "15m": { "RSI": 68.3, "MACD": { "macd": 0.53, "signal": 0.61 }, ... },
    "1h": { "RSI": 65.0, "MACD": { "macd": -0.32, "signal": 0.11 }, ... },
    "1d": { "BBANDS": { ... }, ... },
    "candlestickPatterns": { "1h": "Doji", "1d": "Bearish Engulfing" },
    "volumeDivergence": { "1h": "Bearish", "1d": "Neutral" }
  }
}
```

## 🛠️ Setup Instructions
1. Import this workflow into n8n and name it: Tesla_Financial_Market_Data_Analyst_Tool
2. Add the required API credentials:
   - Alpha Vantage Premium (via HTTP Query Auth)
   - OpenAI GPT-4.1 for reasoning and synthesis
3. Link the required sub-agents:
   - Connect the 4 tool workflows listed above to their respective Tool Workflow nodes
   - Connect the webhook provider for data fetches
4. Set up as a sub-agent:
   - This workflow must be triggered using Execute Workflow from the parent agent
   - Pass in: message (optional context) and sessionId (used for memory continuity)

## 🧾 Sticky Notes Provided
- 📘 Tesla Financial Market Data Analyst — Core logic overview
- 📈 15m / 1h / 1d Tool Notes — Indicator lists + use cases
- 🕯️ Klines Tool Note — Candlestick and volume divergence patterns
- 🧠 GPT Reasoning Note — GPT-4.1 handles final synthesis
- 🧩 Sub-Workflow Trigger — Proper integration with parent agent
- 🧠 Memory Buffer — Maintains session context across evaluations

## 🔒 Licensing & Support
© 2025 Treasurium Capital Limited Company
The logic, prompt design, and multi-agent architecture are proprietary and IP-protected. For support or collaboration inquiries:
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Unify your Tesla trading logic across timeframes—automated, AI-powered, and built for scalers and swing traders.
by David Ashby
# 🛠️ MailerLite Tool MCP Server

Complete MCP server exposing all MailerLite Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

## ⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

## 🔧 How it Works
- **MCP Trigger**: Serves as your server endpoint for AI agent requests
- **Tool Nodes**: Pre-configured for every MailerLite Tool operation
- **AI Expressions**: Automatically populate parameters via $fromAI() placeholders
- **Native Integration**: Uses the official n8n MailerLite Tool node with full error handling

## 📋 Available Operations (4 total)
Every possible MailerLite Tool operation is included:

**🔧 Subscriber (4 operations)**
- Create a subscriber
- Get a subscriber
- Get many subscribers
- Update a subscriber

## 🤖 AI Integration
**Parameter Handling:** AI agents automatically provide values for:
- Resource IDs and identifiers
- Search queries and filters
- Content and data payloads
- Configuration options

**Response Format:** Native MailerLite Tool API responses with full data structure

**Error Handling:** Built-in n8n error management and retry logic

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop**: Add the MCP server URL to your configuration
- **Custom AI Apps**: Use the MCP URL as a tool endpoint
- **Other n8n Workflows**: Call MCP tools from any workflow
- **API Integration**: Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Complete Coverage**: Every MailerLite Tool operation available
- **Zero Setup**: No parameter mapping or configuration needed
- **AI-Ready**: Built-in $fromAI() expressions for all parameters
- **Production Ready**: Native n8n error handling and logging
- **Extensible**: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
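To make the $fromAI() mechanism above concrete, a pre-configured parameter on, say, the Create Subscriber operation could look like the sketch below. The field names are illustrative rather than copied from this workflow:

```json
{
  "email": "={{ $fromAI('email', 'Email address of the subscriber to create', 'string') }}",
  "name": "={{ $fromAI('name', 'Display name of the subscriber', 'string') }}"
}
```

At call time the connected AI agent supplies a value for each placeholder, which is why no manual parameter mapping is needed.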
by Davide
This low-code automation enables all eCommerce store visitors to upload a photo of themselves and virtually “try on” a garment in just a few clicks. With this workflow, WooCommerce, PrestaShop, Shopify and other merchants can offer a cutting-edge “virtual try-on” feature with minimal development effort, enhancing customer engagement and reducing product returns.

## Key Advantages
- **Zero-Coding, Visual Setup**: Build end-to-end e-commerce features with drag-and-drop nodes instead of custom backend code.
- **Asynchronous, Scalable Processing**: A non-blocking “Wait” + “If” loop handles multi-second AI jobs gracefully, freeing up the workflow for other tasks.
- **Dynamic Inputs & URLs**: Query strings (e.g. ?Product=IMAGE_URL) allow you to embed the form on any product page and pass the garment image on the fly.
- **Seamless User Experience**: An instant pop-up within your storefront and automatic redirect to the generated mock-up keeps shoppers engaged without page reloads.
- **Easy Credential Management**: API keys, FTP credentials, and webhook IDs are all stored securely in n8n’s credential manager.

## How It Works
1. **Form Submission:** A user submits a form with their name, an image of themselves ("Me"), and a hidden product image URL ("Product"). The form is triggered via the On form submission node, which collects the input data.
2. **Image Upload:** The uploaded image ("Me") is sent to an FTP server for temporary storage using the FTP node. The filename includes a timestamp to ensure uniqueness.
3. **Virtual Try-on Request:** The Create Image node sends a POST request to the Fal.run API, providing the uploaded human image URL (from FTP) and the product image URL (from the hidden form field). This generates a virtual try-on result.
4. **Result Processing:** The workflow checks the status of the image generation (Get status node) in a loop (with a 10-second wait between checks) until it is marked as "COMPLETED." Once ready, the final image URL is fetched (Get Url image node) and displayed to the user via a redirect (Form node).
5. **User Experience:** The user is redirected to the generated try-on image, completing the process.

## Set Up Steps
1. **API Key Setup:**
   - Create an account and obtain an API key.
   - Configure the Create Image node with HTTP Header Authentication: Name: Authorization, Value: Key YOURAPIKEY
2. **FTP/S3 Configuration:**
   - Set up an FTP server or S3 bucket to temporarily store uploaded user images.
   - Configure the FTP node with your FTP credentials and storage path.
3. **Ecommerce Integration:**
   - On your WooCommerce site, add a "Try On" button that opens the form in a pop-up.
   - Dynamically pass the product image URL as a query parameter. Example: https://URL_N8N/form/ca1c314d-46c6-4eeb-b6a5-359XXXXXX?Product=IMAGE_URL
4. **Testing:**
   - Verify the workflow by submitting a test form and ensuring the virtual try-on image is generated and displayed correctly.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
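For readers curious what the Create Image / Get status loop amounts to, here is a minimal Python sketch of the same submit-then-poll pattern. The queue endpoint, request fields, and response shape are assumptions based on fal's queue-style API; check the fal.ai documentation for your chosen model before relying on them.

```python
# Sketch of the workflow's polling pattern: submit the try-on job, then
# check its status every 10 seconds until COMPLETED (mirroring the Wait
# + If loop). Endpoint and field names below are assumptions.
import time
import requests

HEADERS = {"Authorization": "Key YOURAPIKEY"}  # same header as the Create Image node

def virtual_try_on(human_image_url: str, garment_image_url: str) -> str:
    submit = requests.post(
        "https://queue.fal.run/MODEL_ENDPOINT",        # placeholder model endpoint
        headers=HEADERS,
        json={
            "human_image_url": human_image_url,        # field names are assumptions
            "garment_image_url": garment_image_url,
        },
    ).json()
    status_url = submit["status_url"]                  # assumed queue response shape
    result_url = submit["response_url"]

    while requests.get(status_url, headers=HEADERS).json().get("status") != "COMPLETED":
        time.sleep(10)                                 # same 10s interval as the Wait node

    return requests.get(result_url, headers=HEADERS).json()["image"]["url"]  # assumed shape
```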
by James Carter
This n8n workflow automatically fetches trending news articles based on your chosen country, category, and keyword — then enriches the data with AI-powered business insights before posting a concise summary to Slack. Ideal for sales teams, executives, marketers, or anyone who wants fast, actionable news briefings directly in their Slack workspace.

## Who it’s for
Executives, analysts, sales teams, or marketing professionals who want curated, AI-enhanced news summaries tailored to business opportunities, risks, and trends — delivered automatically to Slack.

## How it works / What it does
1. A Schedule Trigger runs on a daily, weekly, or custom frequency.
2. It queries the NewsAPI to retrieve top headlines by country, category, or keyword (see the request sketch below).
3. Headlines are formatted and enriched with your configured query context.
4. The AI model (GPT-4) analyzes the articles and summarizes key insights, categorizing them as Opportunities, Risks, or Trends.
5. Finally, the summarized insights are posted directly into a Slack channel of your choice.

## How to set up
1. Set your schedule frequency in the Schedule Trigger node.
2. Configure your preferred country, category, and keyword in the Inject Config node.
3. Add your NewsAPI key inside the Fetch News Articles node.
4. Connect your Slack credentials in the Post to Slack node.
5. Optional: Adjust the AI prompt for more tailored analysis.

## Requirements
- A NewsAPI account to fetch headlines.
- An OpenAI API key for GPT-4 summarization.
- A Slack workspace and connected credentials via n8n.

## How to customize the workflow
- Change the country, category, or keyword in the Inject Config node to focus on specific markets or sectors.
- Adjust the AI prompt in the GPT node to prioritize certain insights like ESG factors, M&A activity, or market sentiment.
- Extend the workflow to log results to Google Sheets, email summaries, or send SMS alerts.
- Replace the Schedule Trigger with a Webhook if you want to trigger summaries on demand.

This template is designed to be modular, making it easy to adapt for competitive intelligence, investment tracking, or industry news curation.
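As referenced in step 2 above, here is a minimal Python sketch of the underlying NewsAPI request. The endpoint is NewsAPI's standard top-headlines route; the parameter values are examples of what the Inject Config node might hold:

```python
# Fetch top headlines from NewsAPI and format them as a digest,
# roughly what the Fetch News Articles node does before the GPT step.
import requests

params = {
    "country": "us",          # example country from Inject Config
    "category": "business",   # example category from Inject Config
    "q": "acquisition",       # optional keyword filter
    "apiKey": "YOUR_NEWSAPI_KEY",
}
resp = requests.get("https://newsapi.org/v2/top-headlines", params=params)
articles = resp.json().get("articles", [])

headlines = [f"- {a['title']} ({a['source']['name']})" for a in articles]
print("\n".join(headlines))   # this digest is what gets passed to GPT-4
```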
by Oneclick AI Squad
Transform your meetings into actionable insights automatically! This workflow captures meeting audio, transcribes conversations, generates AI summaries, and emails the results to participants—all without manual intervention.

## What's the Goal?
- **Auto-record meetings** when they start and stop when they end
- **Transcribe audio** to text using Vexa Bot integration
- **Generate intelligent summaries** with AI-powered analysis
- **Email summaries** to meeting participants automatically
- **Eliminate manual note-taking** and post-meeting admin work
- **Never miss important discussions** or action items again

## Why Does It Matter?
- **Save 90% of Post-Meeting Time**: No more manual transcription or summary writing
- **Never Lose Key Information**: Automatic capture ensures nothing falls through the cracks
- **Improve Team Productivity**: Focus on discussions, not note-taking
- **Perfect Meeting Records**: Searchable transcripts and summaries for future reference
- **Instant Distribution**: Summaries reach all participants immediately after meetings

## How It Works

### Step 1: Meeting Detection & Recording
- **Start Meeting Trigger**: Detects when the meeting begins via a Google Meet webhook
- **Launch Vexa Bot**: Automatically joins the meeting and starts recording
- **End Meeting Trigger**: Detects the meeting end and stops recording

### Step 2: Audio Processing & Transcription
- **Stop Vexa Bot**: Ends recording and retrieves the audio file
- **Fetch Meeting Audio**: Downloads the recorded audio from Vexa Bot
- **Transcribe Audio**: Converts speech to text using AI transcription

### Step 3: AI Summary Generation
- **Prepare Transcript**: Formats the transcribed text for AI processing
- **Generate Summary**: The AI model creates a concise meeting summary with:
  - Key discussion points
  - Decisions made
  - Action items assigned
  - Next steps identified

### Step 4: Distribution
- **Send Email**: Automatically emails the summary to all meeting participants

## Setup Requirements
1. **Google Meet Integration:**
   - Configure the Google Meet webhook and API credentials
   - Set up meeting detection triggers
   - Test with a sample meeting
2. **Vexa Bot Configuration:**
   - Add Vexa Bot API credentials for recording
   - Configure audio file retrieval settings
   - Set recording quality and format preferences
3. **AI Model Setup:**
   - Configure an AI transcription service (e.g., OpenAI Whisper, Google Speech-to-Text)
   - Set up AI summary generation with custom prompts (a starting-point prompt is sketched below)
   - Define summary format and length preferences
4. **Email Configuration:**
   - Set up SMTP credentials for email distribution
   - Create email templates for meeting summaries
   - Configure participant list extraction from meeting metadata

## Import Instructions
1. **Get Workflow JSON**: Copy the workflow JSON code
2. **Open n8n Editor**: Navigate to your n8n dashboard
3. **Import Workflow**: Click the menu (⋯) → "Import from Clipboard" → Paste JSON → Import
4. **Configure Credentials**: Add API keys for Google Meet, Vexa Bot, AI services, and SMTP
5. **Test Workflow**: Run a test meeting to verify end-to-end functionality

Your meetings will now automatically transform into actionable summaries delivered to your inbox!
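As referenced in the AI Model Setup step, a summary prompt along these lines is a reasonable starting point. The wording is illustrative, not taken from the workflow, and {{transcript}} stands in for the prepared transcript:

```text
Summarize the following meeting transcript. Return four sections:
1. Key discussion points (bullet list)
2. Decisions made
3. Action items, each with an owner and due date if mentioned
4. Next steps

Keep the summary under 300 words and use participants' names where available.

Transcript:
{{transcript}}
```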
by Ranjan Dailata
## Who this is for
The Real Estate Intelligence Tracker is a powerful automated workflow designed for real estate analysts, investors, proptech startups, and market researchers who need to collect and analyze structured data from real estate listings across the web at scale.

This workflow is tailored for:
- **Real Estate Analysts** - Tracking property prices, locations, and market trends
- **Investment Firms** - Sourcing high-opportunity listings for portfolio decisions
- **PropTech Developers** - Automating listing insights for SaaS platforms
- **Market Researchers** - Extracting insights from competitive housing data
- **Growth Teams** - Monitoring geographic property trends and pricing fluctuations

## What problem is this workflow solving?
Collecting structured real estate listing data from property websites is difficult due to bot protections and unstructured HTML content. Manual data collection is slow and error-prone, and traditional scrapers often get blocked or miss context.

This workflow solves:
- Automated bypass of anti-bot protection using Bright Data Web Unlocker
- Conversion of unstructured HTML content into clean text using a Markdown-to-text LLM pipeline
- Structured extraction of key listing data like price, location, property type, and features using OpenAI
- Aggregation and delivery of insights to Google Sheets, local storage, and webhook-based alerts

## What this workflow does
1. **Convert to Text**: Transforms scraped HTML/markdown into clean text using a Basic LLM Chain
2. **Structured Data Extraction**: Uses OpenAI GPT-4o with the Information Extractor node to parse property attributes (price, address, area, type, etc.)
3. **Aggregate & Merge**: Combines data from multiple pages or listings into a cohesive structure
4. **Outbound Data Handling**:
   - **Google Sheets** – Appends the structured real estate data for further analysis
   - **Save to Disk** – Persists structured JSON/text data locally
   - **Webhook Notification** – Sends data alerts or summaries to any third-party platform

## Pre-conditions
- You need to have a Bright Data account and do the necessary setup as mentioned in the "Setup" section below.
- You need to have an OpenAI account.

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation - Set Google Sheet Credential
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.

## How to customize this workflow to your needs

### Target Multiple Sites or Locations
- Update the Bright Data URL node dynamically with a list of regional real estate websites
- Loop through different city/state filter URLs

### Customize Extracted Fields
Modify the Information Extractor prompt to extract fields like the following (an illustrative schema is sketched below):
- Property size, number of bedrooms/bathrooms
- Days on market
- Nearby amenities or schools
- Agent contact details

### Integrate with More Destinations
- Add nodes to export data to Notion, Airtable, HubSpot, or your custom database
- Generate automated reports using PDF generators and email them

### Data Quality and Logging
- Add validation checks (e.g., missing price or address)
- Save intermediate files (markdown, raw HTML, JSON output) to disk for audit purposes
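As a rough illustration of the Customize Extracted Fields step, the attribute list given to the Information Extractor might be shaped like this. The field names and descriptions are assumptions to adapt to your target listing sites:

```json
{
  "price": "number - listing price in local currency",
  "address": "string - full street address",
  "property_type": "string - e.g. apartment, house, condo",
  "area": "number - size in square feet or square meters",
  "bedrooms": "number",
  "bathrooms": "number",
  "days_on_market": "number",
  "agent_contact": "string - name and phone/email if present"
}
```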
by Anurag Srivastava
# 🧠 AI Prompt Generator Workflow – n8n Documentation

## Who is this for?
This workflow is for AI builders, prompt engineers, developers, marketers, and no-code creators who want to convert rough user input into structured, high-quality prompts for LLMs. It’s especially useful for tools that rely on precision prompting and want to automate the discovery of intent and constraints.

## What problem is this workflow solving? / Use case
Many users struggle to write effective prompts due to vague ideas or unclear formatting needs. This workflow:
- Collects structured user input.
- Dynamically generates clarifying questions.
- Returns a well-formatted AI prompt based on the user's intent and context.

This ensures the generated prompt is useful for downstream AI agents without requiring technical understanding from the end user.

## What this workflow does
1. **Start with a branded form UI** – The user is shown a styled form with questions like:
   - What do you want to build?
   - What tools can you access?
   - What input can be expected?
   - What output do you expect?
2. **Analyze and generate relevant follow-up questions** – The workflow sends the user's answers to Google Gemini (via LangChain), which outputs 1–3 clarifying questions. These questions are parsed into a dynamic form.
3. **Loop through and collect follow-up answers** – Each follow-up question is shown in a form one at a time to capture additional context.
4. **Merge all inputs** – The base intent and follow-up responses are merged into a single context block.
5. **Generate a final AI-ready prompt** – The prompt generator node formats everything into a clean, six-section structure: <constraints>, <role>, <inputs>, <tools>, <instructions>, <conclusions>.
6. **Display the final result** – The finished prompt is shown in a clean UI where users can easily copy and reuse it.

## Setup
- **Credentials Required**: Google Gemini (PaLM) API credentials (already integrated as Google Gemini(PaLM) Api account 2).
- **Form Trigger**: Ensure the On form submission trigger is exposed via a webhook or public endpoint (e.g. using ngrok or a deployed server).
- **Styling**: Custom CSS is included in all form nodes for a beautiful UI. You can modify this to match your branding.
- **Environment**: This workflow is compatible with self-hosted n8n or n8n.cloud. Webhooks must be accessible to users who will fill out the form.

## How to customize this workflow to your needs
- **Change the base questions**: Update the BaseQuestions form node to add or remove fields depending on your use case.
- **Modify Gemini prompts**: You can edit the system prompt inside PromptGenerator to change tone, output structure, or AI instructions.
- **Change prompt formatting**: If you use a different AI agent (like GPT, Claude, or Mistral), adjust the section labels and formatting to suit that agent’s expected input.
- **Send results elsewhere**: Add integration nodes after PromptGenerator, such as:
  - Google Docs / Notion (to log prompts)
  - Gmail / Slack (to notify your team)
  - Zapier / Make (to push to other automation flows)
- **Skip follow-up questions (optional)**: If your base form collects all needed info, you can bypass the RelevantQuestions form section by modifying conditional logic.

## Example Output Prompt (Structure)
```
<role>
You are an AI assistant that converts videos into LinkedIn posts with a witty tone.
</role>

<inputs>
- A short video (max 5 minutes)
- Desired tone: witty
- Style: both summary and quotes
- Audience: general network
</inputs>

<tools>
You do not have access to APIs or web search.
</tools>

<instructions>
1. Parse transcript.
2. Extract insights and quotes.
3. Write an engaging, witty LinkedIn post under 3000 characters.
</instructions>

<constraints>
Avoid technical jargon. No generic intros. Make it platform-native.
</constraints>

<conclusions>
Return a LinkedIn-ready post that starts with a hook and ends with hashtags.
</conclusions>
```