by InfyOm Technologies
✅ What problem does this workflow solve?
Many websites lack a smart, searchable interface, and visitors often leave with unanswered questions. This workflow transforms any website into a Retrieval-Augmented Generation (RAG) chatbot—automatically extracting content, creating embeddings, and enabling real-time, context-aware chat on your own site.

⚙️ What does this workflow do?
- Accepts a website URL through a form trigger.
- Fetches and cleans website content.
- Parses content into smaller sections.
- Generates vector embeddings using OpenAI (or your embedding model).
- Stores embeddings and metadata in Supabase's vector database.
- When a user asks a question:
  - Searches Supabase for relevant chunks via similarity search.
  - Retrieves matching content as context.
  - Sends context + question to OpenAI to generate an accurate answer.
  - Returns the AI-generated response to the user in the chat interface.

🔧 Setup Instructions
🖥️ Website Form Trigger
- Use a Form / HTTP Trigger to submit website URLs for indexing.
📥 Content Extraction & Chunking
- Use HTTP nodes to fetch HTML, then clean and parse it (e.g., remove scripts, ads).
- Use a Function node to split the text into manageable chunks (see the sketch after this section).
🧠 Embedding Generation
- Call OpenAI (or Cohere) to generate embeddings for each chunk.
- Insert vectors and metadata into Supabase via its API or the n8n Supabase node.
💬 User Query Handling
- Use a Chat Trigger (webhook/UI) to receive user questions.
- Convert the question into an embedding.
- Query Supabase with a similarity search (e.g., the match_documents RPC).
- Retrieve the top-matching chunks and feed them, together with the user question, into OpenAI.
- Return the reply to the user.
🛠 AI & Database Setup
- OpenAI API key for embedding and chat.
- A Supabase project with:
  - the vector extension enabled
  - tables for document chunks and embeddings
  - a similarity search function such as match_documents

💬 How to Embed the Chat Widget on Your Website
You can add the chatbot interface to your website with a simple JavaScript snippet.
Steps:
1. Open the "When chat message received" node.
2. Copy the Chat URL.
3. Make sure the "Make Chat Publicly Available" toggle is enabled.
4. Make sure the mode is "Embedded Chat".
5. Follow the instructions given for this package here.

🧠 How it Works
1. Submit URL → Form Trigger
2. Fetch Website Content → HTTP Request
3. Clean & Chunk Content → Function Node
4. Make Embeddings (OpenAI/Cohere)
5. Store in Supabase → embeddings + metadata
6. User Chat → Chat Trigger
7. Search for Similar Content → Supabase similarity match
8. Generate Answer → OpenAI completion w/ context
9. Send Reply → Chat interface returns answer

🗂 Why Supabase?
Supabase offers a scalable Postgres-based vector database with extensions like pgvector, making it easy to:
- Store vector data alongside metadata
- Run ANN (Approximate Nearest Neighbor) similarity searches
- Integrate seamlessly with n8n and your chatbot UI

👤 Who can use this?
- 📝 Documentation websites
- 👩💼 Support portals
- 🏢 Product/landing pages
- 🛠 Internal knowledge bases
Perfect for anyone who wants a smart, website-specific chatbot without building an entire AI stack from scratch.

🚀 Ready to Deploy?
Plug in your:
- ✅ OpenAI API Key
- ✅ Supabase project credentials
- ✅ Chat UI or webhook endpoint
… and launch your AI-powered, website-specific RAG chatbot in minutes!
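As referenced in the chunking step above, here is a minimal sketch of how the Function/Code node could split page text into chunks. It assumes the cleaned page text arrives on each item as json.text and the source URL as json.url, and uses roughly 1,000-character chunks with a small overlap; the field names and sizes are illustrative, not part of the template.

```javascript
// n8n Code node (Run Once for All Items) — illustrative chunking sketch.
// Assumes each incoming item carries the cleaned page text in json.text
// and its source URL in json.url; adjust the field names to your workflow.
const CHUNK_SIZE = 1000;   // characters per chunk (tune for your embedding model)
const OVERLAP = 100;       // characters shared between neighbouring chunks

const out = [];
for (const item of $input.all()) {
  const text = item.json.text || '';
  const url = item.json.url || '';
  for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
    const chunk = text.slice(start, start + CHUNK_SIZE);
    out.push({
      json: {
        content: chunk,
        metadata: { url, offset: start }, // stored alongside the embedding in Supabase
      },
    });
  }
}
return out;
```

Each output item then maps to one row (content + metadata) inserted into the Supabase vector table by the next node.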
by Ferenc Erb
Use Case
Extend Bitrix24 tasks with custom widgets that display relevant task information and enable seamless interaction through a custom tab interface.

What This Workflow Does
- Processes incoming webhook requests from Bitrix24 task interfaces
- Handles authentication and secure token validation (see the sketch after this section)
- Manages application installation and placement registration
- Displays task data in a custom formatted view
- Stores and retrieves configuration settings persistently
- Provides user-friendly HTML interfaces for task information

Setup Instructions
1. Configure Bitrix24 webhook endpoints for the task widget
2. Set up authentication credentials in your Bitrix24 account
3. Install the application and register the task view tab placement
4. Customize the task data display format as needed
5. Deploy and test the application functionality within Bitrix24 tasks
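For illustration only, a minimal sketch of the token-validation step as it might look in an n8n Code node placed after the webhook trigger. The Bitrix24 field names used here (member_id, AUTH_ID, PLACEMENT_OPTIONS), the hypothetical "Get Settings" node, and the idea of comparing member_id against a stored value are assumptions; verify them against the actual POST body your Bitrix24 portal sends.

```javascript
// n8n Code node — illustrative validation of an incoming Bitrix24 placement request.
// Field names (member_id, AUTH_ID, PLACEMENT_OPTIONS) are assumptions; verify
// against the actual request body your Bitrix24 portal sends.
const body = $input.first().json.body || {};

// Portal identifier expected for this installation (hypothetical stored setting
// read from a hypothetical "Get Settings" node).
const expectedMemberId = $('Get Settings').first()?.json?.member_id;

if (!body.AUTH_ID || !body.member_id) {
  throw new Error('Missing Bitrix24 auth token or portal id');
}
if (expectedMemberId && body.member_id !== expectedMemberId) {
  throw new Error('Request came from an unexpected Bitrix24 portal');
}

// PLACEMENT_OPTIONS typically arrives as a JSON string describing the context,
// e.g. which task the custom tab was opened on.
let placement = {};
try {
  placement = JSON.parse(body.PLACEMENT_OPTIONS || '{}');
} catch (e) {
  placement = {};
}

return [{ json: { authToken: body.AUTH_ID, memberId: body.member_id, placement } }];
```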
by Artem Boiko
Revit to HTML Quantity Takeoff Generator
Automates extraction of wall quantities from Revit models and creates a professional interactive HTML report.

Key Features
- Automated wall quantity analysis
- Calculates volumes by wall type ("Type Name") — see the sketch after this section
- Generates an interactive HTML QTO report
- Includes summary statistics: total elements, total and average volumes
- Provides a detailed breakdown by element type

How it works
1. Upload a Revit file as input
2. The workflow extracts wall quantities and types
3. It creates and saves a ready-to-share HTML dashboard with the QTO data

- No API keys required
- Runs offline
- Output is a professional, ready-to-use HTML report
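The per-type aggregation behind the summary statistics can be expressed in a Code node. This is a minimal sketch assuming the extracted wall elements arrive as items with json.typeName and json.volume fields; the field names are illustrative and should match whatever your Revit extraction step outputs.

```javascript
// n8n Code node — illustrative wall QTO aggregation.
// Assumes one item per wall element with json.typeName and json.volume (m³);
// adjust field names to the output of your Revit extraction step.
const walls = $input.all().map(i => i.json);

const byType = {};
let totalVolume = 0;

for (const wall of walls) {
  const type = wall.typeName || 'Unknown';
  const volume = Number(wall.volume) || 0;
  byType[type] = byType[type] || { count: 0, volume: 0 };
  byType[type].count += 1;
  byType[type].volume += volume;
  totalVolume += volume;
}

const summary = {
  totalElements: walls.length,
  totalVolume,
  averageVolume: walls.length ? totalVolume / walls.length : 0,
  breakdown: Object.entries(byType).map(([typeName, v]) => ({ typeName, ...v })),
};

// The summary object can then be fed into the HTML-report step.
return [{ json: summary }];
```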
by Intuz
This n8n template delivers a complete AI-powered solution for automated LinkedIn posts, including unique content, custom images, and optimized hashtags.

Use cases are many: generate and schedule tailored LinkedIn content for different scenarios. By feeding the AI specific prompts, you can create posts tailored to your chosen topics and visuals, maintaining both consistency and an active online presence.

How it works
Maintaining a consistent and engaging presence on LinkedIn can be time-consuming, requiring constant ideation, content creation, and manual posting. This workflow takes that burden off your shoulders, delivering a fully automated solution for generating and publishing high-quality LinkedIn content.
- Scheduled Content Engine: Each day (or on your chosen schedule), the workflow kicks into gear, ensuring a fresh stream of content.
- Smart Topic & Content Generation: Using the power of Google Gemini, it intelligently crafts unique content topics and then expands them into full, engaging posts, ensuring your message is always fresh and relevant.
- Dynamic Image Creation: To make your posts stand out, the workflow leverages an AI image generator (like DALL-E) to produce a custom, eye-catching visual that perfectly complements your generated text.
- SEO-Optimized Hashtag Generation: Google Gemini then analyzes your newly created post and automatically generates a set of relevant, trending, and SEO-friendly hashtags, significantly boosting your content's reach and discoverability.
- Seamless LinkedIn Publishing: Finally, all these elements—your compelling text, unique image, and powerful hashtags—are merged and automatically published to your LinkedIn profile, establishing you as a thought leader with minimal effort. (A sketch of this merge step follows the Requirements list below.)

How to Use: Quick Start Guide
This guide will get your AI LinkedIn Content Automation workflow up and running in n8n.
1. Import Workflow Template: Download the template's JSON file and import it into your n8n instance via "File" > "Import from JSON."
2. Configure Credentials:
   - Google Gemini: Set up and apply your API key credentials to all "Google Gemini Chat Model" nodes.
   - AI Image Generation (e.g., OpenAI): Create and apply API key credentials for your chosen image generation service to the "Generate an Image" node.
   - LinkedIn: Set up and apply OAuth credentials to the "Create a post" node for your LinkedIn account.
3. Customize Schedule & AI Prompts:
   - Schedule Trigger: Double-click "Schedule Trigger 1" to set how often your workflow runs (e.g., daily, weekly).
   - AI Prompts: Review and edit the prompts within the "Content Topic Generator," "Content Creator," and "Hashtag Generator / SEO" nodes to guide the AI toward your desired content style and topics.
4. Test & Activate:
   - Test Run: Click "Execute Workflow" to perform a test run and verify all steps are working as expected.
   - Activate: Once satisfied, toggle the workflow's "Active" switch to enable automated posting on your defined schedule.

Requirements
To use this workflow template, you will need:
- n8n Instance: A running n8n instance (cloud or self-hosted) to import and execute the workflow.
- Google Gemini Account: For content topic generation, content creation, and hashtag generation (requires a Google Gemini API key from Google AI Studio).
- AI Image Generation Service Account: For creating images (e.g., an OpenAI DALL-E API key, or a similar service used by the "Generate an Image" node).
- LinkedIn Account: For publishing the generated posts (requires LinkedIn OAuth credentials for the n8n connection).
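As referenced above, a minimal sketch of the merge step before the "Create a post" node, written for an n8n Code node. The output field names of the Gemini nodes (postText, hashtags) are illustrative assumptions; map them to whatever your "Content Creator" and "Hashtag Generator / SEO" nodes actually return.

```javascript
// n8n Code node — illustrative merge of generated post text and hashtags.
// The referenced node names come from this template; the field names
// (postText, hashtags) are assumptions — adjust them to your workflow.
const postText = $('Content Creator').first().json.postText || '';
const hashtags = $('Hashtag Generator / SEO').first().json.hashtags || [];

// Normalise hashtags into a single line like "#automation #ai #n8n".
const hashtagLine = (Array.isArray(hashtags) ? hashtags : String(hashtags).split(/\s+/))
  .map(tag => (tag.startsWith('#') ? tag : `#${tag}`))
  .filter(tag => tag.length > 1)
  .join(' ');

return [{
  json: {
    // Final text handed to the LinkedIn "Create a post" node.
    text: `${postText}\n\n${hashtagLine}`,
  },
}];
```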
Connect with us
- Website: https://www.intuz.com/cloud/stack/n8n
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz
by Ranjan Dailata
Who this is for
The Async Structured Bulk Data Extract with Bright Data Web Scraper workflow is designed for data engineers, market researchers, competitive intelligence teams, and automation developers who need to programmatically collect and structure high-volume data from the web using Bright Data's dataset and snapshot capabilities.

This workflow is built for:
- Data Engineers - Building large-scale ETL pipelines from web sources
- Market Researchers - Collecting bulk data for analysis across competitors or products
- Growth Hackers & Analysts - Mining structured datasets for insights
- Automation Developers - Needing reliable snapshot-triggered scrapers
- Product Managers - Overseeing data-backed decision-making using live web information

What problem is this workflow solving?
Web scraping at scale often requires asynchronous operations, including waiting for data preparation and snapshots to complete. Manual handling of this process can lead to timeouts, errors, or inconsistencies in results. This workflow automates the entire process of submitting a scraping request, waiting for the snapshot, retrieving the data, and notifying downstream systems, all in a structured, repeatable fashion.

It solves:
- Asynchronous snapshot completion handling
- Reliable retrieval of large datasets using Bright Data
- Automated delivery of scraped results via webhook
- Disk persistence for traceability or historical analysis

What this workflow does
1. Set Bright Data Dataset ID & Request URL: Takes in the dataset ID and the Bright Data API endpoint used to trigger the scrape job
2. HTTP Request: Sends an authenticated request to the Bright Data API to start a scraping snapshot job
3. Wait Until Snapshot is Ready: Implements a loop or wait mechanism that checks the snapshot status (e.g., polling every 30 seconds) until it reaches the "ready" state (a sketch of this polling logic follows this section)
4. Download Snapshot: Downloads the structured dataset snapshot once ready
5. Persist Response to Disk: Saves the dataset to disk for archival, review, or local processing
6. Webhook Notification: Sends the final result, or a summary of it, to an external webhook

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. Update the Set Dataset Id and Request URL nodes with the dataset and content URL you want to scrape.
5. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- Polling Strategy: Adjust the polling interval (e.g., every 15–60 seconds) based on snapshot complexity
- Input Flexibility: Accept the datasetId and request URL dynamically from a webhook trigger or input form
- Webhook Output: Send notifications to internal APIs (for use in dashboards) or to Zapier/Make (for multi-step automation)
- Persistence: Save output to remote FTP or SFTP storage, Amazon S3, Google Cloud Storage, etc.
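As referenced in step 3 above, here is a standalone Node.js (18+) sketch of the polling loop that the workflow's Wait/IF nodes implement. The endpoint path (datasets/v3/progress) and the "status" response field are assumptions based on Bright Data's dataset API; confirm them against the API reference for your account before relying on them.

```javascript
// Standalone Node.js 18+ sketch of the snapshot polling loop (illustrative only).
// Endpoint path and the "status" field are assumptions; verify them against
// the Bright Data dataset API documentation for your account.
const TOKEN = process.env.BRIGHT_DATA_TOKEN;   // keep the token out of the code
const SNAPSHOT_ID = process.argv[2];           // e.g. returned by the trigger request

async function waitForSnapshot(snapshotId) {
  const progressUrl = `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`;
  for (;;) {
    const res = await fetch(progressUrl, {
      headers: { Authorization: `Bearer ${TOKEN}` },
    });
    const { status } = await res.json();       // e.g. "running" | "ready" | "failed"
    if (status === 'ready') return;
    if (status === 'failed') throw new Error('Bright Data snapshot failed');
    await new Promise(r => setTimeout(r, 30_000)); // poll every 30 seconds
  }
}

waitForSnapshot(SNAPSHOT_ID)
  .then(() => console.log('Snapshot ready - download it from the snapshot endpoint'))
  .catch(err => console.error(err));
```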
by InfraNodus
Optimize Your Top Performing Website Content with Google Analytics, Firecrawl, and InfraNodus

This template helps you:
- extract the top-performing pages from your website using Google Analytics
- scrape the content of the pages using the Firecrawl API (HTTP node provided)
- build a knowledge graph for all these pages, with the topics and gaps identified using InfraNodus
- understand the main concepts and topical clusters in your top-performing content, so you can create more of it, while also identifying the content gaps — structural holes between the topics that you can use to generate new content ideas
- get a knowledge graph visualization of your top-performing content that you can explore using the interactive network interface

How it works
This template uses InfraNodus to visualize and analyze your top-performing content. It will extract the top pages from the Google Analytics data for the website you choose and scrape their text content using the high-quality Firecrawl API. Then it will ingest every page into an InfraNodus graph you specify. The graph can be used to explore the content visually. The insights from the graph, such as the main topics and the gaps between them, are shown to you at the end of the workflow.

You can use these insights to:
- understand what kind of content you should focus on creating to get the highest number of views and to establish topical authority in your area, which is good for SEO and LLM optimization — focusing on the topics identified in the top content
- discover the content gaps — topics that are not yet connected but that you could link with new content ideas and publish. This caters to your audience's interests while connecting your existing ideas in a new way, so the content you deliver is both relevant and novel.

Here's a description step by step:
Note: you can replace the PDF to Text convertor node with a better-quality PDF convertor from ConvertAPI, which respects the original file layout and doesn't split the text into small chunks.
1. Trigger the workflow
2. Extract a list of top (25, 50) pages from your Google Analytics account (you'll need to connect it via the Google Cloud API)
3. Fix the extracted data and add a correct URL prefix to each page, if your Analytics property stores relative paths only (a small Code-node sketch of this step is shown at the end of this section)
4. Loop through each page extracted
5. Extract the text content of every page using the high-quality Firecrawl API
6. Ingest the text content into the InfraNodus graph that you specify
7. Once all the pages are ingested into the InfraNodus graph, access the AI insights endpoint in InfraNodus and get information about the main topics and gaps
8. Display this information to the user

How to use
You need an InfraNodus API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.

Requirements
- An InfraNodus account and API key
- Optional: A Google Analytics account for your property (alternatively, you can modify this workflow to provide a list of the most popular pages)
- Optional: Google Cloud API access (to access the data from your Google Analytics account — follow the n8n instructions)
- Optional: A Firecrawl API key for better-quality web page scraping (otherwise, use the standard HTTP to Text node from n8n)

Customizing this workflow
You can customize this workflow by using a list of the URL pages you want to analyze from a Google Sheet.
Alternatively, you can use the Google SERP node to extract the top search results for a query and get the main topics for them.

For support and feedback, please contact us at https://support.noduslabs.com
To learn more about InfraNodus: https://infranodus.com
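The URL-prefix fix referenced in step 3 above can be done in a small Code node. This is a minimal sketch assuming the Google Analytics node outputs the relative page path in json.pagePath; the field name and the example domain are illustrative and should match your own setup.

```javascript
// n8n Code node — illustrative URL-prefix fix for Google Analytics page paths.
// Assumes each item has a relative path in json.pagePath (field name may differ
// depending on your Google Analytics node configuration).
const SITE_ROOT = 'https://example.com'; // replace with your own domain

return $input.all().map(item => {
  const path = item.json.pagePath || '/';
  const url = path.startsWith('http') ? path : `${SITE_ROOT}${path}`;
  return { json: { ...item.json, url } };
});
```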
by Solido AI
How it works:
This system functions by receiving expenses via a webhook POST request. It validates the data, stores it in Google Sheets, and, daily at 8 PM, generates and sends financial summaries. Automatic categorization simplifies the organization of expenses (a small Code-node sketch follows below).

Set up steps:
Setup involves creating the Google Sheet, configuring the webhook, and defining the categorization rules. The process is quick and intuitive, taking about 10-15 minutes for the system to be ready to receive your expenses.
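A minimal sketch of how the categorization rules might look in an n8n Code node. The incoming field name (description) and the keyword-to-category mapping are purely illustrative assumptions; define your own rules when you set up the workflow.

```javascript
// n8n Code node — illustrative automatic expense categorization.
// Field names and categories are assumptions; replace them with your own rules.
const RULES = [
  { category: 'Groceries', keywords: ['market', 'grocery', 'supermarket'] },
  { category: 'Transport', keywords: ['uber', 'taxi', 'fuel', 'bus'] },
  { category: 'Dining',    keywords: ['restaurant', 'cafe', 'coffee'] },
];

return $input.all().map(item => {
  const description = String(item.json.description || '').toLowerCase();
  const match = RULES.find(rule => rule.keywords.some(k => description.includes(k)));
  return {
    json: {
      ...item.json,
      category: match ? match.category : 'Other', // falls back when no rule matches
    },
  };
});
```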
by Ranjan Dailata
Who this is for?
The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or further automate.

This workflow is tailored for:
- Marketing Professionals: Seeking to generate compelling company narratives for campaigns.
- Sales Teams: Aiming to understand potential clients through summarized company insights.
- Content Creators: Looking to craft stories or articles based on company data.
- Recruiters: Interested in obtaining concise overviews of companies for talent acquisition strategies.

What problem is this workflow solving?
Manually gathering and summarizing company information from LinkedIn can be time-consuming and inconsistent. This workflow automates the process, ensuring:
- Efficiency: Quick extraction and summarization of company data.
- Consistency: Standardized summaries for uniformity across use cases.
- Scalability: Ability to process multiple companies without additional manual effort.

What this workflow does
The workflow performs the following steps:
1. Input Acquisition: Receives a company's name or LinkedIn URL as input.
2. Data Extraction: Uses Bright Data to scrape the company's LinkedIn profile.
3. Information Parsing: Processes the extracted HTML content to retrieve relevant company details.
4. Summarization: Employs Google Gemini AI to generate a concise company story.
5. Output Delivery: Sends the summarized content to a specified webhook or email address.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the LinkedIn URL by navigating to the Set LinkedIn URL node (an illustrative sketch follows this section).
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- Input Variations: Modify the Set LinkedIn URL node to accept a different company LinkedIn URL.
- Data Points: Adjust the HTML Data Extractor node to retrieve additional details like employee count, industry, or headquarters location.
- Summarization Style: Customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points).
- Output Destinations: Configure the output node to send summaries to various platforms, such as Slack, CRM systems, or databases.
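For illustration, here is how the "Set LinkedIn URL" step and the Web Unlocker request payload could be expressed in a single Code node. The endpoint and payload shape (https://api.brightdata.com/request with zone, url, and format fields) reflect Bright Data's Web Unlocker API as an assumption; verify them against the documentation for your zone, and keep the actual request in the workflow's HTTP Request node with its Header Auth credential.

```javascript
// n8n Code node — illustrative request payload for a Bright Data Web Unlocker call.
// The endpoint and field names are assumptions; check your zone's API docs.
const linkedinUrl = 'https://www.linkedin.com/company/example-company/'; // set your target here
const zoneName = 'web_unlocker1';                                        // your Web Unlocker zone name

return [{
  json: {
    requestUrl: 'https://api.brightdata.com/request',
    payload: {
      zone: zoneName,
      url: linkedinUrl,
      format: 'raw', // return raw HTML for the HTML Data Extractor node
    },
  },
}];
```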
by David Ashby
🛠️ Clearbit Tool MCP Server

Complete MCP server exposing all Clearbit Tool operations to AI agents. Zero configuration needed - all 3 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Clearbit Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example after this section)
• Native Integration: Uses the official n8n Clearbit Tool node with full error handling

📋 Available Operations (3 total)
Every possible Clearbit Tool operation is included:
🔧 Company (2 operations)
• Autocomplete a company
• Enrich a company
👥 Person (1 operation)
• Enrich a person

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Clearbit Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Clearbit Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
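As referenced above, here is what a $fromAI() placeholder looks like inside a tool node's parameter field. The general $fromAI(key, description, type) form is n8n's way of letting the connected agent supply the value at call time; the parameter name and description below are illustrative assumptions, not values taken from this template.

```
// n8n expression in a tool node parameter (illustrative key and description).
// The AI agent connected through the MCP trigger fills in this value at call time.
{{ $fromAI('company_domain', 'Domain of the company to enrich, e.g. acme.com', 'string') }}
```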
by David Ashby
Complete MCP server exposing 2 Catalog API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Catalog API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Catalog API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Product (1 endpoint)
• GET /product/{epid}: Get {Epid}
🔧 Product_Summary (1 endpoint)
• GET /product_summary/search: Search Product Summaries

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Catalog API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Ranjan Dailata
Who this is for
The Real Estate Intelligence Tracker is a powerful automated workflow designed for real estate analysts, investors, proptech startups, and market researchers who need to collect and analyze structured data from real estate listings across the web at scale.

This workflow is tailored for:
- Real Estate Analysts - Tracking property prices, locations, and market trends
- Investment Firms - Sourcing high-opportunity listings for portfolio decisions
- PropTech Developers - Automating listing insights for SaaS platforms
- Market Researchers - Extracting insights from competitive housing data
- Growth Teams - Monitoring geographic property trends and pricing fluctuations

What problem is this workflow solving?
Collecting structured real estate listing data from property websites is difficult due to bot protections and unstructured HTML content. Manual data collection is slow and error-prone, and traditional scrapers often get blocked or miss context.

This workflow solves:
- Automated bypass of anti-bot protection using Bright Data Web Unlocker
- Conversion of unstructured HTML content into clean text using a Markdown-to-text LLM pipeline
- Structured extraction of key listing data like price, location, property type, and features using OpenAI
- Aggregation and delivery of insights to Google Sheets, local storage, and webhook-based alerts

What this workflow does
1. Convert to Text: Transforms scraped HTML/markdown into clean text using a Basic LLM Chain
2. Structured Data Extraction: Uses OpenAI GPT-4o with the Information Extractor node to parse property attributes (price, address, area, type, etc.)
3. Aggregate & Merge: Combines data from multiple pages or listings into a cohesive structure
4. Outbound Data Handling:
   - Google Sheets - Appends the structured real estate data for further analysis
   - Save to Disk - Persists structured JSON/text data locally
   - Webhook Notification - Sends data alerts or summaries to any third-party platform

Pre-conditions
- You need a Bright Data account and must complete the setup described in the "Setup" section below.
- You need an OpenAI account.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation - Set Google Sheet Credential.
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.
How to customize this workflow to your needs

Target Multiple Sites or Locations
- Update the Bright Data URL node dynamically with a list of regional real estate websites
- Loop through different city/state filter URLs

Customize Extracted Fields
- Modify the Information Extractor prompt to extract fields like:
  - Property size, number of bedrooms/bathrooms
  - Days on market
  - Nearby amenities or schools
  - Agent contact details

Integrate with More Destinations
- Add nodes to export data to Notion, Airtable, HubSpot, or your custom database
- Generate automated reports using PDF generators and email them

Data Quality and Logging
- Add validation checks (e.g., missing price or address); a small sketch follows below
- Save intermediate files (markdown, raw HTML, JSON output) to disk for audit purposes
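As noted above, a minimal validation sketch for an n8n Code node run after the Information Extractor. The field names (price, address) mirror the attributes this template extracts but are otherwise illustrative; align them with your Information Extractor's output schema.

```javascript
// n8n Code node — illustrative data-quality check on extracted listings.
// Field names (price, address) are assumptions; align them with the
// Information Extractor's output schema in your workflow.
const valid = [];
const rejected = [];

for (const item of $input.all()) {
  const listing = item.json;
  const problems = [];
  if (listing.price == null || listing.price === '') problems.push('missing price');
  if (!listing.address) problems.push('missing address');

  if (problems.length === 0) {
    valid.push({ json: listing });
  } else {
    rejected.push({ json: { ...listing, problems } });
  }
}

// Only clean rows continue to the Google Sheets / webhook steps; the rejected
// array could be written to disk or a log sheet for audit purposes.
return valid;
```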
by David Ashby
🛠️ CircleCI Tool MCP Server

Complete MCP server exposing all CircleCI Tool operations to AI agents. Zero configuration needed - all 3 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every CircleCI Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n CircleCI Tool node with full error handling

📋 Available Operations (3 total)
Every possible CircleCI Tool operation is included:
🔧 Pipeline (3 operations)
• Get a pipeline
• Get many pipelines
• Trigger a pipeline

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native CircleCI Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every CircleCI Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.