by Joseph LePage
💡🌐 **Essential Multipage Website Scraper with Jina.ai**

*Use responsibly and follow local rules and regulations.*

This n8n workflow enables automated multi-page website scraping using Jina.ai's web scraping capabilities, with seamless integration to Google Drive for content storage. Here's how it works:

**Main Features**

The workflow automatically scrapes multiple pages from a website's sitemap and saves each page's content as a separate Google Drive document.

**Key Components**

*Input Configuration*
- Starts with a sitemap URL (default: https://ai.pydantic.dev/sitemap.xml)
- Processes the sitemap to extract individual page URLs
- Includes filtering options to target specific topics or pages

*Scraping Process*
- Uses Jina.ai's web scraper to extract content from each URL
- Converts webpage content into clean markdown format
- Extracts page titles automatically for document naming

*Storage Integration*
- Creates individual Google Drive documents for each scraped page
- Names documents using the format "URL - Page Title"
- Saves content in markdown format for better readability

**Usage Instructions**
1. Set your target website's sitemap URL in the "Set Website URL" node.
2. Configure the "Filter By Topics or Pages" node to select specific content.
3. Adjust the "Limit" node (default: 20 pages) to control batch size.
4. Connect your Google Drive account.
5. Run the workflow to begin automated scraping.

**Additional Features**
- Built-in rate limiting through the Wait node to prevent overloading servers
- Batch processing capability for handling large sitemaps

The workflow requires no API key for Jina.ai, making it accessible for immediate use while maintaining responsible scraping practices.
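For reference, Jina.ai's keyless reader works by prefixing the target URL with `https://r.jina.ai/`, which is what makes the no-API-key setup possible. A minimal standalone sketch of the same fetch the workflow's HTTP node performs (the sitemap URL below is the template's default; the function name is illustrative):

```typescript
// Minimal sketch: fetch a page as clean markdown via Jina.ai's reader,
// which needs no API key; just prefix the target URL with r.jina.ai.
async function scrapePageAsMarkdown(pageUrl: string): Promise<string> {
  const response = await fetch(`https://r.jina.ai/${pageUrl}`);
  if (!response.ok) {
    throw new Error(`Jina.ai reader failed: ${response.status}`);
  }
  return response.text(); // markdown content, page title included up top
}

// Usage: scrape one URL pulled from the sitemap
scrapePageAsMarkdown("https://ai.pydantic.dev/")
  .then((markdown) => console.log(markdown.slice(0, 200)));
```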
by Angel Menendez
**CallForge - AI Sales Call Processing & Insights Extraction**

Automate sales call analysis with AI-powered insights for sales, marketing, and product teams.

**Who is This For?**

This workflow is designed for:
✅ Sales teams looking to extract structured insights from Gong call transcripts.
✅ Marketing professionals seeking AI-driven customer pain points & content strategy.
✅ Product teams needing feedback from sales calls to prioritize feature development.

🔍 **What Problem Does This Workflow Solve?**

Manually analyzing Gong.io sales call transcripts is slow, inconsistent, and lacks structured insights. With CallForge, you can:
✔ Extract AI-powered insights about use cases, objections, competitors, and next steps.
✔ Provide structured marketing & product intelligence to enhance strategy.
✔ Automatically store call insights in Notion and Salesforce for easy access.
✔ Ensure resilience with automated reruns on failed workflows (handling Notion API limits).
✔ Improve decision-making with AI-powered competitor and sentiment analysis.

📌 **Key Workflow Features**

🎤 *AI-Powered Transcript Analysis*
- Uses AI to identify use cases, objections, competitors, and customer pain points.
- Categorizes insights for sales, marketing, and product teams.

📌 *AI Agent Breakdown*
🔹 **Sales AI Agent** – Extracts customer objections, pain points, competitors, and next steps.
🔹 **Marketing AI Agent** – Identifies recurring topics, keyword trends, and content opportunities.
🔹 **Product AI Agent** – Captures feature requests and AI/ML-related references.

📊 *Structured Output Processing*
- **Sales Data Processor** → Stores insights in **Notion & Salesforce** for sales tracking.
- **Marketing Data Processor** → Extracts **SEO & content strategy insights** for marketing teams.
- **Product AI Data Processor** → Logs **customer feedback** to prioritize **feature development**.

💡 *Competitor & Integration Analysis*
- Tracks **competing products mentioned in calls**.
- Identifies **integration needs**, flagging workarounds used by prospects.

📢 *Real-Time Slack Notifications*
- Alerts teams on **workflow progress** and completed call analyses.

🔄 *Failure Resilience & Automated Re-Runs*
- If a Notion API limit is reached, the process resumes automatically.

🚀 **How This Works**

🛠 *1. Trigger & Call Data Processing*
- The workflow retrieves Gong call transcripts and metadata.
- **Normalizes data**, correcting common mispronunciations like "n8n" (a sketch of this cleanup step follows below).

🤖 *2. AI Agents Analyze the Call*
- **Sales Agent** – Extracts actionable insights for sales follow-ups.
- **Marketing Agent** – Identifies **recurring themes** and **keyword trends**.
- **Product Agent** – Captures **feature requests and AI/ML usage mentions**.

📡 *3. Data is Stored in Notion & Salesforce*
- Logs **AI-extracted insights** in **Notion** for structured tracking.
- Pushes **sales-related data** to **Salesforce** for team accessibility.

🔔 *4. Slack Alerts for Teams*
- Notifies **sales, marketing, and product teams** about extracted insights.
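The description doesn't publish the normalization code itself, but a minimal sketch of that transcript-cleanup step might look like the following (the correction list here is purely illustrative, not the workflow's actual mapping):

```typescript
// Illustrative sketch of the transcript-normalization step: map common
// transcription errors to canonical terms before the AI agents see the text.
const CORRECTIONS: Record<string, string> = {
  // hypothetical mispronunciations; the real workflow defines its own list
  "n eight n": "n8n",
  "an eight n": "n8n",
  "nadin": "n8n",
};

function normalizeTranscript(transcript: string): string {
  let cleaned = transcript;
  for (const [wrong, right] of Object.entries(CORRECTIONS)) {
    // word-boundary, case-insensitive replacement
    cleaned = cleaned.replace(new RegExp(`\\b${wrong}\\b`, "gi"), right);
  }
  return cleaned;
}

console.log(normalizeTranscript("We evaluated N eight N for our CRM sync."));
// -> "We evaluated n8n for our CRM sync."
```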
**CallForge Workflow Series**
- CallForge - 01 - Filter Gong Calls Synced to Salesforce by Opportunity Stage
- CallForge - 02 - Prep Gong Calls with Sheets & Notion for AI Summarization
- CallForge - 03 - Gong Transcript Processor and Salesforce Enricher
- CallForge - 04 - AI Workflow for Gong.io Sales Calls
- CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync
- CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI
- CallForge - 07 - AI Marketing Data Processing with Gong & Notion
- CallForge - 08 - AI Product Insights from Sales Calls with Notion

📊 **Sample Output Data**

1️⃣ *Sales Insights*

```json
{
  "UseCases": [
    {
      "Summary": "A manufacturing company wants to automate inventory tracking and reduce manual entry delays.",
      "DepartmentTags": ["Operations"],
      "IndustryTags": ["Manufacturing"],
      "ImplementationStatus": "Evaluating"
    }
  ],
  "Objection": {
    "ObjectionTags": ["Feature Limitation"],
    "Nature": "The prospect wanted a deeper integration with their ERP system, which n8n currently lacks."
  },
  "CallSummary": "The call focused on automation for supply chain processes. The prospect expressed interest but wanted confirmation on ERP integration capabilities.",
  "NextSteps": ["Schedule a follow-up demo for ERP integration."]
}
```

2️⃣ *Marketing Insights*

```json
{
  "MarketingInsights": [
    {
      "Tag": "Workflow Template Request",
      "Summary": "The prospect requested a template for automating CRM lead tracking."
    }
  ],
  "RecurringTopics": [
    {
      "Topic": "CRM Integration",
      "Mentions": 3,
      "Context": "Discussed how n8n could sync CRM data automatically."
    }
  ],
  "ActionableInsights": [
    {
      "RecommendationType": "Tutorial",
      "Title": "How to Automate CRM Lead Tracking with n8n",
      "Topic": "CRM Integration",
      "Rationale": "The prospect expressed a need for CRM automation templates."
    }
  ]
}
```

3️⃣ *Product Feedback*

```json
{
  "ProductFeedback": [
    {
      "Sentiment": "Positive",
      "Feedback": "The external speaker praised the simplicity of n8n's UI, making it easier for non-developers to automate tasks."
    },
    {
      "Sentiment": "Negative",
      "Feedback": "The external speaker mentioned frustration over the lack of a dedicated ERP integration node."
    }
  ],
  "AI_ML_References": {
    "Exist": true,
    "Context": "The external speaker mentioned using AI for automating customer ticket categorization.",
    "Details": {
      "DevelopmentStatus": "Building",
      "Department": "Support",
      "RequiresAgents": true,
      "RequiresRAG": false,
      "RequiresChat": "Yes: External App (e.g., Slack)"
    }
  }
}
```

🔧 **How to Customize This Workflow**

💡 🔗 **Change Data Storage** – Swap Notion for Airtable, HubSpot, or another CRM.
💡 📩 **Customize Slack Notifications** – Send alerts via email, webhook, or another channel.
💡 🛠 **Modify AI Processing** – Adjust AI models or processing prompts.
💡 📊 **Add More Integrations** – Sync insights with Pipedrive, HubSpot, or another CRM.

🚀 **Why Use This Workflow?**

✔ Automates Gong call transcript analysis, eliminating manual work.
✔ Improves collaboration by structuring insights for sales, marketing, and product teams.
✔ Boosts sales conversions by identifying objections and next steps.
✔ Enhances marketing and SEO strategy with AI-driven insights.
✔ Optimizes product roadmap decisions based on customer feedback.

This workflow scales AI-powered sales intelligence for better decision-making, content strategy, and sales enablement. 🚀
by Yaron Been
*This workflow contains community nodes that are only compatible with the self-hosted version of n8n.*

This workflow automatically monitors publicly available competitor financial data—funding rounds, earnings, and SEC filings—and alerts your team to significant changes. Gain an edge by reacting to financial moves faster.

**Overview**

Using Bright Data, the automation scrapes Crunchbase, press releases, and SEC EDGAR filings. OpenAI extracts key figures (revenue, funding amount, valuation) and assesses the potential impact. Highlights are posted to Slack and stored in Airtable for long-term tracking.

**Tools Used**
- **n8n** – Drives the automation
- **Bright Data** – Scrapes financial disclosure sites
- **OpenAI** – Extracts numbers and generates insights
- **Slack** – Sends real-time alerts
- **Airtable** – Maintains a financial timeline database

**How to Install**
1. Import the workflow into n8n.
2. Configure Bright Data credentials.
3. Set up your OpenAI API key.
4. Authorize Slack & Airtable.
5. Customize the competitor list & thresholds in the Set node.

**Use Cases**
- **Competitive Intelligence**: Track rivals' financial health.
- **Investor Relations**: Benchmark against peers.
- **Strategic Planning**: Identify acquisition targets.
- **Sales Enablement**: Time outreach after funding events.

**Connect with Me**
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #financialmonitoring #competitoranalysis #brightdata #openai #secfilings #fundingrounds #n8nworkflow #nocode
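The template drives the extraction through n8n's OpenAI node; as a rough standalone illustration of that step, here is a sketch using the official OpenAI SDK. The prompt wording and output schema are assumptions for illustration, not the template's actual configuration:

```typescript
import OpenAI from "openai";

// Illustrative sketch of the extraction step: pull structured financial
// figures out of scraped text. Prompt and output shape are assumptions.
const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function extractFinancials(scrapedText: string) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Extract revenue, funding amount, valuation, and a one-line impact " +
          "assessment from the text. Reply as JSON with keys: revenue, " +
          "fundingAmount, valuation, impact. Use null when a figure is absent.",
      },
      { role: "user", content: scrapedText },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```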
by Don Jayamaha Jr
This workflow powers the **Binance Spot Market Quant AI Agent**, acting as the Financial Market Analyst. It fuses real-time market structure data (price, volume, kline) with multiple timeframe technical indicators (15m, 1h, 4h, 1d) and returns a structured trading outlook—perfect for intraday and swing traders who want actionable analysis in Telegram.

🔗 **Requires the following sub-workflows to function:**
• Binance SM 15min Indicators Tool
• Binance SM 1hour Indicators Tool
• Binance SM 4hour Indicators Tool
• Binance SM 1day Indicators Tool
• Binance SM Price/24hStats/Kline Tool

⚙️ **How It Works**
1. Triggered via webhook (typically by the Quant AI Agent).
2. Extracts the user's symbol + timeframe from the input (e.g., "DOGE outlook today"; a rough sketch follows below).
3. Calls all linked sub-workflows to retrieve indicators + live price data.
4. Merges the data and formats a clean trading report using GPT-4o-mini.
5. Returns an HTML-formatted message suitable for Telegram delivery.

📥 **Sample Input**

```json
{
  "message": "SOLUSDT",
  "sessionId": "654321123"
}
```

✅ **Telegram Output Format**

📊 SOLUSDT Market Snapshot
💰 Price: $156.75
📉 24h Stats: High $160.10 | Low $149.00 | Volume: 1.1M SOL
🧪 4h Indicators:
• RSI: 58.2 (Neutral-Bullish)
• MACD: Crossover Up
• BB: Squeezing Near Upper Band
• ADX: 25.7 (Rising Trend)
📈 Resistance: $163
📉 Support: $148

🔍 **Use Cases**

| Scenario | Outcome |
| --- | --- |
| User asks for "BTC outlook" | Returns 1h + 4h + 1d indicators + live price + key levels |
| Telegram bot prompt: "DOGE now" | Returns short-term 15m + 1h analysis snapshot |
| Strategy trigger inside n8n | Enables other workflows to consume structured signal data |

🎥 Watch Tutorial:

🧾 **Licensing & Attribution**
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or redistribution permitted.
🔗 For support: LinkedIn – Don Jayamaha
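In the template, step 2 is handled by the agent's LLM; purely as an illustration of the mapping it performs, a deterministic sketch of turning a free-text prompt into a Binance spot pair might look like this (regex and USDT default are assumptions):

```typescript
// Illustrative sketch of step 2: pull a trading symbol out of a free-text
// Telegram message and normalize it to a Binance spot pair. The template's
// agent does this with an LLM; this regex version is only for illustration.
function extractSymbol(message: string): string {
  const text = message.toUpperCase();
  // already a full pair like "SOLUSDT"?
  const pair = text.match(/\b[A-Z]{2,6}USDT\b/);
  if (pair) return pair[0];
  // otherwise take the first ticker-looking token and default to USDT
  const ticker = text.match(/\b(BTC|ETH|SOL|DOGE|[A-Z]{3,5})\b/);
  return ticker ? `${ticker[1]}USDT` : "BTCUSDT"; // assumed fallback
}

console.log(extractSymbol("DOGE outlook today")); // -> "DOGEUSDT"
```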
by The { AI } rtist
This workflow is for working with text processing in n8n, and for getting started with how it works.

How-to, step by step: https://comunidad-n8n.com/tratamiento-de-textos/

Telegram community: https://t.me/comunidadn8n
by johappel
**Main Workflow "AI Nextcloud"**
- **Entry point**: A public chat trigger greets the user; every incoming chat message starts the flow.
- **AI agent**: A LangChain agent ("AI Nextcloud") uses the configured OpenAI model plus short-term memory to continue the dialogue in context.
- **Purpose**: Answers questions about files stored in a Nextcloud folder. The user simply includes the folder path in their question.
- **Tool integration**: Calls the sub-workflow "Nextcloud Tool" whenever it needs to read files and pass their text back to the AI.

**Sub-Workflow "Nextcloud Tool"**
- **Invocation**: Triggered by other workflows with the input parameter `path` (folder path).
- **File listing**: Retrieves every file in the specified folder via the Nextcloud API.
- **Filter**: Allows only readable formats (PDF, Markdown, DOCX); a sketch of this check follows below.
- **Download & text extraction**:
  - PDF → text via Extract From File
  - Markdown → raw text
  - DOCX → text via the community node word2text
- **Aggregation**: Combines all extracted text into a single output field and returns it.

> Outcome: Each call yields the plain content of every supported file in a Nextcloud folder—providing rich context for the AI agent to answer user questions accurately.
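A minimal sketch of the filter step as an n8n Code node, assuming the Nextcloud listing puts the file path in a `path` field (the field name is an assumption):

```typescript
// Illustrative sketch of the "Filter" step as an n8n Code node: keep only
// files whose extension the downstream extractors can handle.
const READABLE = new Set(["pdf", "md", "markdown", "docx"]);

// `items` is the array of incoming n8n items; each item's JSON is assumed
// to carry a `path` field from the Nextcloud listing.
return items.filter((item) => {
  const ext = String(item.json.path ?? "")
    .split(".")
    .pop()
    ?.toLowerCase();
  return ext !== undefined && READABLE.has(ext);
});
```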
by NanaB
**Description**

This n8n workflow automates the entire process of creating and publishing AI-generated videos, triggered by a simple message from a Telegram bot (YTAdmin). It transforms a text prompt into a structured video with scenes, visuals, and voiceover, stores assets in MongoDB, renders the final output using Creatomate, and uploads the video to YouTube. Throughout the process, YTAdmin receives real-time updates on the workflow's progress. This is ideal for content creators, marketers, or businesses looking to scale video production using automation and AI.

You can see a video demonstrating this template in action here: https://www.youtube.com/watch?v=EjI-ChpJ4xA&t=200s

**How it Works**

1. *Trigger: Message from YTAdmin (Telegram Bot)* – The flow starts when YTAdmin sends a content prompt.
2. *Generate Structured Content* – A Mistral language model processes the input and outputs structured content, typically broken into scenes.
3. *Split & Process Content into Scenes* – The content is split into categorized parts for scene generation.
4. *Generate Media Assets* – For each scene: images are generated using OpenAI's image model, and voiceovers are created using OpenAI's text-to-speech. Audio files are encoded and stored in MongoDB.
5. *Scene Composition* – Assets are grouped into coherent scenes.
6. *Render with Creatomate* – A complete payload is generated and sent to the Creatomate rendering API to produce the video. Progress messages are sent to YTAdmin, and the flow pauses briefly to avoid rate limits.
7. *Render Callback* – Once Creatomate completes rendering, it sends a callback to the flow. If the render fails, an error message is sent to YTAdmin; if it succeeds, the flow proceeds to post-processing.
8. *Generate Title & Description* – A second Mistral prompt generates a compelling title and description for YouTube.
9. *Upload to YouTube* – The rendered video is retrieved from Creatomate and uploaded to YouTube with the AI-generated metadata.
10. *Final Update* – A success message is sent to YTAdmin, confirming upload completion.

**Set Up Steps (Approx. 10–15 Minutes)**

*Step 1: Set Up the YTAdmin Bot*
- Create a Telegram bot via BotFather and get your API token.
- Add this token in n8n's Telegram credentials and link it to the "Receive Message from YTAdmin" trigger.

*Step 2: Connect Your AI Providers*
- Mistral: Add your API key under the HTTP Request or AI Model nodes.
- OpenAI: Create an account at platform.openai.com and obtain an API key. Use it for both image generation and voiceover synthesis.

*Step 3: Configure Audio File Storage with MongoDB via a Custom API*
The custom API:
- Receives the Base64-encoded audio data sent in the request body.
- Connects to the configured MongoDB instance (connection details are managed securely within the API code).
- Uses the MongoDB driver and GridFS to store the audio data (see the sketch after these steps).
- Returns the unique `_id` (ObjectId) of the stored file in GridFS as a response. This `_id` is crucial, as it is used in subsequent steps to generate the download URL for the audio file.

My API code can be found here for reference: https://github.com/nanabrownsnr/YTAutomation.git

*Step 4: Set Up Creatomate*
- Create a Creatomate account, define your video templates, and retrieve your API key.
- Configure the HTTP Request node to match your Creatomate payload requirements.

*Step 5: Connect YouTube*
- In n8n, add OAuth2 credentials for your YouTube account.
- Make sure your Google Cloud project has the YouTube Data API enabled.

*Step 6: Deploy and Test*
- Send a message to YTAdmin and monitor the flow in n8n.
- Verify that content is generated, media is created, and the final video is rendered and uploaded.
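Based on the Step 3 description, a minimal sketch of what the custom audio-storage API does with the MongoDB Node driver (connection string, database name, and filename below are placeholders; see the GitHub repo for the actual code):

```typescript
import { MongoClient, GridFSBucket } from "mongodb";

// Minimal sketch of the custom audio-storage API described in Step 3:
// decode the Base64 audio, stream it into GridFS, return the ObjectId.
async function storeAudio(base64Audio: string): Promise<string> {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder
  await client.connect();
  try {
    const bucket = new GridFSBucket(client.db("ytautomation")); // assumed db name
    const id = await new Promise<string>((resolve, reject) => {
      const upload = bucket.openUploadStream("voiceover.mp3"); // placeholder name
      upload.on("finish", () => resolve(upload.id.toString()));
      upload.on("error", reject);
      upload.end(Buffer.from(base64Audio, "base64"));
    });
    return id; // used later to build the audio download URL for Creatomate
  } finally {
    await client.close();
  }
}
```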
**Customization Options**
- *Change the AI Prompts* – Modify the generation prompts to adjust tone, voice, or content type (e.g., news recaps, product videos, educational summaries).
- *Switch Messaging Platform* – Replace Telegram (YTAdmin) with Slack, Discord, or WhatsApp by swapping out the trigger and response nodes.
- *Add Subtitles or Effects* – Integrate Whisper or another speech-to-text tool to generate subtitles, and add overlay or transition effects in the Creatomate video payload.
- *Use Local File Storage Instead of MongoDB* – Swap out the MongoDB upload HTTP nodes with filesystem or S3-compatible storage.
- *Repurpose for Other Platforms* – Swap the YouTube upload with TikTok, Instagram, or Vimeo endpoints for broader publishing.

**Need Help or Want to Customize This Workflow?** If you'd like assistance setting this up or adapting it for a different use case, feel free to reach out to me at nanabrownsnr@gmail.com. I'm happy to help!
by Mutasem
**Use case**

Slackbots are super powerful. At n8n, we have been using them to get a lot done. But it can become hard to manage and maintain the many different operations that a workflow can do. This is the base workflow we use for our most powerful internal Slackbots. They handle a lot, from running e2e tests for a GitHub branch to deleting a user.

By splitting the workflow into many subworkflows, we are able to handle each command separately, making it easier to debug as well as to support new use cases. In this template, you can find everything to set up your own Slackbot (and I made it simple, there's only one node to configure 😉). After that, you need to build your commands directly.

This bot can create a new thread on an alerts channel and respond there, or reply directly to the user. It responds to help requests with a help page and automatically handles unknown commands. It also supports flags and environment variables: for example, `/cloudbot-test info mutasem --full-info -e env=prod` passes the `info` command, the `--full-info` flag, and `env=prod` to the mapped subworkflow (a sketch of this parsing follows below).

**How to set up**
1. Add the Slack command and point it to the webhook.
2. Add the following to the Set Config node:
   - `alerts_channel` – the alerts channel to start threads on
   - `instance_url` – this instance's URL, to make debugging easy
   - `slack_token` – the Slack bot token, to validate requests
   - `slack_secret_signature` – the Slack signing secret, to validate requests
   - `help_docs_url` – the help URL, to help users understand the commands
3. Build other workflows to call, and add them to the commands in Set Config. Each command must be mapped to a workflow ID with an Execute Workflow Trigger node.
4. Activate the workflow 🚀

**How to adjust**

Add your own commands. Depending on your needs, you might want to lock down who can call this.
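A minimal sketch of how such a command string could be parsed into a command, positional arguments, flags, and environment variables; the exact parsing inside the template may differ:

```typescript
// Illustrative sketch of parsing the text Slack sends after a slash command,
// e.g. "info mutasem --full-info -e env=prod".
interface ParsedCommand {
  command: string;
  args: string[];
  flags: string[];
  env: Record<string, string>;
}

function parseSlackCommand(text: string): ParsedCommand {
  const tokens = text.trim().split(/\s+/);
  const parsed: ParsedCommand = { command: "", args: [], flags: [], env: {} };
  for (let i = 0; i < tokens.length; i++) {
    const token = tokens[i];
    if (token === "-e") {
      // the next token is a KEY=VALUE environment variable
      const [key, value] = (tokens[++i] ?? "").split("=");
      if (key) parsed.env[key] = value ?? "";
    } else if (token.startsWith("--")) {
      parsed.flags.push(token.slice(2));
    } else if (!parsed.command) {
      parsed.command = token;
    } else {
      parsed.args.push(token);
    }
  }
  return parsed;
}

// parseSlackCommand("info mutasem --full-info -e env=prod")
// -> { command: "info", args: ["mutasem"], flags: ["full-info"], env: { env: "prod" } }
```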
by Harshil Agrawal
This workflow allows you to insert data into, and retrieve data from, a table in Stackby.

- **Set node**: Sets the values for the `name` and `id` fields of a new record. You might want to add data from an external source instead, for example an API or a CRM; based on your use case, add the respective node before the Set node and configure the Set node accordingly (a sketch of the expected item shape follows below).
- **Stackby node**: Appends data from the previous node to a table in Stackby. Based on the values you want to add to your table, enter the column names in the Column field.
- **Stackby1 node**: Fetches all the data stored in the table in Stackby.
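If you swap the Set node for a Code node, this is roughly the item shape the Stackby node expects downstream; the `id` and `name` fields mirror the template, and the values are placeholders:

```typescript
// Illustrative n8n Code-node equivalent of the Set node: emit one item whose
// JSON fields match the Stackby table's columns.
return [
  {
    json: {
      id: 1,                // maps to the "id" column in Stackby
      name: "Ada Lovelace", // maps to the "name" column in Stackby
    },
  },
];
```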
by Cheney Zhang
**Create a RAG System with Paul Graham Essays, Milvus, and OpenAI for Cited Answers**

This workflow automates the process of creating a document-based AI retrieval system using Milvus, an open-source vector database. It consists of two main steps:
1. Data collection/processing
2. Retrieval/response generation

The system scrapes Paul Graham essays, processes them, and loads them into a Milvus vector store. When users ask questions, it retrieves relevant information and generates responses with citations.

**Step 1: Data Collection and Processing**
1. Set up a Milvus server using the official guide.
2. Create a collection named "my_collection".
3. Execute the workflow to scrape Paul Graham essays:
   - Fetch essay lists
   - Extract names
   - Split content into manageable items
   - Limit results (if needed)
   - Fetch texts
   - Extract content
   - Load everything into the Milvus vector store

This step uses OpenAI embeddings for vectorization.

**Step 2: Retrieval and Response Generation**

When a chat message is received, the system:
1. Sets the chunks to send to the model
2. Retrieves relevant information from the Milvus vector store
3. Prepares chunks
4. Answers the query based on those chunks
5. Composes citations (see the sketch below)
6. Generates a comprehensive response

This process uses OpenAI embeddings and models to ensure accurate and relevant answers with proper citations. For more information on vector databases and similarity search, visit the Milvus documentation.
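The citation step works because each retrieved chunk keeps a reference to its source essay. A minimal sketch of that idea, with field names that are assumptions rather than the template's actual schema:

```typescript
// Illustrative sketch of "compose citations": number each retrieved chunk,
// keep its source in view, and instruct the model to cite by number.
interface RetrievedChunk {
  text: string;
  source: { title: string; url: string };
}

function buildPromptWithCitations(
  question: string,
  chunks: RetrievedChunk[]
): string {
  const context = chunks
    .map((chunk, i) => `[${i + 1}] (${chunk.source.title}) ${chunk.text}`)
    .join("\n\n");
  return (
    `Answer the question using only the numbered context below. ` +
    `Cite sources as [n] after each claim.\n\n` +
    `Context:\n${context}\n\nQuestion: ${question}`
  );
}
```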
by Askan
**What problem does this solve?**

It fetches LinkedIn profiles, for a multitude of purposes, based on a keyword and location via Google search, and stores them in an Excel file for download and in a NocoDB database. It tries to avoid costly services and should be n8n-beginner friendly. It uses serpapi.com to avoid being blocked by Google Search and to process the data more easily.

**What does it do?**
- Based on criteria input, it searches LinkedIn profiles.
- It discards unnecessary data and turns the follower count into a real number.
- The output is provided as an Excel table for download and in a NocoDB database.

**How does it do it?**
- Based on criteria input, it uses serpapi.com to conduct a Google search of the respective LinkedIn profiles.
- With openai.com, the name of the respective company is added.
- With openai.com, a follower figure such as "300+" is turned into a real number: 300 (a sketch of this conversion follows below).
- All unnecessary metadata is discarded.
- As output, an Excel file is created.
- The output is stored in a nocodb.com table.

**Step-by-step instructions**
1. Import the workflow: copy the workflow JSON from the "Template Code" section below and import it into n8n via "Import from File" or "Import from URL".
2. Set up a free account at serpapi.com and get API credentials to enable good Google search results.
3. Set up an API account at openai.com and get an API key.
4. Set up a nocodb.com account (or self-host) and get the API credentials.
5. Create the credentials for serpapi.com, openai.com, and nocodb.com in n8n.
6. Set up a table in NocoDB with the fields indicated in the note above the NocoDB node.
7. Follow the instructions as detailed in the notes above the individual nodes.
8. When the workflow has finished, open the Excel node and click download if you need the Excel file.
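The template delegates the follower-count conversion to OpenAI; for illustration, the same "300+" → 300 cleanup can also be done deterministically, as in this sketch (the suffix handling beyond the plain "+" case is an assumption):

```typescript
// Illustrative sketch: turn follower strings like "300+ followers" or
// "1.5K+ followers" into plain numbers, with no API call at all.
const SUFFIX: Record<string, number> = { k: 1_000, m: 1_000_000 };

function parseFollowerCount(raw: string): number | null {
  const match = raw.replace(/,/g, "").match(/([\d.]+)\s*([KkMm])?/);
  if (!match) return null;
  const base = parseFloat(match[1]);
  const multiplier = SUFFIX[match[2]?.toLowerCase() ?? ""] ?? 1;
  return Math.round(base * multiplier);
}

console.log(parseFollowerCount("300+ followers"));  // -> 300
console.log(parseFollowerCount("1.5K+ followers")); // -> 1500
```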
by Jay Hartley
**Disclaimer**

This template only works on n8n local instances!

**How it Works**

This workflow allows you to receive webhooks from the public web and have your local workflow catch them, without any remote proxy. It is very useful for running quick tests without exposing your dev server. All you have to do is activate the workflow and use the public address as defined below.

**Set up steps**

If you use the default key-value storage, there are only three steps:
1. Install the @horka.tv/n8n-nodes-storage-kv community node.
2. Put your n8n workflow address in Local Webhook Address.
3. Activate the workflow and, from Executions, note down your public webhook token from the inputs to Get Latest Requests.

You can now use https://webhook.site/[YOUR TOKEN] as a webhook destination, to receive webhook requests from the public web.
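Under the hood, the Get Latest Requests step presumably polls webhook.site for whatever was sent to your public token URL. As a rough standalone illustration (the endpoint path and response shape are assumptions based on webhook.site's public API, not taken from the workflow's nodes):

```typescript
// Rough illustration of polling webhook.site for requests captured at your
// public token URL, so they can be replayed against the local workflow.
async function getLatestRequests(token: string) {
  const res = await fetch(
    `https://webhook.site/token/${token}/requests?sorting=newest`
  );
  if (!res.ok) throw new Error(`webhook.site returned ${res.status}`);
  const body = (await res.json()) as { data: Array<{ content: string }> };
  return body.data; // newest captured webhook requests
}
```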