by Julian Kaiser
Startup Funding Research Automation with Claude, Perplexity AI, and Airtable

How it works

This intelligent workflow automatically discovers and analyzes recently funded startups by:
- Monitoring multiple news sources (TechCrunch and VentureBeat) for funding announcements
- Using AI to extract key funding details (company name, amount raised, investors)
- Conducting automated deep research on each company through Perplexity deep research or Jina deep search
- Organizing all findings into a structured Airtable database for easy access and analysis

Set up steps (10-15 minutes)

1. Connect your news feed sources (TechCrunch and VentureBeat). The list can be extended; these two were easy to scrape, and this kind of data can otherwise be expensive.
2. Set up your AI service credentials (Claude, plus Perplexity or Jina, which has a generous free tier).
3. Connect your Airtable account and create a base with the appropriate fields (it can be imported from my base, or see the structure below).

Airtable Base Structure

Funding Round Base

| Field Name | Data Type | Description |
|------------|-----------|-------------|
| website_url | String | URL of the company website |
| company_name | String | Name of the company |
| funding_round | String | The funding stage or round (e.g., Series A, Seed, etc.) |
| funding_amount | Number | The amount of funding received |
| lead_investor | String | The primary investor leading the funding round |
| market | String | The market or industry sector the company operates in |
| participating_investors | String | List of other investors participating in the funding round |
| press_release_url | String | URL to the press release about the funding |
| evaluation | Number | The company's valuation |

Company Deep Research Base

| Field Name | Data Type | Description |
|------------|-----------|-------------|
| website_url | String | URL of the company website |
| company_name | String | Name of the company |
| funding_round | String | The funding stage or round (e.g., Series A, Seed, etc.) |
| funding_amount | Number | The amount of funding received |
| currency | String | Currency of the funding amount |
| announcement_date | String | Date when the funding was announced |
| lead_investor | String | The primary investor leading the funding round |
| participating_investors | String | List of other investors participating in the funding round |
| industry | String | The industry sectors the company operates in |
| company_description | String | Description of the company's business |
| hq_location | String | Company headquarters location |
| founding_year | Number | Year the company was founded |
| founder_names | String | Names of the company founders |
| ceo_name | String | Name of the company CEO |
| employee_count | Number | Number of employees at the company |
| total_funding | Number | Total funding amount received to date |
| total_funding_currency | String | Currency of total funding |
| funding_purpose | String | Purpose or use of the funding |
| business_model | String | Company's business model |
| valuation | Object | Company valuation information |
| previous_rounds | Object | Information about previous funding rounds |
| source_urls | String | Source URLs for the funding information |
| original_report | String | Original report text about the funding |
| market | String | The market the company operates in |
| press_release_url | String | URL to the press release about the funding |
| evaluation | Number | The company's valuation |

Notes

I found that by using Perplexity via OpenRouter we lose access to the sources, as they are not stored in the same location as the report itself, so I opted to call the Perplexity API via an HTTP Request node (a sketch follows at the end of this description). For Perplexity and/or Jina you have to configure header auth as described in Header Auth - n8n Docs.

What you can learn
- How to scrape data using sitemaps
- How to extract structured data from unstructured text
- How to execute parts of the workflow as a subworkflow
- How to use deep research in a practical scenario
- How to define more complex JSON schemas
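On the OpenRouter note above: calling the Perplexity API directly keeps the citations next to the report. Below is a minimal sketch of the request the HTTP Request node makes; the model id and the citations response field are assumptions to verify against Perplexity's API docs.

```javascript
// Sketch of the direct Perplexity call made from the HTTP Request node.
// The model id and the `citations` field are assumptions to verify
// against Perplexity's API docs.
const apiKey = process.env.PERPLEXITY_API_KEY; // the header-auth credential in n8n

const response = await fetch('https://api.perplexity.ai/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'sonar-deep-research', // assumed model id
    messages: [
      { role: 'user', content: 'Research Acme AI: founders, HQ, funding history.' },
    ],
  }),
});

const data = await response.json();
// Unlike the OpenRouter route, the report and its sources arrive together.
const report = data.choices[0].message.content;
const sources = data.citations; // array of source URLs
console.log(report, sources);
```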
by Joseph
Note: This workflow now includes an Apify alternative to RapidAPI. (Some users can't create new accounts on RapidAPI, so I have added an alternative for you. But as soon as you are able to get access to RapidAPI, please use that option; it returns more detailed data.) *Scroll to the bottom for the Apify setup guide.*

This n8n workflow automates LinkedIn lead generation, enrichment, and activity analysis using Apollo.io, RapidAPI, Google Sheets and Mail.so. Perfect for sales teams, founders, B2B marketers, and cold outreach pros who want personalized lead insights to drive better conversion rates.

⚙️ How This Workflow Works

The workflow is broken down into several key steps, each designed to help you build and enrich a valuable list of LinkedIn leads:

1. 🔑 Lead Discovery (Keyword Search via Apollo)
- Pulls leads using Apollo.io's API based on keywords, industries, or job titles.
- Saves lead name, title, company, and LinkedIn URL to your Google Sheet.
- You can replace the form trigger node with a webhook, WhatsApp, Telegram, etc., or any other way for you to send your query variables over to initiate the workflow.

2. 🧠 Username Extraction (from LinkedIn URL)
- Extracts the LinkedIn username from profile URLs using a simple script node (see the sketch at the end of this description).
- This is required for further enrichment via RapidAPI.

3. ✉️ Email Lookup (via Apollo User ID)
- Uses the Apollo User ID to retrieve the lead's verified work email. Ensures high-quality leads with reliable contact info.
- To double-check that the email is currently valid, we use the Mail.so API and filter out emails that fail the deliverability and MX-record checks. We don't want to risk sending emails to addresses that no longer exist, right?

4. 🧾 Profile Summary Enrichment (via RapidAPI)
- Queries the LinkedIn Data API to fetch a lead's profile summary/bio.
- Gives you a deeper understanding of their background and expertise.

5. 📰 Recent Activity Collection (Posts & Reposts)
- Retrieves recent posts or reposts from each lead's profile.
- Great for tailoring outreach with reference to what they're currently talking about.

6. 🗂️ Leads Database Update
- All enriched data is written to the same Google Sheet.
- New columns are filled in without overwriting existing data.

✅ Smart Retry & Row Status Logic

Every subworkflow includes a fail-safe mechanism to ensure:
- ✅ Each row has status columns (e.g., done, failed, pending).
- 🕒 A scheduled retry workflow resets failed rows to pending after 2 weeks (customizable).
- 💬 This gives failed enrichments another chance to be processed later, reducing data loss.

📋 Google Sheets Setup
- Template 1: Apollo Leads Scraping & Enrichment
- Template 2: Enriched Leads Database

Make a copy to your Drive and use. Columns will be filled as each subworkflow runs (email, summary, interests, etc.).

🔐 Required API Keys

To use this workflow, you'll need the following credentials:

🧩 Apollo.io
- Sign up and get your key here: Apollo.io API Keys
- ⚠️ Important: Toggle the "Master API Key" option to ON when generating your key. This ensures the same key can be used for all Apollo endpoints in this workflow.

🌐 RapidAPI (LinkedIn Data API)
- Subscribe to the API here: LinkedIn Data API on RapidAPI
- Use the key in the x-rapidapi-key header in the relevant nodes.

✉️ Mail.so
- Sign up and get your key here: Mail.so API

> 💡 For these APIs, set up the credentials in n8n as "Generic Credential" types. This way, you won't need to reconfigure the headers in each node.

🛠️ Customization Options
- Modify the Apollo filters (location, industry, seniority) to target your ideal customers.
- Change the retry interval in the scheduler (e.g., weekly instead of every 2 weeks).
- Connect the database to your email campaign tool, like Mailchimp or Instantly.ai.
- Replace the AI nodes with your desired AI agents and customize the system messages further to get the results you want.

🆕 Apify Setup Guide

To use the Apify alternative, follow these steps:
1. Log in to Apify, then open this link: https://console.apify.com/actors/2SyF0bVxmgGr8IVCZ/
2. Click on Integrations and scroll down to API Solutions, then select "Use API endpoints".
3. Scroll to "Run Actor synchronously and get dataset items" and copy the actor endpoint URL, then paste it into the placeholder inside the HTTP node of the Apify alternative flow ("apify-actor-endpoint").

That's it, you are set to go.

I am available for custom n8n workflows; if you like my work, please get in touch with me by email at joseph@uppfy.com
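For step 2 above, the username extraction amounts to one regex over the profile URL. A minimal sketch of the script node, with assumed field names (linkedin_url, username) that you would match to your own sheet columns:

```javascript
// Hypothetical n8n Code node: derive the LinkedIn username (public identifier)
// from a profile URL such as https://www.linkedin.com/in/jane-doe-123/.
// Field names (linkedin_url, username) are assumptions; match your sheet columns.
for (const item of $input.all()) {
  const url = item.json.linkedin_url ?? '';
  const match = url.match(/linkedin\.com\/in\/([^/?#]+)/i);
  item.json.username = match ? decodeURIComponent(match[1]) : null;
}
return $input.all();
```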
by JPres
👥 Who Is This For?

Content creators, marketing teams, and channel managers who need to streamline video publishing with optimized metadata and scheduled releases across multiple videos.

🛠 What Problem Does This Solve?

Manual YouTube video publishing is time-consuming and often results in inconsistent descriptions, tags, and scheduling. This workflow fully automates:
- Extracting video transcripts via Apify for metadata generation
- Creating SEO-optimized descriptions and tags for each video
- Setting videos to private during initial upload (critical for scheduling)
- Implementing scheduled publishing at strategic times
- Maintaining consistent branding and formatting across all content

🔄 Node-by-Node Breakdown

| Step | Node | Purpose |
|------|------|---------|
| 1 | Every Day (Scheduler) | Trigger workflow on a regular schedule |
| 2 | Get Videos to Harmonize | Retrieve videos requiring metadata enhancement |
| 3 | Get Video IDs (Unpublished) | Filter for videos that need publishing |
| 4 | Loop over Video IDs | Process each video individually |
| 5 | Get Video Data | Retrieve metadata for the current video |
| 6 | Loop over Videos with Parameter IS | Set parameters for processing |
| 7 | Set Videos to Private | Ensure videos are private (required for scheduling) |
| 8 | Apify: Get Transcript | Extract video transcript via Apify |
| 9 | Fetch Latest Videos | Get most recent channel content |
| 10 | Loop Over Items | Process each video item |
| 11 | Generate Description, Tags, etc. | Create optimized metadata from transcript |
| 12 | AP Clean ID | Format identifiers |
| 13 | Retrieve Generated Data | Collect the enhanced metadata |
| 14 | Adjust Transcript Format | Format transcript for better processing |
| 15 | Update Video's Metadata | Apply generated description and tags to video |

⚙️ Pre-conditions / Requirements
- n8n with YouTube API credentials configured
- Apify account with API access for transcript extraction
- YouTube channel with upload permissions
- Master templates for description formatting
- Videos must be initially set to private for scheduling to work

⚙️ Setup Instructions
1. Import this workflow into your n8n instance.
2. Configure YouTube API credentials with proper channel access.
3. Set up the Apify integration with an appropriate actor for transcript extraction.
4. Define scheduling parameters in the Every Day node.
5. Configure description templates with placeholders for dynamic content.
6. Set default tags and customize tag generation rules.
7. Test with a single video before batch processing.

🎨 How to Customize
- Adjust prompt templates for description generation to match your brand voice.
- Modify tag selection algorithms based on your channel's SEO strategy.
- Create multiple publishing schedules for different content categories.
- Integrate with analytics tools to optimize publishing times.
- Add notification nodes to alert when videos are successfully scheduled.

⚠️ Important Notes
- Videos MUST be uploaded as private initially: the Publish At logic only works for private videos that haven't been published before (a sketch of the underlying API call follows below).
- Publishing schedules require videos to remain private until their scheduled time.
- Transcript quality affects metadata generation results.
- Consider YouTube API quotas when scheduling large batches of videos.

🔐 Security and Privacy
- API credentials are stored securely within n8n.
- Transcripts are processed temporarily and not stored permanently.
- Webhook URLs should be protected to prevent unauthorized triggering.
- Access to the workflow should be limited to authorized team members only.
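For context on the private-upload requirement: scheduling boils down to setting status.publishAt on a still-private video via the YouTube Data API v3. A simplified sketch of the call behind the metadata update, with token handling omitted (the n8n YouTube credential manages it for you) and a placeholder video id:

```javascript
// Schedule a private video for release via the YouTube Data API v3.
// publishAt is only honored for videos that are private and have never
// been published.
const accessToken = process.env.YT_ACCESS_TOKEN; // assumed OAuth2 token
const videoId = 'dQw4w9WgXcQ';                   // placeholder video id

const res = await fetch('https://www.googleapis.com/youtube/v3/videos?part=status', {
  method: 'PUT',
  headers: {
    Authorization: `Bearer ${accessToken}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    id: videoId,
    status: {
      privacyStatus: 'private',          // must remain private until release
      publishAt: '2025-06-01T15:00:00Z', // scheduled release time, ISO 8601 UTC
    },
  }),
});
if (!res.ok) throw new Error(`YouTube API error: ${res.status}`);
```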
by Davide
Voiceflow is a no-code platform that allows you to design, prototype, and deploy conversational assistants across multiple channels, such as chat, voice, and phone, with advanced logic and natural language understanding. It supports integration with APIs, webhooks, and even tools like Twilio for phone agents. It's perfect for building customer support agents, voice bots, or intelligent assistants.

This workflow connects n8n and Voiceflow with tools like Google Calendar, Qdrant (vector database), OpenAI, and an order tracking API to power a smart, multi-channel conversational agent. There are 3 main webhook endpoints in n8n that Voiceflow interacts with:
- n8n_order – receives user input related to order tracking, queries an API, and responds with tracking status.
- n8n_appointment – processes appointment booking, reformats date input using OpenAI, and creates a Google Calendar event.
- n8n_rag – handles general product/service questions using a RAG (Retrieval-Augmented Generation) system backed by Google Drive document ingestion, a Qdrant vector store for search, and OpenAI models for context-based answers.

Each webhook is connected to a corresponding "Capture" block inside Voiceflow, which sends data to n8n and waits for the response.

How It Works

This n8n workflow integrates Voiceflow for chatbot/voice interactions, Google Calendar for appointment scheduling, and RAG (Retrieval-Augmented Generation) for knowledge-based responses. Here's the flow:
- **Trigger**: Three webhooks (n8n_order, n8n_appointment, n8n_rag) receive inputs from Voiceflow (chat, voice, or phone calls). Each webhook routes requests to specific functions:
  - Order Tracking: fetches order status via an external API.
  - Appointment Scheduling: uses OpenAI to parse dates, creates Google Calendar events, and confirms via WhatsApp.
  - RAG System: queries a Qdrant vector store (populated with Google Drive documents) to answer customer questions using GPT-4.
- **AI Processing**:
  - OpenAI Chains: convert natural language dates to Google Calendar formats and generate responses.
  - RAG Pipeline: embeds documents (via OpenAI), stores them in Qdrant, and retrieves context-aware answers.
  - Voiceflow Integration: routes responses back to Voiceflow for multi-channel delivery (chat, voice, or phone).
- **Outputs**:
  - Confirmation messages (e.g., "Event created successfully").
  - Dynamic responses for orders, appointments, or product support.

Setup Steps

Prerequisites:
- **APIs**: Google Calendar & Drive OAuth credentials, a Qdrant vector database (self-hosted or cloud), and an OpenAI API key (for GPT-4 and embeddings).

Configuration:
- Qdrant Setup: Run the "Create collection" and "Refresh collection" nodes to initialize the vector store (a sketch of this call follows at the end). Populate it with documents using the Google Drive → Qdrant pipeline (embeddings generated via OpenAI).
- Voiceflow Webhooks: Link Voiceflow's "Captures" to n8n's webhook URLs (n8n_order, n8n_appointment, n8n_rag).
- Google Calendar: Authenticate the Google Calendar node and set event templates (e.g., summary, description).
- RAG System: Configure the Qdrant vector store and OpenAI embeddings nodes. Adjust the Retrieve Agent's system prompt for domain-specific queries (e.g., electronics store support).
- Optional: Add Twilio for phone-agent capabilities, and customize the OpenAI prompts for tone/accuracy.

PS: You can import a Twilio number and assign it to your agent to turn it into a phone agent.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
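For reference, the "Create collection" step boils down to a single Qdrant REST call. A minimal sketch, assuming a local Qdrant instance, an illustrative collection name, and a 1536-dimension vector size to match OpenAI's embedding models:

```javascript
// Sketch of the "Create collection" call against Qdrant's REST API.
// The host, collection name, and vector size are assumptions; 1536
// matches the OpenAI embedding models used elsewhere in the workflow.
const res = await fetch('http://localhost:6333/collections/voiceflow_docs', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    vectors: { size: 1536, distance: 'Cosine' },
  }),
});
if (!res.ok) throw new Error(`Qdrant error: ${res.status}`);
```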
by PUQcloud
Overview

The Docker NextCloud WHMCS module leverages a sophisticated workflow for n8n, designed to automate the comprehensive deployment, configuration, and management processes for NextCloud and NextCloud Office services. Through its intuitive API interface, the workflow securely receives commands and orchestrates predefined tasks via SSH on your Docker-hosted server, ensuring streamlined operations and efficient management.

Prerequisites
- You must deploy your own dedicated n8n server to manage workflows effectively. Alternatively, you may opt for the official n8n cloud-based solutions accessible via the n8n Official Site.
- Your Docker server must be accessible via SSH with the necessary permissions.

Installation Steps

Install the Required Workflow on n8n

You can select from two convenient installation options:
- Option 1: Use the Latest Version from the n8n Marketplace. The latest workflow templates are continuously updated and available on the n8n marketplace. Explore all templates provided by PUQcloud directly here: PUQcloud on n8n.
- Option 2: Manual Installation. Each module version includes a bundled workflow template file. Import this workflow file directly into your n8n server manually.

n8n Workflow API Backend Setup for WHMCS

Configure API Webhook and SSH Access
- Create a secure Basic Auth credential for Webhook API interactions within n8n.
- Create an SSH credential within n8n to securely communicate with the Docker host.

Modify Template Parameters

Adjust and update the following critical parameters to match your deployment specifics:
- server_domain – set this to the domain of your WHMCS Docker server.
- clients_dir – specify the directory where user data and related resources will be stored.
- mount_dir – the standard mount point for container storage (recommended to remain unchanged).

Do not alter the following technical parameters, to avoid workflow disruption: screen_left, screen_right.

Deploy-docker-compose Configuration

Fine-tune Docker Compose configurations tailored specifically for these critical operational scenarios:
- Initial service provisioning and setup
- Service suspension and subsequent unlocking
- Service configuration updates
- Routine service maintenance tasks

nginx Configuration Management

Enhance and customize proxy server configurations using the dedicated nginx workflow element:
- **main**: Define specialized parameters within the server configuration block.
- **main_location**: Set custom headers, caching policies, and routing rules for the root location.

Bash Script Automation

Automate Docker container management and related server tasks through dynamically generated Bash scripts within n8n. Scripts execute securely via SSH and provide responses in JSON or plain text formats for easy parsing and logging. Scripts are conveniently linked directly to the SSH action elements. You retain complete flexibility to adapt or extend these scripts as necessary to meet your precise operational requirements.
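As an illustration of the Bash script automation, a Code node can assemble the command string that the SSH node then executes, echoing JSON at the end so the response is easy to parse. Everything here (paths, field names, compose invocation) is a hypothetical sketch, not the module's actual script:

```javascript
// Hypothetical Code node that assembles the Bash command executed by the
// SSH node. Paths and field names are illustrative. Echoing JSON as the
// last step makes the SSH output easy to parse downstream.
const clientDir = `/opt/clients/${$json.service_id}`; // assumed input field
const command = [
  `mkdir -p ${clientDir}`,
  `cd ${clientDir} && docker compose up -d 2>&1`,
  `echo '{"status": "ok", "dir": "${clientDir}"}'`,
].join(' && ');
return [{ json: { command } }];
```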
by Cyril Nicko Gaspar
🔍 Email Lookup with Google Search from Postgres Database

This n8n workflow is designed to enrich seller data stored in a Postgres database by performing automated Google search lookups. It uses Bright Data's Web Unlocker to bypass search result restrictions and the HTML Extract node to parse and extract relevant information from webpages. The main purpose of this workflow is to discover missing contact details, company domains, and secondary emails for businesses or sellers based on existing database entries.

🎯 Problem This Workflow Solves

Manually searching for missing seller or business details, like secondary emails, websites, or domain names, can be time-consuming and inefficient, especially for large datasets. This workflow automates the search and data enrichment process, significantly reducing manual effort while improving the quality and completeness of your seller database.

✅ Prerequisites

Before using this template, make sure the following requirements are met:
- ✔️ A Bright Data account with access to the Web Unlocker or Amazon Scraper API
- ✔️ A valid Bright Data API key
- ✔️ An active PostgreSQL database with seller data
- ✔️ A self-hosted n8n instance (recommended for using community nodes like n8n-nodes-brightdata)
- ✔️ The n8n-nodes-brightdata package installed (custom node for Bright Data integration)

⚙️ Setup Instructions

Step 1: Prepare Your Postgres Table

Create a table in Postgres with the following structure (you can adjust field names if needed):

```sql
CREATE TABLE sellers (
  seller_id SERIAL PRIMARY KEY,
  seller_name TEXT,
  primary_email TEXT,
  company_info TEXT,
  trade_name TEXT,
  business_address TEXT,
  coc_number TEXT,
  vat_number TEXT,
  commercial_register TEXT,
  secondary_email TEXT,
  domain TEXT,
  seller_slug TEXT,
  source TEXT
);
```

Step 2: Set Up Web Unlocker on Bright Data
1. Go to your Bright Data dashboard.
2. Navigate to Proxies & Scraping → Web Unlocker.
3. Create a new zone, selecting Web Unlocker API under Scraping Solutions.
4. Whitelist your server IP if required.

Step 3: Generate an API Key
1. In the Bright Data dashboard, go to the API section.
2. Generate a new API key.
3. In n8n, create HTTP Request credentials using Bearer Authentication with the API key.

Step 4: Install the Bright Data Node in n8n
1. In your self-hosted n8n instance, go to Settings → Community Nodes.
2. Search for and install n8n-nodes-brightdata.

🔄 Workflow Functionality
- 🔁 Trigger: Can be set to run on a schedule (e.g., daily) or manually.
- 📥 Read: Fetches seller records from the Postgres table.
- 🌐 Search: Uses Bright Data to perform a Google search based on seller_name, company_info, or trade_name.
- 🧾 Extract: Parses the HTML content using the HTML Extract node to identify potential websites and email addresses (see the sketch at the end of this description).
- 📝 Update: Writes enriched data (like domain or secondary_email) back to the Postgres table.

💡 Use Cases
- Lead enrichment for e-commerce sellers
- Domain and contact info discovery for B2B databases
- Email and web domain verification for CRM systems
- Market research automation

🛠️ Customization Tips
- Enhance the parsing logic in the HTML Extract node to look for phone numbers, LinkedIn profiles, or social media links.
- Modify the search query logic to include additional parameters like location or industry for more refined results.
- Integrate additional APIs (e.g., Hunter.io, Clearbit) for email validation or social profile enrichment.
- Add filtering to skip entries that already have a domain or secondary_email.
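To make the Extract step concrete: after Web Unlocker returns the raw HTML, candidate emails and domains can be pulled out with regexes. The actual workflow uses the HTML Extract node with selectors; this Code-node variant is an assumed equivalent, with illustrative field names:

```javascript
// Hypothetical Code-node equivalent of the Extract step: pull candidate
// emails and domains from the raw HTML that Web Unlocker returned.
// Field names (html, secondary_email, domain) are assumptions.
const html = $json.html ?? '';
const emails = [...new Set(html.match(/[\w.+-]+@[\w-]+\.[\w.-]+/g) ?? [])];
const domains = [...new Set(
  (html.match(/https?:\/\/(?:www\.)?([\w-]+\.[a-z]{2,})/gi) ?? [])
    .map((u) => u.replace(/https?:\/\/(?:www\.)?/i, '')),
)];
return [{ json: { secondary_email: emails[0] ?? null, domain: domains[0] ?? null } }];
```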
by Don Jayamaha Jr
📡 This workflow serves as the central Alpha Vantage API fetcher for Tesla trading indicators, delivering cleaned 20-point JSON outputs for three timeframes: 15min, 1hour, and 1day. It is required by the following agents:
- Tesla 15min, 1h, 1d Indicators Tools
- Tesla Financial Market Data Analyst Tool

✅ Requires an Alpha Vantage Premium API Key
🚀 Used as a sub-agent via webhook endpoints triggered by other workflows

📈 What It Does

For each timeframe (15min, 1h, 1d), this tool:
1. Triggers 6 technical indicators via Alpha Vantage: RSI, MACD, BBANDS, SMA, EMA, ADX
2. Trims the raw response to the latest 20 data points
3. Reformats into a clean JSON structure:

```json
{
  "indicator": "MACD",
  "timeframe": "1hour",
  "data": {
    "timestamp": "...",
    "macd": 0.32,
    "signal": 0.29
  }
}
```

4. Returns results via Webhook Respond for the calling agent

📂 Required Credentials

🔑 Alpha Vantage Premium API Key
- Set up under Credentials > HTTP Query Auth
- Name: Alpha Vantage Premium
- Query Param: apikey
- Get yours here: https://www.alphavantage.co/premium/

🛠️ Setup Steps
1. Import the workflow into n8n and name it: Tesla_Quant_Technical_Indicators_Webhooks_Tool
2. Add an HTTP Query Auth credential (Name: Alpha Vantage Premium, Param key: apikey, Value: your Alpha Vantage key)
3. Publish and use the webhooks. This workflow exposes 3 endpoints:
   - /15minData → used by the 15m Indicator Tool
   - /1hourData → used by the 1h Indicator Tool
   - /1dayData → used by the 1d Indicator Tool
4. Connect via Execute Workflow or HTTP Request, and ensure the caller sends the webhook trigger to the correct path

🧱 Architecture Summary

Each timeframe section includes:

| Component | Details |
| ------------------ | --------------------------------------------- |
| 📡 Webhook Trigger | Entry node (/15minData, /1hourData, etc.) |
| 🔄 API Calls | 6 nodes fetching indicators via Alpha Vantage |
| 🧹 Formatters | JS Code nodes to clean and trim responses (a sketch follows below) |
| 🧩 Merge Node | Consolidates cleaned JSONs |
| 🚀 Webhook Respond | Returns structured data to calling workflow |

🧾 Sticky Notes Overview
- ✅ Webhook Entry: instructions per timeframe
- ✅ API Call Summary: Alpha Vantage endpoint for each indicator
- ✅ Format Nodes: explain JSON parsing and cleaning
- ✅ Merge Logic: final output format
- ✅ Webhook Response: what gets returned to the caller

All stickies follow n8n standard color-coding: blue = webhook flow, yellow = API request group, purple = formatters, green = merge step, gray = workflow overview and usage.

🔐 Licensing & Support

© 2025 Treasurium Capital Limited Company. This agent is part of the Tesla Quant AI Trading System and protected under U.S. copyright. For support:
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Use this API tool to feed Tesla technical indicators into any AI or trading agent across 15m, 1h, and 1d timeframes. Required for all Tesla Quant Agent indicator tools.
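As a sketch of what one API-call-plus-formatter pair does, here is an RSI fetch for the 1-hour timeframe trimmed to 20 points. The endpoint and the "Technical Analysis: RSI" response key follow Alpha Vantage's documentation; the credential variable and output field names are assumptions:

```javascript
// One indicator fetch plus the 20-point trim performed by a formatter node.
// The RSI endpoint and its "Technical Analysis: RSI" response key follow
// Alpha Vantage's docs; the output field names are illustrative.
const apiKey = process.env.ALPHA_VANTAGE_KEY; // the HTTP Query Auth credential
const url = 'https://www.alphavantage.co/query'
  + '?function=RSI&symbol=TSLA&interval=60min'
  + `&time_period=14&series_type=close&apikey=${apiKey}`;

const raw = await (await fetch(url)).json();
const series = raw['Technical Analysis: RSI'] ?? {};
const last20 = Object.entries(series)
  .sort(([a], [b]) => (a < b ? 1 : -1)) // newest timestamps first
  .slice(0, 20)
  .map(([timestamp, v]) => ({ timestamp, rsi: Number(v.RSI) }));

return [{ json: { indicator: 'RSI', timeframe: '1hour', data: last20 } }];
```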
by ist00dent
This n8n template lets you automatically pull market data for the top cryptocurrencies from CoinGecko every hour, calculate custom volatility and market-health metrics, classify each coin's price action into buy/sell/hold/neutral signals with risk ratings, and expose both individual analyses and a portfolio summary via a webhook. It's perfect for crypto analysts, DeFi builders, or portfolio managers who want on-demand insights without writing a single line of backend code.

🔧 How it works
1. Schedule Trigger fires every hour (or whatever interval you choose).
2. HTTP Request (CoinGecko) fetches the top 10 coins by market cap, including 24h, 7d, and 30d price change percentages.
3. Split In Batches ensures each coin is processed sequentially.
4. Function (Calculate Market Metrics) computes a weighted volatility score, market-cap-to-volume ratio, price-to-ATH ratio, and a composite market score (a sketch of this node follows below).
5. IF & Switch nodes categorize each coin's 24h price action (up >5%, down >5%, high volatility, or stable) and append: signal (BUY/SELL/HOLD/NEUTRAL), riskRating (High/Medium/Low/Unknown), and recommendation & investmentStrategy guidance.
6. NoOp & Merge nodes consolidate each branch back into a single data stream.
7. Function (Generate Portfolio Summary) aggregates all analyses into a Markdown portfolioSummary, counts of buy/sell/hold/neutral signals, and a risk distribution.
8. Webhook Response returns the full JSON payload with individual analyses and the summary for downstream consumers.

👤 Who is it for?

This workflow is ideal for:
- Crypto researchers and analysts who need scheduled market insights
- DeFi and trading bot developers looking to automate signal generation
- Portfolio managers seeking a no-code overview of top assets
- Automation engineers exploring API integration and data enrichment

📑 Data Structure

When you trigger the webhook, you'll receive a JSON object containing:
- individualAnalyses: array of { coin, symbol, currentPrice, priceChanges, marketMetrics, signal, riskRating, recommendation }
- portfolioSummary: Markdown report summarizing signals, risk distribution, and top opportunity
- marketSignals: counts of each signal type
- riskDistribution: counts of each risk rating
- timestamp: ISO string of analysis time

⚙️ Setup Instructions
1. Import: in the n8n editor, click "Import from JSON" and paste this workflow JSON.
2. Configure schedule: double-click the Schedule Trigger and set your desired interval (default: every hour).
3. Webhook path: open the Webhook node, choose a unique path (e.g., /crypto-analysis) and set the method to POST.
4. Activate: save and activate the workflow.
5. Test: open the webhook URL in another tab or use cURL:

curl -X POST https://<your-n8n-host>/webhook/<path>

You'll get back a JSON payload with both portfolioSummary and individualAnalyses.

📝 Tips
- Rate-limit handling: if CoinGecko returns 429, insert a Delay node (e.g., 500 ms) after the HTTP Request.
- Batch size: the default is 1 coin at a time; you can bump it up to parallelize.
- Customization: tweak volatility weightings or add new metrics directly in the "Calculate Market Metrics" Function node.
- Extension: swap CoinGecko for another API by updating the HTTP Request URL and field mappings.
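A hypothetical version of the "Calculate Market Metrics" Function node is sketched below. The field names come from CoinGecko's /coins/markets endpoint; the 0.5/0.3/0.2 volatility weights are illustrative assumptions, not the template's exact values:

```javascript
// Hypothetical version of the "Calculate Market Metrics" Function node.
// CoinGecko field names come from /coins/markets; the 0.5/0.3/0.2 weights
// are illustrative assumptions, not the template's exact values.
const c = $json; // one CoinGecko market entry
const volatilityScore =
  0.5 * Math.abs(c.price_change_percentage_24h ?? 0) +
  0.3 * Math.abs(c.price_change_percentage_7d_in_currency ?? 0) +
  0.2 * Math.abs(c.price_change_percentage_30d_in_currency ?? 0);
const capToVolume = c.total_volume ? c.market_cap / c.total_volume : null;
const priceToAth = c.ath ? c.current_price / c.ath : null;
return [{ json: { ...c, volatilityScore, capToVolume, priceToAth } }];
```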
by Zacharia Kimotho
Workflow documentation updated on 21 May 2025

This workflow keeps track of your brand mentions across different Facebook groups, analyzes each post as positive, negative, or neutral, and writes the results to Google Sheets for further analysis. It is useful and relevant for brands looking to keep track of what people are saying about them and to gauge customer satisfaction or dissatisfaction based on what they are talking about.

Who is this template for?

This workflow is for you if you:
- Need to keep track of your brand sentiment across different niche Facebook groups
- Own a SaaS and want to monitor it across different local Facebook groups
- Are looking to do some competitor research to understand what others don't like about their products
- Are testing the market on different market offerings and products to get the best results
- Are looking for sources other than review sites for product, software, or service reviews
- Are starting on market research and would like to get insights from different Facebook groups on app usage, strengths, weaknesses, features, etc.

How it works
1. You set the desired schedule by which to monitor the groups.
2. The workflow gets the brand names and Facebook groups to monitor from the Google Sheet.

Setup Steps

Before you begin, you will need access to a Bright Data API to run this workflow.
1. Make a copy of the Google Sheet below and add the URLs of the Facebook groups to scrape and the brand names you wish to monitor.
2. Import the workflow JSON to your canvas.
3. Make a copy of this Google Sheet to get started easily.
4. Set your Bright Data API key in the corresponding node.
5. Map the Google Sheet to your tables.
6. Optionally swap the current AI models for different ones, e.g., Gemini or Anthropic.
7. Run the workflow.

Setup B

Bright Data provides an option to receive the results on an external webhook via a POST call. These can be collected via a webhook trigger in the workflow (a sketch of a handler follows below).
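For Setup B, a handler behind the webhook trigger only needs to flatten each delivered post into the row shape the Google Sheets node expects. This is a hypothetical sketch; the payload field names depend entirely on your Bright Data dataset schema:

```javascript
// Hypothetical handler after the webhook trigger: flatten each delivered
// post into the row shape the Google Sheets node expects. Payload field
// names depend entirely on your Bright Data dataset schema.
const posts = $json.body ?? [];
return posts.map((p) => ({
  json: {
    group_url: p.group_url,   // assumed field
    post_text: p.post_text,   // assumed field
    brand: p.matched_brand,   // assumed field
    sentiment: null,          // filled in later by the AI classification step
  },
}));
```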
by Adam Bertram
An intelligent IT support agent that uses Azure AI Search for knowledge retrieval, Microsoft Entra ID integration for user management, and Jira for ticket creation. The agent can answer questions using internal documentation and perform administrative tasks like password resets.

How It Works

The workflow operates in three main sections:
1. Agent Chat Interface: A chat trigger receives user messages and routes them to an AI agent powered by Google Gemini. The agent maintains conversation context using buffer memory and has access to multiple tools for different tasks.
2. Knowledge Management: Users can upload documentation files (.txt, .md) through a form trigger. These documents are processed, converted to embeddings using OpenAI's API, and stored in an Azure AI Search index with vector search capabilities.
3. Administrative Tools: The agent can query Microsoft Entra ID to find users, reset passwords, and create Jira tickets when issues need escalation. It uses semantic search to find relevant internal documentation before responding to user queries (a sketch of this query follows at the end).

The workflow includes a separate setup section that creates the Azure AI Search service and index with proper vector search configuration, semantic search capabilities, and the required field schema.

Prerequisites

To use this template, you'll need:
- n8n cloud or self-hosted instance
- Azure subscription with permissions to create AI Search services
- Microsoft Entra ID (Azure AD) access with user management permissions
- OpenAI API account for embeddings
- Google Gemini API access
- Jira Software Cloud instance
- Basic understanding of Azure resource management

Setup Instructions
1. Import the template into n8n.
2. Configure credentials:
   - Add Google Gemini API credentials
   - Add OpenAI API credentials for embeddings
   - Add Microsoft Azure OAuth2 credentials with appropriate permissions
   - Add Microsoft Entra ID OAuth2 credentials
   - Add Jira Software Cloud API credentials
3. Update workflow parameters:
   - Open the "Set Common Fields" nodes
   - Replace <azure subscription id> with your Azure subscription ID
   - Replace <azure resource group> with your target resource group name
   - Replace <azure region> with your preferred Azure region
   - Replace <azure ai search service name> with your desired service name
   - Replace <azure ai search index name> with your desired index name
   - Update the Jira project ID in the "Create Jira Ticket" node
4. Set up Azure infrastructure:
   - Run the manual trigger "When clicking 'Test workflow'" to create the Azure AI Search service and index
   - This creates the vector search index with semantic search configuration
5. Configure the vector store webhook:
   - Update the "Invoke Query Vector Store Webhook" node URL with your actual webhook endpoint
   - The webhook URL should point to the "Semantic Search" webhook in the same workflow
6. Upload the knowledge base:
   - Use the "On Knowledge Upload" form to upload your internal documentation
   - Supported formats: .txt and .md files
   - Documents will be automatically embedded and indexed
7. Test the setup:
   - Use the chat interface to verify the agent responds appropriately
   - Test knowledge retrieval with questions about uploaded documentation
   - Verify Entra ID integration and Jira ticket creation

Security Considerations
- Use least-privilege access for all API credentials
- Microsoft Entra ID credentials should have limited user management permissions
- Azure credentials need Search Service Contributor and Search Index Data Contributor roles
- OpenAI API key should have usage limits configured
- Jira credentials should be restricted to specific projects
- Consider implementing rate limiting on the chat interface
- Review password reset policies and ensure force password change is enabled
- Validate all user inputs before processing administrative requests

Extending the Template

You could enhance this template by:
- Adding support for additional file formats (PDF, DOCX) in the knowledge upload
- Implementing role-based access control for different administrative functions
- Adding integration with other ITSM tools beyond Jira
- Creating automated escalation rules based on query complexity
- Adding analytics and reporting for support interactions
- Implementing multi-language support for international organizations
- Adding approval workflows for sensitive administrative actions
- Integrating with Microsoft Teams or Slack for notifications
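To make the semantic search step concrete, the query the "Semantic Search" webhook runs against Azure AI Search looks roughly like the sketch below (2023-11-01 REST API). The service name, index name, field names, and variables are placeholders to align with your "Set Common Fields" values:

```javascript
// Sketch of the vector query behind the "Semantic Search" webhook
// (Azure AI Search REST API, api-version 2023-11-01). Service name,
// index name, field names, and variables are placeholders.
const searchKey = process.env.AZURE_SEARCH_KEY;      // assumed query key
const userQuestion = 'How do I reset my VPN token?'; // placeholder query
const questionEmbedding = [/* 1536-dim OpenAI embedding of the question */];

const res = await fetch(
  'https://<service>.search.windows.net/indexes/<index>/docs/search?api-version=2023-11-01',
  {
    method: 'POST',
    headers: { 'api-key': searchKey, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      search: userQuestion, // keyword half of a hybrid query
      vectorQueries: [
        { kind: 'vector', vector: questionEmbedding, fields: 'contentVector', k: 5 },
      ],
      select: 'title,content',
    }),
  },
);
const { value: hits } = await res.json();
console.log(hits);
```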
by Miko
Stay ahead of trends by automating your content research. This workflow fetches trending keywords from the Google Trends RSS feed, extracts key insights from top articles, and saves structured summaries in Google Sheets, helping you build a data-driven editorial plan effortlessly.

How it works
1. Fetch Google Trends RSS – The workflow retrieves trending keywords along with three related article links.
2. Extract & Process Content – It fetches the content of these articles, cleans the HTML, and generates a concise summary using Jina AI (a sketch of this step follows below).
3. Store in Google Sheets – The processed insights, including the trending keyword and summary, are saved in a pre-configured Google Sheet.

Setup Steps
1. Prepare a Google Sheet – Ensure you have a Google Sheet ready to store the extracted data.
2. Configure API Access – Set up the Google Sheets API and any required authentication.
3. Get a Jina.ai API key.
4. Adjust Workflow Settings – A dedicated configuration node allows you to fine-tune how data is processed and stored.

Customization
- Modify the RSS source to focus on specific Google Trends regions or categories.
- Adjust the content processing logic to refine how article summaries are created.
- Expand the workflow to integrate with a CMS (e.g., WordPress) for automated content planning.

This workflow is ideal for content strategists, SEO professionals, and news publishers who want to quickly identify and act on trending topics without manual research. 🚀

Google Sheets Fields

Copy and paste these column headers into your Google Sheet:

| Column Name | Description |
|------------------------|-------------|
| status | Initial status of the keyword (e.g., "idea") |
| trending_keyword | Trending keyword extracted from Google Trends |
| approx_traffic | Estimated traffic for the trending keyword |
| pubDate | Date when the keyword was fetched |
| news_item_url1 | URL of the first related news article |
| news_item_title1 | Title of the first news article |
| news_item_url2 | URL of the second related news article |
| news_item_title2 | Title of the second news article |
| news_item_url3 | URL of the third related news article |
| news_item_title3 | Title of the third news article |
| news_item_picture1 | Image URL from the first news article |
| news_item_source1 | Source of the first news article |
| news_item_picture2 | Image URL from the second news article |
| news_item_source2 | Source of the second news article |
| news_item_picture3 | Image URL from the third news article |
| news_item_source3 | Source of the third news article |
| abstract | AI-generated summary of the articles (limited to 49,999 characters) |

Instructions
1. Open Google Sheets and create a new spreadsheet.
2. Copy the column names from the table above.
3. Paste them into the first row of your Google Sheet.
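The Extract & Process step can lean on Jina's Reader endpoint, which returns a page's main content as clean text when you prefix the article URL with r.jina.ai. A minimal sketch, with an assumed credential variable and a placeholder article URL:

```javascript
// Sketch of the fetch-and-clean step via Jina's Reader endpoint (r.jina.ai),
// which returns a page's main content as clean text when the target URL is
// appended to it. The credential variable is an assumption.
const jinaApiKey = process.env.JINA_API_KEY;
const articleUrl = 'https://example.com/some-trending-article'; // placeholder

const res = await fetch(`https://r.jina.ai/${articleUrl}`, {
  headers: { Authorization: `Bearer ${jinaApiKey}` },
});
const cleanText = await res.text();
// Trim to respect the 49,999-character cap noted for the abstract column.
const abstract = cleanText.slice(0, 49999);
```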
by Krupal Patel
🔧 Workflow Summary

This system automates LinkedIn lead generation and enrichment in six clear stages:

1. Lead Collection (via Apollo.io)
- Automatically pulls leads based on keywords, roles, or industries using Apollo's API.
- Captures name, job title, company, and LinkedIn profile URL.
- You can kick off the workflow via form, webhook, WhatsApp, Telegram, or any other custom trigger that passes search parameters.

2. LinkedIn Username Extraction
- Extracts usernames from LinkedIn profile URLs using a script step.
- These usernames are required for further enrichment using RapidAPI.

3. Email Retrieval (via Apollo.io User ID)
- Fetches the verified work email using the Apollo User ID.
- Email validity is double-checked using www.mails.so, filtering out undeliverable or inactive emails by checking MX records and deliverability (a sketch of this check follows at the end).

4. Profile Summary (via LinkedIn API on RapidAPI)
- Enriches lead data by pulling bio/summary details to understand their background and expertise.

5. Activity Insights (Posts & Reposts)
- Collects recent posts or reposts to help craft personalised messages based on what they're currently engaging with.

6. Leads Sheet Update
- All data is written into a Google Sheet.
- New columns are populated dynamically without erasing existing data.

✅ Smart Retry Logic

Each workflow is equipped with a fail-safe system:
- Tracks status per row: ✅ done, ❌ failed, ⏳ pending
- Failed rows are automatically retried after a custom delay (e.g., 2 weeks).
- Ensures minimal drop-offs and complete data coverage.

📊 Google Sheets Setup

Make a copy of the following:
- Template 1: Apollo Leads Scraper & Enrichment
- Template 2: Final Enriched Leads

The system appends data (like emails, bios, activity) step by step.

🔐 API Credentials Needed
1. Apollo API – Sign up and generate an API key at the Apollo Developer Portal. Be sure to enable the "Master API Key" toggle so the same key works for all endpoints.
2. LinkedIn Data API (via RapidAPI) – Subscribe at RapidAPI - LinkedIn Data and use your key in the x-rapidapi-key header.
3. Mails.so API – Get your API key from the mails.so dashboard.

🛠️ Troubleshooting – LinkedIn Lead Machine

✅ Common Mistakes & Fixes
1. API Keys Not Working – Make sure the API keys for Apollo, RapidAPI, and mails.so are correct. Apollo's "Master API Key" must be enabled, and keys should be saved as Generic Credentials in n8n.
2. Leads Not Found – Check if the search query (keyword/job title) is too narrow. Apollo might return empty results if the filters are incorrect.
3. LinkedIn URLs Missing or Invalid – Ensure Apollo is returning valid LinkedIn URLs. Improper URLs will cause the username extraction and enrichment steps to fail.
4. Emails Not Coming Through – Apollo may not have verified emails for all leads, and mails.so might reject invalid or expired email addresses.
5. Google Sheet Not Updating – Make sure the Google Sheet is shared with the right Google account (the one linked to n8n). Check that the column names match and data isn't blocked due to formatting.
6. Status Columns Not Changing – Each row must have done, failed, or pending in the status column. If the status doesn't update, the retry logic won't trigger.
7. RapidAPI Not Returning Data – Double-check that the username is present and valid, and make sure your RapidAPI plan is active and within limits.
8. Workflow Not Running – Check that the trigger node (form, webhook, etc.) is connected and active, and that you're passing the required inputs (keyword, role, etc.).

Need Help? Contact www.KrupalPatel.com for support and custom workflow development.
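For stage 3, the deliverability check is a single API call per email. The sketch below is hypothetical: the mails.so endpoint path, header name, and response fields shown here are assumptions to verify against the mails.so API docs:

```javascript
// Hypothetical sketch of the deliverability check. The mails.so endpoint
// path, header name, and response fields shown here are assumptions;
// verify them against the mails.so API docs.
const mailsKey = process.env.MAILS_SO_KEY;
const email = 'lead@example.com'; // placeholder

const res = await fetch(
  `https://api.mails.so/v1/validate?email=${encodeURIComponent(email)}`,
  { headers: { 'x-mails-api-key': mailsKey } },
);
const result = await res.json();
// Keep the lead only if it passes the deliverability and MX checks.
const isValid = result?.data?.result === 'deliverable'; // assumed response shape
```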