by Yaron Been
Automated monitoring system that tracks startup activities, funding events, and company updates in real time, providing valuable market intelligence.

🚀 What It Does
- Real-time monitoring of startup activities
- Funding alerts and updates
- Competitor tracking
- Industry trend analysis
- Customizable watchlists

🎯 Perfect For
- Venture capitalists
- Startup founders
- Business development teams
- Market researchers
- Investment analysts

⚙️ Key Benefits
✅ Stay ahead of market movements
✅ Never miss important funding rounds
✅ Track competitor activities
✅ Identify emerging trends
✅ Save hours of manual research

🔧 What You Need
- Crunchbase API access
- n8n instance
- Notification preferences (email/Slack/Teams)

📊 Data Points Tracked
- New funding rounds
- Company updates
- Leadership changes
- Product launches
- Market expansions

🛠️ Setup & Support
Quick Setup: deploy in 20 minutes with our step-by-step configuration guide.
📺 Watch Tutorial | 💼 Get Expert Support | 📧 Direct Help

Stay informed about the startup ecosystem with automated monitoring and alerts. Make data-driven decisions with timely, relevant information.
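As a minimal sketch of the core monitoring step, here is one way a Code node could poll for newly announced funding rounds. The endpoint shape, field IDs, and operator names follow the public Crunchbase v4 search API, but treat them as assumptions to verify against the Crunchbase docs for your plan:

```javascript
// Hypothetical polling step: fetch funding rounds announced since the last run.
// Requires Node.js 18+ run as an ES module, with CRUNCHBASE_API_KEY set.
const since = '2024-01-01'; // replace with the timestamp of your last run

const res = await fetch(
  `https://api.crunchbase.com/api/v4/searches/funding_rounds?user_key=${process.env.CRUNCHBASE_API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      field_ids: ['identifier', 'announced_on', 'money_raised', 'investment_type'],
      query: [{
        type: 'predicate',
        field_id: 'announced_on',
        operator_id: 'gte',
        values: [since],
      }],
      limit: 50,
    }),
  },
);

const { entities = [] } = await res.json();
// One record per new round, ready for a watchlist filter and the alert branch.
console.log(entities.map((e) => e.properties));
```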
by Robert Breen
This no-code n8n workflow finds recent Instagram posts by hashtag, scrapes profile data, and uses an AI agent to evaluate whether each account is a good collaboration lead. The workflow filters on follower count and bio content, and outputs structured reasoning for outreach decisions. Perfect for creators, marketers, or business developers looking to automate influencer or community partnership prospecting, especially in niche ecosystems like n8n.

✅ Key Features
- **🔍 Hashtag Discovery**: Finds recent Instagram posts from a specified hashtag (e.g., #n8n)
- **👤 Account Scraping**: Retrieves profile details such as follower count and biography
- **🧠 AI Evaluation**: Uses OpenAI and LangChain to determine if the profile is a good fit for outreach
- **📦 Structured Output**: Returns a JSON object with "Yes/No" lead status and reasoning
- **🛠️ Manual Execution**: Run on demand using the manual trigger

🧰 What You'll Need

| Tool / API | Purpose | Setup Steps |
|------------|---------|-------------|
| Apify Account | To access Instagram scraping actors | Create account → Generate API Token → Use in httpQueryAuth credential in n8n |
| OpenAI API Key | To power the AI decision-making agent | Sign up at OpenAI → Create API key → Paste into OpenAI credential in n8n |
| LangChain Plugin for n8n | AI orchestration with system message | Install LangChain nodes from Community Nodes (already installed in this workflow) |

🔧 Step-by-Step Setup

1️⃣ Manual Trigger
- **Node**: When clicking 'Execute workflow'
- **Use**: Allows you to run the workflow manually while testing.

2️⃣ Define Hashtag
- **Node**: Create Search Term
- **Value**: Sets "n8n" as the default Instagram hashtag to scan. You can edit this to any other hashtag you'd like.

3️⃣ Find Recent Posts
- **Node**: Find Recent Posts
- **API**: Apify Instagram Hashtag Scraper
- **Auth Setup**: Go to your Apify Console and click "Create new token". In n8n, create a new HTTP Query Auth credential, set the token in the `token` query param (e.g., `?token=yourTokenHere`), then choose the credential in this node.

4️⃣ Scrape Each Profile
- **Node**: Scrape Accounts
- **API**: Apify Instagram Profile Scraper
- **Body**: JSON with usernames from the hashtag search
- **Note**: Uses the same httpQueryAuth credential as the previous node.

5️⃣ Extract Fields
- **Node**: Set bio and follower count
- **What it does**: Extracts `biography` and `followersCount` from the profile JSON and stores them in clean variables for AI input.

6️⃣ AI Lead Scoring
- **Node**: AI Agent
- **Purpose**: Uses GPT-4o-mini to analyze the bio and follower count
- **Prompt Details**: The agent's system message (included in the workflow) asks for a Yes/No lead decision with reasoning.

7️⃣ AI Model
- **Node**: OpenAI Chat Model
- **Model**: gpt-4o-mini
- **Credential**: Connect your OpenAI account via API key. Go to OpenAI API Keys, copy your key, and create a new OpenAI API credential in n8n.

8️⃣ Output Parser
- **Node**: Structured Output Parser
- **What it does**: Parses the response from the AI into structured JSON for further use (e.g., storing leads, sending to Airtable, etc.)

🧪 Sample Output

```json
{
  "lead status": "Yes",
  "Reasoning": "The user has 3.5k followers and their bio shows they build automations with n8n."
}
```

📬 Need More Help?
If you'd like assistance setting this up, customizing it to your niche, or expanding it to score and store leads automatically — I can help!

👤 Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn
by explorium
Explorium Prospects Search Chatbot Template

Download the following JSON file and import it into a new n8n workflow: mcp_to_prospects_to_csv.json

Overview
This n8n workflow creates a chatbot that understands natural language requests for finding business prospects and automatically:
- Interprets your query using AI (Claude 3.7 Sonnet)
- Converts it to proper Explorium API filters
- Validates the API request structure
- Fetches prospect data from Explorium
- Exports results as a downloadable CSV file

Perfect for sales teams, recruiters, and business development professionals who need to quickly find and export targeted prospect lists without learning complex API syntax.

Key Features
- **Natural Language Interface**: Simply describe who you're looking for in plain English
- **Smart Query Translation**: AI converts your request to valid API parameters
- **Built-in Validation**: Ensures API calls meet Explorium's requirements
- **Error Recovery**: Automatically retries with corrections if validation fails
- **Pagination Support**: Handles large result sets automatically
- **CSV Export**: Clean, formatted output ready for CRM import
- **Conversation Memory**: Maintains context for follow-up queries

Example Queries
The chatbot understands queries like:
- "Find marketing directors at SaaS companies in New York with 50-200 employees"
- "Get me CTOs from fintech startups in California"
- "Show me sales managers at healthcare companies with revenue over $10M"
- "Find engineers at Microsoft with 3-5 years experience"
- "Get customer service leads from e-commerce companies in Europe"

Prerequisites
Before setting up this workflow, ensure you have:
- n8n instance with chat interface enabled
- Anthropic API key for Claude
- Explorium API credentials (Bearer token) - Get explorium api key
- Basic understanding of n8n chat workflows

Supported Filters
The chatbot can search using these criteria:

Company Filters
- **Size**: 1-10, 11-50, 51-200, 201-500, 501-1000, 1001-5000, 5001-10000, 10001+ employees
- **Revenue**: Ranges from $0-500K up to $10T+
- **Age**: 0-3, 3-6, 6-10, 10-20, 20+ years
- **Location**: Countries, regions, cities
- **Industry**: Google categories, NAICS codes, LinkedIn categories
- **Name**: Specific company names

Prospect Filters
- **Job Level**: CXO, VP, Director, Manager, Senior, Entry, etc.
- **Department**: Sales, Marketing, Engineering, Finance, HR, etc.
- **Experience**: Total months and current role duration
- **Location**: Country and region codes
- **Contact Info**: Filter by email/phone availability

Installation & Setup

Step 1: Import the Workflow
- Copy the workflow JSON from the template
- In n8n: Workflows → Add Workflow → Import from File
- Paste the JSON and click Import

Step 2: Configure Anthropic Credentials
- Click on the Anthropic Chat Model1 node
- Under Credentials, click Create New
- Add your Anthropic API key
- Name: "Anthropic API"
- Save credentials

Step 3: Configure Explorium Credentials
You'll need to set up Explorium credentials in two places:

For MCP Client:
- Click on the MCP Client node
- Under Credentials, create a new Header Auth
- Add your authentication header (usually Authorization: Bearer YOUR_TOKEN)
- Save credentials

For API Calls:
- Click on the Prospects API Call node
- Use the same Header Auth credentials created above
- Verify the API endpoint is correct

Step 4: Activate the Workflow
- Save the workflow
- Click the Active toggle to enable it
- The chat interface will now be available

Step 5: Access the Chat Interface
- Click on the When chat message received node
- Copy the webhook URL
- Access this URL in your browser to start chatting

How It Works

Workflow Architecture
1. Chat Trigger: Receives natural language queries from users
2. Memory Buffer: Maintains conversation context
3. AI Agent: Interprets queries and generates API parameters
4. Validation: Checks API structure against Explorium requirements
5. API Call: Fetches prospect data with pagination
6. Data Processing: Formats results for CSV export
7. File Conversion: Creates downloadable CSV file

Processing Flow

```
User Query → AI Interpretation → Validation → API Call → CSV Export
                    ↑                 ↓
                    └── Error Correction Loop ←──┘
```

Validation Rules
The workflow validates that:
- Filter keys are allowed by the Explorium API
- Values match expected formats (e.g., valid country codes)
- Range filters have proper gte/lte values
- There are no duplicate values in arrays
- The required structure is maintained

Usage Guide

Basic Conversation Flow
1. Start with your query: "Find me VPs of Sales at software companies in the US"
2. Bot processes and responds: it generates API filters, validates the structure, fetches data, and returns a CSV download link
3. Refine if needed: "Can you also include directors and filter for companies with 100+ employees?"
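To make the query-to-filter translation concrete, here is a sketch of the kind of payload the AI Agent might generate for the first example query above. The filter keys shown are illustrative assumptions; the authoritative key names and allowed values live in the validation rules and the system message inside the workflow:

```json
{
  "filters": {
    "company_size": ["51-200"],
    "company_location": ["New York"],
    "company_industry": ["SaaS"],
    "job_level": ["director"],
    "job_department": ["marketing"]
  },
  "size": 1000
}
```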
Query Tips
- **Be specific**: Include job titles, departments, company details
- **Use standard terms**: "CTO" instead of "Chief Technology Officer"
- **Specify locations**: Use country names or standard codes
- **Include size/revenue**: Helps narrow results effectively

Advanced Queries
Combine multiple criteria: "Find engineering managers and senior engineers at B2B SaaS companies in New York and California with 50-500 employees and revenue over $5M who have been in their role for at least 1 year"

Output Format
The CSV file includes:
- Prospect ID
- Name (first, last, full)
- Location (country, region, city)
- LinkedIn profile
- Experience summary
- Skills and interests
- Company details
- Job information
- Business ID

Troubleshooting

Common Issues

"Validation failed" errors
- Check that your query uses supported filter values
- Ensure location names are spelled correctly
- Verify company sizes/revenues match allowed ranges

No results returned
- Broaden your search criteria
- Check if the company exists in Explorium's database
- Verify filter combinations aren't too restrictive

Chat not responding
- Ensure the workflow is activated
- Check all credentials are properly configured
- Verify the webhook URL is accessible

Large result sets timing out
- Try adding more specific filters
- Limit results by location or company size
- Use the size parameter (max 10,000)

Error Messages
The bot provides clear feedback:
- **Invalid filters**: Shows which filters aren't supported
- **Value errors**: Lists correct options for each field
- **API failures**: Explains connection or authentication issues

Performance Optimization

Best Practices
- Start broad, then narrow: begin with basic criteria and add filters
- Use business IDs when targeting specific companies
- Limit by contact info: add has_email: true for actionable leads
- Batch by location: process regions separately for large searches

API Limits
- Maximum 10,000 results per search
- Pagination handles up to 100 records per page
- Rate limits apply based on your Explorium subscription

Customization Options

Modify AI Behavior
Edit the AI Agent system message to:
- Change response format
- Add custom filters
- Adjust interpretation logic
- Include additional instructions

Extend Functionality
Add nodes to:
- Send results via email
- Import directly to CRM
- Schedule recurring searches
- Create custom reports

Integration Ideas
- Connect to Slack for team queries
- Add to CRM workflows
- Create lead scoring systems
- Build automated outreach campaigns

Security Considerations
- API credentials are stored securely in n8n
- Chat sessions are isolated
- No prospect data is stored permanently
- CSV files are generated on demand

Support Resources
For issues with:
- **n8n platform**: Check n8n documentation
- **Explorium API**: Contact Explorium support
- **Anthropic/Claude**: Refer to Anthropic docs
- **Workflow logic**: Review node configurations
by Airtop
Use Case
Automatically responding to X (formerly Twitter) posts can help you engage with potential customers at scale, saving time while maintaining a personal touch.

What This Automation Does
This automation replies to a specified X post using the following input parameters:
- airtop_profile: The name of your Airtop Profile connected to X.
- thread_url: The URL of the X post to reply to.
- reply_text: The message you want to post as a reply.

How It Works
1. Creates a browser session using Airtop.
2. Navigates to the specified X post.
3. Types and submits the reply text.

Setup Requirements
- Airtop API Key — free to generate.
- An Airtop Profile connected to X (requires one-time login).

Next Steps
- **Combine with X Monitoring**: Use this with the X monitoring automation to create a fully automated engagement pipeline.
- **Extend to Other Platforms**: Adapt the automation for use on LinkedIn, Reddit, or any web community.

Read more about this Airtop Automation.
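For reference, the three input parameters above might look like this when the automation is invoked (values are purely illustrative):

```json
{
  "airtop_profile": "my-x-profile",
  "thread_url": "https://x.com/someuser/status/1234567890123456789",
  "reply_text": "Great point! We automated something similar with n8n."
}
```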
by Yang
🧾 What this workflow does
This workflow takes a reference ad image and brand website, then uses GPT-4, LangChain, and Dumpling AI to generate 10 high-quality image variations for ad testing. These variations are visually consistent but subtly different in background, mood, lighting, and tone — perfect for performance testing on platforms like Meta Ads or TikTok.

👤 Who is this for
- DTC marketers and brand designers testing ad creatives
- Creative teams automating visual experimentation
- Content agencies using AI for fast ad mockups
- Performance marketers running multivariate testing

⚙️ How to set up

✅ Requirements
You'll need the following tools set up in n8n:
- Google Drive (OAuth2 credential)
- Google Sheets (OAuth2 credential)
- OpenAI API (for GPT-4 or GPT-4o)
- Dumpling AI API (via HTTP header authentication)

🛠️ Steps to configure
1. Google Sheet Setup: Create a sheet with one column, Image URL. Update the Sheet ID and tab name in the final Google Sheets node.
2. Drive Setup: Create a folder in Google Drive for storing the reference image. Replace the folderId in the "Upload Ad Image to Google Drive" node.
3. Dumpling AI API Key: Use n8n's credential manager (HTTP Header Auth) — do not hardcode the key.
4. OpenAI API Key: Required for both image description and LangChain agent prompt generation.
5. Form Inputs Required: Brand Name, Brand Website, Ad Image (upload field).

🧠 How it works
1. A user submits the brand name, website, and a reference ad image through a form.
2. The image is uploaded to Google Drive.
3. GPT-4o describes the image's visual style (e.g., mood, lighting, composition).
4. GPT-4 analyzes the brand's website to define its visual aesthetic.
5. A LangChain agent uses both analyses to create 10 tightly scoped variation prompts (see the sketch at the end of this section).
6. Dumpling AI generates a new image for each prompt using its "FLUX.1-pro" model.
7. Each new image's link is logged into Google Sheets.

🛠️ How to customize
- 🧪 Change prompt logic to experiment with different variations (e.g., theme, season).
- 🎨 Switch the image model in Dumpling AI to one that supports your desired style.
- 🔗 Log additional metadata (prompt, timestamp) to Google Sheets.
- 📤 Connect output images to Airtable, Notion, or a review tool like Figma.
- 🎯 Modify the GPT system message to reflect a different tone or brand strategy.

This workflow gives creative teams and marketers an instant, AI-powered ad image testing system — built on real brand visuals, not generic stock content.
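As a rough illustration of step 5 (not the template's actual agent prompt), a Code node could combine the two analyses into variation prompts like this. The node names and variation axes are assumptions for the sketch:

```javascript
// Hypothetical sketch: build 10 variation prompts from the image-style and
// brand-aesthetic analyses produced by the two GPT nodes upstream.
// Node names below are assumptions; match them to your workflow.
const imageStyle = $('Describe Ad Image').first().json.text;
const brandStyle = $('Analyze Brand Site').first().json.text;

const variationAxes = [
  'a brighter, airier background', 'moodier low-key lighting',
  'a warm sunset color palette', 'a cool studio look',
  'an outdoor lifestyle setting', 'a minimal flat-lay composition',
  'a bold high-contrast treatment', 'a soft pastel palette',
  'a night-time urban backdrop', 'a cozy natural-light interior',
];

// One n8n item per prompt, consumed by the Dumpling AI HTTP node downstream.
return variationAxes.map((axis, i) => ({
  json: {
    variation: i + 1,
    prompt: `Keep the product, layout, and brand aesthetic (${brandStyle}) ` +
            `and the visual style (${imageStyle}), but change to ${axis}.`,
  },
}));
```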
by Humble Turtle
Architecture Agent

Overview
The Architect Agent listens to Slack messages and generates full data architecture blueprints in response. Powered by Claude 3.5 (Anthropic) for reasoning and design, and Tavily for real-time web search, this agent creates production-ready data pipeline scaffolds on demand, transforming natural language prompts into structured data engineering solutions.

Capabilities
- Understands and interprets user requests from Slack
- Designs end-to-end data pipeline architectures using industry best practices
- Outputs include high-level architecture diagrams

Required Connections
To operate correctly, the following integrations must be in place:
- Slack API token with permission to read messages and post responses
- Tavily API key for external search functionality
- Claude 3.5 API access via Anthropic

Detailed configuration instructions are provided in the workflow.

Setup time
<15 minutes

Example input:
"Create a data pipeline orchestrated by Airflow, running on a Docker image. It should connect to a MySQL database, load the data into a PostgreSQL DB (incremental load), and then transform the data into business-oriented tables, also in the PostgreSQL database. Create an example setup with raw sales data."

Customising this workflow
Try saving outputs to Google Drive to store all your architecture blueprints.
by Incrementors
Yelp Business Scraper by URL via Bright Data API with Google Sheets Storage

Overview
This n8n workflow automates the process of scraping comprehensive business information from Yelp using individual business URLs. It integrates with Bright Data for professional web scraping and Google Sheets for centralized data storage, providing detailed business intelligence for market research, competitor analysis, and lead generation.

Workflow Components

1. 📥 Form Trigger
- **Type**: Form Trigger
- **Purpose**: Initiates the workflow with a user-submitted Yelp business URL
- **Input Fields**: URL (Yelp business page URL)
- **Function**: Captures the target business URL to start the scraping process

2. 🔍 Trigger Bright Data Scrape
- **Type**: HTTP Request (POST)
- **Purpose**: Sends a scraping request to the Bright Data API for Yelp business data
- **Endpoint**: https://api.brightdata.com/datasets/v3/trigger
- **Parameters**: Dataset ID: gd_lgugwl0519h1p14rwk; Include errors: true; Limit multiple results: 5; Limit per input: 20
- **Function**: Initiates comprehensive business data extraction from Yelp

3. 📡 Monitor Snapshot Status
- **Type**: HTTP Request (GET)
- **Purpose**: Monitors the progress of the Yelp scraping job
- **Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
- **Function**: Checks whether the business data scraping is complete

4. ⏳ Wait 30 Sec for Snapshot
- **Type**: Wait Node
- **Purpose**: Implements an intelligent polling mechanism
- **Duration**: 30 seconds
- **Function**: Pauses the workflow before rechecking scraping status to optimize API usage

5. 🔁 Retry Until Ready
- **Type**: IF Condition
- **Purpose**: Evaluates scraping completion status
- **Condition**: status === "ready"
- **Logic**: True: proceeds to data retrieval; False: loops back to status monitoring with wait

6. 📥 Fetch Scraped Business Data
- **Type**: HTTP Request (GET)
- **Purpose**: Retrieves the final scraped business information
- **Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
- **Format**: JSON
- **Function**: Downloads completed Yelp business data with comprehensive details

7. 📊 Store to Google Sheet
- **Type**: Google Sheets Node
- **Purpose**: Stores scraped business data for analysis and storage
- **Operation**: Append rows
- **Target**: "Yelp scraper data by URL" sheet
- **Data Mapping**: Business Name, Overall Rating, Reviews Count, Business URL, Images/Videos URLs, plus additional business metadata fields

Workflow Flow

```
Form Input → Trigger Scrape → Monitor Status → Wait 30s → Check Ready
                                    ↑                         ↓
                                    └────────── Loop ─────────┘
                                                              ↓
                                        Fetch Data → Store to Sheet
```

Configuration Requirements

API Keys & Credentials
- **Bright Data API Key**: Required for Yelp business scraping
- **Google Sheets OAuth2**: For data storage and export access
- **n8n Form Webhook**: For user input collection

Setup Parameters
- **Google Sheet ID**: Target spreadsheet identifier
- **Dataset ID**: gd_lgugwl0519h1p14rwk (Yelp business scraper)
- **Form Webhook ID**: User input form identifier
- **Google Sheets Credential ID**: OAuth2 authentication

Key Features

Comprehensive Business Data Extraction
- Complete business profile information
- Customer ratings and review counts
- Contact details and business hours
- Photo and video content URLs
- Location and category information

Intelligent Status Monitoring
- Real-time scraping progress tracking
- Automatic retry mechanism with 30-second intervals
- Status validation before data retrieval
- Error handling and timeout management

Centralized Data Storage
- Automatic Google Sheets export
- Organized business data format
- Historical scraping records
- Easy sharing and collaboration

URL-Based Processing
- Direct Yelp business URL input
- Single-business deep-dive analysis
- Flexible input through a web form
- Real-time workflow triggering

Use Cases

Market Research
- Competitor business analysis
- Local market intelligence gathering
- Industry benchmark establishment
- Service offering comparison

Lead Generation
- Business contact information extraction
- Potential client identification
- Market opportunity assessment
- Sales prospect development

Business Intelligence
- Customer sentiment analysis through ratings
- Competitor performance monitoring
- Market positioning research
- Brand reputation tracking

Location Analysis
- Geographic business distribution
- Local competition assessment
- Market saturation evaluation
- Expansion opportunity identification

Data Output Fields

| Field | Description | Example |
|-------|-------------|---------|
| Name | Business name | "Joe's Pizza Restaurant" |
| Overall Rating | Average customer rating | "4.5" |
| Reviews Count | Total number of reviews | "247" |
| URL | Original Yelp business URL | "https://www.yelp.com/biz/joes-pizza..." |
| Images/Videos URLs | Media content links | "https://s3-media1.fl.yelpcdn.com/..." |
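For readers replicating the trigger-poll-fetch pattern outside n8n, here is a minimal standalone sketch of the loop implemented by nodes 2-6, using the endpoints listed above. Exact query parameters can vary by dataset, so verify them against the Bright Data docs:

```javascript
// Minimal sketch of the Bright Data trigger/poll/fetch loop (nodes 2-6).
// Assumes Node.js 18+ (global fetch) and a BRIGHT_DATA_API_KEY env variable.
const BASE = 'https://api.brightdata.com/datasets/v3';
const HEADERS = { Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}` };

async function scrapeYelpBusiness(url) {
  // 1. Trigger the scrape for one Yelp business URL.
  const trigger = await fetch(
    `${BASE}/trigger?dataset_id=gd_lgugwl0519h1p14rwk&include_errors=true`,
    {
      method: 'POST',
      headers: { ...HEADERS, 'Content-Type': 'application/json' },
      body: JSON.stringify([{ url }]),
    },
  ).then((r) => r.json());

  // 2. Poll every 30 seconds until the snapshot is ready (node 5's IF check).
  let status;
  do {
    await new Promise((resolve) => setTimeout(resolve, 30_000));
    ({ status } = await fetch(
      `${BASE}/progress/${trigger.snapshot_id}`,
      { headers: HEADERS },
    ).then((r) => r.json()));
  } while (status !== 'ready');

  // 3. Fetch the finished snapshot as JSON.
  return fetch(
    `${BASE}/snapshot/${trigger.snapshot_id}?format=json`,
    { headers: HEADERS },
  ).then((r) => r.json());
}

// Example usage with an illustrative URL:
scrapeYelpBusiness('https://www.yelp.com/biz/joes-pizza-new-york')
  .then((records) => console.log(records));
```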
Technical Notes
- **Polling Interval**: 30-second status checks for optimal API usage
- **Result Limiting**: Maximum 20 businesses per input, 5 multiple results
- **Data Format**: JSON with structured field mapping
- **Error Handling**: Comprehensive error tracking in all API requests
- **Retry Logic**: Automatic status rechecking until completion
- **Form Input**: Single URL field with validation
- **Storage Format**: Structured Google Sheets with predefined columns

Setup Instructions

Step 1: Import Workflow
- Copy the JSON workflow configuration
- Import into n8n: Workflows → Import from JSON
- Paste the configuration and save

Step 2: Configure Bright Data
- Set up credentials: navigate to Credentials → Add Bright Data API, enter your Bright Data API key, and test the connection
- Update API key references: replace BRIGHT_DATA_API_KEY in all HTTP request nodes and verify dataset access for gd_lgugwl0519h1p14rwk

Step 3: Configure Google Sheets
- Create the target spreadsheet: create a new Google Sheet named "Yelp Business Data" or similar, and copy the Sheet ID from the URL
- Set up OAuth2 credentials: add a Google Sheets OAuth2 credential in n8n and complete the authentication process
- Update workflow references: replace YOUR_GOOGLE_SHEET_ID with the actual Sheet ID and update YOUR_GOOGLE_SHEETS_CREDENTIAL_ID with the credential reference

Step 4: Test and Activate
- Test with a sample URL: use a known Yelp business URL, monitor execution progress, and verify data appears in the Google Sheet
- Activate the workflow: toggle the workflow to "Active" and share the form URL with users

Sample Business Data
The workflow captures comprehensive business information including:
- **Basic Information**: Name, category, location
- **Performance Metrics**: Ratings, review counts, popularity
- **Contact Details**: Phone, website, address
- **Visual Content**: Photos, videos, gallery URLs
- **Operational Data**: Hours, services, amenities
- **Customer Feedback**: Review summaries, sentiment indicators

Advanced Configuration

Batch Processing
Modify the input to accept multiple URLs:

```json
[
  {"url": "https://www.yelp.com/biz/business-1"},
  {"url": "https://www.yelp.com/biz/business-2"},
  {"url": "https://www.yelp.com/biz/business-3"}
]
```

Enhanced Data Fields
Add more extraction fields by updating the dataset configuration:
- Business hours and schedule
- Menu items and pricing
- Customer photos and reviews
- Special offers and promotions

Notification Integration
Add alert mechanisms:
- Email notifications for completed scrapes
- Slack messages for team updates
- Webhook triggers for external systems

Error Handling

Common Issues
- **Invalid URL**: Ensure the URL is a valid Yelp business page
- **Rate Limiting**: Bright Data API usage limits exceeded
- **Authentication**: Google Sheets or Bright Data credential issues
- **Data Format**: Unexpected response structure from Yelp

Troubleshooting Steps
1. Verify URLs: ensure Yelp business URLs are correctly formatted
2. Check credentials: validate all API keys and OAuth tokens
3. Monitor logs: review n8n execution logs for detailed errors
4. Test connectivity: verify network access to all external services

Performance Specifications
- **Processing Time**: 2-5 minutes per business URL
- **Data Accuracy**: 95%+ for publicly available business information
- **Success Rate**: 90%+ for valid Yelp business URLs
- **Concurrent Processing**: Depends on Bright Data plan limits
- **Storage Capacity**: Unlimited (Google Sheets based)

**For any questions or support, please contact:** info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Onur
Effortless Task Management: Create Todoist Tasks Directly from Telegram with AI

This n8n workflow empowers you to manage your tasks seamlessly by creating Todoist entries directly from Telegram, using the power of AI. Simply send a voice or text message to your Telegram bot, and this workflow will transform it into actionable tasks in your Todoist account.

Who is this for?
- **Busy professionals** who need a quick and easy way to capture tasks on the go.
- **Students** looking to streamline their assignments and project management.
- **Anyone** who wants to leverage AI for effortless task management.

What Problem Does it Solve?
This workflow eliminates the need to manually enter tasks into Todoist. It automates the process of capturing, organizing, and prioritizing tasks, saving you time and effort.

What are the Benefits?
- **Seamless Integration:** Connect your Telegram and Todoist accounts for a frictionless workflow.
- **AI-Powered Task Breakdown:** An LLM intelligently analyzes your messages and breaks them down into manageable sub-tasks.
- **Voice-to-Task:** Create tasks with voice messages for hands-free convenience.
- **Increased Productivity:** Capture and organize tasks quickly, keeping you focused and productive.
- **Accessibility:** Access your tasks from anywhere with Todoist's mobile app and browser extension.

How it Works
1. Send a message: Send a voice or text message describing your task to your Telegram bot.
2. AI analysis: The workflow uses an LLM (OpenAI Chat Model) to analyze your message and break it down into sub-tasks.
3. Task creation: The workflow creates tasks in your Todoist account based on the AI's analysis.
4. Notification: You receive a Telegram notification with a link to your newly created tasks in Todoist.

Nodes in the Workflow
- **Telegram Trigger:** Listens for incoming messages on Telegram.
- **Switch:** Routes messages based on their type (voice or text).
- **Telegram:** Fetches voice messages from Telegram.
- **OpenAI:** Transcribes voice messages to text using OpenAI's Whisper API.
- **Edit Fields:** Prepares the text for the LLM.
- **Basic LLM Chain:** Analyzes messages and generates sub-tasks using OpenAI's GPT model.
- **Structured Output Parser:** Extracts sub-tasks from the LLM's response.
- **Todoist:** Creates tasks in your Todoist account.
- **Telegram:** Sends a notification with a link to your Todoist tasks.

Requirements
- Active n8n instance
- Telegram account with a bot
- Todoist account
- OpenAI API key

Setup Information
1. Import the workflow JSON into your n8n instance.
2. Configure the Telegram Trigger node with your bot token.
3. Set up the OpenAI credentials with your API key.
4. Connect your Todoist account in the Todoist node.
5. Customize the LLM prompt (optional) to fine-tune task creation.

Additional Tips
- Explore Todoist's features to further organize and manage your tasks.
- Experiment with different LLM prompts to optimize task breakdown.
- Use n8n's features to automate other aspects of your workflow.

This workflow combines the convenience of Telegram with the power of AI and Todoist to provide a seamless task management experience. Start managing your tasks effortlessly today!
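As an illustration of the hand-off between the Basic LLM Chain and the Todoist node, the Structured Output Parser could be configured with a schema along these lines. This is a sketch only; the workflow ships with its own schema and prompt:

```json
{
  "type": "object",
  "properties": {
    "task": { "type": "string", "description": "The main task title" },
    "subtasks": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Actionable sub-tasks derived from the message"
    }
  },
  "required": ["task", "subtasks"]
}
```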
by Simone Smerilli
This workflow is especially suitable for founders and operators offering services to their clients and regularly scheduling sales or project update meetings.

How it works
When a booking is created, rescheduled, or canceled in Cal.com, this workflow syncs the meeting and contact data into Notion.

When a new booking is scheduled:
1. Creates a meeting in the dedicated Notion database. Here we can customize all the information to include on the meeting page (e.g., mapping the answers to custom questions).
2. Finds the Contact(s) in the dedicated Notion database (based on the email).
3. If the Contact(s) exist, it links the contact(s) to the newly created meeting. If the Contact(s) don't exist, it creates the contact(s) and links them to the newly created meeting.

When a booking is rescheduled:
1. The automation finds the event in Notion, based on the "cal id" property (a sketch of this lookup appears at the end of this section).
2. It updates the event date and time in Notion.

When a booking is cancelled:
1. The automation deletes the event in Notion (i.e., it archives the page, which remains available in the Trash for 30 days).

Requirements
- A Cal.com account and API key.
- A Notion account and connection with access to all the databases involved (Meetings, Contacts). Find all your connections, manage their access, or create a new connection on your Notion Integrations page.
- A Meetings and a Contacts database in Notion, both accessible by the Integration (see step 2 above). The database names don't matter; you will input your database IDs in the workflow. Find a Notion database ID in the URL between the slash characters.

Notion database column specifications

In the Meetings database, these are the required properties:
- Event time (date)
- cal id (text)
- Contacts (relation)
- Name

In the Contacts database, these are the required properties:
- Name
- Email
- Meetings (relation)

Read the essay and watch the video for a detailed walkthrough.
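For reference, the "find by cal id" lookup in the reschedule branch corresponds to a Notion database query, POST https://api.notion.com/v1/databases/<MEETINGS_DATABASE_ID>/query, with a filter like the following. The property name must match your Meetings database exactly, and the booking UID placeholder is illustrative:

```json
{
  "filter": {
    "property": "cal id",
    "rich_text": { "equals": "<cal booking uid>" }
  }
}
```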
by Jordan Lee
This flexible template scrapes business listings for any industry and location, perfect for sales teams, marketers, and researchers.

Good to know
- Works with any business category (restaurants, contractors, retailers, etc.)
- Fully customizable search parameters
- Results automatically organized in Google Sheets
- Built-in delay ensures scraping completes before data collection

How it works
1. Trigger: Manual or scheduled start
2. Apify Configuration: Sets scraping parameters (industry, location, data fields)
3. Scraping Execution: Runs the web scraping job
4. Data Processing: Cleans and structures the raw data
5. Storage: Saves results to your Google Sheets

What is Apify?
Apify is a web scraping tool. In this workflow the data is scraped with a Google Maps scraper: https://apify.com/compass/crawler-google-places

How to use

Apify Small # Lead Generation (Purple)
1. Open https://apify.com/compass/crawler-google-places
2. Add the location and industry to scrape (Apify)
3. Add the number of leads to output (Apify)
4. Copy the JSON file into n8n
5. Copy & paste the API endpoint "Get Run URL" into n8n

Apify Large # Lead Generation (Grey)

Configure the Manual Trigger
- The When clicking 'Execute workflow' node is ready to use as-is
- This triggers the entire lead generation process

Set up the "Start Results (Apify)" node

A. Get your Apify API information
- Go to Apify.com and create a free account
- Navigate to Settings → Integrations → API tokens
- Copy your API token
- Find the Google Maps scraper actor ID

B. Configure the HTTP Request (start results)
- Method: POST
- URL: Replace "enter apify (get run)" with: https://api.apify.com/v2/acts/nwua9Gu5YrADL7ZDj/runs?token=YOUR_API_TOKEN

C. Customize the JSON body parameters
In the JSON body, modify these key fields (see the example request body at the end of this section):
- "locationQuery": Change "Toronto" to your target city
- "searchStringsArray": Change ["barber"] to your business type. Examples: ["restaurants"], ["dentists"], ["contractors"]

Configure the HTTP Request (get results)
- Method: GET
- URL: Enter the "get dataset" URL from Apify

Split Out node
- Select the fields to append to the Google Sheet

Test the configuration
- Click Execute workflow to test
- Check that the Apify job starts successfully
- Note the job ID returned for the next section

This section initiates the scraping process and should complete in 30-60 seconds depending on your lead count.

Setup Google Sheets
1. Create a new Google Sheet with these columns:
   - title (business name)
   - address (full address)
   - state (state/province)
   - neighborhood (area/district)
   - phone (contact number)
   - emails (email addresses)
2. Copy your Google Sheets document ID for workflow configuration

Requirements
- Apify account
- Google Sheets document
- Google OAuth credentials

Customization Options

For different use cases:
- Lead Gen: Get business leads
- Local SEO: Collect competitor data
- Market Research: Analyze industry trends

Advanced modifications:
- Add email enrichment
- Integrate with CRM systems
- Set up automatic daily runs
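A minimal example request body for the "start results" POST, using the two fields called out in step C. The lead-count field name varies by actor version, so treat `maxCrawledPlacesPerSearch` as an assumption and confirm it against the actor's input schema on Apify:

```json
{
  "locationQuery": "Toronto",
  "searchStringsArray": ["barber"],
  "maxCrawledPlacesPerSearch": 50
}
```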
by Joseph
This n8n workflow automates SEO keyword research by querying the Ahrefs API for keyword data and related keyword insights. The enriched data is then processed by an AI agent to format a response and provide valuable SEO recommendations.

Perfect for SEO specialists, content marketers, digital agencies, and anyone looking to gain valuable insights into keyword opportunities to boost their rankings.

⚙️ How This Workflow Works
This workflow guides you through the entire SEO keyword research process, from entering the initial keyword to receiving detailed insights and related keyword suggestions.

1. 🗣️ User Input (Keyword Query)
The user enters a keyword they want to research. This input is captured by the Chat Input node, ready for analysis.

2. 🤖 AI Agent (Input Verification)
The AI agent reviews the keyword input for any grammatical errors or extra commentary. If necessary, it cleans the input to ensure a seamless query to the API.

3. 🔑 Ahrefs API (Keyword Data Retrieval)
The cleaned keyword is sent to the Ahrefs Keyword Tool API. This retrieves a detailed report including metrics like search volume, keyword difficulty, and CPC.

4. 💡 Related Keywords Extraction (Using a JavaScript Function)
The workflow uses a JavaScript function to extract the main keyword data and the data for 10 related keywords from the Ahrefs response (a sketch of this step appears at the end of this section). You can tweak the script to adjust the number of related keywords or the level of detail you want.

5. 🧠 AI Agent (Text Formatting)
The aggregated data, including both the main keyword and related keywords, is sent to an AI agent. The AI agent formats the data into a concise, readable format that can be shared with the user.

6. 📨 Final Response
The formatted text is delivered to the user with keyword insights, recommendations, and related keyword suggestions.

✅ Smart Retry & Error Handling
Each subworkflow includes a fail-safe mechanism to ensure:
- ✅ Proper error handling for any issues with the API request.
- 🕒 Failed API requests are retried after a customizable period (e.g., 2 hours or 1 day).
- 💬 User input validation prevents any incorrect or malformed queries from being processed.

📋 Ahrefs API Setup
To use this workflow, you'll need to set up your Ahrefs API credentials:

🔑 Ahrefs API
- Sign up for an account and get your key here: Ahrefs Keyword Tool API
- Once signed up, you'll receive an API key, which you'll use in the x-rapidapi-key header in n8n.
- Check the Ahrefs Keyword Tool API documentation for more details on available parameters.

📥 How to Import This Workflow
1. Copy the JSON code.
2. Open your n8n instance.
3. Open a new workflow.
4. Paste anywhere inside the workflow.
5. Voila.

🛠️ Customization Options
- Adjust the number of related keywords extracted (default is 10).
- Customize the AI agent response formatting or add specific recommendations for users.
- Modify the JavaScript function to extract different metrics from the Ahrefs API.

🧪 Use Case Example
Trying to optimize your blog post around a specific keyword?
1. Query a broad keyword, like "SEO tips".
2. Get related keyword data and search volume insights.
3. Use the AI agent to provide keyword recommendations and additional topics to target.

💥 Boost your content strategy with fresh keywords and relevant search data!
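As a rough sketch of the extraction step (step 4), assuming the API returns the main keyword's metrics plus a related-keywords array. The exact field names depend on the RapidAPI response and should be checked against a real payload:

```javascript
// n8n Code node sketch: pull the main keyword metrics and the first 10
// related keywords out of the Ahrefs API response. Field names here are
// assumptions; inspect an actual response and adjust the paths.
const data = $input.first().json;

const main = {
  keyword: data.keyword,
  volume: data.volume,
  difficulty: data.difficulty,
  cpc: data.cpc,
};

const related = (data.related_keywords ?? [])
  .slice(0, 10) // tweak this number to return more or fewer suggestions
  .map((k) => ({ keyword: k.keyword, volume: k.volume }));

return [{ json: { main, related } }];
```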
by L Hùng
Pre-Conditions
- A Facebook Developer account with an active app.
- Basic understanding of n8n workflows.
- Access to a database (optional, for storing tokens).

Setup
1. Webhook Activation: Configure the Webhook to receive user requests and process input data. Ensure the Webhook URL is correctly set in your Facebook App settings.
2. Short-Lived Token Retrieval: Use Facebook OAuth to fetch a short-lived token from the authorization code.
3. Long-Lived Token Conversion: Convert the short-lived token into a long-lived token (valid for ~60 days), as sketched below.
4. Page Token Retrieval: Follow the provided instructions to retrieve Page Tokens for posting on managed Facebook Pages.
5. Customizable Scopes: Edit the correctScopes array to include or exclude permissions as needed.
6. Optional Database Storage: Extend the workflow to save tokens to a database instead of displaying them on-screen.
7. Step-by-Step Instructions: Detailed guidance is provided via sticky notes for activating the app, configuring the Webhook, and editing parameters like fb_redirect_uri, app_id, and app_secret.

Who the Template is For
- **Developers**: Integrating Facebook APIs into their applications.
- **Social Media Managers**: Automating posting and engagement on Facebook Pages.
- **n8n Users**: Looking for a ready-to-use workflow for Facebook Token management.

Primary Use
- Automates Facebook Token retrieval and management.
- Supports posting to Facebook Pages via Page Tokens.
- Easily customizable and extendable for specific requirements.
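For reference, steps 3 and 4 boil down to two Graph API calls. A minimal sketch follows; substitute a current Graph API version and your own app credentials:

```javascript
// Sketch of the long-lived token exchange (step 3) and Page token
// retrieval (step 4) against the Facebook Graph API.
const GRAPH = 'https://graph.facebook.com/v19.0'; // use a current API version

async function getLongLivedToken(appId, appSecret, shortLivedToken) {
  const params = new URLSearchParams({
    grant_type: 'fb_exchange_token',
    client_id: appId,
    client_secret: appSecret,
    fb_exchange_token: shortLivedToken,
  });
  const res = await fetch(`${GRAPH}/oauth/access_token?${params}`);
  const body = await res.json();
  return body.access_token; // valid for roughly 60 days
}

async function getPageTokens(longLivedToken) {
  // Lists the Pages the user manages, each with its own Page access token.
  const res = await fetch(`${GRAPH}/me/accounts?access_token=${longLivedToken}`);
  const body = await res.json();
  return (body.data ?? []).map((p) => ({ page: p.name, token: p.access_token }));
}
```

In the workflow these calls map onto HTTP Request nodes; the long-lived token feeds the /me/accounts call, whose Page tokens are what the posting nodes ultimately use.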