by Airtop
**Use Case**

Automatically responding to X (formerly Twitter) posts can help you engage with potential customers at scale, saving time while maintaining a personal touch.

**What This Automation Does**

This automation replies to a specified X post using the following input parameters (see the sample payload at the end of this section):

- **airtop_profile**: The name of your Airtop Profile connected to X.
- **thread_url**: The URL of the X post to reply to.
- **reply_text**: The message you want to post as a reply.

**How It Works**

1. Creates a browser session using Airtop.
2. Navigates to the specified X post.
3. Types and submits the reply text.

**Setup Requirements**

- Airtop API Key — free to generate.
- An Airtop Profile connected to X (requires a one-time login).

**Next Steps**

- **Combine with X Monitoring**: Use this with the X monitoring automation to create a fully automated engagement pipeline.
- **Extend to Other Platforms**: Adapt the automation for use on LinkedIn, Reddit, or any web community.

Read more about this Airtop Automation.
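For reference, a filled-in set of inputs might look like this (all values are illustrative, not real accounts or posts):

```json
{
  "airtop_profile": "my-x-profile",
  "thread_url": "https://x.com/example_user/status/1234567890123456789",
  "reply_text": "Great point! We ran into the same issue and solved it with automation."
}
```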
by Yang
🧾 **What this workflow does**

This workflow takes a reference ad image and brand website, then uses GPT-4, LangChain, and Dumpling AI to generate 10 high-quality image variations for ad testing. These image variations are visually consistent but subtly different in background, mood, lighting, and tone — perfect for performance testing on platforms like Meta Ads or TikTok.

👤 **Who is this for**

- DTC marketers and brand designers testing ad creatives
- Creative teams automating visual experimentation
- Content agencies using AI for fast ad mockups
- Performance marketers running multivariate testing

⚙️ **How to set up**

✅ Requirements — you'll need the following tools set up in n8n:

- Google Drive (OAuth2 credential)
- Google Sheets (OAuth2 credential)
- OpenAI API (for GPT-4 or GPT-4o)
- Dumpling AI API (via HTTP header authentication)

🛠️ **Steps to configure**

1. **Google Sheet Setup**: Create a sheet with one column: `Image URL`. Update the Sheet ID and tab name in the final Google Sheets node.
2. **Drive Setup**: Create a folder in Google Drive for storing the reference image. Replace the `folderId` in the "Upload Ad Image to Google Drive" node.
3. **Dumpling AI API Key**: Use n8n's credential manager (HTTP Header Auth) — do not hardcode the key.
4. **OpenAI API Key**: Required for both image description and LangChain agent prompt generation.
5. **Form Inputs Required**: Brand Name, Brand Website, Ad Image (upload field).

🧠 **How it works**

1. A user submits the brand name, website, and a reference ad image through a form.
2. The image is uploaded to Google Drive.
3. GPT-4o describes the image's visual style (e.g., mood, lighting, composition).
4. GPT-4 analyzes the brand's website to define its visual aesthetic.
5. A LangChain agent uses both analyses to create 10 tightly scoped variation prompts (an illustrative prompt is sketched at the end of this description).
6. Dumpling AI generates a new image for each prompt using its "FLUX.1-pro" model.
7. Each new image's link is logged into Google Sheets.

🛠️ **How to customize**

- 🧪 Change prompt logic to experiment with different variations (e.g., theme, season).
- 🎨 Switch the image model in Dumpling AI to one that supports your desired style.
- 🔗 Log additional metadata (prompt, timestamp) to Google Sheets.
- 📤 Connect output images to Airtable, Notion, or a review tool like Figma.
- 🎯 Modify the GPT system message to reflect a different tone or brand strategy.

This workflow gives creative teams and marketers an instant, AI-powered ad image testing system — built on real brand visuals, not generic stock content.
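To make step 5 of "How it works" concrete, a single variation prompt produced by the agent might read something like the following (purely illustrative; the actual wording depends on your system message and the two brand analyses):

```text
Recreate the reference ad image for [Brand]: keep the same product
placement, composition, and typography, but shift the scene to
golden-hour outdoor lighting with a warm, earthy background palette
that matches the brand website's aesthetic. Logo position unchanged.
```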
by Humble Turtle
**Architecture Agent**

**Overview**

The Architect Agent listens to Slack messages and generates full data architecture blueprints in response. Powered by Claude 3.5 (Anthropic) for reasoning and design, and Tavily for real-time web search, this agent creates production-ready data pipeline scaffolds on demand — transforming natural language prompts into structured data engineering solutions.

**Capabilities**

- Understands and interprets user requests from Slack
- Designs end-to-end data pipeline architectures using industry best practices
- Outputs include high-level architecture diagrams

**Required Connections**

To operate correctly, the following integrations must be in place:

- **Slack API Token** with permission to read messages and post responses
- **Tavily API Key** for external search functionality
- **Claude 3.5 API Access** via Anthropic

Detailed configuration instructions are provided in the workflow.

**Setup time**: <15 minutes

**Example input**:

"Create a data pipeline orchestrated by Airflow, running on a Docker image. It should connect to a MySQL database, load the data into a PostgreSQL DB (incremental load) and then transform the data into business-oriented tables, also in the PostgreSQL database. Create an example setup with raw sales data."

**Customising this workflow**

Try saving outputs to Google Drive to store all your architecture blueprints.
by Incrementors
**Yelp Business Scraper by URL via Bright Data API with Google Sheets Storage**

**Overview**

This n8n workflow automates the process of scraping comprehensive business information from Yelp using individual business URLs. It integrates with Bright Data for professional web scraping and Google Sheets for centralized data storage, providing detailed business intelligence for market research, competitor analysis, and lead generation.

**Workflow Components**

1. **📥 Form Trigger**
   - **Type**: Form Trigger
   - **Purpose**: Initiates the workflow with a user-submitted Yelp business URL
   - **Input Fields**: URL (Yelp business page URL)
   - **Function**: Captures the target business URL to start the scraping process

2. **🔍 Trigger Bright Data Scrape**
   - **Type**: HTTP Request (POST)
   - **Purpose**: Sends a scraping request to the Bright Data API for Yelp business data
   - **Endpoint**: `https://api.brightdata.com/datasets/v3/trigger`
   - **Parameters**: Dataset ID: `gd_lgugwl0519h1p14rwk`; Include errors: true; Limit multiple results: 5; Limit per input: 20
   - **Function**: Initiates comprehensive business data extraction from Yelp (a request sketch appears after the Data Output Fields table)

3. **📡 Monitor Snapshot Status**
   - **Type**: HTTP Request (GET)
   - **Purpose**: Monitors the progress of the Yelp scraping job
   - **Endpoint**: `https://api.brightdata.com/datasets/v3/progress/{snapshot_id}`
   - **Function**: Checks if the business data scraping is complete

4. **⏳ Wait 30 Sec for Snapshot**
   - **Type**: Wait Node
   - **Purpose**: Implements an intelligent polling mechanism
   - **Duration**: 30 seconds
   - **Function**: Pauses the workflow before rechecking scraping status to optimize API usage

5. **🔁 Retry Until Ready**
   - **Type**: IF Condition
   - **Purpose**: Evaluates scraping completion status
   - **Condition**: `status === "ready"`
   - **Logic**: True → proceeds to data retrieval; False → loops back to status monitoring with wait

6. **📥 Fetch Scraped Business Data**
   - **Type**: HTTP Request (GET)
   - **Purpose**: Retrieves the final scraped business information
   - **Endpoint**: `https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}`
   - **Format**: JSON
   - **Function**: Downloads completed Yelp business data with comprehensive details

7. **📊 Store to Google Sheet**
   - **Type**: Google Sheets Node
   - **Purpose**: Stores scraped business data for analysis and storage
   - **Operation**: Append rows
   - **Target**: "Yelp scraper data by URL" sheet
   - **Data Mapping**: Business Name, Overall Rating, Reviews Count, Business URL, Images/Videos URLs, plus additional business metadata fields

**Workflow Flow**

```
Form Input → Trigger Scrape → Monitor Status → Wait 30s → Check Ready
                                   ↑                          ↓
                                   └─────────── Loop ─────────┘
                                                              ↓
                                               Fetch Data → Store to Sheet
```

**Configuration Requirements**

API Keys & Credentials:

- **Bright Data API Key**: Required for Yelp business scraping
- **Google Sheets OAuth2**: For data storage and export access
- **n8n Form Webhook**: For user input collection

Setup Parameters:

- **Google Sheet ID**: Target spreadsheet identifier
- **Dataset ID**: `gd_lgugwl0519h1p14rwk` (Yelp business scraper)
- **Form Webhook ID**: User input form identifier
- **Google Sheets Credential ID**: OAuth2 authentication

**Key Features**

- **Comprehensive Business Data Extraction**: Complete business profile information; customer ratings and review counts; contact details and business hours; photo and video content URLs; location and category information
- **Intelligent Status Monitoring**: Real-time scraping progress tracking; automatic retry mechanism with 30-second intervals; status validation before data retrieval; error handling and timeout management
- **Centralized Data Storage**: Automatic Google Sheets export; organized business data format; historical scraping records; easy sharing and collaboration
- **URL-Based Processing**: Direct Yelp business URL input; single-business deep-dive analysis; flexible input through a web form; real-time workflow triggering

**Use Cases**

- **Market Research**: Competitor business analysis; local market intelligence gathering; industry benchmark establishment; service offering comparison
- **Lead Generation**: Business contact information extraction; potential client identification; market opportunity assessment; sales prospect development
- **Business Intelligence**: Customer sentiment analysis through ratings; competitor performance monitoring; market positioning research; brand reputation tracking
- **Location Analysis**: Geographic business distribution; local competition assessment; market saturation evaluation; expansion opportunity identification

**Data Output Fields**

| Field | Description | Example |
|-------|-------------|---------|
| Name | Business name | "Joe's Pizza Restaurant" |
| Overall Rating | Average customer rating | "4.5" |
| Reviews Count | Total number of reviews | "247" |
| URL | Original Yelp business URL | "https://www.yelp.com/biz/joes-pizza..." |
| Images/Videos URLs | Media content links | "https://s3-media1.fl.yelpcdn.com/..." |
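As a rough sketch, the trigger call in component 2 amounts to a POST like the one below (the exact body shape and query parameters should be verified against Bright Data's dataset-trigger documentation; the Yelp URL comes from the form):

```http
POST https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lgugwl0519h1p14rwk&include_errors=true&limit_multiple_results=5&limit_per_input=20
Authorization: Bearer <BRIGHT_DATA_API_KEY>
Content-Type: application/json

[
  { "url": "https://www.yelp.com/biz/joes-pizza-new-york" }
]
```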
**Technical Notes**

- **Polling Interval**: 30-second status checks for optimal API usage
- **Result Limiting**: Maximum 20 businesses per input, 5 multiple results
- **Data Format**: JSON with structured field mapping
- **Error Handling**: Comprehensive error tracking in all API requests
- **Retry Logic**: Automatic status rechecking until completion
- **Form Input**: Single URL field with validation
- **Storage Format**: Structured Google Sheets with predefined columns

**Setup Instructions**

Step 1: Import Workflow

1. Copy the JSON workflow configuration
2. Import into n8n: Workflows → Import from JSON
3. Paste the configuration and save

Step 2: Configure Bright Data

1. Set up credentials: Navigate to Credentials → Add Bright Data API, enter your Bright Data API key, and test the connection
2. Update API key references: Replace `BRIGHT_DATA_API_KEY` in all HTTP request nodes and verify dataset access for `gd_lgugwl0519h1p14rwk`

Step 3: Configure Google Sheets

1. Create the target spreadsheet: Create a new Google Sheet named "Yelp Business Data" or similar, then copy the Sheet ID from the URL
2. Set up OAuth2 credentials: Add a Google Sheets OAuth2 credential in n8n and complete the authentication process
3. Update workflow references: Replace `YOUR_GOOGLE_SHEET_ID` with the actual Sheet ID and update `YOUR_GOOGLE_SHEETS_CREDENTIAL_ID` with the credential reference

Step 4: Test and Activate

1. Test with a sample URL: Use a known Yelp business URL, monitor execution progress, and verify data appears in the Google Sheet
2. Activate the workflow: Toggle the workflow to "Active" and share the form URL with users

**Sample Business Data**

The workflow captures comprehensive business information including:

- **Basic Information**: Name, category, location
- **Performance Metrics**: Ratings, review counts, popularity
- **Contact Details**: Phone, website, address
- **Visual Content**: Photos, videos, gallery URLs
- **Operational Data**: Hours, services, amenities
- **Customer Feedback**: Review summaries, sentiment indicators

**Advanced Configuration**

**Batch Processing**: Modify the input to accept multiple URLs:

```json
[
  {"url": "https://www.yelp.com/biz/business-1"},
  {"url": "https://www.yelp.com/biz/business-2"},
  {"url": "https://www.yelp.com/biz/business-3"}
]
```

**Enhanced Data Fields**: Add more extraction fields by updating the dataset configuration: business hours and schedule; menu items and pricing; customer photos and reviews; special offers and promotions.

**Notification Integration**: Add alert mechanisms: email notifications for completed scrapes; Slack messages for team updates; webhook triggers for external systems.

**Error Handling**

Common issues:

- **Invalid URL**: Ensure the URL is a valid Yelp business page
- **Rate Limiting**: Bright Data API usage limits exceeded
- **Authentication**: Google Sheets or Bright Data credential issues
- **Data Format**: Unexpected response structure from Yelp

Troubleshooting steps:

1. **Verify URLs**: Ensure Yelp business URLs are correctly formatted
2. **Check Credentials**: Validate all API keys and OAuth tokens
3. **Monitor Logs**: Review n8n execution logs for detailed errors
4. **Test Connectivity**: Verify network access to all external services

**Performance Specifications**

- **Processing Time**: 2–5 minutes per business URL
- **Data Accuracy**: 95%+ for publicly available business information
- **Success Rate**: 90%+ for valid Yelp business URLs
- **Concurrent Processing**: Depends on Bright Data plan limits
- **Storage Capacity**: Unlimited (Google Sheets based)

**For any questions or support, please contact:** info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Onur
**Effortless Task Management: Create Todoist Tasks Directly from Telegram with AI**

This n8n workflow empowers you to seamlessly manage your tasks by creating Todoist entries directly from Telegram, using the power of AI. Simply send a voice or text message to your Telegram bot, and this workflow will transform it into actionable tasks in your Todoist account.

**Who is this for?**

- **Busy professionals** who need a quick and easy way to capture tasks on the go.
- **Students** looking to streamline their assignments and project management.
- **Anyone** who wants to leverage AI for effortless task management.

**What Problem Does it Solve?**

This workflow eliminates the need to manually enter tasks into Todoist. It automates the process of capturing, organizing, and prioritizing tasks, saving you time and effort.

**What are the Benefits?**

- **Seamless Integration**: Connect your Telegram and Todoist accounts for a frictionless workflow.
- **AI-Powered Task Breakdown**: An LLM intelligently analyzes your messages and breaks them down into manageable sub-tasks.
- **Voice-to-Task**: Create tasks with voice messages for hands-free convenience.
- **Increased Productivity**: Capture and organize tasks quickly, keeping you focused and productive.
- **Accessibility**: Access your tasks from anywhere with Todoist's mobile app and browser extension.

**How it Works**

1. **Send a message**: Send a voice or text message describing your task to your Telegram bot.
2. **AI analysis**: The workflow uses an LLM (OpenAI Chat Model) to analyze your message and break it down into sub-tasks (a sketch of the parsed output appears at the end of this description).
3. **Task creation**: The workflow creates tasks in your Todoist account based on the AI's analysis.
4. **Notification**: You receive a Telegram notification with a link to your newly created tasks in Todoist.

**Nodes in the Workflow**

- **Telegram Trigger**: Listens for incoming messages on Telegram.
- **Switch**: Routes messages based on their type (voice or text).
- **Telegram**: Fetches voice messages from Telegram.
- **OpenAI**: Transcribes voice messages to text using OpenAI's Whisper API.
- **Edit Fields**: Prepares the text for the LLM.
- **Basic LLM Chain**: Analyzes messages and generates sub-tasks using OpenAI's GPT model.
- **Structured Output Parser**: Extracts sub-tasks from the LLM's response.
- **Todoist**: Creates tasks in your Todoist account.
- **Telegram**: Sends a notification with a link to your Todoist tasks.

**Requirements**

- Active n8n instance.
- Telegram account with a bot.
- Todoist account.
- OpenAI API key.

**Setup Information**

1. Import the workflow JSON into your n8n instance.
2. Configure the Telegram Trigger node with your bot token.
3. Set up the OpenAI credentials with your API key.
4. Connect your Todoist account in the Todoist node.
5. Customize the LLM prompt (optional) to fine-tune task creation.

**Additional Tips**

- Explore Todoist's features to further organize and manage your tasks.
- Experiment with different LLM prompts to optimize task breakdown.
- Use n8n's features to automate other aspects of your workflow.

This workflow combines the convenience of Telegram with the power of AI and Todoist to provide a seamless task management experience. Start managing your tasks effortlessly today!
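To illustrate the hand-off between the Basic LLM Chain and the Structured Output Parser, the parsed result might look roughly like this (the exact schema is an assumption; it depends on the parser definition in your copy of the workflow):

```json
{
  "subtasks": [
    { "content": "Book venue for Friday team meeting" },
    { "content": "Send agenda to attendees" },
    { "content": "Prepare slides for quarterly review" }
  ]
}
```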
by Simone Smerilli
This workflow is especially suitable for founders and operators offering services to their clients and regularly scheduling sales or project update meetings.

**How it works**

When a booking is created, rescheduled, or canceled in cal.com, this workflow syncs the meeting and contact data into Notion.

When a new booking is scheduled:

1. Creates a meeting in the dedicated Notion database. Here we can customize all the information to include on the meeting page (e.g., mapping the answers to custom questions).
2. Finds the Contact(s) in the dedicated Notion database (based on the email).
3. If the Contact(s) exist, it links them to the newly created meeting.
4. If the Contact(s) don't exist, it creates the contact(s) and links them to the newly created meeting.

When a booking is rescheduled:

1. The automation finds the event in Notion (based on the "cal id" property).
2. It updates the event date and time in Notion.

When a booking is canceled:

1. The automation deletes the event in Notion (i.e., it archives the page, which remains available in the Trash for 30 days).

**Requirements**

- A Cal.com account and API key.
- A Notion account and a connection with access to all the databases involved (Meetings, Contacts). Find all your connections, manage their access, or create a new connection on your Notion Integrations page.
- A Meetings and a Contacts database in Notion, both accessible by the integration (see step 2 above). The database names don't matter. You will input your database IDs in the workflow. Find a Notion database ID in the URL between the slash characters (see the example at the end of this section).

**Notion database column specifications**

In the Meetings database, these are the required properties:

- Event time (date)
- cal id (text)
- Contacts (relation)
- Name

In the Contacts database, these are the required properties:

- Name
- Email
- Meetings (relation)

Read the essay and watch the video for a detailed walkthrough.
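As an illustration of where to find it, in a database URL like the one below the 32-character string after the last slash is the database ID (the workspace name and ID shown are placeholders):

```text
https://www.notion.so/your-workspace/1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d?v=0123456789abcdef0123456789abcdef
```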
by Julian Kaiser
🗂️ **Bulk File Upload to Google Drive with Folder Management**

**How it works**

1. User submits files and a target folder name via form
2. Workflow checks if the folder exists in Drive
3. Creates the folder if needed or uses the existing one
4. Processes and uploads all files, maintaining structure

**Set up steps (est. 10–15 mins)**

1. Set up Google Drive credentials in n8n
2. Replace the parent folder ID in the search query with your Drive folder ID (see the sample query at the end of this section)
3. Configure the form node with: a multiple file upload field and a folder name text field
4. Test the workflow with sample files

💡 Detailed configuration steps and patterns are documented in sticky notes within the workflow.

**Perfect for:**

- Bulk file organization
- Automated Drive folder management
- File upload automation
- Maintaining consistent file structures
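For orientation, the folder-existence check typically uses a Drive search query along these lines (a sketch in Google Drive's query syntax; swap in your own parent folder ID, and the `{{ $json.folderName }}` expression assumes the form field is named `folderName`):

```text
'YOUR_PARENT_FOLDER_ID' in parents
  and name = '{{ $json.folderName }}'
  and mimeType = 'application/vnd.google-apps.folder'
  and trashed = false
```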
by Nicolas
**What is it**

This workflow builds a simple bot that sends a message to a Telegram channel every time there is a new saved item in the Reader. It can easily be modified to send the notification another way, thanks to existing n8n nodes.

Warning: This is only for folks who already have access to the Reader; it won't work if you don't.

Also, this workflow uses a file to store the last update time, so it doesn't re-sync everything on every run.

**Setup**

The config node contains:

- The Telegram channel ID
- The file used as storage

To get the header auth, you have to:

1. Go to the Reader.
2. Open the devtools: Option + ⌘ + J (on macOS), or Shift + CTRL + J (on Windows/Linux).
3. Go to Network and find a `profile_details/` request, then click on it.
4. Go to Request Headers.
5. Copy the value for "Cookie".
6. In n8n, set the name of the Header Auth account to `Cookie` and the value to the one you copied before.
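For reference, the config node might hold values shaped like this (both values are placeholders; use your own channel ID and storage path):

```json
{
  "telegramChannelId": "-1001234567890",
  "lastUpdateFile": "/home/node/.n8n/reader-last-update.json"
}
```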
by Jordan Lee
This flexible template scrapes business listings for any industry and location, perfect for sales teams, marketers, and researchers.

**Good to know**

- Works with any business category (restaurants, contractors, retailers, etc.)
- Fully customizable search parameters
- Results automatically organized in Google Sheets
- Built-in delay ensures scraping completes before data collection

**How it works**

1. **Trigger**: Manual or scheduled start
2. **Apify Configuration**: Sets scraping parameters (industry, location, data fields)
3. **Scraping Execution**: Runs the web scraping job
4. **Data Processing**: Cleans and structures the raw data
5. **Storage**: Saves results to your Google Sheets

**What is Apify?**

Apify is a web scraping tool. In this workflow, the data is scraped with a Google Maps scraper: https://apify.com/compass/crawler-google-places

**How to use**

Apify Small # Lead Generation (Purple):

1. Open https://apify.com/compass/crawler-google-places
2. Add the location and industry to scrape (Apify)
3. Add the number of leads to output (Apify)
4. Copy over the JSON file into n8n
5. Copy & paste the API endpoint "Get Run URL" into n8n

Apify Large # Lead Generation (Grey):

1. **Configure the Manual Trigger**: The "When clicking 'Execute workflow'" node is ready to use as-is. This triggers the entire lead generation process.
2. **Set up the "Start Results (Apify)" node**: Get your Apify API information. Go to Apify.com and create a free account, navigate to Settings → Integrations → API tokens, copy your API token, and find the Google Maps scraper actor ID.
3. **Configure the HTTP Request (start results)**: Method: POST. URL: replace "enter apify (get run)" with `https://api.apify.com/v2/acts/nwua9Gu5YrADL7ZDj/runs?token=YOUR_API_TOKEN`
4. **Customize the JSON body parameters** (a sample body is sketched after this section): `locationQuery` — change "Toronto" to your target city; `searchStringsArray` — change `["barber"]` to your business type, e.g. `["restaurants"]`, `["dentists"]`, `["contractors"]`.
5. **Configure the HTTP Request (get results)**: Method: GET. URL: enter the "get dataset" URL from Apify.
6. **Split Out node**: Select the fields to append to the Google Sheet.
7. **Test the configuration**: Click "Execute workflow" to test, check that the Apify job starts successfully, and note the job ID returned for the next section.

This section initiates the scraping process and should complete in 30–60 seconds depending on your lead count.

**Setup Google Sheets**

1. Create a new Google Sheet with these columns: title (business name), address (full address), state (state/province), neighborhood (area/district), phone (contact number), emails (email addresses)
2. Copy your Google Sheets document ID for workflow configuration

**Requirements**

- Apify account
- Google Sheets document
- Google OAuth credentials

**Customization Options**

For different use cases:

- **Lead Gen**: Get business leads
- **Local SEO**: Collect competitor data
- **Market Research**: Analyze industry trends

Advanced modifications:

- Add email enrichment
- Integrate with CRM systems
- Set up automatic daily runs
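As referenced in the setup steps above, a minimal request body for the Google Maps scraper might look like this (`locationQuery` and `searchStringsArray` appear in the template; `maxCrawledPlacesPerSearch` is an assumed field name for the lead count, so verify it against the actor's input schema on Apify):

```json
{
  "locationQuery": "Toronto",
  "searchStringsArray": ["barber"],
  "maxCrawledPlacesPerSearch": 50
}
```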
by Joseph
This n8n workflow automates SEO keyword research by querying the Ahrefs API for keyword data and related keyword insights. The enriched data is then processed by an AI agent to format a response and provide valuable SEO recommendations.

Perfect for SEO specialists, content marketers, digital agencies, and anyone looking to gain valuable insights into keyword opportunities to boost their rankings.

⚙️ **How This Workflow Works**

This workflow guides you through the entire SEO keyword research process, from entering the initial keyword to receiving detailed insights and related keyword suggestions.

1. 🗣️ **User Input (Keyword Query)**: The user enters a keyword they want to research. This input is captured by the Chat Input node, ready for analysis.
2. 🤖 **AI Agent (Input Verification)**: The AI agent reviews the keyword input for any grammatical errors or extra commentary. If necessary, it cleans the input to ensure a seamless query to the API.
3. 🔑 **Ahrefs API (Keyword Data Retrieval)**: The cleaned keyword is sent to the Ahrefs Keyword Tool API. This retrieves a detailed report including metrics like search volume, keyword difficulty, and CPC.
4. 💡 **Related Keywords Extraction (Using a JavaScript Function)**: The workflow uses a JavaScript function to extract the main keyword data and data for 10 related keywords from the Ahrefs response (a sketch of this function appears at the end of this description). You can tweak the script to adjust the number of related keywords or the level of detail you want.
5. 🧠 **AI Agent (Text Formatting)**: The aggregated data, including both the main keyword and related keywords, is sent to an AI agent. The AI agent formats the data into a concise, readable format that can be shared with the user.
6. 📨 **Final Response**: The formatted text is delivered to the user with keyword insights, recommendations, and related keyword suggestions.

✅ **Smart Retry & Error Handling**

Each subworkflow includes a fail-safe mechanism to ensure:

- ✅ Proper error handling for any issues with the API request.
- 🕒 Failed API requests are retried after a customizable period (e.g., 2 hours or 1 day).
- 💬 User input validation prevents any incorrect or malformed queries from being processed.

📋 **Ahrefs API Setup**

To use this workflow, you'll need to set up your Ahrefs API credentials:

- 🔑 **Ahrefs API**: Sign up for an account and get your key here: Ahrefs Keyword Tool API. Once signed up, you'll receive an API key, which you'll use in the `x-rapidapi-key` header in n8n. Check the Ahrefs Keyword Tool API documentation for more details on available parameters.

📥 **How to Import This Workflow**

1. Copy the JSON code.
2. Open your n8n instance.
3. Open a new workflow.
4. Paste anywhere inside the workflow.
5. Voila.

🛠️ **Customization Options**

- Adjust the number of related keywords extracted (default is 10).
- Customize the AI agent response formatting or add specific recommendations for users.
- Modify the JavaScript function to extract different metrics from the Ahrefs API.

🧪 **Use Case Example**

Trying to optimize your blog post around a specific keyword?

1. Query a broad keyword, like "SEO tips".
2. Get related keyword data and search volume insights.
3. Use the AI agent to provide keyword recommendations and additional topics to target.

💥 Boost your content strategy with fresh keywords and relevant search data!
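For orientation, the extraction function from step 4 might look roughly like this in an n8n Code node (the field names are assumptions; match them against the actual shape of the Ahrefs Keyword Tool API response):

```javascript
// Sketch of the related-keywords extraction (run-once-for-all-items mode).
const data = $input.first().json;

// Main keyword metrics.
const main = {
  keyword: data.keyword,
  volume: data.volume,
  difficulty: data.difficulty,
  cpc: data.cpc,
};

// Keep the first 10 related keywords; change the slice to adjust the count.
const related = (data.related_keywords || []).slice(0, 10).map((k) => ({
  keyword: k.keyword,
  volume: k.volume,
  difficulty: k.difficulty,
}));

return [{ json: { main, related } }];
```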
by L Hùng
**Pre-Conditions**

- A Facebook Developer account with an active app.
- Basic understanding of n8n workflows.
- Access to a database (optional, for storing tokens).

**Setup**

1. **Webhook Activation**: Configure the Webhook to receive user requests and process input data. Ensure the Webhook URL is correctly set in your Facebook App settings.
2. **Short-Lived Token Retrieval**: Use Facebook OAuth to fetch a short-lived token from the authorization code.
3. **Long-Lived Token Conversion**: Convert the short-lived token into a long-lived token (valid for ~60 days); a sketch of this exchange appears at the end of this section.
4. **Page Token Retrieval**: Follow the provided instructions to retrieve Page Tokens for posting on managed Facebook Pages.
5. **Customizable Scopes**: Edit the `correctScopes` array to include or exclude permissions as needed.
6. **Optional Database Storage**: Extend the workflow to save tokens to a database instead of displaying them on-screen.
7. **Step-by-Step Instructions**: Detailed guidance is provided via sticky notes for activating the app, configuring the Webhook, and editing parameters like `fb_redirect_uri`, `app_id`, and `app_secret`.

**Who the Template is For**

- **Developers** integrating Facebook APIs into their applications.
- **Social Media Managers** automating posting and engagement on Facebook Pages.
- **n8n Users** looking for a ready-to-use workflow for Facebook Token management.

**Primary Use**

- Automates Facebook Token retrieval and management.
- Supports posting to Facebook Pages via Page Tokens.
- Easily customizable and extendable for specific requirements.
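For reference, the long-lived token conversion in setup step 3 maps onto Facebook's standard token-exchange endpoint, roughly as below (the Graph API version is illustrative; use whichever version your app targets):

```http
GET https://graph.facebook.com/v19.0/oauth/access_token
    ?grant_type=fb_exchange_token
    &client_id={app_id}
    &client_secret={app_secret}
    &fb_exchange_token={short_lived_token}
```

The response contains the long-lived `access_token` along with its `expires_in` value.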
by Jimleuk
This n8n template imports an XLSX containing term dates for a university, extracts the relevant events using AI, and converts the events to an ICS file which can be imported into iCal, Google Calendar, or Outlook.

Manually adding important term dates to your calendar by hand? Stop! Automate it with this simple AI/LLM-powered document understanding and extraction template. This cool use-case can be applied to many scenarios where Excel files are predominantly used.

**How it works**

1. The term dates Excel file (XLSX) is imported into the workflow from the university's website using the HTTP Request node.
2. To parse the Excel file, we use an external service, Cloudflare's Markdown Conversion Service. This converts the Excel's sheets into markdown tables which our LLM can read.
3. To extract the events and their dates from the markdown, we use the Information Extractor node for structured output. LLMs are great for this use-case because they can understand the layout; one row may hold many data points.
4. With our data, there are endless possibilities! For this demonstration, we'll generate an ICS file so that we can import the extracted events into our calendar. We use the Python Code node to combine the events into the ICS spec (a sketch appears at the end of this description) and the "Convert to File" node to create the ICS binary.
5. Finally, let's distribute the ICS file by email to other students or instructors who may also find this incredibly helpful for the upcoming semester!

**How to use**

- Ensure you're downloading the correct Excel file and amend the URL parameter of the "Get Term Dates Excel" node as necessary.
- Update the Gmail node with your email or other emails as required. Alternatively, send the ICS file to Google Drive or a student portal.

**Requirements**

- Cloudflare account, required to use the Markdown Conversion Service.
- Gemini for LLM document understanding and extraction.
- Gmail for email sending.

**Customising the workflow**

This template should work for other Excel files, of which, for a university, there are many. Some will be more complicated than others, so experiment with different parsers, extraction tools, and strategies.
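To give a feel for step 4, a minimal version of the Python Code node might look like this (field names such as `summary` and `date` are assumptions; match them to your Information Extractor's output schema):

```python
# Minimal sketch: fold the extracted events into an ICS (RFC 5545) payload.
from datetime import datetime

lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//n8n//Term Dates//EN"]

for i, item in enumerate(_input.all()):
    event = item.json  # one extracted event per item
    day = datetime.fromisoformat(event["date"]).strftime("%Y%m%d")
    lines += [
        "BEGIN:VEVENT",
        f"UID:term-date-{i}@example.local",
        f"DTSTART;VALUE=DATE:{day}",
        f"SUMMARY:{event['summary']}",
        "END:VEVENT",
    ]

lines.append("END:VCALENDAR")

# Hand a single text field to the "Convert to File" node downstream.
return [{"json": {"ics": "\r\n".join(lines)}}]
```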