by Amjid Ali
# Automate Digital Delivery After PayPal Purchase Using n8n

A complete step-by-step guide to seamless template delivery. Built by Amjid Ali (SyncBricks). Deliver personalized files instantly after PayPal transactions using n8n, without writing a single backend line.

## What This n8n Workflow Does

This automation template automatically delivers a digital product (such as an n8n template or JSON file) to customers who pay via PayPal, within seconds. You can:

- Automatically extract customer info
- Identify what was purchased
- Send a clean, branded email with the product file
- Promote your other courses, books, and tools

## Use Case Example

**Product**: AI-Powered Social Media Content Generator & Publisher

When a customer buys this product through PayPal, this automation:

1. Listens for a successful payment event
2. Fetches order details via API
3. Sends an HTML email with the template attached
4. Promotes your other offerings with embedded links

## Prerequisites

You'll need:

- An n8n instance (self-hosted or n8n Cloud)
- A PayPal developer account
- PayPal OAuth2 credentials configured in n8n
- Your product hosted as a downloadable .json file (Oracle, Dropbox, GitHub, etc.)
- SMTP email credentials in n8n

## Step-by-Step Setup

### 1. Webhook Trigger
Node: Webhook. Listens for a POST request from PayPal's webhook for PAYMENT.CAPTURE.COMPLETED events. Add the webhook to your PayPal Developer App > Webhooks.

### 2. Wait
Node: Wait. Adds a brief delay to ensure the payment is fully processed before continuing.

### 3. Filter Event Type
Node: Switch. Continues only when the event is PAYMENT.CAPTURE.COMPLETED.

### 4. Fetch Order Details
Node: HTTP Request. Retrieves the order information from PayPal's Orders API. URL format: https://api.paypal.com/v2/checkout/orders/{{ order_id }}

### 5. Extract Email & Product Info
Node: Set. Extracts first name, last name, email address, and the purchased item name.

### 6. Identify Product Purchased
Node: Switch. Checks whether the product is "AI-Powered Social Media Content Generator & Publisher".

### 7. Download Workflow File
Node: HTTP Request. Fetches the hosted workflow JSON from object storage (Oracle in this case).

### 8. Convert to Downloadable File
Node: Code. Converts the JSON content into a binary file and attaches it. A sketch of this node appears at the end of this guide.

### 9. Send Custom Email
Node: Send Email. Sends a rich HTML email to the buyer with their name, the file attachment, the product name, and helpful resource links:

- Mastering n8n Course on Udemy
- Step-by-Step Guide (n8n Book)
- n8n Video Tutorials (Free Course)
- Sign up for n8n Cloud (use code AMJID10)
- YouTube Video Walkthrough

## Additional Learning Resources

Explore more and master n8n with these resources from my full automation suite:

- Mastering n8n (Full Udemy Course)
- Get Your Step-by-Step Guide (n8n Book)
- Get Step-by-Step Tutorials (Video Course)
- Sign up for n8n Cloud
- Templates, Tools, and More
- YouTube Channel: SyncBricks

## Need Help or Customization?

Reach out!
Email: amjid@amjidali.com
LinkedIn: linkedin.com/in/amjidali
Website: syncbricks.com
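The Code node in step 8 is the only scripted step. Here is a minimal sketch of what it could look like, assuming the template fetched in step 7 arrives as JSON on the incoming item; the file name and binary property name are illustrative, not taken from the original workflow:

```javascript
// n8n Code node ("Run Once for All Items"):
// wrap the fetched JSON template into a binary attachment.
const template = $input.first().json;               // JSON downloaded in step 7
const content = JSON.stringify(template, null, 2);  // pretty-print for the buyer

return [{
  json: { fileName: 'social-media-generator.json' },
  binary: {
    data: {                                                   // property the email node reads
      data: Buffer.from(content, 'utf-8').toString('base64'), // n8n stores binary as base64
      mimeType: 'application/json',
      fileName: 'social-media-generator.json',
    },
  },
}];
```

In the Send Email node, point the attachments field at the `data` binary property so the file rides along with the HTML body.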
by Vijayeta Sinel
Automate document translation and verify translation accuracy using Straker Verify, Google Drive, and Slack.

## How it works

A workflow step is set up to "watch" a Google Drive folder. When your team members place new files in this folder, they are downloaded, and Straker Verify translates them and provides a quality score. Once Straker Verify has completed this, the job info is fetched, the translation is saved to an output folder, and you are notified via Slack.

## What problem does this solve?

When using AI to translate documents, you have no idea about the quality and accuracy of the output. This template answers the question "How good is my translation?" so you have a high level of confidence before you publish.

## Who is this for?

This workflow template is designed for businesses needing translation and localization of documents such as text docs, presentations, web pages, transcripts, video subtitles, and others. Use it to build workflows that localize your content at scale while maintaining translation quality, accuracy, and compliance.

## Set up instructions

### Connect to Straker Verify: obtain your API key

1. **Sign up / log in**: Visit https://verify.straker.ai/ to create an account or log in.
2. **Navigate to API keys**: Go to Verify → Settings → API Keys.
3. **Copy your key**: Find and copy your API key.
4. **Add the key to n8n**: In n8n, go to Settings → Credentials → Straker Verify.

### Set up credentials for the Straker Verify node

1. Open n8n and, from the left sidebar, click on "Credentials".
2. Search for "Straker Verify" and select "Straker Verify API" from the dropdown.
3. Paste the copied access token from the Verify app and save it.

### Assign credentials to the node

1. Go to your workflow and open the Straker Verify node.
2. Select the "Straker Verify API" credentials you just created from the dropdown (Credentials to connect with), and save.

You are now ready to use the workflow.

## Workflow steps

### Step 1: Initiate workflow (upload files to Google Drive)

Upload one or more files to the designated Google Drive folder. This triggers the "New File in Google Drive" node in n8n.

### Step 2: Verify token balance

The workflow checks your token balance via the Get Current User Balance and User Has Enough Tokens nodes (a code sketch of this gate appears at the end of this guide). Not enough tokens? You will receive a Slack message: "Not enough tokens. Please top up and re-upload your files."

### Step 3: Select workflow

The system fetches available workflows via Fetch All Users Workflows. No matching workflow found? You'll be notified via Slack: "No workflow found".

### Step 4: Create project in Straker Verify

The following steps are handled automatically:

- Fetch language/project options
- Download files from Google Drive
- Create a new project using Create Straker Project

You'll receive a Slack confirmation:

> "Project created - ID: xxxxxx"

### Step 5: Process completion & file return

Once files are processed by Straker, the Incoming Translation Result webhook is triggered. The workflow then:

- Downloads processed files: Get File Content from Straker
- Uploads them to Google Drive: Upload File to Google Drive

### Step 6: Confirmation (workflow complete)

You'll receive a final Slack message:

> "Workflow complete - files are now available in Google Drive"
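As a rough illustration of the Step 2 balance gate, a Code node (or an IF node with equivalent expressions) could compare the fetched balance against the size of the upload. The field name `balance` and the one-token-per-file assumption are placeholders for this sketch; check the actual Straker Verify response in your execution log:

```javascript
// Sketch of the "User Has Enough Tokens" check (n8n Code node, all-items mode).
// Assumes the balance node returns { balance: <number> } and that each uploaded
// file costs at least one token - adjust both to the real API response shape.
const balance = $('Get Current User Balance').first().json.balance ?? 0;
const fileCount = $input.all().length;

if (balance < fileCount) {
  // Route this item to the Slack "Not enough tokens" notification branch.
  return [{ json: { ok: false, reason: 'Not enough tokens. Please top up and re-upload your files.' } }];
}
return [{ json: { ok: true, balance, fileCount } }];
```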
by Incrementors
# Yelp Business Scraper by URL via Bright Data API with Google Sheets Storage

## Overview

This n8n workflow automates the scraping of comprehensive business information from Yelp using individual business URLs. It integrates Bright Data for professional web scraping and Google Sheets for centralized data storage, providing detailed business intelligence for market research, competitor analysis, and lead generation.

## Workflow Components

### 1. Form Trigger
- **Type**: Form Trigger
- **Purpose**: Initiates the workflow with a user-submitted Yelp business URL
- **Input fields**: URL (Yelp business page URL)
- **Function**: Captures the target business URL to start the scraping process

### 2. Trigger Bright Data Scrape
- **Type**: HTTP Request (POST)
- **Purpose**: Sends the scraping request to the Bright Data API for Yelp business data (a request sketch appears at the end of this guide)
- **Endpoint**: https://api.brightdata.com/datasets/v3/trigger
- **Parameters**:
  - Dataset ID: gd_lgugwl0519h1p14rwk
  - Include errors: true
  - Limit multiple results: 5
  - Limit per input: 20
- **Function**: Initiates comprehensive business data extraction from Yelp

### 3. Monitor Snapshot Status
- **Type**: HTTP Request (GET)
- **Purpose**: Monitors the progress of the Yelp scraping job
- **Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
- **Function**: Checks whether the business data scraping is complete

### 4. Wait 30 Sec for Snapshot
- **Type**: Wait Node
- **Purpose**: Implements the polling mechanism
- **Duration**: 30 seconds
- **Function**: Pauses the workflow before rechecking scraping status to optimize API usage

### 5. Retry Until Ready
- **Type**: IF Condition
- **Purpose**: Evaluates scraping completion status
- **Condition**: status === "ready"
- **Logic**: true proceeds to data retrieval; false loops back to status monitoring with a wait

### 6. Fetch Scraped Business Data
- **Type**: HTTP Request (GET)
- **Purpose**: Retrieves the final scraped business information
- **Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
- **Format**: JSON
- **Function**: Downloads the completed Yelp business data with comprehensive details

### 7. Store to Google Sheet
- **Type**: Google Sheets Node
- **Purpose**: Stores scraped business data for analysis and storage
- **Operation**: Append rows
- **Target**: "Yelp scraper data by URL" sheet
- **Data mapping**: Business Name, Overall Rating, Reviews Count, Business URL, Images/Videos URLs, plus additional business metadata fields

## Workflow Flow

Form Input → Trigger Scrape → Monitor Status → Wait 30s → Check Ready → (if not ready, loop back to Monitor Status) → Fetch Data → Store to Sheet

## Configuration Requirements

### API Keys & Credentials
- **Bright Data API key**: Required for Yelp business scraping
- **Google Sheets OAuth2**: For data storage and export access
- **n8n form webhook**: For user input collection

### Setup Parameters
- **Google Sheet ID**: Target spreadsheet identifier
- **Dataset ID**: gd_lgugwl0519h1p14rwk (Yelp business scraper)
- **Form webhook ID**: User input form identifier
- **Google Sheets credential ID**: OAuth2 authentication

## Key Features

### Comprehensive Business Data Extraction
- Complete business profile information
- Customer ratings and review counts
- Contact details and business hours
- Photo and video content URLs
- Location and category information

### Intelligent Status Monitoring
- Real-time scraping progress tracking
- Automatic retries at 30-second intervals
- Status validation before data retrieval
- Error handling and timeout management

### Centralized Data Storage
- Automatic Google Sheets export
- Organized business data format
- Historical scraping records
- Easy sharing and collaboration

### URL-Based Processing
- Direct Yelp business URL input
- Single-business deep-dive analysis
- Flexible input through a web form
- Real-time workflow triggering

## Use Cases

### Market Research
- Competitor business analysis
- Local market intelligence gathering
- Industry benchmark establishment
- Service offering comparison

### Lead Generation
- Business contact information extraction
- Potential client identification
- Market opportunity assessment
- Sales prospect development

### Business Intelligence
- Customer sentiment analysis through ratings
- Competitor performance monitoring
- Market positioning research
- Brand reputation tracking

### Location Analysis
- Geographic business distribution
- Local competition assessment
- Market saturation evaluation
- Expansion opportunity identification

## Data Output Fields

| Field | Description | Example |
|-------|-------------|---------|
| Name | Business name | "Joe's Pizza Restaurant" |
| Overall Rating | Average customer rating | "4.5" |
| Reviews Count | Total number of reviews | "247" |
| URL | Original Yelp business URL | "https://www.yelp.com/biz/joes-pizza..." |
| Images/Videos URLs | Media content links | "https://s3-media1.fl.yelpcdn.com/..." |

## Technical Notes

- **Polling interval**: 30-second status checks for optimal API usage
- **Result limiting**: Maximum 20 businesses per input, 5 multiple results
- **Data format**: JSON with structured field mapping
- **Error handling**: Comprehensive error tracking in all API requests
- **Retry logic**: Automatic status rechecking until completion
- **Form input**: Single URL field with validation
- **Storage format**: Structured Google Sheets with predefined columns

## Setup Instructions

### Step 1: Import Workflow
1. Copy the JSON workflow configuration.
2. Import into n8n: Workflows → Import from JSON.
3. Paste the configuration and save.

### Step 2: Configure Bright Data
1. Set up credentials: navigate to Credentials → Add Bright Data API, enter your Bright Data API key, and test the connection.
2. Update API key references: replace BRIGHT_DATA_API_KEY in all HTTP Request nodes and verify dataset access for gd_lgugwl0519h1p14rwk.

### Step 3: Configure Google Sheets
1. Create the target spreadsheet: create a new Google Sheet named "Yelp Business Data" or similar, and copy the Sheet ID from the URL.
2. Set up OAuth2 credentials: add a Google Sheets OAuth2 credential in n8n and complete the authentication process.
3. Update workflow references: replace YOUR_GOOGLE_SHEET_ID with the actual Sheet ID and YOUR_GOOGLE_SHEETS_CREDENTIAL_ID with the credential reference.

### Step 4: Test and Activate
1. Test with a sample URL: use a known Yelp business URL, monitor execution progress, and verify data appears in the Google Sheet.
2. Activate the workflow: toggle it to "Active" and share the form URL with users.

## Sample Business Data

The workflow captures comprehensive business information, including:

- **Basic information**: Name, category, location
- **Performance metrics**: Ratings, review counts, popularity
- **Contact details**: Phone, website, address
- **Visual content**: Photos, videos, gallery URLs
- **Operational data**: Hours, services, amenities
- **Customer feedback**: Review summaries, sentiment indicators

## Advanced Configuration

### Batch Processing
Modify the input to accept multiple URLs:

```json
[
  {"url": "https://www.yelp.com/biz/business-1"},
  {"url": "https://www.yelp.com/biz/business-2"},
  {"url": "https://www.yelp.com/biz/business-3"}
]
```

### Enhanced Data Fields
Add more extraction fields by updating the dataset configuration:

- Business hours and schedule
- Menu items and pricing
- Customer photos and reviews
- Special offers and promotions

### Notification Integration
Add alert mechanisms:

- Email notifications for completed scrapes
- Slack messages for team updates
- Webhook triggers for external systems

## Error Handling

### Common Issues
- **Invalid URL**: Ensure the URL is a valid Yelp business page
- **Rate limiting**: Bright Data API usage limits exceeded
- **Authentication**: Google Sheets or Bright Data credential issues
- **Data format**: Unexpected response structure from Yelp

### Troubleshooting Steps
1. Verify URLs: ensure Yelp business URLs are correctly formatted.
2. Check credentials: validate all API keys and OAuth tokens.
3. Monitor logs: review n8n execution logs for detailed errors.
4. Test connectivity: verify network access to all external services.

## Performance Specifications

- **Processing time**: 2-5 minutes per business URL
- **Data accuracy**: 95%+ for publicly available business information
- **Success rate**: 90%+ for valid Yelp business URLs
- **Concurrent processing**: Depends on Bright Data plan limits
- **Storage capacity**: Unlimited (Google Sheets based)

For any questions or support, please contact info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
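For reference, the trigger call in component 2 boils down to a single HTTP request. A hedged sketch follows; the endpoint and dataset ID come from the node descriptions above, while the header and body shapes follow Bright Data's datasets v3 API conventions, so verify them against your own account:

```javascript
// Trigger a Bright Data dataset run for one Yelp business URL (component 2).
const res = await fetch(
  'https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lgugwl0519h1p14rwk&include_errors=true',
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}`, // your Bright Data key
      'Content-Type': 'application/json',
    },
    body: JSON.stringify([{ url: 'https://www.yelp.com/biz/joes-pizza' }]), // one input object per URL
  },
);
const { snapshot_id } = await res.json();
// Components 3-5 then poll /datasets/v3/progress/{snapshot_id} every 30 seconds
// until status === "ready", and component 6 downloads /datasets/v3/snapshot/{snapshot_id}.
```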
by Avkash Kakdiya
## How it works

This workflow enhances contact intelligence by retrieving new or updated contact data, enriching it using AI and external APIs, and then updating your CRM or contact management system with intelligent insights. It automates the process of gathering, enriching, and organizing contact information to improve targeting, personalization, and engagement.

## Step-by-step

### 1. Trigger & Input
- The workflow is triggered by a scheduler or webhook event.
- It reads a new (or updated) contact entry from your source, such as a spreadsheet or form.
- Basic fields like name, email, and company are used as the starting point for enrichment.

### 2. Contact Lookup & Parsing
- The contact's domain or company is extracted and used to perform a lookup via an external data source.
- Data such as company details, job title, or LinkedIn profile is retrieved.
- Results are parsed and cleaned to remove duplicates, missing values, and invalid entries.

### 3. AI Enrichment
- The contact is passed through an AI model (such as GPT or another NLP service).
- The model analyzes job role, seniority, and inferred interests based on the available data.
- Insights like intent, persona category, or engagement score are generated.

### 4. Validation & Tagging
- The AI-enriched data is validated to ensure consistency and accuracy.
- Tags and segments (e.g., "Decision Maker", "Technical Buyer") are assigned based on rules or AI inference; a sketch of such a tagging pass appears at the end of this description.
- This enables smart filtering, targeting, and routing later in your CRM or campaigns.

### 5. Output & Integration
The final enriched and validated contact is written back to your CRM, sheet, or marketing platform. The system also:
- Sends a Slack/email alert with a summary.
- Updates the original contact entry with a "Processed" or "Enriched" status.
- Triggers next steps, such as personalized outreach or nurture sequences.

## Benefits

- **Enhances contact profiles** with AI-generated insights and third-party data.
- **Improves segmentation & targeting** through smart tags and persona classification.
- **Automates manual research**, saving time and improving accuracy.
- **Easily extendable** by adding more AI models, data sources, or CRM integrations.
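To make step 4 concrete, here is a minimal sketch of a rule-based tagging pass in an n8n Code node. The field names (`jobTitle`, `seniority`) and the matching rules are illustrative assumptions, not a fixed schema; map them to whatever your enrichment step actually returns:

```javascript
// Assign coarse persona tags from the AI-enriched fields (all-items mode).
const out = [];
for (const item of $input.all()) {
  const { jobTitle = '', seniority = '' } = item.json;
  const tags = [];
  if (/chief|vp|head|director/i.test(jobTitle) || seniority === 'executive') {
    tags.push('Decision Maker');       // likely budget holder
  }
  if (/engineer|developer|architect/i.test(jobTitle)) {
    tags.push('Technical Buyer');      // evaluates the product hands-on
  }
  out.push({ json: { ...item.json, tags, status: 'Enriched' } });
}
return out;
```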
by InfraNodus
# Set Up an ElevenLabs Voice Chat Agent Using InfraNodus Graph RAG Knowledge Graphs as Experts

This workflow creates an AI voice chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided by InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows. ElevenLabs supplies the voice agent, which can be embedded into any website or used via their API.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- Easy and quick to set up (no complex data import workflows needed) and to update with new knowledge
- A knowledge graph has a holistic overview of your knowledge base
- Better retrieval of relations between document chunks, giving higher-quality responses
- Ability to reuse the same knowledge in other n8n workflows

## How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. The user's prompt is received from the ElevenLabs Conversational AI agent via an n8n Webhook, and ElevenLabs also takes care of the voice interaction. The response from n8n is sent back to the webhook, which the ElevenLabs voice agent polls; the agent processes the response and provides the final answer.

Step by step:

1. The user submits a question using the ElevenLabs voice interface.
2. The question is sent via the knowledge_base tool in ElevenLabs to the n8n Webhook as a POST request containing the user's prompt and a session ID for the Chat Memory node in n8n (a sketch of this contract appears at the end of this section).
3. The n8n AI Agent node checks the list of tools it has access to. Each tool has a description of its knowledge, auto-generated by InfraNodus (we call each tool an "expert").
4. The n8n AI Agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
5. The query is sent to the InfraNodus HTTP node endpoint, which queries the graph that corresponds to that expert.
6. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context into account, along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
7. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
8. The final answer is sent back to the webhook endpoint.
9. The ElevenLabs conversational AI agent picks up the response arriving from the knowledge_base tool via the webhook, condenses it for conversational format, and transforms the text into voice.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep the other settings intact, or learn more about them on the InfraNodus access points page.
5. Once you have added one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.
6. You will also need an ElevenLabs account with a conversational AI agent set up there.
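The contract between ElevenLabs and n8n is a plain POST. Here is a sketch of what the n8n Webhook node receives; the exact field names depend on how you define the knowledge_base tool's schema in ElevenLabs, so treat these as placeholders:

```javascript
// Hypothetical shape of the POST the ElevenLabs knowledge_base tool sends
// to the n8n Webhook (field names are whatever you configure in the tool):
//   { "prompt": "What does chapter two say about ...?", "sessionId": "abc-123" }

// Inside n8n, downstream nodes read these via expressions, e.g. for the AI Agent:
const prompt = $json.body.prompt;       // the user's question, forwarded to the agent
const sessionId = $json.body.sessionId; // keys the Chat Memory node per conversation
```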
See the sticky note in the n8n workflow for a complete step-by-step description, or our support article on setting up an ElevenLabs AI voice agent.

Once the voice AI agent is ready, you might want to combine it with a text AI chatbot workflow so your users have a choice between text and voice interaction. In that case, you may be interested in our free open-source website popup chat widget, popupchat.dev, where you can create an embed code to add to your blog or website and let the user choose between text and voice interaction.

## Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- An ElevenLabs account

## FAQ

### 1. How many "experts" should I aim for?

We recommend aiming for the optimal number of people in a team, which is usually 2-7. If you add more experts, your orchestrating AI agent will have trouble choosing the most suitable "expert" tool for the user's query. You can mitigate this by specifying in the AI agent description that it should choose at most 3-7 experts to provide a response.

### 2. Why use InfraNodus GraphRAG and not a standard vector store for knowledge?

First, vector stores are complex to set up and to update. You'd need a separate workflow for that, would have to decide on the vector dimensions, add metadata to your knowledge, and so on. With InfraNodus, you have a complete RAG / GraphRAG solution under the hood that is easy to set up and provides high-quality responses that take the overall structure and the relations between your ideas into account.

### 3. Why not use ElevenLabs' own knowledge base?

One reason is that you want your knowledge base in one place so you can reuse it in other n8n workflows. Another reason is that you will not get as good a separation between the "experts" when you converse with the agent: the answers would be based on top matches from all the books and articles you upload, while with the InfraNodus GraphRAG setup you can better control which graphs are consulted as experts and have an explicit way to display this data.

## Customizing this workflow

You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. Many more customizations are available on our GitHub repo for n8n workflows.

Check out the complete setup guide for this workflow at https://support.noduslabs.com/hc/en-us/articles/20318967066396-How-to-Build-a-Text-Voice-AI-Agent-Chatbot-with-n8n-Elevenlabs-and-InfraNodus

Also check out the video tutorial with a demo.
by Ranjan Dailata
## Who this is for

Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it using Google Gemini's LLM, and forwards the transformed response, with a summary, to a specified webhook for downstream use.

This workflow is tailored for:

- Recruiters and HR teams looking to assess companies quickly during talent sourcing.
- Job seekers researching potential employers who need summarized company insights.
- Market researchers and analysts monitoring competitors or industry players.

## What problem is this workflow solving?

Searching and evaluating company profiles on Indeed manually is time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights.

This workflow solves this by:

- Automating the extraction of company details from Indeed using Bright Data Web Unlocker.
- Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview.
- Sending the transformed response, with the summary, to a chosen endpoint such as Slack, Notion, Airtable, or a custom webhook.

## What this workflow does

This automated pipeline:

1. Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data's Web Unlocker (a request sketch appears at the end of this description).
2. Transforms the scraped content into structured JSON using n8n's built-in tools.
3. Summarizes and extracts meaningful insights using Google Gemini's large language model.
4. Forwards the summarized, formatted response to a specified webhook or app for real-time access, storage, or analysis.

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the search query and Bright Data zone by navigating to the Set Indeed Search Query node.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

## How to customize this workflow to your needs

This workflow is built to be flexible, whether you're a company, market researcher, entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Change the data source**: Replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
- **Refine the LLM prompt**: Tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
- **Route the output to different destinations**: Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
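Under the hood, a Web Unlocker call is a single authenticated request. A minimal sketch follows, assuming Bright Data's `/request` endpoint and a zone named `web_unlocker1`; match the zone name, and verify the request shape, against your own Bright Data account:

```javascript
// Fetch a rendered Indeed company page through Bright Data Web Unlocker.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}`, // the Header Auth credential in n8n
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',                             // your Web Unlocker zone name
    url: 'https://www.indeed.com/cmp/Example-Company', // illustrative profile URL
    format: 'raw',                                     // return the raw HTML
  }),
});
const html = await res.text(); // handed to the Gemini transformation step
```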
by Paul
# Gmail AI Email Manager - Setup Guide

## Workflow Overview

This workflow creates an intelligent Gmail email manager that can:

- Monitor incoming emails via webhook
- Analyze email content using AI
- Categorize emails automatically
- Generate smart responses
- Take actions based on email content
- Send notifications for important emails

## Pre-Setup Checklist

Before building the workflow, gather the necessary information and validate the approach.

### Phase 1: Discovery & Planning
- [ ] Search for Gmail nodes
- [ ] Find AI analysis nodes
- [ ] Identify webhook trigger options
- [ ] Check notification nodes

### Phase 2: Configuration Requirements
- [ ] Gmail API credentials
- [ ] AI service (OpenAI/Claude) API key
- [ ] Webhook URL setup
- [ ] Email classification rules

## Setup Instructions

### Step 1: Gmail API Setup
1. Go to Google Cloud Console.
2. Create a new project or select an existing one.
3. Enable the Gmail API.
4. Create OAuth 2.0 credentials.
5. Add the authorized redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback

### Step 2: AI Service Setup
Choose one of the following:
- **OpenAI**: Get an API key from platform.openai.com
- **Claude**: Get an API key from console.anthropic.com
- **Local AI**: Set up Ollama or similar

### Step 3: n8n Credentials
1. Gmail OAuth2: Add client ID, secret, and scopes.
2. AI service: Add the API key.
3. Webhook: Configure the webhook URL.

## Quick Setup Checklist

### 1. Google Cloud Console
- [ ] Enable Gmail API
- [ ] Create OAuth2 credentials
- [ ] Add redirect URI: https://your-n8n.com/rest/oauth2-credential/callback
- [ ] Set up Gmail push notifications with Pub/Sub

### 2. API Keys
- [ ] Get an OpenAI API key from platform.openai.com
- [ ] Create a Google Sheet for logging (optional)

### 3. n8n Credentials
- [ ] Gmail OAuth2: client ID, secret, and scopes gmail.readonly, gmail.modify, gmail.compose
- [ ] OpenAI API: your API key

### 4. Gmail Labels (create these)
- [ ] URGENT (red)
- [ ] IMPORTANT (orange)
- [ ] PROMOTIONAL (purple)
- [ ] PERSONAL (green)
- [ ] WORK (blue)
- [ ] SPAM (gray)

### 5. Update Workflow Values
- [ ] High-priority alert: change the notification email
- [ ] Spreadsheet log: update the sheet ID (if using)
- [ ] Webhook: copy the URL after saving the workflow

### 6. Test
- [ ] Save & activate the workflow
- [ ] Send a test email to the Gmail account
- [ ] Check the execution log
- [ ] Verify auto-categorization works (see the classification sketch below)

That's it! Your AI email manager is ready.
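The heart of the categorization step is a single classification prompt. Here is a hedged sketch of an OpenAI call that maps an email onto the six labels above; the model name, field names (`subject`, `snippet`), and response handling are illustrative, not taken from the workflow itself:

```javascript
// n8n Code node ("Run Once for Each Item", async/await is supported):
// classify an email into one of the Gmail labels created in step 4.
const labels = ['URGENT', 'IMPORTANT', 'PROMOTIONAL', 'PERSONAL', 'WORK', 'SPAM'];

const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini', // illustrative; use any chat model your key can access
    messages: [
      {
        role: 'system',
        content: `Classify the email into exactly one of: ${labels.join(', ')}. Reply with the label only.`,
      },
      { role: 'user', content: `Subject: ${$json.subject}\n\n${$json.snippet}` },
    ],
  }),
});
const data = await res.json();
const label = data.choices[0].message.content.trim(); // matches a Gmail label name
return { json: { ...$json, label } };
```

A downstream Gmail node can then apply the returned label to the message.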
by Yang
## Who is this for?

This template is for sales teams, agencies, or local service providers who want to quickly generate cold outreach lists and automatically call local businesses with a Vapi AI assistant. It's perfect for automating cold calls from scraped local listings with no manual dialing or research.

## What problem is this workflow solving?

Finding leads and initiating outreach calls can be time-consuming. This workflow automates the process: it scrapes business listings from Google Maps using Dumpling AI, extracts phone numbers, filters out incomplete data, formats the numbers, and uses Vapi to make outbound AI-powered calls. Every call is logged in Google Sheets for follow-up and tracking.

## What this workflow does

1. Starts manually and pulls search queries (e.g., "plumbers in Austin") from Google Sheets.
2. Sends each query to Dumpling AI's Google Maps scraping endpoint.
3. Splits the returned business data into individual leads.
4. Extracts key info like business name, website, and phone number.
5. Filters to keep only leads with valid phone numbers.
6. Formats phone numbers for Vapi dialing (adds +1).
7. Calls each business using Vapi AI.
8. Logs each successful call in a Google Sheet.

## Setup

### Google Sheets Setup
- Create a sheet with business search queries in the first column (e.g., best+restaurants+in+Chicago).
- Make sure the tab name is set and authorized in your credentials.
- Connect your Google Sheets account in the Get Search Keywords from Google Sheets node.

### Dumpling AI Setup
- Go to dumplingai.com.
- Generate an API key and connect it as a header token in the Scrape Google Map Businesses using Dumpling AI node.

### Vapi Setup
- Sign in to Vapi and create an assistant.
- Get your assistantId and phoneNumberId.
- Insert these into the JSON payload of the Initiate Vapi AI Call to Business node.
- Add your Vapi API key to the credentials section.

### Call Logging
Create another tab in your sheet (e.g., "leads") with these headers:
- company name
- phone number
- website

This tab is used by the Log Called Business Info to Sheet node.

## How to customize this workflow to your needs

- Modify the business search terms in your Google Sheet to target specific industries or locations.
- Add filters to exclude certain businesses based on ratings, keywords, or location.
- Update your Vapi assistant script to match the type of outreach or pitch you're using.
- Add additional integrations (e.g., CRM logging, Slack notifications, follow-up emails).
- Change the trigger to run on a schedule or webhook instead of manually.

## Nodes and Functions Breakdown

- **Start Workflow Manually**: Initiates the automation manually for testing or controlled runs.
- **Get Search Keywords from Google Sheets**: Reads search phrases from the spreadsheet.
- **Scrape Google Map Businesses using Dumpling AI**: Sends each search query to Dumpling AI and receives matching local business data.
- **Split Each Business Result**: Breaks the returned array of businesses into individual records for processing.
- **Extract Business Name, Phone and Website**: Extracts the title, phone, and website from each business record.
- **Filter Valid Phone Numbers Only**: Ensures only entries with a phone number move forward.
- **Format Phone Number for Calling**: Adds a +1 country code and strips non-numeric characters (a sketch of this node appears after the notes below).
- **Initiate Vapi AI Call to Business**: Uses the business name and number to initiate a Vapi AI outbound call.
- **Log Called Business Info to Sheet**: Appends business details to a Google Sheet for tracking.

## Notes

- You must have valid API keys and authorized connections for Dumpling AI, Google Sheets, and Vapi.
- Make sure to handle API rate limits if you're running the workflow on large datasets.
- This workflow is optimized for US-based leads (+1 country code); adjust the formatting node if calling internationally.
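The formatting node is small enough to show in full. A minimal sketch, assuming US numbers and a `phone` field coming out of the extraction step (rename it to match your Set node):

```javascript
// "Format Phone Number for Calling" (n8n Code node, all-items mode):
// strip non-digits and prefix +1 so Vapi receives an E.164-style number.
return $input.all().map((item) => {
  const digits = String(item.json.phone ?? '').replace(/\D/g, ''); // keep digits only
  const e164 = digits.startsWith('1') && digits.length === 11
    ? `+${digits}`      // already has the US country code
    : `+1${digits}`;    // assume a 10-digit US number
  return { json: { ...item.json, phone: e164 } };
});
```

For international calling, replace the hard-coded +1 logic with a per-country prefix lookup.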
by Calistus Christian
## What this workflow does

Automatically triages risky AWS misconfigurations and alerts your team.

Pipeline: Security Hub or AWS Config -> EventBridge rules -> SNS (HTTP) -> n8n Webhook -> Normalize -> AI Prioritizer -> Airtable (log) -> Gmail (email)

- Normalizes incoming findings (S3 / security groups / IAM / RDS) into consistent JSON.
- Uses an LLM to assign a priority (P0-P3) with a rationale and remediation steps.
- Upserts the finding into Airtable (avoids duplicates).
- Emails a compact incident summary to your inbox. This can be swapped for Microsoft Teams, Slack, etc.

Category: Security / Cloud / Alerting
Time to set up: ~10-15 minutes
Difficulty: Beginner-Intermediate
Cost: Mostly free (n8n CE + AWS SNS/EventBridge; OpenAI + Airtable/Gmail as used)

## What you'll need

- An n8n instance reachable over HTTP.
- An AWS account (one region) with permissions to create SNS topics and EventBridge rules.
- **Security Hub** enabled (or AWS Config rules that emit compliance events).
- n8n credentials: OpenAI, Airtable, Gmail.

## Nodes used

- **Webhook** (POST /aws-misconfig)
- **Code**: SNS Handler (token check, confirm/unwrap; a sketch appears after the setup steps)
- **IF**: route mode === "confirm" vs notification
- **HTTP Request**: SNS SubscriptionConfirmation (GET)
- **Code**: Normalize Finding
- **Message a model**: AI Prioritizer (JSON out)
- **Airtable**: Create/Upsert
- **Gmail**: Send message
- **Edit Fields**: final JSON response

## Setup steps

1. Import and activate the workflow in n8n.
2. Webhook Respond: When Last Node Finishes -> First Entry JSON.
3. Append a shared secret to the URL, e.g. ?token=MY_SUPER_TOKEN, and keep the check in the SNS Handler code node.
4. Create an SNS topic (e.g., misconfig-events) in the same region as your EventBridge rules.
5. Create EventBridge rules targeting the SNS topic:
   - Rule A (Security Hub): source = aws.securityhub, detail-type = Security Hub Findings - Imported
   - Rule B (AWS Config): source = aws.config, detail-type = Config Rules Compliance Change
6. Create an SNS subscription with Protocol = HTTP and Endpoint = your production webhook URL: http://YOUR_HOST:5678/webhook/aws-misconfig?token=MY_SUPER_TOKEN (the workflow auto-confirms the subscription on the first POST).
7. Configure Airtable (upsert on Finding ID) and the Gmail recipients.
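The SNS Handler code node has three small jobs: reject calls without the shared token, surface subscription confirmations, and unwrap notifications. A sketch, assuming the Webhook node exposes the query string as `query` and the raw body as `body` (SNS field names `Type`, `SubscribeURL`, and `Message` are part of the standard SNS HTTP delivery format):

```javascript
// SNS Handler: token check, then confirm/unwrap routing (all-items mode).
const { query, body } = $input.first().json;

if (query?.token !== 'MY_SUPER_TOKEN') {
  throw new Error('Bad or missing token'); // drop spoofed calls early
}

// SNS delivers JSON, often with a text/plain content type, so the body may
// arrive as a string - parse it either way.
const msg = typeof body === 'string' ? JSON.parse(body) : body;

if (msg.Type === 'SubscriptionConfirmation') {
  // The IF node routes this branch to an HTTP GET against SubscribeURL.
  return [{ json: { mode: 'confirm', subscribeUrl: msg.SubscribeURL } }];
}

// Notification: the EventBridge event is a JSON string inside msg.Message.
return [{ json: { mode: 'notification', event: JSON.parse(msg.Message) } }];
```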
by Yang
## Who is this for?

This workflow is ideal for virtual assistants, researchers, developers, automation specialists, and data analysts who need to regularly extract and organize structured product information (like books) from a website. It's especially useful for those working with catalog-based websites who want to automate the extraction and delivery of clean, sorted data.

## What problem is this solving?

Manually copying product listings like book titles and prices from a website into a spreadsheet is slow and repetitive. This automation solves that problem by scraping content using Dumpling AI, extracting the right data using CSS selectors, and formatting it into a clean CSV file that is sent to your email, all triggered automatically when a new URL is added to Google Sheets.

## What this workflow does

This template automates an entire content scraping and delivery process:

1. Watches a Google Sheet for new URLs
2. Scrapes the HTML content of the given webpage using Dumpling AI
3. Uses CSS selectors in the HTML node to extract each book from the page
4. Splits the HTML array into individual items
5. Extracts the book title and price from each HTML block
6. Sorts the books in descending order of price
7. Converts the sorted data to a CSV file
8. Sends the CSV via email using Gmail

## Setup

### Google Sheets
- Create a sheet titled something like "URLs".
- Add your product listing URLs (e.g., http://books.toscrape.com).
- Connect the Google Sheets trigger node to your sheet and ensure you have the proper credentials connected.

### Dumpling AI
- Create an account at Dumpling AI and generate your API key.
- Set the HTTP method to POST and pass the URL dynamically from the Google Sheet.
- Use Header Auth to include your API key in the request header.
- Make sure "cleaned": "True" is included in the body for optimized HTML output.

### HTML Nodes
- The first HTML node extracts the main book container blocks using: .row > li
- The second HTML node parses out the individual fields:
  - title: h3 > a (via the title attribute)
  - price: .price_color

### Sort Node
- Sorts books by price in descending order.
- Note: the price is extracted as a string; make sure it's parsable if you plan to use numeric filtering later (a parsing sketch appears at the end of this description).

### Convert to CSV
- The JSON data is passed into a Convert node and transformed into a CSV file.

### Gmail
- Sends the CSV as an attachment to a designated email address.

## How to customize this workflow

- **Extract more data**: Add more CSS selectors in the second HTML node to pull fields like author, availability, or product links.
- **Switch destinations**: Replace Gmail with Slack, Google Drive, Dropbox, or another platform.
- **Adjust sorting**: Sort alphabetically or by another extracted value.
- **Use a different source**: As long as the site structure is consistent, this can scrape any listing-like page.
- **Trigger differently**: Use a webhook, form submission, or schedule trigger instead of Google Sheets.

## Dependencies and notes

- This workflow uses Dumpling AI to perform the web scraping. This requires an API key and uses credits per request.
- The HTML node depends on valid CSS selectors. If the site layout changes, the selectors may need to be updated.
- Ensure you're not scraping content from websites that prohibit automated scraping.
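Since the scraped price is a string like "£51.77", sorting numerically takes one small Code node before (or instead of) the Sort node. A minimal sketch, assuming the field is named `price` as extracted above:

```javascript
// Parse the "£51.77"-style price string into a number, then sort descending
// (n8n Code node, all-items mode).
const items = $input.all().map((item) => ({
  json: {
    ...item.json,
    priceValue: parseFloat(String(item.json.price).replace(/[^\d.]/g, '')), // drop the currency symbol
  },
}));
items.sort((a, b) => b.json.priceValue - a.json.priceValue); // highest price first
return items;
```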
by Luciano Gutierrez
# Instagram Auto-Comment Responder with AI Agent Integration

Version: 1.1.0 • n8n Version: 1.88.0+ • License: MIT

A fully automated workflow for managing and responding to Instagram comments using AI agents. Designed to improve engagement and save time, this system listens for new Instagram comments, verifies and filters them, fetches relevant post data, processes valid messages with a natural-language AI, and posts context-aware replies directly on the original post.

## Key Features

- **AI-driven engagement**: Intelligent responses to comments via a GPT-powered agent.
- **Webhook verification**: Handles the Instagram webhook handshake to ensure secure integration.
- **Data extraction**: Maps incoming payload fields (user ID, username, message text, media ID) for processing.
- **Self-comment filtering**: Automatically skips comments made by the account owner to prevent loops.
- **Post data retrieval**: Fetches the media's id and caption from the Graph API (v22.0) before generating a reply.
- **Natural language processing**: Uses a custom system prompt to maintain brand tone and context.
- **Automated replies**: Posts the AI-generated message back to the comment thread using Instagram's API.
- **Modular architecture**: Clear separation of steps via sticky notes and dedicated HTTP Request and Agent nodes.

## Use Cases

- **Social media automation**: Keep followers engaged 24/7 with instant, relevant replies.
- **Community building**: Maintain a consistent voice and tone across all interactions.
- **Brand reputation management**: Ensure no valid comment goes unanswered.
- **AI customer support**: Triage simple questions and direct followers to resources or support.

## Technical Implementation

### Webhook Verification
Nodes: Webhook + Respond to Webhook. Echoes hub.challenge to confirm the subscription and secure incoming events (a sketch of this handshake appears at the end of this description).

### Data Extraction
Node: Set. Maps payload fields into structured variables: conta.id, usuario.id, usuario.name, usuario.message.id, usuario.message.text, usuario.media.id, endpoint.

### User Validation
Node: Filter. Skips processing if conta.id equals usuario.id (self-comments).

### Post Data Retrieval
Node: HTTP Request (Get post data). GET https://graph.instagram.com/v22.0/{{ $json.usuario.media.id }}?fields=id,caption&access_token={{ credentials }} captures the media's caption for richer context in replies.

### AI Response Generation
Nodes: AI Agent + OpenRouter Chat Model. Uses a detailed system prompt with:
- Profile persona (expert in AI & automations, friendly tone).
- Input data (username, comment text, post caption).
- Filtering logic (spam, praise, questions, vague comments).
Returns either the reply text or [IGNORE] for irrelevant content.

### Posting the Reply
Node: HTTP Request (Post comment). POST {{ $json.endpoint }}/{{ $json.usuario.message.id }}/replies with message={{ $json.output }} sends the AI answer back under the original comment.

## Instructions for Setup

1. **Import the workflow**: In n8n > Workflows > Import from File, upload the provided .json template.
2. **Configure credentials**:
   - Instagram Graph API (Header Auth or FacebookGraphApi) with the instagram_basic and instagram_manage_comments scopes.
   - OpenRouter/OpenAI API key for the AI agent.
3. **Customize the system prompt**: Edit the AI Agent's prompt to adjust brand tone, language (Brazilian Portuguese), length, or emoji usage.
4. **Test & activate**:
   - Publish a test comment on an Instagram post.
   - Verify each node's execution, ensuring the webhook, filter, data extraction, HTTP requests, and AI Agent respond as expected.
5. **Extend & monitor**:
   - Add sentiment analysis or lead capture nodes as needed.
   - Monitor execution logs for errors or rate-limit events.

Tags: Social Media • Instagram Automation • Webhook Verification • AI Agent • HTTP Request • Auto Reply • Community Management
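Meta's webhook handshake is a GET request that must be echoed back. In n8n this is the Webhook plus Respond to Webhook pair described above; the logic amounts to the following sketch (the verify token is whatever value you set in the Meta App dashboard):

```javascript
// n8n Code node ("Run Once for Each Item"):
// echo Instagram's verification challenge so the subscription activates.
const q = $json.query; // hub.mode, hub.verify_token, hub.challenge from the GET request

if (q['hub.mode'] === 'subscribe' && q['hub.verify_token'] === 'MY_VERIFY_TOKEN') {
  // The Respond to Webhook node should return this value as plain text with HTTP 200.
  return { json: { challenge: q['hub.challenge'] } };
}
throw new Error('Webhook verification failed: token mismatch');
```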
by Flavio Angeleu
# WhatsApp Flows Encrypted Data Exchange

## Workflow Summary

This workflow enables secure, end-to-end encrypted data exchange with WhatsApp Flows for interactive applications inside WhatsApp. It implements the WhatsApp Business Encryption protocol, using RSA for key exchange and AES-GCM for payload encryption, providing a secure channel for sensitive data transmission while interfacing with WhatsApp's Business API. It follows the official WhatsApp Business Encryption specifications to establish an encrypted, GraphQL-powered data exchange channel between your business and the WhatsApp consumer client.

## How It Works

### Encryption Flow

1. **Webhook reception**: Receives encrypted data from WhatsApp containing:
   - encrypted_flow_data: the AES-encrypted payload
   - encrypted_aes_key: the RSA-encrypted AES key
   - initial_vector: the initialization vector for AES decryption
2. **Decryption process**:
   - The workflow decrypts the AES key using an RSA private key.
   - It then uses this AES key to decrypt the payload data.
   - The inverted IV is used for response encryption (see the sketch at the end of this description).
3. **Data processing**: The workflow parses the decrypted JSON data and routes requests based on the screen parameter.
4. **Response generation**:
   - Generates the appropriate response data based on the request type.
   - Encrypts the response using the same AES key and the inverted IV.
   - Returns the base64-encoded encrypted response.

### Key Components

- **Webhook endpoint**: Entry point for encrypted WhatsApp requests
- **Decryption pipeline**: RSA and AES decryption components
- **Business logic router**: Screen-based routing for different functionality
- **Encryption pipeline**: Secure response encryption

## How to Use

1. **Deploy the workflow**: Import the workflow JSON into your n8n instance.
2. **Set up the WhatsApp integration**:
   - Configure your WhatsApp Business API to send requests to your n8n webhook URL.
   - Ensure your WhatsApp integration encrypts data using the public key that pairs with the private key used in this workflow.
3. **Test the flow**:
   - Send an encrypted test message from WhatsApp to verify connectivity.
   - Check that appointment data is being retrieved correctly.
   - Validate that seat selection functions as expected.
4. **Production use**:
   - Monitor workflow performance in production.
   - Set up error notifications if needed.

## Requirements

### Authentication Keys

- **RSA private key**: Required for decrypting the AES key (included in the workflow)
- **WhatsApp Business public key**: Must be registered with the WhatsApp Business API
- **PostgreSQL credentials**: For accessing appointment data from the database

### WhatsApp Business Encryption Setup

As specified in the WhatsApp Business Encryption documentation:

1. **Generate a 2048-bit RSA key pair**:
   - The private key remains with your business (used in this workflow).
   - The public key is shared with WhatsApp.
2. **Register the public key with WhatsApp**:
   - Use the WhatsApp Cloud API to register your public key.
   - Set up the public key using the /v17.0/{WhatsApp-Business-Account-ID}/whatsapp_business_encryption endpoint.
3. **Key registration API call**:

```
POST /v17.0/{WhatsApp-Business-Account-ID}/whatsapp_business_encryption
{
  "business_public_key": "YOUR_PUBLIC_KEY"
}
```

4. **Verification**:
   - Verify your public key is registered using a GET request to the same endpoint.
   - Ensure the key status is "active".
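The decrypt/encrypt pair at the core of the workflow follows WhatsApp's published scheme: RSA-OAEP (SHA-256) unwraps a 128-bit AES key, AES-GCM decrypts the payload (with the auth tag appended to the ciphertext), and the response is encrypted with every IV byte inverted. A Node.js sketch of both directions, with error handling omitted; verify the details against Meta's official Flows endpoint example:

```javascript
const crypto = require('crypto');

// Decrypt an incoming Flow request: RSA-OAEP(SHA-256) unwraps the AES key,
// then AES-128-GCM decrypts the payload (auth tag = last 16 bytes).
function decryptRequest(body, privatePem) {
  const aesKey = crypto.privateDecrypt(
    { key: privatePem, padding: crypto.constants.RSA_PKCS1_OAEP_PADDING, oaepHash: 'sha256' },
    Buffer.from(body.encrypted_aes_key, 'base64'),
  );
  const flowData = Buffer.from(body.encrypted_flow_data, 'base64');
  const iv = Buffer.from(body.initial_vector, 'base64');
  const decipher = crypto.createDecipheriv('aes-128-gcm', aesKey, iv);
  decipher.setAuthTag(flowData.subarray(flowData.length - 16));
  const clear = Buffer.concat([
    decipher.update(flowData.subarray(0, flowData.length - 16)),
    decipher.final(),
  ]);
  return { payload: JSON.parse(clear.toString('utf8')), aesKey, iv };
}

// Encrypt the response with the same AES key and every IV byte inverted,
// then return base64(ciphertext || auth tag) as the webhook response body.
function encryptResponse(payload, aesKey, iv) {
  const flippedIv = Buffer.from(iv).map((b) => ~b & 0xff);
  const cipher = crypto.createCipheriv('aes-128-gcm', aesKey, flippedIv);
  const enc = Buffer.concat([
    cipher.update(JSON.stringify(payload), 'utf8'),
    cipher.final(),
    cipher.getAuthTag(),
  ]);
  return enc.toString('base64');
}
```

The inverted IV is what lets the WhatsApp client distinguish the response from a replay of the request while reusing the same session key.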