by KlickTipp
**Community Node Disclaimer**
This workflow uses KlickTipp community nodes, available for self-hosted n8n instances only.

**Who's it for**
Marketing teams, agencies, and content creators who want to turn Instagram post comments into automated conversations — capturing leads, sending personalized DMs, and enriching contacts in KlickTipp without manual work.

**How it works**
This workflow automates engagement between Instagram users and your marketing funnel. It listens for new Instagram comments, validates the Meta webhook, and sends personalized DMs with form links. The workflow then stores and syncs user data for tagging and enrichment in KlickTipp.

When a new comment appears, it:
- Validates the webhook setup via the Meta hub.challenge (see the verification sketch at the end of this description)
- Captures the commenter's username and ID
- Sends a personalized DM with a form link for lead capture
- Stores the data in Google Sheets for tracking
- Updates or tags the contact in KlickTipp

The result: every Instagram comment turns into a structured, tagged lead for your marketing automation.

**How to set up**
1. Connect accounts for Meta (Instagram), Google Sheets, and KlickTipp.
2. Set up your Meta App webhook for Instagram comments, using your workflow's webhook URL and verify token (e.g., KlickTipp).
3. Create a Google Sheet as a matching table with the columns: Instagram username, Instagram ID.
4. Authenticate KlickTipp with API credentials and ensure your subscriber fields are configured.
5. Test by commenting on a connected Instagram post to trigger the workflow.

💡 Pro Tip: Customize the DM to include your brand's tone and lead form link for higher engagement.

**Requirements**
- Meta (Instagram) Business Account
- Facebook Graph API with pages_messaging permission
- Google Sheets OAuth connection
- KlickTipp account with API access

**How to customize**
- Replace the default form link with your own JotForm or landing page URL.
- Adjust DM content to fit your tone and campaign messaging.
- Add logic to send different DMs based on comment keywords.
- Integrate with KlickTipp tags for automatic segmentation.
- Expand the workflow to handle repeat commenters or trigger follow-ups.
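The Meta webhook verification mentioned above is a simple handshake: Meta sends a GET request carrying hub.mode, hub.verify_token, and hub.challenge, and expects the challenge value echoed back in the response body. A minimal sketch of that exchange, assuming a placeholder webhook path on your n8n host and the example verify token KlickTipp:

```bash
# Simulate Meta's one-time verification call against your n8n webhook URL.
# The URL path below is a placeholder; use the webhook URL from your workflow.
curl "https://your-n8n-host/webhook/instagram-comments?hub.mode=subscribe&hub.verify_token=KlickTipp&hub.challenge=1158201444"

# The workflow must respond with the hub.challenge value as plain text:
# 1158201444
```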
by masaya kawabe
Auto-like and repost latest tweets from accounts in Google Sheets

**Who's it for**
Teams and solo creators who manage multiple X (Twitter) accounts they follow and want consistent engagement with minimal effort. Ideal for social managers, community leads, and campaign operators who need safe, repeatable automation.

**What it does / How it works**
On a schedule, the workflow reads a Google Sheet of screen names, fetches each account's latest tweets, then likes and reposts them. A Limit node caps daily actions to respect rate limits and reduce risk. Every step includes sticky notes: a yellow Overview (full description + safety notes) and white per-step notes (setup tips, filters, and expansion ideas).

**How to set up**
1. Add Credentials for X (Twitter) OAuth2 and Google Sheets (no hardcoded tokens).
2. Point the Google Sheets node to your sheet (header アカウントID, screen names without @).
3. Adjust the search query (e.g., -is:reply -is:retweet) and results per run (see the example query after this description).
4. Set the Schedule Trigger cadence and Limit (start with 1–3).
5. Test with a staging account, then enable scheduling.

**Requirements**
- n8n 1.x+
- X (Twitter) OAuth2 and Google Sheets OAuth2 credentials
- A Google Sheet with column アカウントID

**How to customize the workflow**
- Add a dry-run flag (Set → IF) to skip actions in testing.
- Insert IF filters for NG words, language, or tweet age (created_at).
- Add a Wait node between Like and Repost for cooldowns.
- Append logs to another Google Sheet (status, URL, timestamp).

**Security & quality:** Use Credentials only, avoid personal IDs in nodes, and keep actions modest to respect API limits.
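For the search step, the query typically combines one screen name from the sheet with the exclusion filters mentioned above. A sketch against the X API v2 recent-search endpoint, shown with bearer-token auth purely for illustration (the template itself uses the n8n X OAuth2 credential):

```bash
# Latest original tweets from one account, excluding replies and retweets.
# SCREEN_NAME comes from the アカウントID column (no leading @).
SCREEN_NAME="example_account"
curl -G "https://api.twitter.com/2/tweets/search/recent" \
  -H "Authorization: Bearer $X_BEARER_TOKEN" \
  --data-urlencode "query=from:${SCREEN_NAME} -is:reply -is:retweet" \
  --data-urlencode "max_results=10"
```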
by Evoort Solutions
🔎 Automated Keyword Research Workflow with Google Sheets logging & Semrush API

**Description**
Easily collect keyword and country input, run automated keyword research via the Semrush Keyword Research API on RapidAPI, and store results in Google Sheets for seamless tracking and analysis.

**⚙️ Node-by-Node Explanation**

🟢 On form submission (formTrigger)
- Collects keyword and country inputs from the user via a simple form to start the research process.
- Triggers the workflow execution upon submission.

🌐 Keyword Research (httpRequest)
- Sends a POST request with user inputs (keyword and country) to the Semrush Keyword Research API.
- Retrieves keyword suggestions, search volume, and related data for comprehensive keyword insights.

📄 Append Data to Google Sheet (googleSheets)
- Automatically appends the keyword research results into a connected Google Sheets document.
- Enables easy tracking, sharing, and further analysis of keyword data.

**📈 Example Spreadsheet Structure**

| Keyword | Country | Search Volume | CPC | Competition | Keyword Difficulty | Related Keywords | Date of Research |
|---------------|---------|---------------|------|-------------|--------------------|------------------------|------------------|
| keyword1 | US | 10,000 | $2.50 | 0.75 | 45 | keyword2, keyword3 | 2025-09-09 |
| example term | UK | 15,000 | $1.80 | 0.60 | 38 | term1, example keyword | 2025-09-09 |

**🌟 Benefits**
- 🚀 **Powered by Semrush Keyword Research API on RapidAPI:** Reliable, up-to-date keyword insights accessible via a simple API integration.
- 🔄 **Fully Automated:** From user input to data storage, the process is seamless and requires no manual handling.
- 📊 **Centralized Data Storage:** Storing results in Google Sheets ensures accessibility and easy collaboration.
- 📈 **Scalable & Repeatable:** Run keyword research on-demand for multiple keywords and countries effortlessly.

**🚀 Use Cases**
- 🏢 SEO Agencies: Quickly gather keyword data for clients in different markets using the Semrush Keyword Research API.
- 📱 Digital Marketing Teams: Monitor and expand keyword strategies by collecting keyword ideas and volume regularly through the Semrush Keyword Research API.
- 🔎 Content Creators: Identify trending and high-traffic keywords tailored by country to optimize content via the Semrush Keyword Research API.
- 📅 Automated Reporting: Generate scheduled keyword research reports by integrating this workflow into larger marketing automation pipelines.

**🔑 How to Get Your API Key for Semrush Keyword Research**
1. Visit the API Page: Go to the Semrush Keyword Research API page on RapidAPI.
2. Sign Up/Login: Create an account or log in if you already have one.
3. Subscribe to the API: Click "Subscribe to Test" and choose a plan (free or paid).
4. Copy Your API Key: After subscribing, your API key will be available in the "X-RapidAPI-Key" section under "Endpoints".
5. Use the Key: Include the key in your API requests like this: -H "X-RapidAPI-Key: YOUR_API_KEY" (a fuller request sketch appears at the end of this description).

**🛠 Customizing the Workflow**
To modify the automated workflow and adapt it to your specific use case, follow these guidelines:
- Adjust the Data Retrieval Process: You can modify the data you want to receive from the Semrush API. For example, if you're only interested in search volume and CPC, you can filter out the other results in the API request.
- Add More Countries: If you work with multiple regions, modify the workflow to accept multiple country inputs. You could either pass in a list of countries or have a dropdown on the form that lets users select their country of choice.
- Expand Keyword Types: The workflow can be expanded to collect data for different types of keywords, such as long-tail or LSI (Latent Semantic Indexing) keywords, depending on your SEO needs.
- Set Up Scheduled Keyword Reporting: To automate reporting, you can schedule keyword research reports to run at regular intervals, such as monthly, using Google Apps Script or another task scheduler. This way, you'll always have fresh data on hand for analysis.

**✅ Tips for Smooth Workflow Integration**
- Test Your API Integration: Run a test to check if data is properly flowing into your Google Sheet before automating the process.
- Set Up Notifications: Use Google Sheets' built-in notifications or an external automation tool (e.g., Zapier, Integromat) to notify you when new data is added or if there's an issue with the workflow.
- Handle Errors Gracefully: Add error handling in your automated process to prevent issues like missing data or API request failures from disrupting your workflow.
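Putting the pieces together, here is a hedged sketch of the keyword research request roughly as the HTTP Request node sends it. The host, path, and body field names are placeholders, not the authoritative API schema; copy the exact values from the API's "Endpoints" page on RapidAPI:

```bash
# Placeholder host/path and body field names -- substitute the values shown on RapidAPI.
curl -X POST "https://<api-host>.p.rapidapi.com/<keyword-research-endpoint>" \
  -H "X-RapidAPI-Key: YOUR_API_KEY" \
  -H "X-RapidAPI-Host: <api-host>.p.rapidapi.com" \
  -H "Content-Type: application/json" \
  -d '{"keyword": "running shoes", "country": "US"}'
```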
by Trung Tran
AWS Certificate Manager (ACM) Auto-Renew with Slack notify & approval

**Who's it for**
- SRE/DevOps teams managing many ACM certs.
- Cloud ops who want hands-off renewals with an approval step in Slack.
- MSPs that need auditable reminders and renewals on schedule.

**How it works / What it does**
1. Schedule Trigger – runs daily (or your cadence).
2. Get many certificates – fetches ACM certs (paginate if needed).
3. Filter: expiring in next 7 days – keeps items where:
   - NotAfter is before today + 7d
   - NotBefore is before today (already valid)
4. Send message and wait for response (Slack) – posts a certificate summary and pauses until Approve/Reject.
5. Renew a certificate – on Approve, calls the renew action for the item.

**How to set up**
1. Credentials
   - AWS in n8n with permissions to list/read/renew certs.
   - Slack OAuth (bot in the target channel).
2. Schedule Trigger
   - Set to run once per day (e.g., 09:00 local).
3. Get many certificates
   - Region: your ACM region(s). If you have several regions, loop regions or run multiple branches.
4. Filter (IF / Filter node)
   - Add these two conditions (AND):
     - {{ $json.NotAfter.toDateTime('s') }} is before {{ $today.plus(7,'days') }}
     - {{ $json.NotBefore.toDateTime('s') }} is before {{ $today }}
5. Slack → Send & Wait
   - Message (text input):
     :warning: ACM Certificate Expiry Alert :warning:
     Domain: {{ $json.DomainName }}
     SANs: {{ $json.SubjectAlternativeNameSummaries }}
     ARN: {{ $json.CertificateArn }}
     Algo: {{ $json.KeyAlgorithm }}
     Status: {{ $json.Status }}
     Issued: {{ $json.IssuedAt | toDate | formatDate("YYYY-MM-DD HH:mm") }}
     Expires: {{ $json.NotAfter | toDate | formatDate("YYYY-MM-DD HH:mm") }}
     Approve to start renewal.
   - Add two buttons: Approve / Reject (the node will output which was clicked).
6. Renew a certificate
   - Map the CertificateArn from the Slack Approved branch.

**Requirements**
- n8n (current version with Slack Send & Wait).
- AWS IAM permissions (read + renew ACM), e.g.: acm:ListCertificates, acm:DescribeCertificate, acm:RenewCertificate (plus region access). A minimal policy sketch appears at the end of this description.
- Slack bot with permission to post & use interactivity in the target channel.

**How to customize the workflow**
- **Window size:** change 7 to 14 or 30 days in the filter.
- **Catch expired:** add an OR path {{ $json.NotAfter.toDateTime('s') }} is before {{ $today }} → send a red Slack alert.
- **Auto-renew w/o approval:** bypass Slack and renew directly for low-risk domains.
- **Multiple regions/accounts:** iterate over a list of regions or assume roles per account.
- **Logging:** add a Google Sheet/DB append after Slack click with user, time, result.
- **Escalation:** if no Slack response after N hours, ping @oncall or open a ticket.

**Notes**
- The Slack node pauses execution until a button is clicked—perfect for change control.
- Time conversions above assume NotAfter/IssuedAt are Unix seconds ('s'). Adjust if your data differs.
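The IAM permissions listed in the requirements can be granted with a small inline policy. A minimal sketch is below; scope Resource down to specific certificate ARNs if your change-control process requires it:

```bash
# Minimal IAM policy covering the ACM actions this workflow needs.
cat > acm-autorenew-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "acm:ListCertificates",
        "acm:DescribeCertificate",
        "acm:RenewCertificate"
      ],
      "Resource": "*"
    }
  ]
}
EOF
```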
by Aitor | 1Node
**Overview**
This n8n workflow provides seamless integration with Cerebras' high-performance inference platform to leverage OpenAI's open-source GPT-OSS-120B model. With industry-leading speeds of thousands of tokens per second and ultra-low latency under 0.5 seconds, this template enables developers and businesses to build responsive AI applications without the complexity of managing infrastructure or dealing with the slow response times that plague traditional AI integrations.

**How it works**
This streamlined workflow establishes a direct connection to Cerebras' inference API through four simple nodes. When a chat message is received, the workflow processes it through the configured API key, sends it to the Cerebras endpoint with your specified parameters (temperature, completion tokens, top P, reasoning effort), and returns the AI-generated response.

**Detailed Workflow Explanation**
1. When chat message received: This trigger node initiates the workflow whenever a new chat message is detected. It captures the user's input and passes it to the next node in the chain, supporting various input formats and message sources.
2. Set API Key: A manual configuration node where you securely store your Cerebras API key. This node handles authentication and ensures your requests are properly authorized when communicating with the Cerebras inference API.
3. Cerebras endpoint: The core HTTP request node that communicates with Cerebras' chat completions API. This node is pre-configured to work with the GPT-OSS-120B model and includes parameter settings for temperature, completion tokens, top P, and reasoning effort that can be customized based on your specific needs (an example request appears at the end of this description).
4. Return Output: The final node that processes and formats the AI response, delivering the generated text back to your application or user interface in a clean, usable format.

**Who is it for**
- Developers building real-time chat applications, conversational AI systems, or interactive web applications who need consistent sub-second response times without managing complex AI infrastructure.
- Content creators and marketing teams who require rapid text generation for blogs, social media content, product descriptions, or marketing copy, enabling faster content production cycles and improved productivity.
- Businesses implementing customer service automation, lead qualification systems, or interactive FAQ solutions where response latency directly impacts user experience and conversion rates.
- SaaS companies looking to integrate AI features into existing products without the overhead of training models or managing inference servers, allowing them to focus on core business logic.
- Researchers and data scientists experimenting with high-performance language models for prototyping, A/B testing different prompting strategies, or conducting performance benchmarks against other AI providers.
- Startups and small teams seeking enterprise-grade AI capabilities without the infrastructure costs or technical complexity typically associated with large language model deployment.

**Comprehensive Setup Instructions**
1. Cerebras Account Setup
   - Visit Cerebras and create a new account
   - Complete email verification and profile setup
   - Navigate to the API Keys section in your dashboard
   - Generate a new API key and securely store it
   - Review the rate limits for free tier accounts and upgrade if needed
2. n8n Workflow Configuration
   - Import the template into your n8n instance
   - Click on the "Set API Key" node and enter your Cerebras API key
   - Configure the trigger node based on your input source (webhook, manual, scheduled)
   - Test the workflow using the built-in execution feature
3. Parameter Customization
   - Open the "Cerebras endpoint" node to access the parameter configuration
   - Adjust temperature, completion tokens, top P, and reasoning effort based on your use case
   - Save and test the workflow to ensure proper functionality

**Customization and Configuration Guide**

Model Parameters in the Cerebras Endpoint Node:
- **Temperature** (0.0-2.0): Lower values (0.1-0.3) for factual, consistent responses; higher values (0.7-1.5) for creative, varied content
- **Completion Tokens**: Set based on expected response length - 150 for short answers, 500+ for detailed explanations, 1000+ for long-form content
- **Top P** (0.1-1.0): Controls response diversity; 0.9 works well for most applications, lower values for more focused responses
- **Reasoning Effort**: Adjusts the model's computational effort for complex reasoning tasks; higher values for analytical or problem-solving queries

Use Case Specific Configurations:
- **Customer Support**: Temperature 0.2, moderate completion tokens, consistent helpful responses
- **Creative Writing**: Temperature 1.0-1.2, higher completion tokens for diverse, imaginative content
- **Technical Documentation**: Temperature 0.3, structured output with examples and code snippets
- **Casual Conversation**: Temperature 0.7, balanced creativity and coherence

Integration Scenarios:
- Connect the trigger to webhooks for external application integration
- Modify the output node to format responses for specific platforms (Slack, Discord, web apps)
- Add conditional logic to handle different types of user queries
- Implement input validation and sanitization for production environments

**Possible Enhancements**
- Multi-model support: Extend the workflow to switch between different Cerebras models based on query complexity or specific requirements.
- Response caching: Add caching mechanisms to store frequently requested responses, reducing API calls and improving performance.
- Advanced error handling: Implement retry logic and fallback mechanisms for improved reliability in production environments.
- Content filtering: Integrate moderation capabilities to ensure appropriate responses in customer-facing applications.
- Analytics integration: Connect monitoring tools to track usage patterns, response quality, and performance metrics.
- Multi-channel triggers: Set up automated responses for various platforms like Slack, Discord, or custom webhooks.
- Template management: Create reusable prompt templates for different scenarios and use cases.
- Output formatting: Add post-processing for specific output formats (HTML, Markdown, JSON) based on integration requirements.
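To see roughly what the "Cerebras endpoint" node sends, here is a sketch of the underlying chat completions request. It assumes Cerebras' OpenAI-compatible endpoint and the gpt-oss-120b model ID; verify the exact parameter names (for example max_completion_tokens vs. max_tokens, and the accepted reasoning_effort values) against the node configuration and the Cerebras docs:

```bash
# Sketch of a chat completion request with the parameters described above.
curl "https://api.cerebras.ai/v1/chat/completions" \
  -H "Authorization: Bearer $CEREBRAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss-120b",
    "messages": [{"role": "user", "content": "Summarize what n8n does in two sentences."}],
    "temperature": 0.7,
    "max_completion_tokens": 500,
    "top_p": 0.9,
    "reasoning_effort": "medium"
  }'
```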
by Kev
Simply upload an image and a watermark file, and the workflow will automatically combine them into a professional watermarked image. Use cases include adding logos to content, branding product photos, or protecting images with copyright marks.

**Good to know**
- Completely free solution with no ongoing costs or subscriptions
- Processing typically takes 5-15 seconds depending on image size
- The workflow uses a polling mechanism to check job completion every 3 seconds
- Supports standard image formats (PNG, JPG, etc.)
- No credit card required to get started

**How it works**
- The Form Trigger creates a user-friendly upload interface for two files: main image and watermark
- Both images are uploaded simultaneously to the API's file storage via parallel HTTP requests
- The uploaded file URLs are aggregated and used to create an image composition job
- The workflow polls the API every 3 seconds to check job completion status
- Once completed, the final watermarked image is downloaded and returned as a file download

The watermark is automatically positioned in the bottom-right corner with 50% opacity, but this can be easily customized.

**How to use**
The form trigger provides a clean interface, but you can replace this with other triggers like webhooks or manual triggers if needed. The workflow handles all file processing automatically and returns the result as a downloadable file.

**Requirements**
- Free account at jsoncut.com
- API key with full access (generated at app.jsoncut.com)
- HTTP Header Auth credential configured in n8n with header name x-api-key

**Setup steps**
1. Sign up for a free account at jsoncut.com
2. Navigate to your dashboard at app.jsoncut.com → API Keys and create a new key with full access
3. In n8n, create an HTTP Header Auth credential named "JsonCut API Key"
4. Set the header name to x-api-key and the value to your API key
5. Apply this credential to all HTTP Request nodes in the workflow

**Customising this workflow**
The watermark positioning, size, and opacity can be easily adjusted by modifying the JSON body in the "Create Job" node. You can change:
- Position coordinates (x, y values from 0 to 1)
- Watermark dimensions (width, height in pixels)
- Transparency (opacity from 0.1 to 1.0)
- Output image dimensions
- Fit options (cover, contain, fill)

An illustrative job body appears at the end of this description. For more advanced image generation examples and configuration options, check out the documentation and image generation examples. For bulk processing, you could extend this workflow to handle multiple images or integrate it with cloud storage/database services.
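As a starting point for those tweaks, here is an illustrative shape for the "Create Job" body. The field names are assumptions for this sketch, not the authoritative JsonCut schema; keep the structure already present in the node and change only the values called out above:

```bash
# Illustrative job body (field names are assumptions; mirror the JSON already in
# the "Create Job" node). x/y run from 0 to 1; opacity runs from 0.1 to 1.0.
JOB_BODY='{
  "layers": [
    { "url": "https://example.com/product-photo.jpg" },
    {
      "url": "https://example.com/logo-watermark.png",
      "x": 0.95, "y": 0.95,
      "width": 200, "height": 80,
      "opacity": 0.5,
      "fit": "contain"
    }
  ]
}'
echo "$JOB_BODY"   # paste or adapt this JSON into the Create Job node body
```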
by Sergey Tulyaganov
**What Is TWZ n8n MailPoet Integration?**
This workflow adds subscribers to MailPoet using n8n by bridging WordPress through a custom REST API and logging results in Google Sheets.

MailPoet is a popular WordPress email marketing plugin, but it does not provide a public REST API. Because of this limitation, n8n cannot connect to MailPoet directly using native nodes or standard integrations. This workflow demonstrates a practical and production-ready solution for connecting n8n → WordPress → MailPoet using a custom WordPress REST API plugin called TWZ N8N MailPoet.

**🚧 Problem This Workflow Solves**
- ❌ MailPoet has no public REST API
- ❌ n8n cannot add MailPoet subscribers natively
- ❌ External forms and automations cannot push data into MailPoet

**✅ Solution Architecture**
This workflow solves the problem by:
- Creating a secure REST API endpoint inside WordPress
- Using an n8n HTTP Request node to send subscriber data
- Adding subscribers directly to MailPoet
- Preventing duplicate subscribers
- Logging subscribers in Google Sheets for visibility

This creates a reliable bridge between n8n and MailPoet, enabling automation workflows that were previously not possible.

**🔌 How It Works (High Level)**
- 📥 n8n receives form submission data
- 🔍 The workflow checks if the subscriber already exists
- 📧 The subscriber is added to MailPoet via the custom REST API (a request sketch appears at the end of this description)
- 📊 Subscriber data is logged in Google Sheets
- ✅ The workflow returns a success or error response

**🎯 Why This Workflow Is Useful**
- Works around MailPoet's missing API
- Enables full automation from external tools
- Uses standard n8n nodes (HTTP Request, IF, Google Sheets)
- Secure and reusable integration pattern
- Ideal for WordPress-based businesses

**🧩 About the TWZ n8n MailPoet Plugin**
TWZ n8n MailPoet is a free, lightweight WordPress plugin designed to provide a simple and reliable integration between n8n (or any external service) and MailPoet. It implements a single MailPoet API operation:

➡ Add Subscriber

This keeps the plugin lightweight, fast, and focused.
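The HTTP Request node in this pattern simply POSTs subscriber data to the plugin's WordPress REST route. The route path, auth header, and field names below are illustrative assumptions; use the endpoint and fields documented by the TWZ N8N MailPoet plugin on your install:

```bash
# Hypothetical route, header, and field names -- adjust to the plugin's documented endpoint.
curl -X POST "https://your-wordpress-site.com/wp-json/twz-n8n-mailpoet/v1/subscribe" \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: $TWZ_API_KEY" \
  -d '{
    "email": "jane@example.com",
    "first_name": "Jane",
    "list_id": 3
  }'
```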
by Davide
This workflow automates the process of generating 3D human body models (in .glb format) from a single image using the SAM-3D model. It operates by connecting a Google Sheet as a data source with the external AI processing API.

**Use Cases**

1. ✅ Sports Analysis & Motion Optimization
3D models allow precise analysis of posture, angles, and technique. Possible applications:
- **Golf swing analysis**: Identify stance, rotation, shoulder/hip alignment, and follow-through.
- **Tennis serve biomechanics**: Optimize shoulder rotation, racket angle, leg push-off.
- **Running gait analysis**: Evaluate stride symmetry, foot strike, and body tilt.
- **Cycling posture optimization**: Reduce drag by analyzing torso angle and hand position.
- **Swimming technique evaluations**: Compare ideal vs. actual joint angles.

2. ✅ Fitness, Health & Physiotherapy
3D models can visually highlight imbalances or incorrect positions.
- **Posture correction assessments**: Identify spinal misalignment or uneven weight distribution.
- **Physical therapy progress tracking**: Compare poses over time to assess recovery.
- **Ergonomics and workplace safety**: Evaluate whether a worker's posture is safe during lifting or repetitive tasks.
- **Home fitness coaching**: Automated feedback for yoga, pilates, stretching exercises.

3. ✅ Fashion, Apparel & Virtual Try-On
Photorealistic body reconstruction helps generate tailored outfits or evaluate fit.
- **Virtual try-on for clothing brands**: Produce accurate 3D avatars to test garments digitally.
- **Custom-made fashion**: Use 3D measurements for bespoke tailoring patterns.
- **Model pose simulation**: Test clothing fit in dynamic or unusual positions (e.g., dance, athletic poses).

4. ✅ Gaming, Animation & Digital Content Creation
Quick 3D reconstruction reduces production time for digital humans.
- **Character rigging from real people**: Generate 3D avatars ready for animation.
- **Motion capture alternatives**: Recreate specific poses without expensive mocap systems.
- **VR/AR content creation**: Deploy 3D characters into immersive environments.
- **Comics, illustration, and concept art**: Use 3D poses as reference models to speed up drawing.

5. ✅ Medical, Research & Educational Applications
Human-body 3D models provide insights in scientific or practical contexts.
- **Anthropometric measurements**: Estimate height, limb length, body proportions from images.
- **Posture and musculoskeletal studies**: Analyze joint angle distribution in different poses.
- **Rehabilitation robotics or exoskeleton design**: Fit devices to a patient's real body shape.
- **Training materials for anatomy or movement science**: Generate accurate pose examples for students.

6. ✅ Security, Forensics & Reconstruction
When allowed ethically and legally, 3D models can support investigations.
- **Reconstruction of accident scenes**: Understand how a person fell, collided, or moved.
- **Analysis of body posture in video frames**: Useful for determining gesture patterns or mobility constraints.

7. ✅ Art, Photography & Creative Industries
Artists often need unusual or complex human poses.
- **Pose reference creation**: For painting, 3D sculpting, illustration, or storyboarding.
- **Recreating dynamic action scenes**: Parkour, martial arts, ballet, expressive dance.
- **Virtual studio lighting tests**: Apply simulated lighting to a 3D model before shooting.

**How It Works**
This workflow automates the process of generating 3D human body models (in .glb format) from single images using the FAL.AI SAM-3D service. It operates by connecting a Google Sheet as a data source with the external AI processing API.

Here is the operational flow:
1. Trigger & Data Fetch: The workflow begins either manually (via "Test workflow") or on a schedule. It queries a designated Google Sheet to find rows where the "3D RESULT" column is empty, indicating a new image needs processing.
2. API Request & Queuing: For each new image, it sends the image URL to the FAL.AI SAM-3D API endpoint (/fal-ai/sam-3/3d-body), which queues the job and returns a unique request_id.
3. Status Polling & Waiting: The workflow enters a polling loop. It waits 60 seconds, then checks the job's status using the request_id. If the status is not "COMPLETED", it waits another 60 seconds and checks again.
4. Result Retrieval & Storage: Once the status is "COMPLETED", the workflow fetches the final result, which contains the URL of the generated 3D model file (.glb). This file is then downloaded via an HTTP request.
5. Sheet Update: Finally, the workflow updates the original Google Sheet row. It writes the URL of the generated 3D model into the "IMAGE RESULT" column for the corresponding row_number, thus marking the task as complete.

**Set Up Steps**
To configure this workflow in your n8n environment, follow these steps:
1. Prepare the Google Sheet:
   - Clone the provided Google Sheet template.
   - Insert the URLs of the model images you want to convert into the "IMAGE MODEL" column.
   - Leave the "IMAGE RESULT" column empty; it will be populated automatically.
   - In n8n, set up a "Google Sheets OAuth2 API" credential and connect it to the Get new image and Update result nodes. Ensure the documentId points to your cloned sheet.
2. Configure the FAL.AI API Connection:
   - Create an account at fal.ai and obtain your API key.
   - In n8n, create an "HTTP Header Auth" credential. Set the Header Name to Authorization and the Header Value to Key YOUR_API_KEY_HERE (replace with your actual key).
   - Apply this credential to the following nodes: Create 3D Image, Get status, and Get Url 3D image (a request sketch appears at the end of this description).
3. Verify Workflow Logic (Key Nodes):
   - Get new image: Confirm the filtersUI is set to look for empty rows in the correct column (e.g., "3D RESULT" or "IMAGE RESULT").
   - Create 3D Image: Verify the JSON body correctly references the image URL from the previous node ({{ $json.image }}).
   - Completed? (If node): Ensure the condition checks for the string COMPLETED from {{ $json.status }}.
   - Update result: Double-check that the column mapping correctly uses row_number to match the row and updates the "IMAGE RESULT" column with the GLB URL.
4. Activate & Test:
   - Save the workflow. Use the When clicking 'Test workflow' node for an initial manual test with one image URL in your sheet.
   - Once confirmed working, you can enable the Schedule Trigger node for automatic, periodic execution.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
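For reference, here is a sketch of the job submission and status check performed by the Create 3D Image and Get status nodes. The submission path matches the endpoint named above; the queue-status URL follows fal.ai's usual queue pattern and the image_url field name is an assumption, so verify both against the template's HTTP Request nodes:

```bash
# 1) Submit the job (the response includes a request_id).
curl -X POST "https://queue.fal.run/fal-ai/sam-3/3d-body" \
  -H "Authorization: Key $FAL_KEY" \
  -H "Content-Type: application/json" \
  -d '{"image_url": "https://example.com/person.jpg"}'

# 2) Poll the status until it reports COMPLETED, then fetch the result (.glb URL).
curl "https://queue.fal.run/fal-ai/sam-3/requests/REQUEST_ID/status" \
  -H "Authorization: Key $FAL_KEY"
```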
by Aryan Shinde
**Overview**
This workflow automates the process of generating niche-specific business leads from Google Maps, leveraging the Google Places API and Google Sheets for seamless data collection and storage.

**Who Is This For?**
**Business owners**, marketers, sales teams, or anyone needing to build targeted lead lists by business type and location quickly.

**Main Use Cases**
- Building outreach lists for local marketing campaigns.
- Finding potential clients in a specific location and industry.
- Automating research for sales prospecting.

**How It Works**
1. Collect Inputs via Form: Gather your business type (search term), target location, desired number of results, and Google Maps API key using a simple built-in form.
2. Geocode Location: The workflow automatically converts your location input into geographic coordinates.
3. Search Businesses: It utilizes the Google Places API to search for businesses that match your criteria within a 10-km radius of your location (example requests appear at the end of this description).
4. Extract & Validate Data: For each business found, it extracts key contact details (name, address, phone, website, etc.), validates for essential info, and automatically appends valid leads into your connected Google Sheet—ready for action.

**Prerequisites**
- Google account connected to Google Sheets.
- Active Google Maps API key.
- Your target Google Sheet is set up to receive leads.

**Setup Steps**
1. Connect your Google Sheets account inside n8n.
2. Obtain a Google Maps API key (usually takes a few minutes from the Google Cloud Console).
3. Configure the workflow: Fill out the form inside the workflow with your business type, location, number of results, and your API key.
4. Run the workflow and watch qualified leads flow into your Google Sheet in real time.

**Customization Options**
- Adjust the search radius or result count to match your needs.
- Extend extracted fields or add filters for advanced lead qualification.
- Change the Google Sheet structure as per your business process.

**Example Output**
Each row in your sheet contains:
- Business Name
- Address
- Phone
- Website
- Google Maps URL
- Ratings & Reviews
- Business Types
- Search Query & Location
- Scraped At timestamp

> Tip: For more details and advanced customizations, refer to the in-workflow sticky notes.
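Under the hood, the two lookups roughly correspond to a Geocoding request that turns the location into coordinates, followed by a Places Nearby Search with the 10 km radius described above. The endpoints are the standard Google Maps web service URLs; the parameter values are examples:

```bash
# 1) Geocode the target location into lat/lng.
curl "https://maps.googleapis.com/maps/api/geocode/json?address=Austin%2C%20TX&key=$GOOGLE_MAPS_API_KEY"

# 2) Search businesses of the chosen type within a 10 km (10000 m) radius.
curl "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=30.2672,-97.7431&radius=10000&keyword=coffee%20shop&key=$GOOGLE_MAPS_API_KEY"
```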
by Blurit
This n8n template demonstrates how to use BlurIt to anonymize faces and/or license plates in images or videos directly within your workflow.

Use cases include: automatically anonymizing dashcam videos, securing photos before sharing them publicly, or ensuring compliance with privacy regulations like GDPR.

**How it works**
- The workflow starts with a Form Trigger where you can upload your image or video file.
- An HTTP Request node authenticates with the BlurIt API using your Client ID and Secret.
- The file is then uploaded to BlurIt via an HTTP Request to create a new anonymization task.
- A polling loop checks the task status until it succeeds.
- Once complete, the anonymized media is retrieved and saved using a Write Binary File node.

**How to use**
1. Replace the placeholder credentials in the Set Auth Config node with your BlurIt Client ID and Secret (found in your BlurIt Developer Dashboard).
2. Execute the workflow, open the provided form link, and upload an image or video.
3. The anonymized file will be written to your chosen output directory (or you can adapt the workflow to upload to cloud storage).

**Requirements**
- A BlurIt account and valid API credentials (Client ID & Secret).
- A running instance of n8n (cloud or self-hosted).
- (Optional) Access to a shared folder or cloud storage service if you want to automate file delivery.

**Need Help?**
Contact us at support@blurit.io, or visit the BlurIt Documentation. Happy Coding!
by Ossian Madisson
This n8n template makes it easy to perform DNS lookups directly within your n8n workflow using dns.google, without any API credentials.

**Use Cases**
- **Track changes:** Schedule execution and log DNS answers to track changes to records over time.
- **Monitoring and alerts:** Schedule execution for DNS monitoring to detect misconfiguration and to trigger immediate alerts.
- **Prerequisite checks:** Use in more extensive workflows to ensure DNS resolves correctly before running a website crawl or other sensitive tasks.

**Good to Know**
- Requires no API credentials. You do not need to sign up for any third-party service for DNS resolution.
- Can easily be modified to use a webhook instead of the default Forms node for external triggering.
- By default, performs lookups for: A, CNAME, AAAA, MX, TXT, NS

**How It Works**
1. The workflow checks the input for a specified DNS type. If none is found, it uses all types in a predefined list.
2. It splits the data into separate items for each DNS type.
3. It loops through all items and executes DNS resolution via the highly reliable dns.google service (see the example request at the end of this description).
4. It aggregates all results into a single, easy-to-use output structure.

**How to Use**
1. Import the template and execute the workflow to enter the domain you want to look up in the Form interface.
2. Connect the final output node to your specific use case (logging, alerting, subsequent workflow steps, etc.).
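The resolution step is a plain HTTPS request to Google's DNS-over-HTTPS JSON API, one call per record type. For example:

```bash
# Look up MX records for a domain via dns.google (no credentials required).
curl "https://dns.google/resolve?name=example.com&type=MX"
# Repeat with type=A, CNAME, AAAA, TXT, or NS to cover the default list above.
```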
by Aitor | 1Node
This n8n automation connects your Typeform forms with Vapi AI, allowing you to immediately call new form respondents with a personalized message from a Vapi AI assistant, as soon as a form submission is received.

**🧾 Requirements**

Typeform
- A Typeform account
- Typeform personal access token and credentials enabled in n8n
- A published Typeform form that includes a phone number field

Vapi
- A Vapi account with credit
- A connected phone number to make calls
- An assistant created and ready to make calls
- Your Vapi API key

**🔗 Useful Links**
- n8n Typeform Credentials Setup
- Vapi Docs

**🔄 Workflow Breakdown**
1. Trigger: Typeform Submission
   - Triggered when a new response is submitted to your Typeform.
   - The form must include a phone number field.
2. Wait 2 Minutes
   - Adds a short delay before proceeding. Useful to ensure form data is fully synced or to give time for related automations.
3. Set Vapi Fields (Manual Step)
   - Set the required fields for the Vapi API call:
     - phone number id - connected in Vapi
     - assistant id - the assistant enabled in the call
     - Vapi API key - your secure API key
4. Start Outbound Vapi Call
   - Sends a POST request to https://api.vapi.ai/call (a payload sketch appears at the end of this description).
   - Payload includes:
     - Respondent's phone number (from Typeform)
     - Vapi assistant id
     - Vapi phone number id to initiate the call

**✏️ Template Customization Guidance**

How to Adapt for Your Specific Needs
- **Personalize the Call Content:** Include additional fields in your Typeform (e.g., first name, interest, location). In n8n, map these form fields into the payload sent to Vapi. Update your Vapi assistant's prompt/script to reference these variables for a highly personalized experience.
- **Conditional Call Logic:** Use n8n's logic nodes (e.g., IF, Switch) to, for example, only trigger calls if a respondent checks a checkbox (e.g., consent or interest), or use a different Vapi assistant or phone number based on responses (e.g., language preference or location).
- **Advanced Routing:** Configure the workflow to choose different assistants, phone numbers, or call scripts based on the respondent's answers. Store assistant IDs or numbers as environment variables or reference them from a lookup table for dynamic selection.

**📞 Examples: Using Form Data to Personalize Calls**
- **Greeting by Name:** If your Typeform collects first_name, map it into the Vapi payload. Your assistant script can begin, "Hi {{first_name}}, thanks for your interest in XYZ!"
- **Custom Message Based on Product Interest:** Add a product_interest field in Typeform. Pass its value to Vapi and have the assistant mention the product, e.g., "I see you're interested in our Premium Plan…"
- **Reference Appointment Times or Locations:** Collect appointment_time and/or city fields, and tailor the call to reconfirm booking details using these inputs.

**🛠️ Troubleshooting & Tips**
- **Call Not Triggering:** Ensure your Typeform webhook connection and credentials are correctly set up in n8n. Check that your workflow is active and the trigger node is configured for the correct form.
- **Invalid Phone Number Format:** Vapi requires numbers in full international format (e.g., +11234567890). Use n8n expressions to clean or verify the incoming number if needed.
- **Missing Data in the Call:** Confirm that additional fields (e.g., first_name) exist in the Typeform response and that your mapping in n8n matches the exact field names.
- **Failed API Call:** Double-check your Vapi phone number id, assistant id, and API key. Use n8n's execution logs to inspect the payload sent to Vapi for debugging.
- **Duplicate Calls:** If your Typeform allows multiple submissions, add logic in n8n to check for and avoid duplicate calls, for example by maintaining a record of called numbers.

**🙋‍♂️ Need Help?**
Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
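For reference, here is a minimal sketch of the outbound call request described in step 4 of the workflow breakdown. The field names (assistantId, phoneNumberId, customer.number) reflect the common Vapi payload shape but should be verified against the Vapi docs linked above:

```bash
# Phone numbers must be in full international format, e.g. +11234567890.
curl -X POST "https://api.vapi.ai/call" \
  -H "Authorization: Bearer $VAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "assistantId": "YOUR_ASSISTANT_ID",
    "phoneNumberId": "YOUR_VAPI_PHONE_NUMBER_ID",
    "customer": { "number": "+11234567890" }
  }'
```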