by Piotr Sikora
# [LI] – Search Profiles

> ⚠️ Self-hosted disclaimer:
> This workflow uses the SerpAPI community node, which is available only on self-hosted n8n instances.
> For n8n Cloud, you may need to use an HTTP Request node with the SerpAPI REST API instead.

## Who's it for

Recruiters, talent sourcers, SDRs, and anyone who wants to automatically gather public LinkedIn profiles from Google search results based on keywords — across multiple pages — and log them to a Google Sheet for further analysis.

## What it does / How it works

This workflow extends the standard LinkedIn profile search to include pagination, allowing you to fetch results from multiple Google result pages in one go. Here's the step-by-step process:

1. **Form Trigger – "LinkedIn Search"**
   Collects:
   - **Keywords** (comma separated) – e.g., `python, fintech, warsaw`
   - **Pages to fetch** – number of Google pages to scrape (each page ≈ 10 results)
   Triggers the workflow when submitted.
2. **Format Keywords (Set)**
   Converts the keywords into a Google-ready query string: `("python") ("fintech") ("warsaw")`. These parentheses improve relevance in Google searches.
3. **Build Page List (Code)**
   Creates a list of pages to iterate through. For example, if "Pages to fetch" = 3, it generates 3 search batches with proper start offsets (0, 10, 20). Keeps track of:
   - Grouped keywords (`keywordsGrouped`)
   - Raw keywords
   - Submission timestamp
   A sketch of this node's logic is shown after this list.
4. **Loop Over Items (Split In Batches)**
   Loops through the page list one batch at a time. Sends each batch to SerpAPI Search and continues until all pages are processed.
5. **SerpAPI Search**
   Queries Google with: `site:pl.linkedin.com/in/ ("keyword1") ("keyword2") ("keyword3")`. Fixed to the Warsaw, Masovian Voivodeship, Poland location. The `start` parameter controls pagination.
6. **Check how many results are returned (Switch)**
   - If no results → triggers **No profiles found**.
   - If results found → passes data forward.
7. **Split Out**
   Extracts each LinkedIn result from the `organic_results` array.
8. **Get Full Name to property of object (Code)**
   Extracts a clean full name from the search result title (text before "–" or "|").
9. **Append profile in sheet (Google Sheets)**
   Saves the following fields into your connected sheet:

   | Column | Description |
   |---------|-------------|
   | Date | Submission timestamp |
   | Profile | Public LinkedIn profile URL |
   | Full name | Extracted candidate name |
   | Keywords | Original keywords from the form |

10. **Loop Over Items (continue)**
    After writing each batch, it loops to the next Google page until all pages are complete.
11. **Form Response (final step)**
    Sends a confirmation back to the user after all pages are processed: *Check linked file*.
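To illustrate step 3, here is a minimal sketch of what the Build Page List Code node might contain. The form field names (`Pages to fetch`, `Keywords`) and output property names are assumptions based on the description above, not the template's exact code:

```javascript
// Build Page List (Code) – minimal sketch, assuming the form fields are
// named "Pages to fetch" and "Keywords" as described above.
const pages = Number($json['Pages to fetch']) || 1;
const keywordsRaw = $json['Keywords'] || '';

// Wrap each keyword in parentheses and quotes: ("python") ("fintech") ("warsaw")
const keywordsGrouped = keywordsRaw
  .split(',')
  .map(k => k.trim())
  .filter(Boolean)
  .map(k => `("${k}")`)
  .join(' ');

// One item per Google result page, with start offsets 0, 10, 20, ...
return Array.from({ length: pages }, (_, i) => ({
  json: {
    start: i * 10,
    keywordsGrouped,
    keywordsRaw,
    submittedAt: new Date().toISOString(),
  },
}));
```

The Get Full Name node (step 8) can use a similar one-liner, e.g. `($json.title || '').split(/[–|]/)[0].trim()`, to keep only the text before the "–" or "|" separator.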
## 🧾 Google Sheets Setup

Before using the workflow, prepare your Google Sheet with these columns in row 1:

| Column Name | Description |
|--------------|-------------|
| Date | Automatically filled with the form submission time |
| Profile | LinkedIn profile link |
| Full name | Extracted name from search results |
| Keywords | Original search input |

> You can expand the sheet to include optional fields like Snippet, Job Title, or Notes if you modify the mapping in the **Append profile in sheet** node.

## Requirements

- **SerpAPI account** – with the API key stored securely in **n8n Credentials**.
- **Google Sheets OAuth2 credentials** – connected to your target sheet with edit access.
- **n8n instance** (Cloud or self-hosted)

> Note: The SerpAPI node is part of the Community package and may require self-hosted n8n.

## How to set up

1. Import the **[LI] - Search profiles** workflow into n8n.
2. Connect your credentials:
   - **SerpAPI** – use your API key.
   - **Google Sheets OAuth2** – ensure you have write permissions.
3. Update the Google Sheets node to point to your own spreadsheet and worksheet.
4. (Optional) Edit the `location` field in **SerpAPI Search** for different regions.
5. Activate the workflow and open the public form (via the webhook URL).
6. Enter your keywords and specify the number of pages to fetch.

## How to customize the workflow

- **Change search region:** Modify the `location` in the SerpAPI node or change the domain to `site:linkedin.com/in/` for global searches.
- **Add pagination beyond 3–4 pages:** Increase "Pages to fetch" — but note that excessive pages may trigger Google rate limits.
- **Avoid duplicates:** Add a **Google Sheets → Read** + **IF** node before appending new URLs.
- **Add notifications:** Add **Slack**, **Discord**, or **Email** nodes after Google Sheets to alert your team when new data arrives.
- **Capture more data:** Map additional fields like `title`, `snippet`, or `position` into your Sheet.

## Security notes

- Never store API keys directly in nodes — always use n8n Credentials.
- Keep your Google Sheet private and limit edit access.
- Remove identifying data before sharing your workflow publicly.

## 💡 Improvement suggestions

| Area | Recommendation | Benefit |
|-------|----------------|----------|
| Dynamic location | Add a "Location" field to the form and feed it to SerpAPI dynamically. | Broader and location-specific searches |
| Rate limiting | Add a short Wait node (e.g., 1–2s) between page fetches. | Prevents API throttling |
| De-duplication | Check for existing URLs before appending. | Prevents duplicates |
| Logging | Add a second sheet or log file with timestamps per run. | Easier debugging and tracking |
| Data enrichment | Add a LinkedIn or People Data API enrichment step. | Collect richer candidate data |

✅ **Summary:** This workflow automates the process of searching public LinkedIn profiles from Google across multiple pages. It formats user-entered keywords into advanced Google queries, iterates through paginated SerpAPI results, extracts profile data, and stores it neatly in a Google Sheet — all through a single, user-friendly form.
by Muhammad Farooq Iqbal
This n8n template demonstrates how to generate animated videos from static images using the ByteDance Seedance 1.5 Pro model through the KIE.AI API. The workflow creates dynamic video content based on text prompts and input images, supporting custom aspect ratios, resolutions, and durations for versatile video creation.

Use cases are many: create animated videos from product photos, generate social media content from images, produce video ads from static graphics, create animated story videos, transform photos into dynamic content, generate video presentations, create animated thumbnails, or produce video content for marketing campaigns!

## Good to know

- The workflow uses the ByteDance Seedance 1.5 Pro model via the KIE.AI API for high-quality image-to-video generation
- Creates animated videos from static images based on text prompts
- Supports multiple aspect ratios (9:16 vertical, 16:9 horizontal, 1:1 square)
- Configurable resolution options (720p, 1080p, etc.)
- Customizable video duration (in seconds)
- KIE.AI pricing: check current rates at https://kie.ai/ for video generation costs
- Processing time: varies based on video length and the KIE.AI queue, typically 1–5 minutes
- Image requirements: the file must be publicly accessible via URL (HTTPS recommended)
- Supported image formats: PNG, JPG, JPEG
- Output format: video file URL (MP4) ready for download or streaming
- An automatic polling system handles processing status checks and retries

## How it works

1. **Video Parameters Setup**: The workflow receives the video prompt and image URL (set in the 'Set Video Parameters' node or via trigger)
2. **Video Generation Submission**: Parameters are submitted to the KIE.AI API using the ByteDance Seedance 1.5 Pro model
3. **Processing Wait**: The workflow waits 5 seconds, then polls the generation status
4. **Status Check**: Checks whether video generation is complete, queuing, generating, or failed
5. **Polling Loop**: If still processing, the workflow waits and checks again until completion
6. **Video URL Extraction**: Once complete, extracts the generated video file URL from the API response
7. **Video Download**: Downloads the generated video file for local use or further processing

The workflow automatically handles the different processing states (queuing, generating, success, fail) and retries polling until video generation is complete (a code sketch of this submit-and-poll pattern follows the How to use section). The Seedance model creates smooth, animated videos from static images based on the provided text prompt, bringing images to life with natural motion.

## How to use

1. **Setup Credentials**: Configure the KIE.AI API key as an HTTP Bearer Auth credential
2. **Set Video Parameters**: Update the 'Set Video Parameters' node with:
   - `prompt`: Text description of the desired video animation/scene
   - `image_url`: Publicly accessible URL of the input image
3. **Configure Video Settings**: Adjust in the 'Submit Video Generation Request' node:
   - `aspect_ratio`: 9:16 (vertical), 16:9 (horizontal), 1:1 (square)
   - `resolution`: 720p, 1080p, etc.
   - `duration`: Video length in seconds (e.g., 8, 10, 15)
4. **Deploy Workflow**: Import the template and activate the workflow
5. **Trigger Generation**: Use the manual trigger to test, or replace it with a webhook or another trigger
6. **Receive Video**: Get the generated video file in the output, ready for download or streaming

**Pro tip:** For best results, ensure your image is hosted on a public URL (HTTPS) and matches the desired aspect ratio. Use clear, high-quality images for better video generation. Write detailed, descriptive prompts to guide the animation: the more specific your prompt, the better the video output. The workflow automatically handles polling and status checks, so you don't need to worry about timing.
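The submit-and-poll flow in steps 2–5 can be pictured as the JavaScript sketch below. Everything in it (base URL, endpoint paths, payload field names, model identifier, response shape) is an assumption for illustration only; the actual workflow does this with HTTP Request, Wait, and IF nodes, and the real contract lives in the KIE.AI API docs:

```javascript
// Minimal sketch of the submit-and-poll pattern this workflow implements.
// Endpoints, fields, and response shape are assumptions; check the KIE.AI docs.
const API_BASE = 'https://api.kie.ai';          // assumed base URL
const API_KEY = process.env.KIE_AI_API_KEY;     // stored as an n8n credential in practice

async function generateVideo(prompt, imageUrl) {
  // 1. Submit the generation request (the 'Submit Video Generation Request' node).
  const submit = await fetch(`${API_BASE}/v1/video/generate`, {  // assumed path
    method: 'POST',
    headers: { Authorization: `Bearer ${API_KEY}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'bytedance-seedance-1.5-pro',      // assumed model identifier
      prompt,
      image_url: imageUrl,
      aspect_ratio: '9:16',
      resolution: '1080p',
      duration: 8,
    }),
  });
  const { taskId } = await submit.json();

  // 2. Poll until the task leaves the queuing/generating states.
  while (true) {
    await new Promise(r => setTimeout(r, 5000)); // the Wait node's 5-second pause
    const res = await fetch(`${API_BASE}/v1/video/status/${taskId}`, { // assumed path
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const status = await res.json();
    if (status.state === 'success') return status.videoUrl;
    if (status.state === 'fail') throw new Error('Video generation failed');
  }
}
```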
## Requirements

- **KIE.AI API** account for accessing ByteDance Seedance 1.5 Pro video generation
- **Image File URL** that is publicly accessible (HTTPS recommended)
- **Text Prompt** describing the desired video animation/scene
- **n8n** instance (cloud or self-hosted)
- Supported image formats: PNG, JPG, JPEG

## Customizing this workflow

- **Trigger Options**: Replace the manual trigger with a webhook trigger for API-based video generation, a schedule trigger for batch processing, or a form trigger for user image uploads.
- **Video Settings**: Modify aspect ratio, resolution, and duration in the 'Submit Video Generation Request' node to match your content needs (TikTok vertical, YouTube horizontal, Instagram square, etc.).
- **Prompt Engineering**: Enhance prompts in the 'Set Video Parameters' node with detailed descriptions, camera movements, animation styles, and scene details for better video quality.
- **Output Formatting**: Modify the 'Extract Video URL' code node to format the output differently (add metadata, include processing time, add file size, etc.).
- **Error Handling**: Add notification nodes (Email, Slack, Telegram) to alert when video generation fails or completes.
- **Post-Processing**: Add nodes after video generation to save to cloud storage, upload to YouTube/Vimeo, send to video editing tools, or integrate with content management systems.
- **Batch Processing**: Add loops to process multiple images from a list or spreadsheet automatically, generating a video for each image.
- **Storage Integration**: Connect the output to Google Drive, Dropbox, S3, or other storage services for organized video file management.
- **Social Media Integration**: Automatically post generated videos to TikTok, Instagram Reels, YouTube Shorts, or other platforms.
- **Video Enhancement**: Chain with other video processing workflows to add captions, music, or transitions, or combine multiple generated videos.
- **Aspect Ratio Variations**: Generate multiple versions of the same video in different aspect ratios (9:16, 16:9, 1:1) for different platforms.
by PDF Vector
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Transform Complex Research Papers into Accessible Summaries

This workflow automatically generates multiple types of summaries from research papers, making complex academic content accessible to different audiences. By combining PDF Vector's advanced parsing capabilities with GPT-4's language understanding, researchers can quickly digest papers outside their expertise, communicate findings to diverse stakeholders, and create social media-friendly research highlights.

### Target Audience & Problem Solved

This template is designed for:

- **Research communicators** translating complex findings for public audiences
- **Journal editors** creating accessible abstracts and highlights
- **Science journalists** quickly understanding technical papers
- **Academic institutions** improving research visibility and impact
- **Funding agencies** reviewing large volumes of research outputs

It solves the critical challenge of research accessibility by automatically generating summaries tailored to different audience needs, from technical experts to the general public.

### Prerequisites

- n8n instance with the PDF Vector node installed
- OpenAI API key with GPT-4 or GPT-3.5 access
- PDF Vector API credentials
- Basic understanding of webhook setup
- Optional: Slack/Email integration for notifications
- Minimum 20 API credits per paper summarized

### Step-by-Step Setup Instructions

1. **Configure API Credentials**
   - Navigate to the n8n Credentials section
   - Add PDF Vector credentials with your API key
   - Add OpenAI credentials with your API key
   - Test both connections to ensure they work
2. **Set Up the Webhook Endpoint**
   - Import the workflow template into n8n
   - Note the webhook URL from the "Webhook - Paper URL" node
   - This URL will receive POST requests with paper URLs
   - Example request format:

     ```json
     { "paperUrl": "https://example.com/paper.pdf" }
     ```
3. **Configure Summary Models**
   - Review the OpenAI model settings in each summary node
   - GPT-4 recommended for executive and technical summaries
   - GPT-3.5-turbo suitable for lay and social media summaries
   - Adjust temperature settings for creativity vs. accuracy
4. **Customize Output Formats**
   - Modify the "Combine All Summaries" node for your needs
   - Add additional fields or metadata as required
   - Configure the response format (JSON, HTML, plain text)
5. **Test the Workflow**
   - Use a tool like Postman or curl to send a test request
   - Monitor the execution for any errors
   - Verify all four summary types are generated
   - Check the response time and adjust the timeout if needed
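As a quick way to exercise step 5, a test request can also be sent from a short Node.js script. The webhook URL below is a placeholder standing in for the one n8n shows on your own "Webhook - Paper URL" node, and the expected response keys are illustrative:

```javascript
// Hypothetical test request for step 5. Replace the URL with the webhook URL
// shown on your own "Webhook - Paper URL" node.
const WEBHOOK_URL = 'https://your-n8n-host/webhook/paper-summary'; // placeholder

async function testSummarize() {
  const res = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ paperUrl: 'https://example.com/paper.pdf' }),
  });
  // Expect all four summary types in the combined response.
  const summaries = await res.json();
  console.log(Object.keys(summaries)); // e.g., executive, technical, lay, social
}

testSummarize().catch(console.error);
```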
### Implementation Details

The workflow implements a sophisticated summarization pipeline:

- **PDF Parsing**: Uses LLM-enhanced parsing for accurate extraction from complex layouts
- **Parallel Processing**: Generates all summary types simultaneously for efficiency
- **Audience Targeting**: Each summary type uses specific prompts and constraints
- **Quality Control**: Structured prompts ensure consistent, high-quality outputs
- **Flexible Output**: Returns all summaries in a single API response

### Customization Guide

**Adding Custom Summary Types:** Create new summary nodes with specialized prompts:

```
// Example: Policy Brief Summary
{
  "content": "Create a policy brief (max 300 words) highlighting: Policy-relevant findings. Recommendations for policymakers. Societal implications. Implementation considerations. Paper content: {{ $json.content }}"
}
```

**Modifying Summary Lengths:** Adjust the word limits in each summary prompt:

```
// In Executive Summary node:
"max 500 words" // Change to your desired length
// In Tweet Summary node:
"max 280 characters" // Twitter limit
```

**Adding Language Translation:** Extend the workflow with translation nodes:

```
// After summary generation, add:
"Translate this summary to Spanish: {{ $json.executiveSummary }}"
```

**Implementing Caching:** Add a caching layer to avoid reprocessing:

- Use Redis or n8n's static data
- Cache based on the paper DOI or a URL hash
- Set an appropriate TTL for cache entries

**Batch Processing Enhancement:** For multiple papers, modify the workflow:

- Accept an array of paper URLs
- Use a SplitInBatches node for processing
- Aggregate results before responding

### Summary Types

- **Executive Summary**: 1-page overview for decision makers
- **Technical Summary**: Detailed summary for researchers
- **Lay Summary**: Plain language for a general audience
- **Social Media**: Tweet-sized key findings

### Key Features

- Parse complex academic PDFs with LLM enhancement
- Generate multiple summary types simultaneously
- Extract and highlight key methodology and findings
- Create audience-appropriate language and depth
- API-driven for easy integration

### Advanced Features

**Quality Metrics:** Add a quality assessment node:

```javascript
// Evaluate summary quality
const qualityChecks = {
  hasKeyFindings: summary.includes('findings'),
  appropriateLength: summary.length <= maxLength,
  noJargon: !technicalTerms.some(term => summary.includes(term))
};
```

**Template Variations:** Create field-specific templates:

- Medical research: include clinical implications
- Engineering papers: focus on technical specifications
- Social sciences: emphasize methodology and limitations
by Yaron Been
# Generate 3D Models & Textures from Images with Hunyuan3D AI

This workflow connects n8n → Replicate API to generate 3D-like outputs using the ndreca/hunyuan3d-2.1-test model. It handles everything: sending the request, waiting for processing, checking status, and returning results.

## ⚡ Section 1: Trigger & Setup

### ⚙️ Nodes

1️⃣ **On Clicking "Execute"**
- **What it does:** Starts the workflow manually in n8n.
- **Why it's useful:** Great for testing or one-off runs before automation.

2️⃣ **Set API Key**
- **What it does:** Stores your **Replicate API Key**.
- **Why it's useful:** Keeps authentication secure and reusable across HTTP nodes.

💡 **Beginner Benefit**
- No coding needed — just paste your API key once.
- Easy to test: press Execute, and you're live.

## 🤖 Section 2: Send Job to Replicate

### ⚙️ Nodes

3️⃣ **Create Prediction (HTTP Request)**
- **What it does:** Sends a **POST request** to Replicate's API with:
  - Model version (`70d0d816...ae75f`)
  - Input image URL
  - Parameters like `steps`, `seed`, `generate_texture`, `remove_background`
- **Why it's useful:** This kicks off the AI generation job on Replicate's servers. (A request sketch appears at the end of this overview.)

4️⃣ **Extract Prediction ID (Code)**
- **What it does:** Grabs the **prediction ID** from the API response and builds a status-check URL.
- **Why it's useful:** Every job has a unique ID — this lets us track progress later.

💡 **Beginner Benefit**
- You don't need to worry about JSON parsing — the workflow extracts the ID automatically.
- Everything is reusable if you run multiple generations.

## ⏳ Section 3: Poll Until Complete

### ⚙️ Nodes

5️⃣ **Wait (2s)**
- **What it does:** Pauses for 2 seconds before checking the job status.
- **Why it's useful:** Prevents spamming the API with too many requests.

6️⃣ **Check Prediction Status (HTTP Request)**
- **What it does:** GET request to see if the job is finished.

7️⃣ **Check If Complete (IF Node)**
- **What it does:** If status = `succeeded` → process results. If not → loops back to Wait and checks again.

💡 **Beginner Benefit**
- Handles the waiting logic for you — no manual refreshing needed.
- Keeps looping until the AI job is really done.

## 📦 Section 4: Process the Result

### ⚙️ Nodes

8️⃣ **Process Result (Code)**
- **What it does:** Extracts:
  - `status`
  - `output` (final generated file/URL)
  - `metrics` (performance stats)
  - Timestamps (`created_at`, `completed_at`)
  - Model info
- **Why it's useful:** Packages the response neatly for storage, email, or sending elsewhere.

💡 **Beginner Benefit**
- Get clean, structured data ready for saving or sending.
- Can be extended easily: push the output to Google Drive, Notion, or Slack.

## 📊 Workflow Overview

| Section | What happens | Key Nodes | Benefit |
| --------------------- | --------------------------------- | ----------------------------- | --------------------------------- |
| ⚡ Trigger & Setup | Start workflow + set API key | Manual Trigger, Set | Easy one-click start |
| 🤖 Send Job | Send input & get prediction ID | Create Prediction, Extract ID | Launches AI generation |
| ⏳ Poll Until Complete | Waits + checks status until ready | Wait, Check Status, IF | Automated loop, no manual refresh |
| 📦 Process Result | Collects output & metrics | Process Result | Clean result for next steps |

## 🎯 Overall Benefits

- ✅ Fully automates Replicate model runs
- ✅ Handles waiting, retries, and completion checks
- ✅ Clean final output with status + metrics
- ✅ Beginner-friendly — just add your API key + input image
- ✅ Extensible: connect results to Google Sheets, Gmail, Slack, or databases

✨ **In short:** This is a no-code AI image-to-3D content generator powered by Replicate and automated by n8n.
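Here is a minimal sketch of what the Create Prediction and Extract Prediction ID steps do, using Replicate's predictions endpoint. The input parameter values are illustrative, and the version hash stays truncated because the description above only shows `70d0d816...ae75f`:

```javascript
// Minimal sketch of the Create Prediction and Extract Prediction ID nodes.
// The version hash is truncated on purpose; copy the full hash from the
// workflow or the Replicate model page.
const REPLICATE_TOKEN = process.env.REPLICATE_API_TOKEN;

async function createPrediction(imageUrl) {
  const res = await fetch('https://api.replicate.com/v1/predictions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${REPLICATE_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      version: '70d0d816...ae75f', // placeholder: use the full model version hash
      input: {
        image: imageUrl,
        steps: 50,                 // example values; tune per the model's docs
        seed: 1234,
        generate_texture: true,
        remove_background: true,
      },
    }),
  });
  const prediction = await res.json();
  // What "Extract Prediction ID" pulls out: the ID and the status-check URL.
  return { id: prediction.id, statusUrl: prediction.urls?.get };
}
```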
by Sk developer
# 🚀 Facebook to MP4 Video Downloader – Fully Customizable Automated Workflow

Easily convert Facebook videos into downloadable MP4 files using the Facebook Video Downloader API. This n8n workflow automates fetching videos, downloading them, uploading them to Google Drive, and logging results in Google Sheets. Users can modify and extend this flow according to their own needs (e.g., add email notifications, change the storage location, or use another API).

## 📝 Node-by-Node Explanation

1. **On form submission** → Triggers when a user submits a Facebook video URL via the form. (You can customize this form to include email or multiple URLs.)
2. **Facebook RapidAPI Request** → Sends a POST request to the Facebook Video Downloader API to fetch downloadable MP4 links. (Easily replace or update API parameters as needed; see the request sketch at the end of this description.)
3. **If Node** → Checks the API response for errors before proceeding. (You can add more conditions to handle custom error scenarios.)
4. **MP4 Downloader** → Downloads the Facebook video file from the received media URL. (You can change download settings, add quality filters, or store multiple resolutions.)
5. **Upload to Google Drive** → Uploads the downloaded MP4 file to a Google Drive folder. (Easily switch to Dropbox, S3, or any other storage service.)
6. **Google Drive Set Permission** → Sets the uploaded file to be publicly shareable. (You can make it private or share it only with specific users.)
7. **Google Sheets** → Logs successful conversions with the original URL and the shareable MP4 link. (Customizable for additional fields like video title, size, or download time.)
8. **Wait Node** → Delays before logging failed conversions to avoid rapid writes. (You can adjust the wait duration or add retry attempts.)
9. **Google Sheets Append Row** → Records failed conversion attempts with N/A as the Drive URL. (You can add notification alerts for failed downloads.)

## ✅ Use Cases

- Automate Facebook video downloads for social media teams
- Instantly generate shareable MP4 links for clients or marketing campaigns
- Maintain a centralized log of downloaded videos for reporting
- Customizable flow for different video quality, formats, or storage needs

## 🚀 Benefits

- Fast and reliable Facebook video downloading with the Facebook Video Downloader API
- Flexible and fully customizable – adapt nodes, storage, and notifications as required
- Automatic error handling and logging in Google Sheets
- Cloud-based storage with secure and shareable Google Drive links
- Seamless integration with n8n and the Facebook Video Downloader API for scalable automation

🔑 **Resolved:** Manual Facebook video downloads are now fully automated, customizable, and scalable using the Facebook Video Downloader API, Google Drive uploads, and detailed logging via Google Sheets.
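For orientation, here is a sketch of the RapidAPI request in step 2. The host name, endpoint path, and response fields below are placeholders; RapidAPI shows the exact values on the API's endpoint page, and only the `X-RapidAPI-Key`/`X-RapidAPI-Host` header convention is standard:

```javascript
// Sketch of the "Facebook RapidAPI Request" step. Host and path are
// placeholders; copy the real ones from the API's RapidAPI endpoint page.
const RAPIDAPI_KEY = process.env.RAPIDAPI_KEY;
const RAPIDAPI_HOST = 'facebook-video-downloader.p.rapidapi.com'; // assumed host

async function fetchMp4Links(videoUrl) {
  const res = await fetch(`https://${RAPIDAPI_HOST}/download`, { // assumed path
    method: 'POST',
    headers: {
      'X-RapidAPI-Key': RAPIDAPI_KEY,
      'X-RapidAPI-Host': RAPIDAPI_HOST,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ url: videoUrl }),
  });
  const data = await res.json();
  // The If node checks a response like this for errors before downloading.
  if (data.error) throw new Error(`Downloader API error: ${data.error}`);
  return data; // expected to contain one or more MP4 media URLs
}
```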
by Yashraj singh sisodiya
# AI Latest 24 Update Workflow Explanation

## Aim

The aim of the AI Latest 24 Update Workflow is to automate the daily collection and distribution of the most recent Artificial Intelligence and Technology news from the past 24 hours. It ensures users receive a clean, well-structured HTML email containing headlines, summaries, and links to trusted sources, styled professionally for easy reading.

## Goal

The goal is to:

- **Automate news retrieval** by fetching the latest AI developments from trusted sources daily.
- **Generate structured HTML output** with bold headlines, concise summaries, and clickable links.
- **Format the content professionally** with inline CSS, ensuring the email is visually appealing.
- **Distribute updates automatically** to selected recipients via Gmail.
- **Provide reusability** so the HTML can be processed for other platforms if needed.

This ensures recipients receive accurate, up-to-date, and well-formatted AI news without manual effort.

## Requirements

The workflow relies on the following components and configurations:

**n8n Platform** – Acts as the automation environment for scheduling, fetching, formatting, and delivering AI news updates.

**Node Requirements**

1. **Schedule Trigger**
   - Runs the workflow every day at 10:00 AM.
   - Automates the process without manual initiation.
2. **Message a model (Perplexity API)**
   - Uses the sonar-pro model from Perplexity AI.
   - Fetches the most recent AI developments from the past 24 hours.
   - Outputs results as a self-contained HTML email with inline CSS (card-style layout).
3. **Send a message (Gmail)**
   - Sends the generated HTML email with the subject: "Latest Tech and AI news update 🚀".
   - Recipients: xyz@gmail.com.
4. **HTML Node**
   - Processes the AI model's HTML response.
   - Ensures the email formatting is clean, valid, and ready for delivery.

**Credentials**

- **Perplexity API account**: For fetching AI news.
- **Gmail OAuth2 account**: For secure email delivery.

**Input Requirements** – No manual input required; the workflow runs automatically on schedule.

**Output** – A daily AI news digest email containing:

- Headlines in bold.
- One-sentence summaries in normal text.
- Full URLs as clickable links.
- A clean card-based format with hover effects.

## API Usage

The workflow integrates APIs to achieve automation:

1. **Perplexity API**
   - Used in the Message a model node.
   - The API fetches the latest AI news, ensures data accuracy, and outputs HTML-formatted content.
   - Provides styling via inline CSS (Segoe UI font, light background, card design).
   - Ensures the news is fresh (past 24 hours only).
2. **Gmail API**
   - Used in the Send a message node.
   - Handles secure delivery of emails with OAuth2 authentication.
   - Sends AI news updates directly to inboxes.

## HTML Processing

The HTML Node ensures proper formatting before email delivery (a sketch follows the Workflow Summary):

- **Process**: Cleans and validates the HTML (`$json.message`) generated by Perplexity.
- **Output**: A self-contained HTML email with proper structure (`<html>`, `<head>`, `<body>`).
- **Relevance**: Ensures Gmail sends a **styled digest** instead of raw text.

## Workflow Summary

The AI Latest 24 Update Workflow automates daily AI news collection and delivery by:

1. Triggering at 10:00 AM using the Schedule Trigger node.
2. Fetching AI news via the Perplexity API (Message a model node).
3. Formatting results into clean HTML (HTML node).
4. Sending the email via the Gmail API to multiple recipients.

This workflow ensures a seamless, hands-off system where recipients get accurate, fresh, and well-designed AI news updates every day.
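The HTML-cleanup step could look like the following n8n Code node. It assumes the Perplexity response arrives on `$json.message`, as noted in the HTML Processing section, and the wrapper markup is illustrative rather than the template's exact code:

```javascript
// Sketch of the HTML-cleanup step, written as an n8n Code node.
const raw = $json.message || '';

// If the model already returned a full document, pass it through; otherwise
// wrap the fragment so Gmail receives valid, self-contained HTML.
const html = /<html[\s>]/i.test(raw)
  ? raw
  : `<!DOCTYPE html>
<html>
  <head><meta charset="utf-8"></head>
  <body style="font-family:'Segoe UI',sans-serif;background:#f5f6fa;padding:16px;">
    ${raw}
  </body>
</html>`;

return [{ json: { html } }];
```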
by Sk developer
# Automate Text To Video Generation with Google Veo3 API and Google Drive Integration

Create CGI ads effortlessly by integrating the Google Veo3 API for video generation and uploading to Google Drive, with seamless email notifications.

## Node-by-Node Explanation

1. **On form submission**: Triggers the workflow when a form is submitted with a prompt for the video.
2. **Wait for API Response**: Waits 35 seconds for the API to respond.
3. **API Request: Check Task Status**: Sends an HTTP request to check the task status for success or failure.
4. **Condition: Task Output Status**: Checks the task's output status and triggers the appropriate action (success, processing, or failure; see the branching sketch at the end of this description).
5. **Wait for Task to Complete**: Waits another 30 seconds before rechecking the task's completion status.
6. **Send Email: API Error - Task ID Missing**: Sends an email if the task ID is missing from the API response.
7. **Upload File to Google Drive**: Uploads the generated video to Google Drive.
8. **Set Google Drive Permissions**: Configures the permissions for the uploaded video on Google Drive.
9. **Send an email: Video Link**: Sends a final email with the link to the completed video on Google Drive.
10. **Download Video**: Downloads the video from the generated URL.

## How to Obtain a RapidAPI Key

1. Visit the Google Veo3 API on RapidAPI.
2. Sign up or log in to your account.
3. Subscribe to a Google Veo3 API plan.
4. Copy the API key provided in your RapidAPI dashboard.

## How to Configure Google Drive

1. Go to the Google Cloud Console.
2. Enable the Google Drive API.
3. Create OAuth 2.0 credentials and download the credentials file.
4. In your workflow, authenticate using these credentials to upload and manage files on Google Drive.

## Use Case

This workflow is ideal for businesses looking to automate CGI video creation for advertisements using the Google Veo3 API, with seamless file management and sharing via Google Drive.

## Benefits

- **Automation**: Completely automates the CGI video creation and sharing process.
- **Error Handling**: Sends error notifications for task failures or missing task IDs.
- **File Management**: Automatically uploads and manages videos on Google Drive.
- **Easy Sharing**: Generates shareable video links via email.

## Who Is This For?

- Digital marketers looking to create ads at scale.
- Creative agencies producing CGI content.
- Developers integrating API workflows for video generation.

**Link to Google Veo3 API:** Google Veo3 API on RapidAPI
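The branching logic of step 4 can be pictured as the Code-node sketch below. The status field name and its values are assumptions for illustration; the actual workflow implements this with an IF/Switch node on the API response:

```javascript
// Sketch of the "Condition: Task Output Status" branching as an n8n Code node.
// Field names and status values are assumed, not the template's exact contract.
const status = ($json.output?.status || '').toLowerCase();

let branch;
if (!$json.taskId) {
  branch = 'error_missing_task_id'; // routes to the "API Error - Task ID Missing" email
} else if (status === 'success') {
  branch = 'success';               // routes to download + Google Drive upload
} else if (status === 'failed') {
  branch = 'failure';               // routes to the failure notification
} else {
  branch = 'processing';            // routes back to the 30-second wait and recheck
}

return [{ json: { ...$json, branch } }];
```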
by Oneclick AI Squad
# Live Airport Delay Dashboard with FlightStats & Team Alerts

## Description

Automates live monitoring of airport delays using the FlightStats API. Stores and displays delay data, with Slack alerts to operations/sales teams for severe delays.

## Essential Information

- Runs on a scheduled trigger (e.g., hourly or daily).
- Fetches real-time delay data from the FlightStats API.
- Stores data in Google Sheets and alerts teams via Slack for severe delays.

## System Architecture

**Delay Monitoring Pipeline**:

- **Set Schedule**: Triggers the workflow hourly or daily via Cron.
- **FlightStats API**: Retrieves live airport delay data.

**Data Management Flow**:

- **Set Output Data**: Prepares data for storage or display.
- **Merge API Data**: Combines and processes delay data.

**Alert and Display**:

- **Send Response via Slack**: Alerts ops/sales for severe delays.
- **No Action for Minor Delays**: Skips minor delays with no action.

## Implementation Guide

1. Import the workflow JSON into n8n.
2. Configure the Cron node for the desired schedule (e.g., every 1 hr).
3. Set up FlightStats API credentials and endpoint (e.g., https://api.flightstats.com).
4. Configure Google Sheets or Notion for data storage/display.
5. Test with a sample API call and verify Slack alerts.
6. Adjust delay severity thresholds as needed (see the sketch at the end of this section).

## Technical Dependencies

- Cron service for scheduling.
- FlightStats API for real-time delay data.
- Google Sheets API or Notion API for data storage/display.
- Slack API for team notifications.
- n8n for workflow automation.

## Database & Sheet Structure

**Delay Tracking Sheet** (e.g., `AirportDelays`):

- Columns: `airport_code`, `delay_status`, `delay_minutes`, `timestamp`, `alert_sent`
- Example: `JFK, Severe, 120, 2025-07-29T20:28:00Z, Yes`

## Customization Possibilities

- Adjust the Cron schedule for different frequencies (e.g., every 30 min).
- Modify FlightStats API parameters to track specific airports.
- Customize Slack alert messages in the Send Response via Slack node.
- Integrate with a dashboard tool (e.g., Google Data Studio) for a live display.
- Add email alerts as an additional notification channel.
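A severity check matching the sheet columns above might look like the following n8n Code node. The 90-minute cutoff is an assumption; step 6 of the Implementation Guide says to tune the threshold to your own needs:

```javascript
// Sketch of a delay-severity check, written as an n8n Code node.
const SEVERE_THRESHOLD_MINUTES = 90; // assumed cutoff for "severe"

return $input.all().map(item => {
  const delayMinutes = Number(item.json.delay_minutes) || 0;
  const severe = delayMinutes >= SEVERE_THRESHOLD_MINUTES;
  return {
    json: {
      airport_code: item.json.airport_code,
      delay_status: severe ? 'Severe' : 'Minor',
      delay_minutes: delayMinutes,
      timestamp: new Date().toISOString(),
      // A downstream IF node routes severe delays to Slack; minor ones
      // take the "No Action" branch.
      alert_sent: severe ? 'Yes' : 'No',
    },
  };
});
```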
by SpaGreen Creative
# Automated Web Form Data Collection and Storage to Google Sheets

## Overview

This n8n workflow allows you to collect data from a web form and automatically store it in a Google Sheet. It includes data cleanup, date stamping, optional batching, and throttling for smooth handling of single or bulk submissions.

## What It Does

- Accepts data submitted from a frontend form via HTTP POST
- Cleans and structures the incoming JSON data
- Adds the current date automatically
- Appends structured data into a predefined Google Sheet
- Supports optional batch processing and a wait/delay mechanism to control data flow

## Features

- Webhook trigger for external form submissions
- JavaScript-based data cleaning and formatting
- Looping and delay nodes to manage bulk submissions
- Direct integration with Google Sheets via OAuth2
- Fully mapped columns to match the sheet structure
- Custom date field (`submitted_date`) auto-generated per entry

## Who's It For

This workflow is perfect for:

- Developers or marketers collecting lead data via online forms
- Small businesses tracking submissions from landing pages or contact forms
- Event organizers managing RSVP or booking forms
- Anyone needing to collect and store structured data in Google Sheets automatically

## Prerequisites

Make sure the following are ready before use:

- An n8n instance (self-hosted or cloud)
- A Google account with edit access to the target Google Sheet
- **Google Sheets OAuth2 API credentials** configured in n8n
- A web form or app capable of sending POST requests with the following fields: `business_name`, `location`, `whatsapp`, `email`, `name`

## Google Sheet Format

Ensure your Google Sheet contains the following exact column names (case-sensitive):

| Business Name | Location | WhatsApp Number | Email | Name | Date |
|---------------|------------|------------------|----------------------|----------------|------------|
| SpaGreen | Bangladesh | 8801322827753 | spagreen@gmail.com | Abdul Mannan | 2025-09-14 |
| Dev Code Journey | Bangladesh | 8801322827753 | admin@gmail.com | Shakil Ahammed | 2025-09-14 |

> Note: The "Email" column includes a trailing space — this must match exactly in both the sheet and the column mapping settings.

## Setup Instructions

### 1. Configure Webhook

- Use the Webhook node with path: `/93a81ced-e52c-4d31-96d2-c91a20bd7453`
- Accepts POST requests from a frontend form or application

### 2. Clean Incoming Data

- The JavaScript (Code) node extracts the submitted fields
- Adds a `submitted_date` in YYYY-MM-DD format (see the sketch following the How to Use It section)

### 3. Loop Over Items (Optional for Batches)

- The Split In Batches node allows handling bulk form submissions
- For single entries, the workflow still works without adjustment

### 4. Append to Google Sheet

- The Google Sheets node appends each submission as a new row
- Mapped fields include: Business Name, Location, WhatsApp Number, Email, Name, Date (auto-filled)

### 5. Add Delay (Optional)

- The Wait node adds a 5-second delay per loop
- Helps throttle requests when handling large batches

## How to Use It

1. Clone or import the workflow into your n8n instance
2. Update the Webhook URL in your frontend form's POST action
3. Connect your Google Sheets account in the Google Sheets node
4. Confirm that your target sheet matches the required column structure
5. Start sending data from your form — new entries will appear in your sheet automatically

> This setup ensures form submissions are received, cleaned, stored efficiently, and processed in a controlled manner.
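The cleaning step could look like this n8n Code node sketch. It assumes the POST body carries the fields listed under Prerequisites, and it deliberately mirrors the trailing-space quirk in the "Email " column called out above:

```javascript
// Sketch of the "Clean Incoming Data" Code node, under the assumptions above.
return $input.all().map(item => {
  const body = item.json.body || item.json;
  return {
    json: {
      'Business Name': (body.business_name || '').trim(),
      'Location': (body.location || '').trim(),
      'WhatsApp Number': (body.whatsapp || '').trim(),
      'Email ': (body.email || '').trim().toLowerCase(), // trailing space is intentional
      'Name': (body.name || '').trim(),
      'Date': new Date().toISOString().slice(0, 10),     // submitted_date, YYYY-MM-DD
    },
  };
});
```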
## Notes

- Use the Sticky Notes in the workflow to understand each node's purpose
- You can modify the delay duration or disable looping for single submissions
- For added security, consider securing your webhook with headers or tokens

## Ideal Use Cases

- Contact forms
- Lead capture pages
- Event signups or bookings
- Newsletter or email list opt-ins
- Surveys or feedback forms

## Support

- WhatsApp Support: Chat Now
- Discord: Join SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: spagreen
- Developer Portfolio: Codecanyon SpaGreen
by Oneclick AI Squad
This automated n8n workflow delivers daily multi-currency exchange rate updates via API to email and WhatsApp. The system fetches the latest exchange rates, formats the data, and sends alerts to designated recipients to keep users informed of currency fluctuations.

## What is a Multi-Currency Exchange Update?

Multi-currency exchange updates involve retrieving the latest exchange rates for multiple currencies against a base currency (INR) via an API, formatting the data, and distributing it through email and WhatsApp for real-time financial awareness.

## Good to Know

- Exchange rate accuracy depends on the reliability of the external API source
- API rate limits should be respected to ensure system stability
- Manual configuration of API keys and recipient lists is required
- Real-time alerts help users stay updated on currency movements

## How It Works

1. **Daily Trigger** – Triggers the workflow daily at 7:30 AM IST to fetch and send exchange rates
2. **Set Config: API Key & Currencies** – Defines the API key and target currencies (INR, CAD, AUD, CNY, EUR, USD) for use in the API call
3. **Fetch Exchange Rates (CurrencyFreaks)** – Calls the exchange rate API and fetches the latest rates with INR as the base
4. **Wait for API Response** – Adds a short delay (5s) to ensure API rate limits are respected and system stability is maintained
5. **Set Email & WhatsApp Recipients** – Sets the list of email addresses and WhatsApp numbers that should receive the currency update
6. **Create Message Subject & Body** – Dynamically generates a subject line (e.g., "Today's Currency Exchange Rates [(Date)]") and a body containing all rates (see the sketch at the end of this description)
7. **Send Email Alert** – Sends the formatted currency rate update via email
8. **Send WhatsApp Alert** – Sends the formatted currency rate update via WhatsApp

## How to Use

1. Import the workflow into n8n
2. Configure the API key for the CurrencyFreaks API
3. Set the list of target currencies and ensure INR is the base currency
4. Configure email credentials for sending alerts
5. Configure WhatsApp credentials or API integration for sending messages
6. Test the workflow with sample data to verify rate fetching and alert delivery
7. Adjust the trigger time or recipient list as needed

## Requirements

- CurrencyFreaks API credentials
- Email service credentials (Gmail, SMTP, etc.)
- WhatsApp API or integration credentials

## Customizing This Workflow

- Modify the target currencies in the Set Config node to include additional currencies
- Adjust the delay time in the Wait node based on API rate limits
- Customize the email and WhatsApp message formats in the Create Message node to suit user preferences
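The message-building step could look like this Code node sketch. It assumes the CurrencyFreaks response exposes a `rates` object keyed by currency code; verify your actual response shape before relying on these field names:

```javascript
// Sketch of the "Create Message Subject & Body" Code node, under the
// assumptions above.
const rates = $json.rates || {};
const currencies = ['USD', 'EUR', 'CAD', 'AUD', 'CNY'];
const today = new Date().toLocaleDateString('en-IN');

const subject = `Today's Currency Exchange Rates [${today}]`;
const body = [
  `Exchange rates with INR as the base currency (${today}):`,
  ...currencies.map(code => `1 INR = ${rates[code] ?? 'N/A'} ${code}`),
].join('\n');

return [{ json: { subject, body } }];
```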
by Florent
# n8n Restore workflows & credentials from Disk - Self-Hosted Solution

This n8n template provides a safe and intelligent restore solution for self-hosted n8n instances, allowing you to restore workflows and credentials from disk backups. Perfect for disaster recovery or migrating between environments, this workflow automatically identifies your most recent backup and provides a manual restore capability that intelligently excludes the current workflow to prevent conflicts. Works seamlessly with date-organized backup folders.

## Good to know

- This workflow uses n8n's native import commands (`n8n import:workflow` and `n8n import:credentials`)
- Works with date-formatted backup folders (YYYY-MM-DD) for easy version identification
- The restore process intelligently excludes the current workflow to prevent overwriting itself
- Requires proper Docker volume configuration and file system permissions
- All operations are performed server-side with no external dependencies
- Compatible with backups created by n8n's export commands

## How it works

### Restore Process (Manual)

1. Manual trigger with configurable pinned data options (`credentials: true/false`, `workflows: true/false`)
2. The Init node sets up all necessary paths, timestamps, and configuration variables using your environment settings
3. The workflow scans your backup folder and automatically identifies the most recent backup (a sketch of this scan follows the Performing a Restore steps)
4. If restoring credentials:
   - Direct import from the latest backup folder using n8n's import command
   - Credentials are imported with their encrypted format intact
5. If restoring workflows:
   - Scans the backup folder for all workflow JSON files
   - Creates a temporary folder with all workflows from the backup
   - Intelligently excludes the current restore workflow to prevent conflicts
   - Imports all other workflows using n8n's import command
   - Cleans up temporary files automatically
6. Optional email notifications provide detailed restore summaries with command outputs

## How to use

### Prerequisites

- Existing n8n backups in a date-organized folder structure (format: `/backup-folder/YYYY-MM-DD/`)
- Workflow backups as JSON files in the date folder
- Credentials backups in the subfolder: `/backup-folder/YYYY-MM-DD/n8n-credentials/`
- For new environments: the `N8N_ENCRYPTION_KEY` from the source environment (see the dedicated section below)

### Initial Setup

1. Configure your environment variables:
   - `N8N_ADMIN_EMAIL`: Your email for notifications (optional)
   - `N8N_BACKUP_FOLDER`: Location where your backups are stored (e.g., `/files/n8n-backups`)
   - `N8N_PROJECTS_DIR`: Projects root directory
   - `GENERIC_TIMEZONE`: Your local timezone
   - `N8N_ENCRYPTION_KEY`: Required if restoring credentials to a new environment (see the dedicated section below)
2. Update the Init node:
   - (Optional) Configure your email here:

     ```javascript
     const N8N_ADMIN_EMAIL = $env.N8N_ADMIN_EMAIL || 'youremail@world.com';
     ```
   - Set `PROJECT_FOLDER_NAME` to "Workflow-backups" (or your preferred name)
   - Set `credentials` to "n8n-credentials" (or your backup credentials folder name)
   - Verify the `BACKUP_FOLDER` path matches where your backups are stored
3. Ensure your Docker setup has:
   - A mounted volume containing the backups (e.g., `/local-files:/files`)
   - Access to n8n's CLI import commands
   - Proper file system permissions (read access to the backup directories)

### Performing a Restore

1. Open the workflow and locate the "Start Restore" manual trigger node
2. Edit the pinned data to choose what to restore:
   - `credentials: true` – restore credentials
   - `workflows: true` – restore workflows
   - Set both to `true` to restore everything
3. Click "Execute workflow" on the "Start Restore" node to execute the restore
4. The workflow will automatically find the most recent backup (latest date)
5. Check the console logs or the optional email for a detailed restore summary
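As a minimal sketch of how the "find the most recent backup" scan could work, here is a small Node.js version, assuming the YYYY-MM-DD folder layout described above; the actual workflow performs this inside its Init/scan nodes:

```javascript
// Minimal latest-backup scan, assuming date-named subfolders (YYYY-MM-DD).
const fs = require('fs');
const path = require('path');

function latestBackupFolder(backupRoot) {
  const dateFolder = /^\d{4}-\d{2}-\d{2}$/; // YYYY-MM-DD
  const candidates = fs.readdirSync(backupRoot, { withFileTypes: true })
    .filter(e => e.isDirectory() && dateFolder.test(e.name))
    .map(e => e.name)
    .sort(); // ISO dates sort lexicographically, so the last entry is newest
  if (candidates.length === 0) throw new Error(`No backups found in ${backupRoot}`);
  return path.join(backupRoot, candidates[candidates.length - 1]);
}

console.log(latestBackupFolder(process.env.N8N_BACKUP_FOLDER || '/files/n8n-backups'));
```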
### Important Notes

- The workflow automatically excludes itself during restore to prevent conflicts
- Credentials are restored with their encryption intact. If restoring to a new environment, you must configure the `N8N_ENCRYPTION_KEY` from the source environment (see the dedicated section below)
- Existing workflows/credentials with the same names will be overwritten
- Test in a non-production environment first if unsure

## ⚠ Critical: N8N_ENCRYPTION_KEY Configuration

**Why this is critical:** n8n generates an encryption key automatically on first launch and saves it in the `~/.n8n/config` file. However, if this file is lost (for example, due to missing Docker volume persistence), n8n will generate a NEW key, making all previously encrypted credentials inaccessible.

**When you need to configure N8N_ENCRYPTION_KEY:**

- Restoring to a new n8n instance
- When your data directory is not persisted between container recreations
- Migrating from one server to another
- As a best practice to ensure key persistence across updates

**How credentials encryption works:**

- Credentials are encrypted with a specific key unique to each n8n instance
- This key is auto-generated on first launch and stored in `/home/node/.n8n/config`
- When you back up credentials, they remain encrypted but the key is NOT included
- If the key file is lost or a new key is generated, restored credentials cannot be decrypted
- Setting `N8N_ENCRYPTION_KEY` explicitly ensures the key remains consistent

### Solution: Retrieve and configure the encryption key

**Step 1: Get the key from your source environment**

```bash
# Check if the key is defined in environment variables
docker-compose exec n8n printenv N8N_ENCRYPTION_KEY
```

If this command returns nothing, the key is auto-generated and stored in n8n's data volume:

```bash
# Enter the container
docker-compose exec n8n sh
# Check the configuration file
cat /home/node/.n8n/config
# Exit the container
exit
```

**Step 2: Configure the key in your target environment**

Option A: Using a `.env` file (recommended for security)

```bash
# Add to your .env file
N8N_ENCRYPTION_KEY=your_retrieved_key_here
```

Then reference it in `docker-compose.yml`:

```yaml
services:
  n8n:
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
```

Option B: Directly in `docker-compose.yml` (less secure)

```yaml
services:
  n8n:
    environment:
      - N8N_ENCRYPTION_KEY=your_retrieved_key_here
```

**Step 3: Restart n8n**

```bash
docker-compose restart n8n
```

**Step 4: Now restore your credentials**

Only after configuring the encryption key, run the restore workflow with `credentials: true`.
**Best practice for future backups:**

- Always save your `N8N_ENCRYPTION_KEY` in a secure location alongside your backups
- Consider storing it in a password manager or secure vault
- Document it in your disaster recovery procedures

## Requirements

### Existing Backups

- Date-organized backup folders (YYYY-MM-DD format)
- Backup files created by n8n's export commands or in a compatible format

### Environment

- Self-hosted n8n instance (Docker recommended)
- Docker volumes mounted with access to the backup location
- Optional: SMTP server configured for email notifications

### Credentials (Optional)

- SMTP credentials for email notifications (if using the email nodes)

## Technical Notes

### Smart Workflow Exclusion

- During workflow restore, the current workflow's name is cleaned and matched against backup files
- This prevents the restore workflow from overwriting itself
- The exclusion logic handles special characters and spaces in workflow names
- A temporary folder is created with all workflows except the current one

### Timezone Handling

- All timestamps use UTC for technical operations
- Display times use the local timezone for user-friendly readability
- Backup folder scanning works with the YYYY-MM-DD format regardless of timezone

### Security

- Credentials are imported in n8n's encrypted format (encryption preserved)
- Ensure backup directories have appropriate read permissions
- Consider access controls for who can trigger restore operations
- No sensitive data is logged in console output

## Troubleshooting

### Common Issues

- **No backups found**: Verify the `N8N_BACKUP_FOLDER` path is correct and contains date-formatted folders
- **Permission errors**: Ensure the Docker user has read access to the backup directories
- **Path not found**: Verify all volume mounts in `docker-compose.yml` match your backup location
- **Import fails**: Check that the backup files are in valid n8n export format
- **Workflow conflicts**: The workflow automatically excludes itself, but ensure backup files are properly named
- **Credentials not restored**: Verify the backup contains an `n8n-credentials` folder with credential files
- **Credentials decrypt error**: Ensure `N8N_ENCRYPTION_KEY` matches the source environment

### Version Compatibility

- Tested with n8n version 1.113.3
- Compatible with Docker-based n8n installations
- Requires n8n CLI access (available in official Docker images)

This workflow is designed for self-hosted server backup restoration. For FTP/SFTP remote backups, see the companion workflow "n8n Restore from FTP". Works best with backups from: "Automated n8n Workflows & Credentials Backup to Local/Server Disk & FTP".
by Oneclick AI Squad
Automatically detects and hides hate speech/toxic comments, alerts your team, and logs flagged content for review.

## Workflow Overview

- **Trigger**: A Schedule node runs every 15 minutes to poll for new comments (Instagram doesn't natively push notifications easily, so polling is used). You could replace this with a Webhook if you set up Instagram webhooks via the Graph API.
- **Scan Comments**: Uses the Instagram Graph API (via HTTP Request) to fetch recent posts and their comments. Assumes you have an Instagram Business Account and a valid access token (from the Facebook Developer Portal).
- **Detect Toxicity**: For each comment, it sends the text to Google's Perspective API (a free toxicity detection API; sign up at https://perspectiveapi.com/ for an API key). The threshold for "toxic" is set to >0.7 toxicity score (configurable). See the sketch at the end of this page.
- **Auto-Hide Offensive Ones**: If toxic, uses the Instagram API to hide the comment.
- **Alert Team**: Sends a Slack notification (or email; configurable) with details.
- **Store Evidence**: Appends the toxic comment details (text, user, score, timestamp) to a Google Sheet for auditing.
- **Error Handling**: A basic error node notifies you if API calls fail.
- **Business Value Alignment**: This automates protection, reducing manual moderation and building trust.

## Prerequisites

- n8n installed (self-hosted or cloud).
- Instagram Graph API access token (set in n8n credentials or as an environment variable).
- Perspective API key (free tier available).
- Slack webhook or email credentials.
- Google Sheets API credentials (for storage).

## How to Import

1. In n8n, go to the workflows list.
2. Click "Import from JSON" (or paste into a new workflow).
3. Update placeholders:
   - Replace `YOUR_INSTAGRAM_ACCESS_TOKEN` with your token.
   - Replace `YOUR_PERSPECTIVE_API_KEY` with your key.
   - Set up credentials for HTTP Request (Instagram), Slack, and Google Sheets.
   - Adjust `YOUR_INSTAGRAM_BUSINESS_ACCOUNT_ID` and `YOUR_MEDIA_ID` (or make them dynamic).
4. Test and activate. If you encounter issues (e.g., API rate limits), adjust the schedule or add waits.

## Notes on Customization

- **Looping**: The "Loop Over Comments" step uses SplitInBatches to process comments one by one, avoiding API rate limits.
- **Toxicity API**: I used Perspective API as it's reliable and free for low volume. If you prefer another (e.g., Hugging Face), swap the HTTP Request body.
- **Instagram API**: This fetches comments for the first recent post (simplified). To handle multiple posts, add another loop.
- **Alerts**: Slack is used; change to an Email node if preferred.
- **Storage**: Google Sheets for simplicity; this could be swapped for MongoDB or Airtable.
- **Sticky Notes**: Three notes explain the phases – they won't affect execution but help in the UI.
- **Testing**: Start with test data. The Instagram API requires app review for production use.
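For reference, the Perspective API call with the 0.7 threshold described above could look like this JavaScript sketch. The endpoint and response shape follow the public Perspective API documentation, but verify against the current docs before relying on it:

```javascript
// Sketch of the "Detect Toxicity" call against Google's Perspective API.
const PERSPECTIVE_API_KEY = process.env.PERSPECTIVE_API_KEY;

async function isToxic(commentText, threshold = 0.7) {
  const url = `https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${PERSPECTIVE_API_KEY}`;
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      comment: { text: commentText },
      requestedAttributes: { TOXICITY: {} },
    }),
  });
  const data = await res.json();
  const score = data.attributeScores?.TOXICITY?.summaryScore?.value ?? 0;
  // Comments above the threshold are hidden, alerted on, and logged.
  return { score, toxic: score > threshold };
}
```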