by Meelioo
## How it Works
This workflow creates automated daily backups of your n8n workflows to a GitLab repository:

- **Scheduled Trigger** – Runs automatically at noon each day to initiate the backup process
- **Fetch Workflows** – Retrieves all active workflows from your n8n instance, filtering out archived ones
- **Compare & Process** – Checks existing files in GitLab and compares them with current workflows
- **Smart Upload** – For each workflow, either updates the existing file in GitLab (if it exists) or creates a new one
- **Notification System** – Sends success/failure notifications to a designated Slack channel with execution details

> The workflow intelligently handles each file individually, cleaning up unnecessary metadata before converting workflows to formatted JSON files ready for version control.

## Set up Steps
Estimated setup time: 15–20 minutes. You'll need to configure three credential connections and customize the Configuration node:

- **GitLab API**: Create a project access token with write permissions to your backup repository
- **n8n Internal API**: Generate an API key from your n8n user settings
- **Slack Bot**: Set up a Slack app with bot token permissions for posting messages to your notification channel

> Once credentials are configured, update the Configuration node with your GitLab project owner, repository name, and target branch. The workflow includes detailed setup instructions in the sticky notes for each credential type. After setup, activate the workflow to begin daily automated backups.
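The metadata cleanup step can be pictured as a small Code node. The fields removed below are assumptions for illustration, not taken from the template itself; adjust them to whatever your n8n API actually returns.

```javascript
// Hypothetical sketch of the per-workflow cleanup step before upload to GitLab.
// For each workflow fetched from the n8n API, drop volatile metadata (assumed
// field names) and emit pretty-printed JSON ready to commit.
return $input.all().map(item => {
  const wf = { ...item.json };
  delete wf.updatedAt;   // assumed volatile fields; adjust to your instance
  delete wf.versionId;
  delete wf.shared;

  return {
    json: {
      fileName: `${wf.name.replace(/[^a-zA-Z0-9-_]/g, '_')}.json`,
      content: JSON.stringify(wf, null, 2),
    },
  };
});
```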
by Piotr Sikora
# [LI] – Search Profiles

> ⚠️ Self-hosted disclaimer:
> This workflow uses the SerpAPI community node, which is available only on self-hosted n8n instances.
> For n8n Cloud, you may need to use an HTTP Request node with the SerpAPI REST API instead.

## Who's it for
Recruiters, talent sourcers, SDRs, and anyone who wants to automatically gather public LinkedIn profiles from Google search results based on keywords — across multiple pages — and log them to a Google Sheet for further analysis.

## What it does / How it works
This workflow extends the standard LinkedIn profile search to include pagination, allowing you to fetch results from multiple Google result pages in one go. Here's the step-by-step process:

1. **Form Trigger – "LinkedIn Search"**
   Collects:
   - Keywords (comma separated) – e.g., python, fintech, warsaw
   - Pages to fetch – number of Google pages to scrape (each page ≈ 10 results)
   Triggers the workflow when submitted.
2. **Format Keywords (Set)**
   Converts the keywords into a Google-ready query string: `("python") ("fintech") ("warsaw")`. These parentheses improve relevance in Google searches.
3. **Build Page List (Code)**
   Creates a list of pages to iterate through. For example, if "Pages to fetch" = 3, it generates 3 search batches with proper start offsets (0, 10, 20). Keeps track of the grouped keywords (keywordsGrouped), raw keywords, and the submission timestamp. (A sketch of this node appears at the end of this description.)
4. **Loop Over Items (Split In Batches)**
   Loops through the page list one batch at a time. Sends each batch to SerpAPI Search and continues until all pages are processed.
5. **SerpAPI Search**
   Queries Google with: `site:pl.linkedin.com/in/ ("keyword1") ("keyword2") ("keyword3")`. Fixed to the Warsaw, Masovian Voivodeship, Poland location. The start parameter controls pagination.
6. **Check how many results are returned (Switch)**
   If no results → triggers No profiles found. If results are found → passes data forward.
7. **Split Out**
   Extracts each LinkedIn result from the organic_results array.
8. **Get Full Name to property of object (Code)**
   Extracts a clean full name from the search result title (text before "–" or "|").
9. **Append profile in sheet (Google Sheets)**
   Saves the following fields into your connected sheet:

   | Column | Description |
   |--------|-------------|
   | Date | Submission timestamp |
   | Profile | Public LinkedIn profile URL |
   | Full name | Extracted candidate name |
   | Keywords | Original keywords from the form |

10. **Loop Over Items (continue)**
    After writing each batch, it loops to the next Google page until all pages are complete.
11. **Form Response (final step)**
    Sends a confirmation back to the user after all pages are processed: "Check linked file".

## 🧾 Google Sheets Setup
Before using the workflow, prepare your Google Sheet with these columns in row 1:

| Column Name | Description |
|-------------|-------------|
| Date | Automatically filled with the form submission time |
| Profile | LinkedIn profile link |
| Full name | Extracted name from search results |
| Keywords | Original search input |

> You can expand the sheet to include optional fields like Snippet, Job Title, or Notes if you modify the mapping in the Append profile in sheet node.

## Requirements
- **SerpAPI account** – with API key stored securely in **n8n Credentials**
- **Google Sheets OAuth2 credentials** – connected to your target sheet with edit access
- **n8n instance** (Cloud or self-hosted)

> Note: The SerpAPI node is part of the Community package and may require self-hosted n8n.

## How to set up
1. Import the [LI] - Search profiles workflow into n8n.
2. Connect your credentials:
   - SerpAPI – use your API key.
   - Google Sheets OAuth2 – ensure you have write permissions.
3. Update the Google Sheets node to point to your own spreadsheet and worksheet.
4. (Optional) Edit the location field in SerpAPI Search for different regions.
5. Activate the workflow and open the public form (via webhook URL).
6. Enter your keywords and specify the number of pages to fetch.

## How to customize the workflow
- **Change search region:** Modify the location in the SerpAPI node or change the domain to site:linkedin.com/in/ for global searches.
- **Add pagination beyond 3–4 pages:** Increase "Pages to fetch" — but note that excessive pages may trigger Google rate limits.
- **Avoid duplicates:** Add a **Google Sheets → Read** + **IF** node before appending new URLs.
- **Add notifications:** Add **Slack**, **Discord**, or **Email** nodes after Google Sheets to alert your team when new data arrives.
- **Capture more data:** Map additional fields like title, snippet, or position into your Sheet.

## Security notes
- Never store API keys directly in nodes — always use n8n Credentials.
- Keep your Google Sheet private and limit edit access.
- Remove identifying data before sharing your workflow publicly.

## 💡 Improvement suggestions

| Area | Recommendation | Benefit |
|------|----------------|---------|
| Dynamic location | Add a "Location" field to the form and feed it to SerpAPI dynamically. | Broader and location-specific searches |
| Rate limiting | Add a short Wait node (e.g., 1–2 s) between page fetches. | Prevents API throttling |
| De-duplication | Check for existing URLs before appending. | Prevents duplicates |
| Logging | Add a second sheet or log file with timestamps per run. | Easier debugging and tracking |
| Data enrichment | Add a LinkedIn or People Data API enrichment step. | Collect richer candidate data |

✅ **Summary:** This workflow automates the process of searching public LinkedIn profiles from Google across multiple pages. It formats user-entered keywords into advanced Google queries, iterates through paginated SerpAPI results, extracts profile data, and stores it neatly in a Google Sheet — all through a single, user-friendly form.
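As referenced in the Build Page List step above, here is a minimal sketch of what that Code node could look like. The form field names are assumptions based on the form description; the offset logic (0, 10, 20) and the keywordsGrouped output are taken from the description.

```javascript
// Minimal sketch of the "Build Page List" Code node.
// Assumed form field names: "Keywords (comma separated)" and "Pages to fetch".
const input = $input.first().json;
const rawKeywords = input['Keywords (comma separated)'] || '';
const pages = parseInt(input['Pages to fetch'], 10) || 1;

// Google-style grouped query: ("python") ("fintech") ("warsaw")
const keywordsGrouped = rawKeywords
  .split(',')
  .map(k => `("${k.trim()}")`)
  .join(' ');

// One item per Google result page, with the correct start offset (0, 10, 20, ...)
return Array.from({ length: pages }, (_, i) => ({
  json: {
    start: i * 10,
    keywordsGrouped,
    rawKeywords,
    submittedAt: new Date().toISOString(),
  },
}));
```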
by Ezema Kingsley Chibuzo
## 🧠 What It Does
This n8n workflow automatically generates 10-second UGC-style portrait video ads for any product — entirely powered by AI. Simply provide your Product Name, Prompt or Idea, and Image Link in Google Sheets, and the system will research your product, craft a modern video prompt, and generate a professional short ad using Kie.ai Sora 2.

It combines Tavily search, OpenAI prompt engineering, and Kie.ai image-to-video generation to create fresh, authentic, and trending video ads that look like real influencer content or cinematic brand clips — perfect for social media campaigns.

## 💡 Why This Workflow?
Creating quality short-form ads usually takes a video editor, copywriter, and creative researcher. This workflow automates all of that. It:

- Researches your product's category and trends using the Tavily Search API
- Generates optimized video prompts using an AI Agent
- Automatically creates realistic 10-second videos via Kie.ai Sora 2
- Updates your CRM (Google Sheets) with the finished video link
- Handles retries, errors, and success tracking automatically

Ideal for UGC marketers, product owners, and AI automation freelancers who want to scale ad content creation.

## 👤 Who It's For
- **E-commerce brands** wanting fast ad content for new or existing products
- **Freelancers and agencies** creating short-form AI ad videos for clients
- **Automation enthusiasts** building no-code AI video generation systems
- **Marketing teams** testing multiple product angles and styles efficiently

## ⚙️ How It Works
1. **Manual Trigger** – Run the workflow manually to start video generation for one product entry at a time.
2. **📄 Google Sheets Integration** – The workflow reads product info (Name, Prompt, Image Link, Processed Status) and fetches one unprocessed row.
3. **🤖 AI Prompt Engineering (via OpenAI)** – The AI Agent uses a custom system message to act as a video prompt engineer, designing rich cinematic or UGC-style prompts for Sora 2. It:
   - Researches trends and related product insights through Tavily
   - Describes detailed scene, tone, lighting, camera motion, and emotion
   - Adapts to either cinematic or handheld influencer style automatically
4. **🎬 Sora 2 Video Generation (Kie.ai API)** – The refined video prompt and product image are sent to Kie.ai Sora 2 to create a 10-second portrait video.
5. **⏳ Progress Monitoring** – A Wait node (15 s) plus a Switch node checks the generation status (see the sketch after this description):
   - ✅ Success → Save video link
   - ⚠️ 500 Error → Log error message
   - 🔁 Pending → Loop back to wait and recheck
6. **🗂️ Save to Google Sheets** – Once successful, the workflow updates your CRM sheet with:
   - Video Link (no watermark)
   - Processed = "Yes"

## 🛠 How to Set It Up
1. Open n8n (Cloud or Self-Hosted).
2. Import the workflow file: Sora 2 Video Generator.json.
3. Create and connect these credentials:
   - 🧾 Google Sheets OAuth 2.0
   - 🔍 Tavily Search API (Header Auth)
   - 🤖 OpenAI API Key
   - 🎥 Kie.ai Sora 2 API (Header Auth)
4. Update the Google Sheets link inside the nodes to your own sheet.
5. Ensure the sheet columns include: ID | Product Name | Prompt | Image Link | Video Link | Processed
6. Click Execute Workflow to begin generating your first ad video.

## ⚡ Example Use Case
You're launching a new skincare product. Add its name, image, and a short description to your Google Sheet — and this workflow will automatically research the market, generate a trending 10-second UGC ad prompt, and produce a ready-to-share Sora 2 video link — all hands-free.
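As referenced in the Progress Monitoring step, the status routing can be sketched roughly as below. The response field names (successFlag, resultUrls, errorCode) are assumptions for illustration, not confirmed Kie.ai API fields; check the actual response shape before relying on them.

```javascript
// Hypothetical sketch of classifying the Kie.ai status response before the Switch node.
const res = $input.first().json;

let route;
if (res.successFlag === 1 && res.resultUrls?.length) {
  route = 'success';   // save the first video URL to Google Sheets
} else if (res.errorCode === 500) {
  route = 'error';     // log the error message in the sheet
} else {
  route = 'pending';   // loop back to the 15 s Wait node and recheck
}

return [{ json: { ...res, route, videoLink: res.resultUrls?.[0] ?? null } }];
```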
by Evoort Solutions
# 🚀 Automated Keyword Analysis with On-Page SEO Workflow

## 📌 Description
Boost your SEO strategy by automating keyword research and on-page SEO analysis with n8n. This workflow takes user input (keyword + country), retrieves essential data using the SEO On-Page API, and saves it directly into Google Sheets. Ideal for marketers, content strategists, and SEO agencies looking for efficiency.

## 🔁 Node-by-Node Flow Explanation
1. **🟢 On form submission** – Triggers the workflow when a user submits a keyword and country via a simple form.
2. **📦 Global Storage** – Captures and stores the submitted keyword and country for use across the workflow.
3. **🌍 Keyword Insights Request** – Sends a POST request to the SEO On-Page API to fetch keyword suggestions (broad match keywords).
4. **🧾 Re-Format** – Extracts the relevant broadMatchKeywords array from the keyword API response (see the sketch after this description).
5. **📊 Keyword Insights** – Appends extracted keyword suggestions into the "Keyword Insights" tab in Google Sheets.
6. **📉 KeyWord Difficulty Request** – Sends a second POST request to the SEO On-Page API to fetch keyword difficulty and SERP data.
7. **📈 Re-Format 2** – Extracts the keywordDifficultyIndex value from the API response.
8. **📄 KeyWord Difficulty** – Saves the keyword difficulty score into the "KeyWord Difficulty" sheet for reference.
9. **🔍 Re-Format 5** – Extracts SERP result data from the difficulty API response.
10. **🗂️ SERP Result** – Appends detailed SERP data into the "Serp Analytics" sheet in Google Sheets.

## 🎯 Benefits
- ✅ **Fully Automated SEO Research** – No manual data entry or API calls required.
- 🔁 **Real-time Data Collection** – Powered by the SEO On-Page API on RapidAPI, ensuring fresh and reliable results.
- 📊 **Organized Insights** – Data is cleanly categorized into separate Google Sheets tabs.
- ⏱️ **Time Saver** – Instantly analyze keywords without switching between tools.

## 💡 Use Cases
- 📌 **SEO Agencies** – Generate keyword reports for clients automatically.
- 📝 **Content Writers** – Discover keyword difficulty and SERP competition before drafting.
- 🧑‍💻 **Digital Marketers** – Monitor keyword trends and search visibility in real time.
- 📈 **Bloggers & Influencers** – Choose better keywords to rank faster on search engines.

## 🔗 API Reference
This workflow is powered by the SEO On-Page API available on RapidAPI. It offers keyword research, difficulty metrics, and SERP analytics through simple endpoints, making it ideal for automation with n8n.

> ⚠️ Note: Make sure to replace "your key" with your actual RapidAPI key in both HTTP Request nodes for successful API calls.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n

Save time, stay consistent, and grow your search visibility effortlessly!
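As referenced in the Re-Format step, the extraction can be sketched as a small Code node. It assumes the keyword API response contains a broadMatchKeywords array as described above; the exact shape of each entry may differ by API plan.

```javascript
// Minimal sketch of the "Re-Format" Code node.
const response = $input.first().json;
const keywords = response.broadMatchKeywords || [];

// One n8n item per keyword row so the Google Sheets node can append them directly.
return keywords.map(k => ({ json: k }));
```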
by Muhammad Farooq Iqbal
This n8n template demonstrates how to generate animated videos from static images using the ByteDance Seedance 1.5 Pro model through the KIE.AI API. The workflow creates dynamic video content based on text prompts and input images, supporting custom aspect ratios, resolutions, and durations for versatile video creation.

Use cases are many: create animated videos from product photos, generate social media content from images, produce video ads from static graphics, create animated story videos, transform photos into dynamic content, generate video presentations, create animated thumbnails, or produce video content for marketing campaigns!

## Good to know
- The workflow uses the ByteDance Seedance 1.5 Pro model via the KIE.AI API for high-quality image-to-video generation
- Creates animated videos from static images based on text prompts
- Supports multiple aspect ratios (9:16 vertical, 16:9 horizontal, 1:1 square)
- Configurable resolution options (720p, 1080p, etc.)
- Customizable video duration (in seconds)
- KIE.AI pricing: check current rates at https://kie.ai/ for video generation costs
- Processing time: varies based on video length and the KIE.AI queue, typically 1–5 minutes
- Image requirements: the file must be publicly accessible via URL (HTTPS recommended)
- Supported image formats: PNG, JPG, JPEG
- Output format: video file URL (MP4) ready for download or streaming
- An automatic polling system handles processing status checks and retries

## How it works
1. **Video Parameters Setup** – The workflow receives the video prompt and image URL (set in the 'Set Video Parameters' node or via trigger)
2. **Video Generation Submission** – Parameters are submitted to the KIE.AI API using the ByteDance Seedance 1.5 Pro model
3. **Processing Wait** – The workflow waits 5 seconds, then polls the generation status
4. **Status Check** – Checks whether video generation is complete, queuing, generating, or failed
5. **Polling Loop** – If still processing, the workflow waits and checks again until completion
6. **Video URL Extraction** – Once complete, extracts the generated video file URL from the API response
7. **Video Download** – Downloads the generated video file for local use or further processing

The workflow automatically handles the different processing states (queuing, generating, success, fail) and retries polling until video generation is complete. The Seedance model creates smooth, animated videos from static images based on the provided text prompt, bringing images to life with natural motion.

## How to use
1. **Setup Credentials**: Configure your KIE.AI API key as an HTTP Bearer Auth credential
2. **Set Video Parameters**: Update the 'Set Video Parameters' node with:
   - prompt: text description of the desired video animation/scene
   - image_url: publicly accessible URL of the input image
3. **Configure Video Settings**: Adjust in the 'Submit Video Generation Request' node (see the sketch after this section):
   - aspect_ratio: 9:16 (vertical), 16:9 (horizontal), 1:1 (square)
   - resolution: 720p, 1080p, etc.
   - duration: video length in seconds (e.g., 8, 10, 15)
4. **Deploy Workflow**: Import the template and activate the workflow
5. **Trigger Generation**: Use the manual trigger to test, or replace it with a webhook or other trigger
6. **Receive Video**: Get the generated video file in the output, ready for download or streaming

Pro tip: For best results, ensure your image is hosted on a public URL (HTTPS) and matches the desired aspect ratio. Use clear, high-quality images for better video generation. Write detailed, descriptive prompts to guide the animation - the more specific your prompt, the better the video output. The workflow automatically handles polling and status checks, so you don't need to worry about timing.
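As referenced in the Configure Video Settings step, the request body can be sketched roughly as below. The model identifier and field names are assumptions based on this description, not confirmed KIE.AI API fields; verify them against the KIE.AI documentation.

```javascript
// Hypothetical sketch of the JSON body built for the 'Submit Video Generation Request' node.
const body = {
  model: 'bytedance/seedance-1.5-pro',   // assumed model identifier
  input: {
    prompt: $json.prompt,                // from 'Set Video Parameters'
    image_url: $json.image_url,          // publicly accessible HTTPS URL
    aspect_ratio: '9:16',                // 9:16, 16:9, or 1:1
    resolution: '1080p',                 // 720p, 1080p, ...
    duration: 10,                        // seconds
  },
};

return [{ json: body }];
```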
## Requirements
- **KIE.AI API account** for accessing ByteDance Seedance 1.5 Pro video generation
- **Image file URL** that is publicly accessible (HTTPS recommended)
- **Text prompt** describing the desired video animation/scene
- **n8n instance** (cloud or self-hosted)
- Supported image formats: PNG, JPG, JPEG

## Customizing this workflow
- **Trigger Options**: Replace the manual trigger with a webhook trigger for API-based video generation, a schedule trigger for batch processing, or a form trigger for user image uploads.
- **Video Settings**: Modify aspect ratio, resolution, and duration in the 'Submit Video Generation Request' node to match your content needs (TikTok vertical, YouTube horizontal, Instagram square, etc.).
- **Prompt Engineering**: Enhance prompts in the 'Set Video Parameters' node with detailed descriptions, camera movements, animation styles, and scene details for better video quality.
- **Output Formatting**: Modify the 'Extract Video URL' code node to format output differently (add metadata, include processing time, add file size, etc.).
- **Error Handling**: Add notification nodes (Email, Slack, Telegram) to alert when video generation fails or completes.
- **Post-Processing**: Add nodes after video generation to save to cloud storage, upload to YouTube/Vimeo, send to video editing tools, or integrate with content management systems.
- **Batch Processing**: Add loops to process multiple images from a list or spreadsheet automatically, generating videos for each image.
- **Storage Integration**: Connect output to Google Drive, Dropbox, S3, or other storage services for organized video file management.
- **Social Media Integration**: Automatically post generated videos to TikTok, Instagram Reels, YouTube Shorts, or other platforms.
- **Video Enhancement**: Chain with other video processing workflows - add captions, music, transitions, or combine multiple generated videos.
- **Aspect Ratio Variations**: Generate multiple versions of the same video in different aspect ratios (9:16, 16:9, 1:1) for different platforms.
by Evoort Solutions
# 🔍 Analyze Competitor Keywords with RapidAPI and Google Sheets Reporting

## 📄 Description
This n8n workflow streamlines the process of analyzing SEO competitor keywords using the Competitor Keyword Analysis API on RapidAPI. It collects a website and country via form submission, calls the API to retrieve keyword metrics, reformats the response, and logs the results into Google Sheets — all automatically. It is ideal for SEO analysts, marketing teams, and agencies who need a hands-free solution for competitive keyword insights.

## 🧩 Node-by-Node Explanation
1. **📝 On form submission (formTrigger)** – Starts the workflow when a user submits their website and country through a form.
2. **🌐 Competitor Keyword Analysis (httpRequest)** – Sends a POST request to the Competitor Keyword Analysis API on RapidAPI with the form input to fetch keyword data.
3. **🔄 Reformat Code (code)** – Extracts the domainOrganicSearchKeywords array from the API response for structured processing (see the sketch after this description).
4. **📊 Google Sheets (googleSheets)** – Appends the cleaned keyword metrics into a Google Sheet for easy viewing and tracking.

## 🚀 Benefits of This Workflow
- ✅ Automates SEO research using the Competitor Keyword Analysis API.
- ✅ Eliminates manual data entry — results go straight into Google Sheets.
- ✅ Scalable and reusable for any number of websites or countries.
- ✅ Reformatting logic is built in, so you get clean, analysis-ready data.

## 💼 Use Cases
- **Marketing Agencies** – Use the Competitor Keyword Analysis API to gather insights for client websites and store the results automatically.
- **In-house SEO Teams** – Quickly compare keyword performance across competitors and monitor shifts over time with historical Google Sheets logs.
- **Freelancers and Consultants** – Provide fast, data-backed SEO reports using this automation with the Competitor Keyword Analysis API.
- **Keyword Research Automation** – Make this flow part of a larger system for identifying keyword gaps, content opportunities, or campaign ideas.

## 📁 Output Example (Google Sheets)

| keyword | searchVolume | cpc | competition | position | previousPosition | keywordDifficulty |
|---------|--------------|-----|-------------|----------|------------------|-------------------|
| best laptops | 9900 | 2.3 | 0.87 | 5 | 7 | 55 |

## 🔐 How to Get Your API Key for the Competitor Keyword Analysis API
1. Go to 👉 Competitor Keyword Analysis API - RapidAPI.
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab.
5. Your API key will be visible in the "x-rapidapi-key" header.

🔑 Copy and paste this key into the httpRequest node in your workflow.

## ✅ Summary
This workflow is a powerful no-code automation tool that leverages the Competitor Keyword Analysis API on RapidAPI to deliver real-time SEO insights directly to Google Sheets — saving time, boosting efficiency, and enabling smarter keyword strategy decisions.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
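As referenced in the Reformat Code step, a minimal sketch of that node could look like the following. It assumes the API response contains a domainOrganicSearchKeywords array with the columns shown in the output example; the real response may carry additional fields.

```javascript
// Minimal sketch of the "Reformat Code" node.
const response = $input.first().json;
const rows = response.domainOrganicSearchKeywords || [];

// Emit one item per keyword so Google Sheets appends one row each,
// keeping only the columns shown in the output example above.
return rows.map(r => ({
  json: {
    keyword: r.keyword,
    searchVolume: r.searchVolume,
    cpc: r.cpc,
    competition: r.competition,
    position: r.position,
    previousPosition: r.previousPosition,
    keywordDifficulty: r.keywordDifficulty,
  },
}));
```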
by Evoort Solutions
# 📊 GST Data Analytics Automation Flow with Google Docs Reporting

**Description:** Streamline GST data collection, analysis, and automated reporting using the GST Insights API and Google Docs integration. This workflow allows businesses to automate the extraction of GST data and directly generate formatted reports in Google Docs, making compliance easier.

## ⚙️ Node-by-Node Explanation
1. **On form submission** – Triggers the automation whenever a user submits GST-related data (like a GSTIN) via a web form. It collects all necessary input for further processing in the workflow.
2. **Fetch GST Data Using GST Insights API** – Sends a request to the GST Insights API to fetch GST data based on the user's input. This is done via a POST request that includes the required authentication and the submitted GSTIN.
3. **Data Reformatting** – Processes and structures the raw GST data received from the API. The reformatting ensures only the essential information (e.g., tax summaries, payment status, etc.) is extracted for reporting (see the sketch after this description).
4. **Google Docs Reporting** – Generates a Google Docs document and auto-populates it with the reformatted GST data. The report is structured in a clean format, ready for sharing or downloading.

## 💡 Use Cases
- **Tax Consultants & Agencies:** Automate the GST insights and reporting process for clients by extracting key metrics directly from the GST Insights API.
- **Accountants & Auditors:** Streamline GST compliance by generating automated reports based on the most current data from the API.
- **E-commerce Platforms:** Automatically track GST payments, returns, and summaries for each sale and consolidate them into structured reports.
- **SMEs and Startups:** Track your GST status and compliance without the need for manual intervention. Generate reports directly within Google Docs for easy access.

## 🎯 Benefits of this Workflow
- **Automated GST Data Collection:** Fetch GST insights directly using the GST Insights API without manually searching through different resources.
- **Google Docs Integration:** Automatically generate customized Google Docs reports with detailed GST data, making the reporting process efficient.
- **Error-Free Data Analysis:** Automates data extraction and reporting, significantly reducing the risk of human errors.
- **Customizable Reporting:** Customize the flow for various GST-related data such as payments, returns, and summaries.
- **Centralized Document Storage:** All GST reports are saved and managed within Google Docs, ensuring easy collaboration and access.

Quick note: The GST Insights API provides detailed GST data analysis for Indian businesses. It can extract crucial data like returns, payments, and summaries directly from the GST system, which you can then use for compliance and reporting.

## 🔑 How to Get Your API Key for the GST Insights API
1. **Visit the API page:** Go to the GST Insights API on RapidAPI.
2. **Sign up / log in:** Create an account or log in if you already have one.
3. **Subscribe to the API:** Click "Subscribe to Test" and choose a plan (free or paid).
4. **Copy your API key:** After subscribing, your API key will be available in the "X-RapidAPI-Key" section under "Endpoints".
5. **Use the key:** Include the key in your API requests like this: `-H "X-RapidAPI-Key: YOUR_API_KEY"`
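As referenced in the Data Reformatting step, a rough sketch of that node is shown below. The field names (legalName, status, returns) are hypothetical placeholders for illustration only; the real GST Insights API response will have its own structure, so map the actual keys you receive.

```javascript
// Hypothetical sketch of the "Data Reformatting" Code node (assumed field names).
const gst = $input.first().json;

return [{
  json: {
    gstin: gst.gstin,
    legalName: gst.legalName,
    registrationStatus: gst.status,
    lastReturnFiled: gst.returns?.[0]?.period ?? 'N/A',
    reportGeneratedAt: new Date().toISOString(),
  },
}];
```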
by PDF Vector
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Transform Complex Research Papers into Accessible Summaries
This workflow automatically generates multiple types of summaries from research papers, making complex academic content accessible to different audiences. By combining PDF Vector's advanced parsing capabilities with GPT-4's language understanding, researchers can quickly digest papers outside their expertise, communicate findings to diverse stakeholders, and create social media-friendly research highlights.

## Target Audience & Problem Solved
This template is designed for:
- **Research communicators** translating complex findings for public audiences
- **Journal editors** creating accessible abstracts and highlights
- **Science journalists** quickly understanding technical papers
- **Academic institutions** improving research visibility and impact
- **Funding agencies** reviewing large volumes of research outputs

It solves the critical challenge of research accessibility by automatically generating summaries tailored to different audience needs - from technical experts to the general public.

## Prerequisites
- n8n instance with the PDF Vector node installed
- OpenAI API key with GPT-4 or GPT-3.5 access
- PDF Vector API credentials
- Basic understanding of webhook setup
- Optional: Slack/Email integration for notifications
- Minimum 20 API credits per paper summarized

## Step-by-Step Setup Instructions
1. **Configure API Credentials**
   - Navigate to the n8n Credentials section
   - Add PDF Vector credentials with your API key
   - Add OpenAI credentials with your API key
   - Test both connections to ensure they work
2. **Set Up the Webhook Endpoint**
   - Import the workflow template into n8n
   - Note the webhook URL from the "Webhook - Paper URL" node
   - This URL will receive POST requests with paper URLs
   - Example request format:
     ```json
     { "paperUrl": "https://example.com/paper.pdf" }
     ```
3. **Configure Summary Models**
   - Review the OpenAI model settings in each summary node
   - GPT-4 is recommended for executive and technical summaries
   - GPT-3.5-turbo is suitable for lay and social media summaries
   - Adjust temperature settings for creativity vs. accuracy
4. **Customize Output Formats**
   - Modify the "Combine All Summaries" node for your needs
   - Add additional fields or metadata as required
   - Configure the response format (JSON, HTML, plain text)
5. **Test the Workflow**
   - Use a tool like Postman or curl to send a test request
   - Monitor the execution for any errors
   - Verify all four summary types are generated
   - Check response time and adjust the timeout if needed

## Implementation Details
The workflow implements a sophisticated summarization pipeline:
- **PDF Parsing**: Uses LLM-enhanced parsing for accurate extraction from complex layouts
- **Parallel Processing**: Generates all summary types simultaneously for efficiency
- **Audience Targeting**: Each summary type uses specific prompts and constraints
- **Quality Control**: Structured prompts ensure consistent, high-quality outputs
- **Flexible Output**: Returns all summaries in a single API response

## Customization Guide

**Adding Custom Summary Types:** Create new summary nodes with specialized prompts:

```javascript
// Example: Policy Brief Summary
{
  "content": "Create a policy brief (max 300 words) highlighting: Policy-relevant findings, Recommendations for policymakers, Societal implications, Implementation considerations. Paper content: {{ $json.content }}"
}
```

**Modifying Summary Lengths:** Adjust word limits in each summary prompt:

```javascript
// In Executive Summary node:
"max 500 words"      // Change to your desired length
// In Tweet Summary node:
"max 280 characters" // Twitter limit
```

**Adding Language Translation:** Extend the workflow with translation nodes:

```javascript
// After summary generation, add:
"Translate this summary to Spanish: {{ $json.executiveSummary }}"
```

**Implementing Caching:** Add a caching layer to avoid reprocessing (a minimal sketch appears at the end of this description):
- Use Redis or n8n's static data
- Cache based on paper DOI or URL hash
- Set an appropriate TTL for cache entries

**Batch Processing Enhancement:** For multiple papers, modify the workflow:
- Accept an array of paper URLs
- Use a SplitInBatches node for processing
- Aggregate results before responding

## Summary Types
- **Executive Summary**: 1-page overview for decision makers
- **Technical Summary**: Detailed summary for researchers
- **Lay Summary**: Plain language for general audience
- **Social Media**: Tweet-sized key findings

## Key Features
- Parse complex academic PDFs with LLM enhancement
- Generate multiple summary types simultaneously
- Extract and highlight key methodology and findings
- Create audience-appropriate language and depth
- API-driven for easy integration

## Advanced Features

**Quality Metrics:** Add a quality assessment node:

```javascript
// Evaluate summary quality
const qualityChecks = {
  hasKeyFindings: summary.includes('findings'),
  appropriateLength: summary.length <= maxLength,
  noJargon: !technicalTerms.some(term => summary.includes(term))
};
```

**Template Variations:** Create field-specific templates:
- Medical research: Include clinical implications
- Engineering papers: Focus on technical specifications
- Social sciences: Emphasize methodology and limitations
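As referenced under "Implementing Caching", here is a minimal sketch using n8n's workflow static data. The TTL and the exact fields stored are assumptions for illustration, and static data only persists across active (production) executions, not manual test runs.

```javascript
// Minimal caching sketch for a Code node placed before the PDF parsing step.
const staticData = $getWorkflowStaticData('global');
staticData.summaryCache = staticData.summaryCache || {};

const paperUrl = $json.paperUrl;
const ttlMs = 7 * 24 * 60 * 60 * 1000; // keep cached summaries for 7 days (assumption)

const cached = staticData.summaryCache[paperUrl];
if (cached && Date.now() - cached.storedAt < ttlMs) {
  // Cache hit: return the stored summaries and skip re-parsing the PDF.
  return [{ json: { ...cached.summaries, fromCache: true } }];
}

// Cache miss: pass the URL through. A later Code node should write the finished
// summaries back, e.g. staticData.summaryCache[paperUrl] = { storedAt: Date.now(), summaries }.
return [{ json: { paperUrl, fromCache: false } }];
```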
by Yaron Been
# Generate 3D Models & Textures from Images with Hunyuan3D AI

This workflow connects n8n → Replicate API to generate 3D-like outputs using the ndreca/hunyuan3d-2.1-test model. It handles everything: sending the request, waiting for processing, checking status, and returning results.

## ⚡ Section 1: Trigger & Setup

### ⚙️ Nodes
1️⃣ **On Clicking "Execute"**
- **What it does:** Starts the workflow manually in n8n.
- **Why it's useful:** Great for testing or one-off runs before automation.

2️⃣ **Set API Key**
- **What it does:** Stores your **Replicate API Key**.
- **Why it's useful:** Keeps authentication secure and reusable across HTTP nodes.

💡 **Beginner Benefit**
- No coding needed — just paste your API key once.
- Easy to test: press Execute, and you're live.

## 🤖 Section 2: Send Job to Replicate

### ⚙️ Nodes
3️⃣ **Create Prediction (HTTP Request)**
- **What it does:** Sends a **POST request** to Replicate's API with:
  - Model version (70d0d816...ae75f)
  - Input image URL
  - Parameters like steps, seed, generate_texture, remove_background
- **Why it's useful:** This kicks off the AI generation job on Replicate's servers.

4️⃣ **Extract Prediction ID (Code)**
- **What it does:** Grabs the **prediction ID** from the API response and builds a status-check URL (see the sketch after this overview).
- **Why it's useful:** Every job has a unique ID — this lets us track progress later.

💡 **Beginner Benefit**
- You don't need to worry about JSON parsing — the workflow extracts the ID automatically.
- Everything is reusable if you run multiple generations.

## ⏳ Section 3: Poll Until Complete

### ⚙️ Nodes
5️⃣ **Wait (2s)**
- **What it does:** Pauses for 2 seconds before checking the job status.
- **Why it's useful:** Prevents spamming the API with too many requests.

6️⃣ **Check Prediction Status (HTTP Request)**
- **What it does:** Sends a GET request to see if the job is finished.

7️⃣ **Check If Complete (IF Node)**
- **What it does:** If status = succeeded → process results. If not → loops back to Wait and checks again.

💡 **Beginner Benefit**
- Handles waiting logic for you — no manual refreshing needed.
- Keeps looping until the AI job is really done.

## 📦 Section 4: Process the Result

### ⚙️ Nodes
8️⃣ **Process Result (Code)**
- **What it does:** Extracts:
  - status
  - output (final generated file/URL)
  - metrics (performance stats)
  - timestamps (created_at, completed_at)
  - model info
- **Why it's useful:** Packages the response neatly for storage, email, or sending elsewhere.

💡 **Beginner Benefit**
- Get clean, structured data ready for saving or sending.
- Can be extended easily: push output to Google Drive, Notion, or Slack.

## 📊 Workflow Overview

| Section | What happens | Key Nodes | Benefit |
| ------- | ------------ | --------- | ------- |
| ⚡ Trigger & Setup | Start workflow + set API key | Manual Trigger, Set | Easy one-click start |
| 🤖 Send Job | Send input & get prediction ID | Create Prediction, Extract ID | Launches AI generation |
| ⏳ Poll Until Complete | Waits + checks status until ready | Wait, Check Status, IF | Automated loop, no manual refresh |
| 📦 Process Result | Collects output & metrics | Process Result | Clean result for next steps |

## 🎯 Overall Benefits
- ✅ Fully automates Replicate model runs
- ✅ Handles waiting, retries, and completion checks
- ✅ Clean final output with status + metrics
- ✅ Beginner-friendly — just add API key + input image
- ✅ Extensible: connect results to Google Sheets, Gmail, Slack, or databases

✨ In short: This is a no-code AI image-to-3D content generator powered by Replicate and automated by n8n.
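As referenced in the Extract Prediction ID step, a minimal sketch of that Code node is shown below. Replicate's create-prediction response does include an `id`, a `status`, and a `urls.get` polling URL; the exact fields kept here are illustrative.

```javascript
// Minimal sketch of the "Extract Prediction ID" Code node.
const prediction = $input.first().json;

return [{
  json: {
    predictionId: prediction.id,
    // Status-check URL consumed by the "Check Prediction Status" HTTP Request node.
    statusUrl: prediction.urls?.get || `https://api.replicate.com/v1/predictions/${prediction.id}`,
    status: prediction.status,   // usually "starting" right after creation
  },
}];
```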
by Sk developer
# 🚀 Facebook to MP4 Video Downloader – Fully Customizable Automated Workflow

Easily convert Facebook videos into downloadable MP4 files using the Facebook Video Downloader API. This n8n workflow automates fetching videos, downloading them, uploading them to Google Drive, and logging results in Google Sheets. Users can modify and extend this flow according to their own needs (e.g., add email notifications, change the storage location, or use another API).

## 📝 Node-by-Node Explanation
1. **On form submission** → Triggers when a user submits a Facebook video URL via the form. (You can customize this form to include email or multiple URLs.)
2. **Facebook RapidAPI Request** → Sends a POST request to the Facebook Video Downloader API to fetch downloadable MP4 links. (Easily replace or update API parameters as needed.)
3. **If Node** → Checks the API response for errors before proceeding (see the sketch after this description). (You can add more conditions to handle custom error scenarios.)
4. **MP4 Downloader** → Downloads the Facebook video file from the received media URL. (You can change download settings, add quality filters, or store multiple resolutions.)
5. **Upload to Google Drive** → Uploads the downloaded MP4 file to a Google Drive folder. (Easily switch to Dropbox, S3, or any other storage service.)
6. **Google Drive Set Permission** → Sets the uploaded file to be publicly shareable. (You can make it private or share only with specific users.)
7. **Google Sheets** → Logs successful conversions with the original URL and shareable MP4 link. (Customizable for additional fields like video title, size, or download time.)
8. **Wait Node** → Delays before logging failed conversions to avoid rapid writes. (You can adjust the wait duration or add retry attempts.)
9. **Google Sheets Append Row** → Records failed conversion attempts with N/A as the Drive URL. (You can add notification alerts for failed downloads.)

## ✅ Use Cases
- Automate Facebook video downloads for social media teams
- Instantly generate shareable MP4 links for clients or marketing campaigns
- Maintain a centralized log of downloaded videos for reporting
- Customizable flow for different video quality, formats, or storage needs

## 🚀 Benefits
- Fast and reliable Facebook video downloading with the Facebook Video Downloader API
- Flexible and fully customizable – adapt nodes, storage, and notifications as required
- Automatic error handling and logging in Google Sheets
- Cloud-based storage with secure and shareable Google Drive links
- Seamless integration with n8n and the Facebook Video Downloader API for scalable automation

🔑 Resolved: Manual Facebook video downloads are now fully automated, customizable, and scalable using the Facebook Video Downloader API, Google Drive uploads, and detailed logging via Google Sheets.
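As referenced in the If Node step, the error check can be sketched as below, written as a Code node for clarity. The response field names ("error", "media") are assumptions about the Facebook Video Downloader API payload; adjust them to the real response before the If node branches on the flag.

```javascript
// Hypothetical sketch of the error check feeding the If node (assumed field names).
const res = $input.first().json;

const hasError = Boolean(res.error) || !res.media || res.media.length === 0;

return [{
  json: {
    ok: !hasError,                                  // the If node branches on this flag
    mediaUrl: hasError ? null : res.media[0].url,   // passed to the MP4 Downloader
    sourceUrl: res.url ?? null,                     // original Facebook URL for logging
  },
}];
```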
by Yashraj singh sisodiya
# AI Latest 24 Update Workflow Explanation

## Aim
The aim of the AI Latest 24 Update Workflow is to automate the daily collection and distribution of the most recent Artificial Intelligence and Technology news from the past 24 hours. It ensures users receive a clean, well-structured HTML email containing headlines, summaries, and links to trusted sources, styled professionally for easy reading.

## Goal
The goal is to:
- Automate news retrieval by fetching the latest AI developments from trusted sources daily.
- Generate structured HTML output with bold headlines, concise summaries, and clickable links.
- Format the content professionally with inline CSS, ensuring the email is visually appealing.
- Distribute updates automatically to selected recipients via Gmail.
- Provide reusability so the HTML can be processed for other platforms if needed.

This ensures recipients receive accurate, up-to-date, and well-formatted AI news without manual effort.

## Requirements
The workflow relies on the following components and configurations:

**n8n Platform** – Acts as the automation environment for scheduling, fetching, formatting, and delivering AI news updates.

**Node Requirements**
- **Schedule Trigger** – Runs the workflow every day at 10:00 AM. Automates the process without manual initiation.
- **Message a model (Perplexity API)** – Uses the sonar-pro model from Perplexity AI. Fetches the most recent AI developments from the past 24 hours. Outputs results as a self-contained HTML email with inline CSS (card-style layout).
- **Send a message (Gmail)** – Sends the generated HTML email with the subject "Latest Tech and AI news update 🚀". Recipients: xyz@gmail.com.
- **HTML Node** – Processes the AI model's HTML response. Ensures the email formatting is clean, valid, and ready for delivery.

**Credentials**
- **Perplexity API account**: For fetching AI news.
- **Gmail OAuth2 account**: For secure email delivery.

**Input Requirements** – No manual input required; the workflow runs automatically on schedule.

**Output** – A daily AI news digest email containing:
- Headlines in bold.
- One-sentence summaries in normal text.
- Full URLs as clickable links.
- Styled in a clean card-based format with hover effects.

## API Usage
The workflow integrates APIs to achieve automation:

**Perplexity API**
- Used in the Message a model node.
- The API fetches the latest AI news, ensures data accuracy, and outputs HTML-formatted content.
- Provides styling via inline CSS (Segoe UI font, light background, card design).
- Ensures the news is fresh (past 24 hours only).

**Gmail API**
- Used in the Send a message node.
- Handles the secure delivery of emails with OAuth2 authentication.
- Sends AI news updates directly to inboxes.

## HTML Processing
The HTML Node ensures proper formatting before email delivery:
- **Process**: Cleans and validates the HTML ($json.message) generated by Perplexity.
- **Output**: A self-contained HTML email with proper structure (`<html>`, `<head>`, `<body>`).
- **Relevance**: Ensures Gmail sends a **styled digest** instead of raw text (a sketch appears at the end of this explanation).

## Workflow Summary
The AI Latest 24 Update Workflow automates daily AI news collection and delivery by:
1. Triggering at 10:00 AM using the Schedule Trigger node.
2. Fetching AI news via the Perplexity API (Message a model node).
3. Formatting the results into clean HTML (HTML node).
4. Sending the email via the Gmail API to multiple recipients.

This workflow ensures a seamless, hands-off system where recipients get accurate, fresh, and well-designed AI news updates every day.
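As referenced in the HTML Processing section, the wrapping step can be pictured as a small Code-node alternative to the HTML node. The styling values are illustrative, mirroring the card layout described above; they are not taken from the actual workflow.

```javascript
// Hypothetical sketch: wrap the Perplexity output ($json.message) into a
// complete, self-contained HTML document before the Gmail node.
const body = $json.message || '';

const html = `<!DOCTYPE html>
<html>
  <head><meta charset="utf-8"></head>
  <body style="font-family:'Segoe UI',Arial,sans-serif;background:#f5f6fa;padding:16px;">
    ${body}
  </body>
</html>`;

return [{ json: { html } }];
```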
by Marth
## How It Works: The 5-Node Anomaly Detection Flow
This workflow efficiently processes logs to detect anomalies.

1. **Scheduled Check (Cron Node):** This is the primary trigger. It schedules the workflow to run at a defined interval (e.g., every 15 minutes), ensuring logs are routinely scanned for suspicious activity.
2. **Fetch Logs (HTTP Request Node):** This node retrieves logs from an external source. It sends a request to your log API endpoint to get a batch of the most recent logs.
3. **Count Failed Logins (Code Node):** This is the core of the detection logic. The JavaScript code filters the logs for a specific event ("login_failure"), counts the total, and identifies the unique IPs involved. This information is then passed to the next node (a sketch of this node appears after the setup steps).
4. **Failed Logins > Threshold? (If Node):** This node serves as the final filter. It checks whether the number of failed logins exceeds a threshold you set (e.g., more than 5 attempts). If it does, the workflow is routed to the notification node; if not, the workflow ends safely.
5. **Send Anomaly Alert (Slack Node):** This node sends an alert to your team if an anomaly is detected. The Slack message includes a summary of the anomaly, such as the number of failed attempts and the IPs involved, enabling a swift response.

## How to Set Up
Implementing this essential log anomaly detector in your n8n instance is quick and straightforward.

1. **Prepare Your Credentials & API:**
   - Log API: Make sure you have an API endpoint or another way to get logs from your system (e.g., a server, CMS, or application). The logs should be in JSON format, and you'll need any necessary API keys or tokens.
   - Slack Credential: Set up a Slack credential in n8n and get the Channel ID of your security alert channel (e.g., #security-alerts).
2. **Import the Workflow JSON:**
   - Create a new workflow in n8n and choose "Import from JSON."
   - Paste the workflow JSON.
3. **Configure the Nodes:**
   - Scheduled Check (Cron): Set the schedule according to your preference (e.g., every 15 minutes).
   - Fetch Logs (HTTP Request): Update the URL and header/authentication to match your specific log API endpoint.
   - Count Failed Logins (Code): Verify that the JavaScript code matches your log's JSON format. You may need to adjust log.event === 'login_failure' if your log events use a different name.
   - Failed Logins > Threshold? (If): Adjust the threshold value (e.g., 5) based on your risk tolerance.
   - Send Anomaly Alert (Slack): Select your Slack credential and enter the correct Channel ID.
4. **Test and Activate:**
   - Manual Test: Run the workflow manually to confirm it fetches logs and processes them correctly. You can temporarily lower the threshold to 0 to ensure the alert is triggered.
   - Verify Output: Check your Slack channel to confirm that alerts are formatted and sent correctly.
   - Activate: Once you're confident in its function, activate the workflow. n8n will now automatically monitor your logs on the schedule you set.
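As referenced in the Count Failed Logins step, a minimal sketch of that Code node is shown below. It assumes the Fetch Logs node outputs one item per log entry with fields like event and ip; adjust the field names to match your log API's JSON format.

```javascript
// Minimal sketch of the "Count Failed Logins" Code node.
// Assumed log entry shape: { event: "login_failure", ip: "1.2.3.4", ... }
const logs = $input.all().map(item => item.json);

const failures = logs.filter(log => log.event === 'login_failure');
const uniqueIps = [...new Set(failures.map(log => log.ip).filter(Boolean))];

return [{
  json: {
    failedLoginCount: failures.length,   // compared against the threshold in the If node
    uniqueIpCount: uniqueIps.length,
    uniqueIps,                           // included in the Slack alert message
  },
}];
```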