by AArtIntelligent
Objective
This workflow automatically imports product images from Google Drive and associates them with product templates and products in Odoo.
by Sk developer
📥 Pinterest Video to MP4 Downloader with Email Delivery | RapidAPI Integration

This n8n workflow automates downloading Pinterest videos as MP4 files using the Pinterest Video Downloader API, uploads them to Google Drive, sets public access permissions, and emails the shareable download link to the user.

📝 Node-by-Node Explanation
1️⃣ n8n Form Trigger → Captures the Pinterest video URL and user email from a web form to start the workflow.
2️⃣ HTTP Request → Sends the submitted URL to the Pinterest Video Downloader API to process and fetch downloadable MP4 links.
3️⃣ Wait → Pauses the workflow, allowing the API enough time to complete the MP4 conversion.
4️⃣ HTTP Downloader → Downloads the generated MP4 video from the API response.
5️⃣ Upload To Google Drive → Uploads the downloaded MP4 file to Google Drive for cloud storage.
6️⃣ Set Permissions Google Drive → Sets file permissions to allow public access via a shareable link.
7️⃣ Send Email → Sends an automated email with the Google Drive download link to the user's provided email address.

💡 Use Case
Ideal for social media managers, digital marketers, educators, and content creators who frequently need to repurpose Pinterest videos for campaigns, training materials, or social posts. It saves time by automating the entire process, from URL submission to receiving a ready-to-share MP4 link via email, without any manual downloading, renaming, or cloud uploading. Perfect for agencies handling multiple clients who want to streamline bulk Pinterest video downloads and securely distribute them via email in seconds.

✅ Benefits
- **Time Efficiency**: Automates video conversion and delivery, eliminating manual steps.
- **Cloud Storage**: Automatically uploads videos to Google Drive, ensuring secure backup and easy organization.
- **Public Access Links**: Instantly creates shareable links without extra permission settings.
- **Seamless Email Delivery**: Sends ready-to-use download links directly to the user's inbox.
- **Scalable for Teams**: Supports multiple submissions, making it suitable for agencies managing high download volumes.
- **Powered by RapidAPI**: Uses the **Pinterest Video Downloader** API for reliable, fast, and secure video extraction.
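For reference, the request the HTTP Request node sends to the RapidAPI service can be sketched as below. The endpoint path, host, and parameter name are illustrative assumptions, not the API's confirmed interface; copy the exact values from your RapidAPI dashboard for the Pinterest Video Downloader API.

```python
def build_rapidapi_request(video_url: str, api_key: str) -> dict:
    """Assemble the pieces the n8n HTTP Request node needs.

    The URL, host header, and 'url' query parameter below are
    hypothetical placeholders -- verify them in RapidAPI.
    """
    host = "pinterest-video-downloader.p.rapidapi.com"  # assumed host
    return {
        "url": f"https://{host}/download",        # assumed endpoint path
        "params": {"url": video_url},             # assumed parameter name
        "headers": {
            # RapidAPI's standard authentication headers
            "X-RapidAPI-Key": api_key,
            "X-RapidAPI-Host": host,
        },
    }
```

In the n8n node, the same values go into the URL field, the query parameters, and the header parameters respectively.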
by Fariez
Automatically create AI-generated anime wallpapers, transform them into animated videos, and post them to TikTok, all with one n8n workflow.

What Problem Is This Workflow Solving? / Use Case
Creating and publishing engaging anime content for TikTok is often time-consuming. From generating ideas, creating visuals, animating them, and finally uploading to TikTok, the process usually requires multiple tools and manual effort. This workflow solves that by automating the entire pipeline, from anime wallpaper generation to video animation and auto-posting on TikTok, all in one place. Perfect for content creators, anime enthusiasts, and marketers who want to consistently deliver fresh, unique TikTok content without the hassle.

Who Is This For
- **Anime Creators & Fans**: Share unique AI-generated anime content with your TikTok audience.
- **Content Creators & Influencers**: Keep your TikTok feed active without spending hours designing and editing.
- **Marketers & Social Media Managers**: Automate anime-themed campaigns to attract new audiences.
- **Automation Enthusiasts**: Explore creative ways to connect AI models and publishing platforms using n8n.

What This Workflow Does
1. Collects an anime topic & style via an n8n Form (or a scheduled trigger).
2. Uses Groq + GPT-OSS to generate a text-to-image prompt.
3. Creates an anime wallpaper using the Flux AI model (Pollination AI).
4. Transforms the wallpaper into an animated video with Fal AI (Minimax Hailuo 02 Fast).
5. Automatically posts the final video to TikTok via the GetLate API.

How to Use
1. Set up Groq: Add your Groq API key to the Groq Chat Model node and select an LLM model (default: OpenAI GPT-OSS 120B).
2. Set up TikTok posting: Get your API key from getlate.dev and add the credentials to the Upload IMG and TikTok Post nodes.
3. Set up Fal AI for video generation: Get your API key from Fal.ai, top up credits, and add your Fal AI credentials to the Create Video, Get Status, and Get Video nodes.
4. Run the workflow: Open the n8n Form URL (Test or Production), enter your anime topic and style, and the workflow will generate the image, animate it, and post it directly to TikTok.

Possible Customizations
- Replace the default Form Trigger with a Scheduled Trigger.
- Connect a topics database (e.g., Google Sheets or Airtable) to automatically generate and post animated anime wallpapers on TikTok at regular intervals.
by phil
This workflow generates unique AI-powered music tracks using the ElevenLabs Music API. Enter a text description of the music you envision, and the workflow will compose it, save the MP3 file to your Google Drive, and instantly provide a link to listen to your creation. It is a powerful tool for quickly producing background music, soundscapes, or musical ideas without any complex software.

Who's it for
This template is ideal for:
- Content Creators: Generate royalty-free background music for videos, podcasts, and streams on the fly.
- Musicians & Producers: Quickly brainstorm musical themes and ideas from a simple text prompt.
- Developers & Hobbyists: Integrate AI music generation into projects or simply experiment with the capabilities of the ElevenLabs API.

How to set up
1. Configure API Key: Sign up for an ElevenLabs account and get your API key. In the "API Key" node, replace the placeholder value with your actual ElevenLabs API key.
2. Connect Google Drive: Select the "Upload mp3" node and create new credentials to connect your Google Drive account.
3. Activate the Workflow: Save and activate the workflow, then use the Form Trigger's production URL to access the AI Music Generator web form.

Requirements
- An active n8n instance.
- An ElevenLabs account for the API key.
- A Google Drive account.

How to customize this workflow
- Change Storage: Replace the Google Drive node with another storage service node such as Dropbox, AWS S3, or an FTP server to save your music elsewhere.
- Modify Music Quality: In the "elevenlabs_api" node, you can change the output_format in the body to adjust the MP3 quality. Refer to the ElevenLabs API documentation for available options.
- Customize Confirmation Page: Edit the "prepare reponse" node to change the design and text of the final page shown to the user.

Phil | Inforeole
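As an illustration of what the "elevenlabs_api" node sends, the request can be sketched as below. The endpoint path, field names, and the `output_format` value are assumptions for the example; verify them against the ElevenLabs API documentation before relying on them.

```python
import json

def music_request(prompt: str, output_format: str = "mp3_44100_128") -> dict:
    """Sketch of the ElevenLabs music-generation request.

    '/v1/music', the 'prompt' field, and the output_format string are
    placeholders assumed for illustration -- check the official docs.
    """
    return {
        "url": "https://api.elevenlabs.io/v1/music",  # assumed endpoint
        "headers": {
            "xi-api-key": "YOUR_API_KEY",  # ElevenLabs API key header
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "prompt": prompt,              # the text description from the form
            "output_format": output_format # MP3 quality knob mentioned above
        }),
    }
```

Changing `output_format` here corresponds to editing the body field in the n8n node, as described in the customization notes.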
by Gloria
Premium n8n Workflow: SMART AI Keyword Categorization & Content Strategy

This n8n workflow transforms raw keyword data into actionable content intelligence using advanced AI categorization and clustering. It creates a comprehensive content strategy with ready-to-use titles and descriptions for your blog posts.

🚀 Features
- 🧠 **AI-Powered Categorization**: Automatically sorts keywords into strategic buckets (Quick Wins, Authority Builders, Emerging Topics, Intent Signals, and Semantic Topics) for targeted content creation 🎯.
- 🔗 **Semantic Clustering**: Identifies meaningful relationships between keywords to create logically grouped content clusters 🕸️.
- 📑 **Content Blueprint Generation**: Creates compelling titles and descriptions for each keyword and cluster to streamline your content creation process 📝.
- 🌐 **Hub & Spoke Strategy Builder**: Develops a complete site architecture plan with main hub articles and supporting spoke content 🏗️.
- ⚡ **Airtable Integration**: Organizes all outputs in a structured database for seamless integration with your content workflow.
- 🤖 **n8n AI Agents**: Leverages advanced AI capabilities to analyze keyword intent and potential without manual intervention.

👥 This Workflow is Perfect For
- Content strategists 📊
- SEO professionals 🔍
- Content marketing teams 👥
- Blog managers 💻
- Digital publishers 📰
- Website owners (e-commerce) 🏢
- Anyone using the SMART AI Keyword Research Workflow 🔄

Stop struggling with manual keyword grouping and transform your raw keyword data into a comprehensive content strategy. 👉 Get this premium workflow today!

📝 What's Included?
- ⚙️ **n8n Workflow Template**: Ready-to-use workflow with AI-powered nodes for keyword analysis and categorization.
- 📊 **Airtable Database Structure**: Pre-configured tables for categorized keywords, content ideas, and hub-spoke relationships.
- 🧩 **Keyword Categorization System**: Automated logic to sort keywords by opportunity, competition, and relevance.
- 📋 **Content Titles and Descriptions**: AI prompts to create optimized titles and descriptions for each content piece.
- 🌐 **Hub & Spoke Mapper**: Logic to identify main topics and supporting content opportunities.
- 📚 **Documentation**: Step-by-step instructions for importing keyword data and running the workflow.

🏆 Why Choose This Workflow?
- ⏱️ **Save Massive Time**: Automate what would take days of manual analysis and planning.
- 📈 **Improve Content ROI**: Create content that targets the right keywords in the right way.
- 🔄 **Seamless Integration**: Works perfectly with the AI Keyword Research and Blog Writing workflows available on my profile.
- 🏗️ **Build Site Authority**: Create topically relevant content clusters that boost domain expertise.
- 🎯 **Strategic Focus**: Stop guessing which keywords to target and how to organize your content.

🛠️ How It Works
1️⃣ Import the provided n8n workflow into your n8n instance 📥.
2️⃣ Connect to your Airtable base containing keyword research data ⚙️.
3️⃣ Configure the AI agents with your preferred content parameters 🤖.
4️⃣ Run the workflow to automatically categorize, cluster, and create content briefs 🔄.
5️⃣ Use the generated content strategy to guide your blogging, or feed it directly into the Multi-Agent Blog Writing System available on my profile 📝.
Additional detailed instructions are provided in the workflow.

🏁 What You Need to Get Started
🔹 Access to n8n (self-hosted or cloud) ☁️
🔹 An Airtable account with keyword data (ideally from the AI Keyword Research Workflow on my profile) 📊
🔹 OpenAI API credentials for the AI categorization and title generation 🔑
🔹 A basic understanding of content strategy and n8n workflows 🧠

💡 You can also connect this workflow with my SEO Keyword Research Automation using DataForSEO and Airtable and my Multi-Agent SEO Optimized Blog Writing System with Hyperlinks for E-Commerce, both available on my profile, to build a fully automated, end-to-end SEO content machine.
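To illustrate the bucket idea (this is not the workflow's actual AI-driven logic, which analyzes intent with an LLM), a rule-of-thumb categorizer might look like the sketch below. The volume/difficulty thresholds and field names are invented for the example.

```python
def categorize(keyword: dict) -> str:
    """Toy rule-based stand-in for the AI categorization step.

    Expects a dict with 'volume' and 'difficulty' (0-100), plus an
    optional 'trending' flag -- all hypothetical field names.
    """
    vol, diff = keyword["volume"], keyword["difficulty"]
    if vol >= 500 and diff <= 30:
        return "Quick Wins"          # high demand, low competition
    if diff >= 60:
        return "Authority Builders"  # competitive terms that build expertise
    if keyword.get("trending"):
        return "Emerging Topics"     # rising interest, act early
    return "Semantic Topics"         # supporting cluster content
```

The real workflow replaces these hard thresholds with LLM judgment, which is what lets it also detect Intent Signals from the keyword phrasing itself.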
by Harshil Agrawal
This workflow allows you to create a group, add members to the group, and get the members of the group.
- Bitwarden node: Creates a new group called documentation in Bitwarden.
- Bitwarden1 node: Gets all the members from Bitwarden.
- Bitwarden2 node: Updates the members of the group we created earlier.
- Bitwarden3 node: Gets all the members of the group we created earlier.
by PDF Vector
Overview
Conducting comprehensive literature reviews is one of the most time-consuming aspects of academic research. This workflow revolutionizes the process by automating literature search, paper analysis, and review generation across multiple academic databases. It handles both digital papers and scanned documents (PDFs, JPGs, PNGs), using OCR technology for older publications or image-based content.

What You Can Do
- Automate searches across multiple academic databases simultaneously
- Analyze and rank papers by relevance, citations, and impact
- Generate comprehensive literature reviews with proper citations
- Process both digital and scanned documents with OCR
- Identify research gaps and emerging trends systematically

Who It's For
Researchers, graduate students, academic institutions, literature review teams, and academic writers who need to conduct comprehensive literature reviews efficiently while maintaining high quality and thoroughness.

The Problem It Solves
Manual literature reviews are extremely time-consuming and often miss relevant papers across different databases. Researchers struggle to synthesize large volumes of academic papers, track citations properly, and identify research gaps systematically. This template automates the entire process from search to synthesis, ensuring comprehensive coverage and proper citation management.

Setup Instructions
1. Configure PDF Vector API credentials with academic search access
2. Set up search parameters, including databases and date ranges
3. Define inclusion and exclusion criteria for paper selection
4. Choose a citation style (APA, MLA, Chicago, etc.)
5. Configure output format preferences
6. Set up reference management software integration if needed
7. Define the research topic and keywords for the search

Key Features
- Simultaneous search across PubMed, arXiv, Semantic Scholar, and other databases
- Intelligent paper ranking based on citation count, recency, and relevance
- OCR support for scanned documents and older publications
- Automatic extraction of methodologies, findings, and limitations
- Citation network analysis to identify seminal works
- Automatic theme organization and research gap identification
- Multiple citation format support (APA, MLA, Chicago)
- Quality scoring based on journal impact factors

Customization Options
- Configure search parameters for specific research domains
- Set up automated searches for ongoing literature monitoring
- Integrate with reference management software (Zotero, Mendeley)
- Customize the output format and structure
- Add collaborative review features for research teams
- Set up quality filters based on journal rankings
- Configure notification systems for new relevant papers

Implementation Details
The workflow uses advanced algorithms to search multiple academic databases simultaneously, ranking papers by relevance and impact. It processes full-text PDFs when available and uses OCR for scanned documents. The system automatically extracts key information, organizes findings by themes, and generates structured literature reviews with proper citations and reference management.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
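The ranking step combines citation count, recency, and relevance. As a rough intuition for how such a score could be composed (this is an invented toy formula, not PDF Vector's actual ranking algorithm), consider:

```python
import math
from datetime import date

def paper_score(citations: int, year: int, relevance: float) -> float:
    """Toy ranking score: log-dampened citations plus a recency bonus,
    weighted by a 0-1 relevance estimate.

    The 20-year decay window and the log dampening are arbitrary
    illustrative choices.
    """
    # Papers older than ~20 years get no recency bonus
    recency = max(0.0, 1.0 - (date.today().year - year) / 20)
    # log1p keeps a 10k-citation classic from drowning out recent work
    return relevance * (math.log1p(citations) + recency)
```

Log-dampening citations is a common design choice in bibliometric ranking: it rewards well-cited work without letting a handful of seminal papers dominate every query.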
by PDF Vector
Overview
HR departments and recruiters spend countless hours manually reviewing resumes, often missing qualified candidates due to time constraints. This workflow automates the entire resume screening process by extracting structured data from resumes in any format (PDF, Word documents, or even photographed/scanned resume images), calculating experience scores, and creating comprehensive candidate profiles ready for your ATS system.

What You Can Do
This workflow automatically retrieves resumes from Google Drive and uses AI to extract all relevant candidate information, including personal details, work experience with dates, education, skills, and certifications. It intelligently handles various resume formats, including PDFs, Word documents, and even scanned or photographed resumes, using OCR. The workflow calculates total years of experience, tracks skill-specific experience, generates proficiency scores for each skill, and provides an AI-powered assessment of candidate strengths and suitability for different roles.

Who It's For
Perfect for HR departments processing high volumes of applications, recruitment agencies managing multiple clients, talent acquisition teams seeking to improve candidate quality, and hiring managers who want data-driven insights for decision making. Ideal for organizations that need to maintain consistent evaluation standards across different reviewers and want to reduce time-to-hire while improving candidate match quality.

The Problem It Solves
Manual resume screening is inefficient and inconsistent. Different reviewers may evaluate the same resume differently, leading to missed opportunities and bias. This workflow standardizes the extraction process, automatically calculates years of experience for each skill, and provides objective scoring metrics to help identify the best candidates faster while reducing human bias in the initial screening process.

Setup Instructions
1. Configure Google Drive credentials in n8n
2. Install the PDF Vector community node from the n8n marketplace
3. Configure your PDF Vector API credentials
4. Set up your preferred data storage (database or spreadsheet)
5. Customize the skill categories for your industry
6. Configure the scoring algorithm based on your requirements
7. Connect to your existing ATS system if needed

Key Features
- **Automatic Resume Retrieval**: Pull resumes from Google Drive folders automatically
- **Universal Format Support**: Process PDFs, Word documents, and photographed resumes
- **OCR Capabilities**: Extract text from scanned or photographed documents
- **Experience Calculation**: Automatically compute total and skill-specific experience
- **Proficiency Scoring**: Generate objective skill proficiency ratings
- **AI Assessment**: Get intelligent insights on candidate fit and strengths
- **Multi-Language Support**: Handle resumes in various languages
- **ATS Integration**: Output structured data compatible with major ATS systems

Customization Options
Define custom skill categories relevant to your industry, adjust scoring weights for different experience types, add specific extraction fields for your organization, implement keyword matching for job requirements, set up automated candidate ranking systems, create role-specific evaluation criteria, and integrate with LinkedIn or other professional networks for enhanced candidate insights.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
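For intuition, the experience calculation reduces to summing the date ranges extracted from the resume. A simplified sketch (it naively sums stints without merging overlapping ones, which a production version of this step would need to handle):

```python
from datetime import date

def years_of_experience(stints) -> float:
    """Sum employment stints into a years-of-experience figure.

    `stints` is a list of (start_date, end_date) tuples, as might be
    extracted from a resume's work-history section. Overlap handling
    is omitted for brevity.
    """
    days = sum((end - start).days for start, end in stints)
    return round(days / 365.25, 1)  # 365.25 accounts for leap years
```

The same function applied per skill (using only the stints where that skill was used) yields the skill-specific experience figures that feed the proficiency scores.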
by Harshil Agrawal
This workflow creates an invoice from the information received via a Typeform submission.
- Typeform node: Triggers the workflow whenever the form is submitted. The information received here is used to generate the invoice.
- APITemplate.io node: Generates the invoice using the information from the previous node.
by WeblineIndia
Automate Video Upload → Auto-Thumbnail → Google Drive

This workflow accepts a video via HTTP upload, verifies it's a valid video, extracts a thumbnail frame at the 5-second mark using FFmpeg (auto-installing a static build if missing), uploads the image to a specified Google Drive folder, and returns a structured JSON response containing the new file's details.

Who's it for
- **Marketing / Social teams** who need ready-to-publish thumbnails from raw uploads.
- **Developers** who want an API-first thumbnail microservice without standing up extra infrastructure.
- **Agencies / Creators** standardizing assets in a shared Drive.

How it works
1. Accept Video Upload (Webhook): Receives multipart/form-data with the file in field media at /mediaUpload. The response is deferred until the final node.
2. Validate Upload is Video (IF): Checks that {{$binary.media.mimeType}} contains video/. Non-video payloads can be rejected with HTTP 400.
3. Persist Upload to /tmp (Write Binary File): Writes the uploaded file to /tmp/<originalFilename or input.mp4> for stable processing.
4. Extract Thumbnail with FFmpeg (Execute Command): Uses the system ffmpeg if available; otherwise downloads a static binary to /tmp/ffmpeg. Runs: ffmpeg -y -ss 5 -i <input file> -frames:v 1 -q:v 2 /tmp/thumbnail.jpg
5. Load Thumbnail from Disk (Read Binary File): Reads /tmp/thumbnail.jpg into the item's binary as thumbnail.
6. Upload Thumbnail to Drive (Google Drive): Uploads to your target folder. The file name is <original>-thumb.jpg.
7. Return API Response (Respond to Webhook): Sends JSON to the client, including the Drive file id, name, links, size, and checksums (if available).

How to set up
1. Import the workflow JSON into n8n.
2. Google Drive: Create (or choose) a destination folder and copy its Folder ID. Add Google Drive OAuth2 credentials in n8n and select them in the Drive node. Set the folder in the Drive "Upload" node.
3. Webhook: The endpoint is POST /webhook/mediaUpload. Test:
   curl -X POST https://YOUR-N8N-URL/webhook/mediaUpload \
     -F "media=@/path/to/video.mp4"
4. FFmpeg: Nothing to install manually; the Execute Command node auto-installs a static ffmpeg if it's not present. (Optional) If running n8n in Docker and you want permanence, use an image that includes ffmpeg.
5. Response body: The Respond node returns JSON with file metadata. You can customize the fields as needed.
6. (Optional) Non-video branch: On the IF node's false output, add a Respond node with HTTP 400 and a helpful message.

Requirements
- An n8n instance with the Execute Command node enabled (self-hosted/container/VM).
- **Outbound network** access (to fetch static FFmpeg if not installed).
- A **Google Drive OAuth2** credential with permission to the destination folder.
- Adequate temp space in /tmp for the uploaded video and the generated thumbnail.

How to customize
- **Timestamp**: Change -ss 5 to another second, or parameterize it via query/body (e.g., timestamp=15).
- **Multiple thumbnails**: Duplicate the FFmpeg + Read steps with -ss 5, -ss 15, -ss 30, suffixing names -thumb-5.jpg, etc.
- **File naming**: Include the upload time or Drive file ID: {{ base + '-' + $now + '-thumb.jpg' }}.
- **Public sharing**: Add a **Drive → Permission: Create** node (Role: reader, Type: anyone) and return webViewLink.
- **Output target**: Replace the Drive node with **S3 Upload** or Zoho WorkDrive (HTTP Request) if needed.
- **Validation**: Enforce a max file size / MIME whitelist in a small Function node before writing to disk.
- **Logging**: Append a row to Google Sheets/Notion with sourceFile, thumbId, size, duration, status.

Add-ons
- **Slack / Teams notification** with the uploaded thumbnail link.
- **Image optimization** (e.g., convert to WebP or resize variants).
- **Retry & alerts** via an error trigger workflow.
- **Audit log** to a database (e.g., Postgres) for observability.

Use Case Examples
- **CMS ingestion**: Editors upload videos; the workflow returns a thumbnail URL to store alongside the article.
- **Social scheduling**: Upload long-form video to generate a quick hero image for a post.
- **Client portals**: Clients drop raw footage; you keep thumbnails uniform in one Drive folder.

Common troubleshooting

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| ffmpeg: not found | System lacks ffmpeg and the static build couldn't download | Ensure outbound HTTPS is allowed; keep the auto-installer lines intact; or use a Docker image that includes ffmpeg. |
| Webhook returns 400 "not a video" | Wrong field name or non-video MIME type | Send the file in the media field; ensure it's video/*. |
| Drive upload fails (403 / insufficient permissions) | OAuth scope or account lacks folder access | Reconnect the Drive credential; verify the destination Folder ID and sharing/ownership. |
| Response missing webViewLink / webContentLink | Drive node not returning link fields | Enable link fields in the Drive node or build URLs from the returned id. |
| 413 Payload Too Large at reverse proxy | Proxy limits on upload size | Increase body-size limits in your proxy (e.g., Nginx client_max_body_size). |
| Disk full / ENOSPC | Large uploads filling /tmp | Increase temp storage; keep the cleanup step; consider size caps and early rejection. |
| Corrupt thumbnail or black frame | Timestamp lands on a black frame | Change -ss, or place -ss before -i vs. after; try different seconds (e.g., 1-3s). |
| Slow extraction | Large or remote files; cold FFmpeg download | Warm the container; host near the upload source; keep the static ffmpeg cached in the image. |
| Duplicate outputs | Repeat requests with the same video/name | Add a de-dup check (query Drive for an existing <base>-thumb.jpg before upload). |

Need Help?
Want this wired to S3 or Zoho WorkDrive, or to generate multiple timestamps and public links out of the box? We're happy to help.
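If you parameterize the timestamp as suggested above, the Execute Command node's invocation can be assembled from parts. This sketch mirrors the FFmpeg flags the workflow uses; the function name and defaults are ours, not part of the template.

```python
def ffmpeg_thumbnail_cmd(src: str, ts: int = 5,
                         out: str = "/tmp/thumbnail.jpg") -> list:
    """Build the frame-extraction command as an argv list.

    -ss before -i seeks on the input (fast), -frames:v 1 grabs a single
    frame, and -q:v 2 requests high JPEG quality.
    """
    return ["ffmpeg", "-y",
            "-ss", str(ts),      # seek to the requested second
            "-i", src,           # input video written to /tmp
            "-frames:v", "1",    # extract exactly one frame
            "-q:v", "2",         # JPEG quality (2 is near-best)
            out]
```

Joining the list with spaces gives the string form used in the Execute Command node; duplicating the call with ts=15 and ts=30 implements the "multiple thumbnails" customization.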
by Sabrina Ramonov 🍄
Description
This automation publishes to 9 social platforms daily! Manage your content in a simple Google Sheet. When you set a post's status to "Ready to Post" in your Google Sheet, this workflow grabs your image/video from Google Drive, posts your content to 9 social platforms, then updates the post's status in the Google Sheet to "Posted".

Overview
1. Trigger: Check Every 3 Hours
   - Check the Google Sheet for posts with status "Ready to Post"
   - Return 1 post that is ready to go
2. Upload Media to Blotato
   - Fetch the image/video from Google Drive
   - Upload the image/video to Blotato
3. Publish to Social Media via Blotato
   - Connect your Blotato account
   - Choose your social accounts
   - Either post immediately or schedule for later
   - Includes support for images, videos, slideshows, carousels, and threads

Setup
1. Sign up for Blotato.
2. Generate a Blotato API key by going to Settings > API > Generate API Key (paid feature only).
3. Ensure you have "Verified Community Nodes" enabled in your n8n Admin Panel.
4. Install the "Blotato" community node.
5. Create a credential for Blotato.
6. Connect your Google Drive to n8n: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-single-service
7. Copy this sample Google Sheet. Do NOT change the column names unless you know what you're doing: https://docs.google.com/spreadsheets/d/1v5S7F9p2apfWRSEHvx8Q6ZX8e-d1lZ4FLlDFyc0-ZA4/edit
8. Make your Google Drive folder containing images/videos PUBLIC (i.e., anyone with the link).
9. Complete the 3 setup steps shown in BROWN sticky notes in this template.

Troubleshooting Checklist
- Your Google Drive folder is public.
- The column names in your Google Sheet match the original example.
- File size < 60MB; for larger files, Google Drive does not work, so use Amazon S3 instead.

📄 Documentation
Full Tutorial

Troubleshooting
Check your Blotato API Dashboard to see every request, response, and error. Click on a request to see the details.

Need Help?
In the Blotato web app, click the orange button in the bottom right corner. This opens the Support messenger, where I help answer technical questions.
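The trigger's sheet query is equivalent to picking the first row whose Status column reads "Ready to Post". A minimal sketch (the column names come from the sample sheet convention; treat the exact schema as the sheet defines it):

```python
def next_ready_post(rows):
    """Return the first sheet row ready to publish, or None.

    Mirrors the trigger step: one post per run, matched on the
    Status column, so each 3-hour cycle publishes a single post.
    """
    for row in rows:
        if row.get("Status") == "Ready to Post":
            return row
    return None
```

After a successful publish, the workflow writes "Posted" back into that row's Status cell, so the same post is never picked up twice.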
by Blurit
This n8n template demonstrates how to use BlurIt to anonymize faces and/or license plates in images or videos directly within your workflow. Use cases include automatically anonymizing dashcam videos, securing photos before sharing them publicly, or ensuring compliance with privacy regulations like GDPR.

How it works
1. The workflow starts with a Form Trigger where you can upload your image or video file.
2. An HTTP Request node authenticates with the BlurIt API using your Client ID and Secret.
3. The file is then uploaded to BlurIt via an HTTP Request to create a new anonymization task.
4. A polling loop checks the task status until it succeeds.
5. Once complete, the anonymized media is retrieved and saved using a Write Binary File node.

How to use
1. Replace the placeholder credentials in the Set Auth Config node with your BlurIt Client ID and Secret (found in your BlurIt Developer Dashboard).
2. Execute the workflow, open the provided form link, and upload an image or video.
3. The anonymized file will be written to your chosen output directory (or you can adapt the workflow to upload to cloud storage).

Requirements
- A BlurIt account and valid API credentials (Client ID & Secret).
- A running instance of n8n (cloud or self-hosted).
- (Optional) Access to a shared folder or cloud storage service if you want to automate file delivery.

Need Help?
Contact us at support@blurit.io, or visit the BlurIt Documentation. Happy Coding!
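The polling loop follows a standard pattern: request the task status, wait, and repeat until a terminal state is reached. A generic sketch of that pattern (the interval, retry cap, and status strings are arbitrary placeholders, not BlurIt's actual API values):

```python
import time

def poll_task(get_status, interval: float = 2.0, max_tries: int = 30) -> str:
    """Poll until the task reaches a terminal state.

    `get_status` is any callable returning the current state string;
    in the workflow this is the HTTP Request node hitting the
    task-status endpoint.
    """
    for _ in range(max_tries):
        state = get_status()
        if state in ("succeeded", "failed"):  # terminal states (assumed names)
            return state
        time.sleep(interval)  # back off between checks
    raise TimeoutError("anonymization task did not finish in time")
```

In n8n the equivalent is the Wait node plus an IF node that loops back to the status request until the success branch fires.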