by Barbora Svobodova
Sora 2 Video Generation: Prompt-to-Video Automation with OpenAI API

**Who's it for**

This template is ideal for content creators, marketers, developers, or anyone who needs automated AI video creation from text prompts. It is perfect for bulk generation, marketing assets, or rapid prototyping with OpenAI's Sora 2 API.

Example use cases:

- E-commerce sellers creating product showcase videos for multiple items without hiring videographers or renting studios
- Social media managers generating daily content such as travel vlogs, lifestyle videos, or brand stories from simple text descriptions
- Marketing teams producing promotional videos for campaigns, events, or product launches in minutes instead of days

**How it works / What it does**

1. Submit a text prompt using a form or input node.
2. The workflow sends your prompt to the Sora 2 API endpoint to start video generation.
3. It polls the API to check whether the video is still processing or completed.
4. When the video is ready, it retrieves the finished video's download link and automatically saves the file.

All actions (prompt submission, status checks, and video retrieval) run without manual oversight.

**How to set up**

1. Use your existing OpenAI API key or create a new one at https://platform.openai.com/api-keys.
2. Replace `Your_API_Key` with your OpenAI API key in the following nodes: Sora 2 Video, Get Video, Download Video.
3. Adjust the Wait for Video node intervals if needed; video generation typically takes several minutes.
4. Enter your video prompt into the Text Prompt trigger form to start the workflow.

**Requirements**

- OpenAI account and OpenAI API key
- n8n instance (cloud or self-hosted)
- A form, webhook, or manual trigger for prompt submission

**How to customize the workflow**

- Connect the prompt input to external forms, bots, or databases.
- Add post-processing steps such as uploading videos to cloud storage or social platforms.
- Adjust polling intervals for efficient status checking.
**Limitations and Usage Tips**

- **Prompt clarity**: For the best generation results, keep prompts clear, concise, and well structured. Avoid ambiguity and overly complex language to improve AI interpretation.
- **Processing duration**: Video creation may take several minutes depending on prompt complexity and system load; design your workflow to tolerate this delay.
- **Polling interval configuration**: Tune polling intervals to balance responsiveness against API rate limits, optimizing both performance and resource usage.
- **API dependency**: This workflow relies on the availability and quota limits of OpenAI's Sora 2 API. Monitor your API usage to avoid interruptions and service constraints.
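The check-wait-retry loop that the Wait node implements can be sketched in Python. This is a minimal illustration only: the status strings, default interval, and attempt cap are assumptions for the sketch, not the exact Sora 2 API contract.

```python
import time

def poll_until_complete(fetch_status, interval=10.0, max_attempts=60, sleep=time.sleep):
    """Poll a status callable until the job reports completion.

    fetch_status is any zero-argument callable returning a status string,
    e.g. a wrapper around the video-status HTTP request. The status values
    ("queued", "in_progress", "completed", "failed") are illustrative.
    """
    for attempt in range(max_attempts):
        status = fetch_status()
        if status == "completed":
            return "completed"          # proceed to the download step
        if status == "failed":
            raise RuntimeError("video generation failed")
        sleep(interval)                 # wait before the next status check
    raise TimeoutError(f"still processing after {max_attempts} attempts")
```

Injecting `fetch_status` and `sleep` keeps the loop testable without real API calls, which mirrors how you would tune the Wait node interval before going live.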
by Edoardo Guzzi
Auto-update n8n instance with Coolify

**Who's it for**

This workflow is designed for self-hosted n8n administrators who want to keep their instance automatically updated to the latest stable release. It removes the need for manual version checks and ensures deployments are always up to date.

**What it does**

The workflow checks your current n8n version against the latest GitHub release. If a mismatch is detected, it triggers a Coolify deployment to update your instance. If both versions match, the workflow ends safely without action.

**How it works**

1. **Trigger**: Start manually or on a schedule.
2. **HTTP Request (n8n settings)**: Fetches your current version (`versionCli`).
3. **HTTP Request (GitHub)**: Fetches the latest n8n release (`name`).
4. **Merge (SQL)**: Keeps only the two fields needed.
5. **Set (Normalize)**: Converts the values into comparable variables.
6. **IF Check**: Compares current vs. latest version. If different, deploy the update; if the same, stop with no operation.
7. **HTTP Request (Coolify)**: Triggers a forced redeploy via the Coolify API.

**How to set up**

1. Replace `https://yourn8ndomain/rest/settings` with your own n8n domain.
2. Replace the Coolify API URL with your Coolify domain plus the app UUID.
3. Add an HTTP Bearer credential containing your Coolify API token.
4. Adjust the schedule interval (e.g., every 6 hours).

**Requirements**

- Self-hosted n8n instance with the `/rest/settings` endpoint accessible
- Coolify (or a similar service) managing your n8n deployment
- Valid API token configured as a Bearer credential in n8n

**How to customize**

- Change the schedule frequency depending on how often you want checks.
- Modify the IF condition if you want stricter or looser version matching (e.g., ignore patch versions).
- Replace the Coolify API call with another service (such as Docker, Portainer, or Kubernetes) if you use a different deployment method.
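The IF-node comparison above can be made robust against formatting differences between the two sources. The sketch below assumes the GitHub release name looks like `n8n@1.64.3` while `versionCli` is a bare `1.64.3` (verify against your own API responses), and shows how the ignore-patch customization would work:

```python
def parse_version(raw):
    """Turn 'n8n@1.64.3', 'v1.64.3', or '1.64.3' into a tuple (1, 64, 3)."""
    raw = raw.split("@")[-1].lstrip("v")
    return tuple(int(part) for part in raw.split("."))

def needs_update(current, latest, ignore_patch=False):
    """True when the latest release is newer than the running version."""
    cur, new = parse_version(current), parse_version(latest)
    if ignore_patch:
        cur, new = cur[:2], new[:2]  # compare major.minor only
    return new > cur
```

Comparing numeric tuples instead of raw strings avoids false mismatches such as `"1.64.10" < "1.64.9"` that plain string comparison would produce.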
by Wevanta Infotech
LinkedIn Auto-Post Agent for n8n

🚀 Automate your LinkedIn presence with AI-powered content generation

This n8n workflow automatically generates and publishes engaging LinkedIn posts using OpenAI's GPT models. Perfect for professionals and businesses who want to maintain an active LinkedIn presence without manual effort.

✨ Features

- 🤖 **AI-Powered Content**: Generate professional LinkedIn posts using OpenAI GPT-3.5-turbo or GPT-4
- ⏰ **Automated Scheduling**: Post content automatically on weekdays at 9 AM (customizable)
- 🎯 **Manual Trigger**: Generate and post content on demand
- 🔒 **Secure**: All credentials stored securely in n8n's encrypted credential system
- 📊 **Error Handling**: Built-in retry logic and error notifications
- 🎨 **Customizable**: Easily modify prompts, scheduling, and content parameters

🏗️ Architecture

This workflow uses a streamlined three-node architecture:

Schedule/Manual Trigger → OpenAI Content Generation → LinkedIn Post

Node details:

- **Schedule Trigger**: Automatically triggers the workflow (default: weekdays at 9 AM)
- **Manual Trigger**: Allows on-demand content generation
- **OpenAI Content Generation**: Creates LinkedIn-optimized content using AI
- **LinkedIn Post**: Publishes the generated content to LinkedIn

📋 Prerequisites

- n8n instance (self-hosted or cloud)
- OpenAI API account and API key
- LinkedIn account with API access
- Basic familiarity with n8n workflows

🚀 Quick Start

1. **Import the workflow**
   - Download the `linkedin-auto-post-agent.json` file
   - In your n8n instance, go to Workflows → Import from File
   - Select the downloaded JSON file and click Import

2. **Set up credentials**

   OpenAI API credentials:
   - Go to Credentials in your n8n instance and click Create New Credential
   - Select OpenAI, enter your OpenAI API key, name it "OpenAI API", and save

   LinkedIn OAuth2 credentials:
   - Create a LinkedIn app at the LinkedIn Developer Portal
   - Configure the OAuth 2.0 settings: redirect URL `https://your-n8n-instance.com/rest/oauth2-credential/callback`, scopes `r_liteprofile` and `w_member_social`
   - In n8n, create new LinkedIn OAuth2 credentials, enter your LinkedIn app's Client ID and Client Secret, and complete the OAuth authorization flow

3. **Configure the workflow**
   - Open the imported workflow
   - Click the OpenAI Content Generation node, select your OpenAI credentials, and customize the content prompt if desired
   - Click the LinkedIn Post node and select your LinkedIn OAuth2 credentials
   - Save the workflow

4. **Test the workflow**
   - Click the Manual Trigger node and click Execute Node to test content generation
   - Verify the generated content in the LinkedIn node output
   - Check your LinkedIn profile to confirm the post was published

5. **Activate automated posting**
   - Click the Active toggle in the top-right corner; the workflow will now run automatically based on the schedule

⚙️ Configuration Options

**Scheduling**

The default schedule posts content on weekdays at 9 AM. To modify it, click the Schedule Trigger node and edit the cron expression:

- `0 9 * * 1-5`: Weekdays at 9 AM (default)
- `0 12 * * *`: Daily at noon
- `0 9 * * 1,3,5`: Monday, Wednesday, Friday at 9 AM

**Content Customization**

Modify the OpenAI prompt to change the content style:

1. Click the OpenAI Content Generation node
2. Edit the System Message to adjust tone and style
3. Modify the User Message to change the topic focus

Example prompts:

- **Professional development focus**: Create a LinkedIn post about professional growth, skill development, or career advancement. Keep it under 280 characters and include 2-3 relevant hashtags.
- **Industry insights**: Generate a LinkedIn post sharing an industry insight or trend in technology.
Make it thought-provoking and include relevant hashtags.
- **Motivational content**: Write an inspiring LinkedIn post about overcoming challenges or achieving goals. Keep it positive and engaging with appropriate hashtags.

**Model Selection**

Choose between OpenAI models based on your needs:

- **gpt-3.5-turbo**: Cost-effective, good quality
- **gpt-4**: Higher quality, more expensive
- **gpt-4-turbo**: Latest model with improved performance

🔧 Advanced Configuration

**Error Handling**

The workflow includes built-in error handling:

- **Retry logic**: 3 attempts with 1-second delays
- **Continue on fail**: The workflow continues even if individual nodes fail
- **Error notifications**: Optional email/Slack notifications on failures

**Content Review Workflow (Optional)**

To add manual content review before posting:

1. Add a Wait node between the OpenAI and LinkedIn nodes
2. Configure a webhook trigger for approval
3. Add conditional logic based on the approval status

**Rate Limiting**

To respect API limits:

- OpenAI: 3 requests per minute (default)
- LinkedIn: 100 posts per day per user
- Adjust the scheduling frequency accordingly

📊 Monitoring and Analytics

**Execution History**

1. Go to Executions in your n8n instance
2. Filter by workflow name to see all runs
3. Click individual executions to see detailed logs

**Key Metrics to Monitor**

- **Success rate**: Percentage of successful executions
- **Content quality**: Review generated posts periodically
- **API usage**: Monitor OpenAI token consumption
- **LinkedIn engagement**: Track post performance on LinkedIn

🛠️ Troubleshooting

**Common Issues**

OpenAI node fails:
- Verify the API key is correct and has sufficient credits
- Check whether you have exceeded rate limits
- Ensure the model name is spelled correctly

LinkedIn node fails:
- Verify the OAuth2 credentials are properly configured
- Check whether the LinkedIn app has the required permissions
- Ensure the content doesn't violate LinkedIn's posting policies

Workflow doesn't trigger:
- Confirm the workflow is marked as "Active"
- Verify the cron expression syntax
- Check n8n's timezone settings

**Debug Mode**

1. Enable Save Manual Executions in the workflow settings
2. Run the workflow manually to see detailed execution data
3. Check each node's input/output data

🔒 Security Best Practices

- Store all API keys in n8n's encrypted credential system
- Rotate API keys regularly (monthly recommended)
- Use environment variables for sensitive configuration
- Enable execution logging for audit trails
- Monitor for unusual API usage patterns

📈 Optimization Tips

**Content Quality**

- Review and refine prompts based on output quality
- A/B test different prompt variations
- Monitor LinkedIn engagement metrics
- Adjust posting frequency based on audience response

**Cost Optimization**

- Use gpt-3.5-turbo for cost-effective content generation
- Set appropriate token limits (200 tokens recommended)
- Monitor OpenAI usage in your dashboard

**Performance**

- Keep workflows simple with minimal nodes
- Use appropriate retry settings
- Monitor execution times and optimize if needed

🤝 Contributing

We welcome contributions to improve this workflow:

1. Fork the repository
2. Create a feature branch
3. Make your improvements
4. Submit a pull request

📄 License

This project is licensed under the MIT License; see the LICENSE file for details.

🆘 Support

If you encounter issues or have questions:

1. Check the troubleshooting section above
2. Review n8n's official documentation
3. Join the n8n community forum
4. Create an issue in this repository

🔗 Useful Links

- n8n Documentation
- OpenAI API Documentation
- LinkedIn API Documentation
- n8n Community Forum

Happy Automating! 🚀 This workflow helps you maintain a consistent LinkedIn presence while focusing on what matters most: your business and professional growth.
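The character budget mentioned in the example prompts can be enforced deterministically in a Code node rather than trusting the model to count. The Python sketch below is illustrative only: the 280-character target mirrors the prompts above (LinkedIn's real post limit is far higher), and the field names are assumptions.

```python
def format_post(body, hashtags, char_limit=280):
    """Trim an AI-generated body so body plus hashtags fit the target length.

    hashtags may be given with or without a leading '#'.
    """
    tag_line = " ".join("#" + tag.lstrip("#") for tag in hashtags)
    budget = char_limit - len(tag_line) - 2   # reserve 2 chars for "\n\n"
    if len(body) > budget:
        body = body[: budget - 3].rstrip() + "..."  # truncate with ellipsis
    return f"{body}\n\n{tag_line}"
```

Running this after the OpenAI node guarantees the final text never exceeds the limit, even when the model ignores the "under 280 characters" instruction.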
by Yasser Sami
Olostep Amazon Products Scraper

This n8n template automates Amazon product scraping using the Olostep API. Simply enter a search query, and the workflow scrapes multiple Amazon search pages to extract product titles and URLs. Results are cleaned, normalized, and saved into a Google Sheet or Data Table.

**Who's it for**

- E-commerce analysts researching competitors and pricing
- Product sourcing teams
- Dropshippers and Amazon sellers
- Automation builders who want quick product lists without manual scraping
- Growth hackers collecting product data at scale

**How it works / What it does**

1. **Form Trigger**: The user enters a search query (e.g., "wireless bluetooth headphones"), which is used to build the Amazon search URL.
2. **Pagination Setup**: A list of page numbers (1-10) is generated automatically; each number loads the corresponding Amazon search results page.
3. **Scrape Amazon with Olostep**: For each page, Olostep scrapes the Amazon search results. Olostep's LLM extraction returns `title` (product title) and `url` (product link).
4. **Parse & Split Results**: The JSON output is decoded and turned into individual product items.
5. **URL Normalization**: If a product URL is relative, it is automatically converted into a full Amazon URL.
6. **Conditional Check (IF node)**: Ensures only valid product URLs are stored, which helps avoid saving Amazon navigation links or invalid items.
7. **Insert into Sheet / Data Table**: Each valid product is saved with its `title` and `url`.
8. **Automatic Looping & Rate Management**: A wait step ensures API rate limits are respected while scraping multiple pages.

This workflow gives you a complete, reliable Amazon scraper with no browser automation and no manual copy/paste; everything runs through the Olostep API and n8n.

**How to set up**

1. Import this template into your n8n account.
2. Add your Olostep API key.
3. Connect your Google Sheets or Data Table.
4. Deploy the form and start scraping with any Amazon search phrase.

**Requirements**

- Olostep API key
- Google Sheets or Data Table
- n8n cloud or self-hosted instance

**How to customize the workflow**

- Add more product fields (price, rating, number of reviews, seller name, etc.).
- Extend the pagination range (1-20 or more pages).
- Add filtering logic (e.g., ignore sponsored results).
- Send scraped results to Notion, Airtable, or a CRM.
- Trigger via a Telegram bot instead of a form.

👉 This workflow is perfect for e-commerce research, competitive analysis, or building Amazon product datasets with minimal effort.
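The URL normalization and IF-node steps described above can be sketched as two small functions. The `/dp/` and `/gp/product/` patterns used to identify product detail pages are a heuristic assumption for this sketch, not part of the template itself:

```python
def normalize_product_url(url, base="https://www.amazon.com"):
    """Convert a relative Amazon link (e.g. '/dp/B0ABC123') to an absolute URL."""
    if url.startswith("http://") or url.startswith("https://"):
        return url
    return base.rstrip("/") + "/" + url.lstrip("/")

def is_product_url(url):
    """Keep only product detail pages (the IF-node check); this filters out
    navigation, category, and search-results links."""
    return "/dp/" in url or "/gp/product/" in url
```

Chaining them (`is_product_url(normalize_product_url(raw))`) reproduces the clean-then-filter order the workflow uses before writing rows to the sheet.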
by franck fambou
Extract and Convert PDF Documents to Markdown with LlamaIndex Cloud API

**Overview**

This workflow automatically converts PDF documents to Markdown format using the LlamaIndex Cloud API. LlamaIndex is a powerful data framework that specializes in connecting large language models with external data sources, offering advanced document processing capabilities with high accuracy and intelligent content extraction.

**How It Works**

Automatic processing pipeline:

1. **Form Submission Trigger**: The workflow initiates when a user submits a document through a web form.
2. **Document Upload**: PDF files are automatically uploaded to LlamaIndex Cloud for processing.
3. **Smart Status Monitoring**: The system continuously checks the processing status and adapts the workflow based on the results.
4. **Conditional Content Extraction**: Upon successful processing, the extracted Markdown content is retrieved for further use.

**Setup Instructions**

Estimated setup time: 5-10 minutes

Prerequisites:

- LlamaIndex Cloud account and API credentials
- Access to an n8n instance (cloud or self-hosted)

Configuration steps:

1. **Configure the Form Trigger**
   - Set up the webhook form trigger with file upload capability
   - Add the required fields to capture document metadata and processing preferences
2. **Set up the LlamaIndex API Connection**
   - Obtain your API key from the LlamaIndex Cloud dashboard
   - Configure the HTTP Request node with your credentials and endpoint URL
   - Set the proper authentication headers and request parameters
3. **Configure Status Verification**
   - Define polling intervals for status checks (recommended: 10-30 seconds)
   - Set a maximum number of retry attempts to avoid infinite loops
   - Configure success/failure criteria based on API response codes
4. **Set up the Content Extractor**
   - Configure output format preferences (Markdown styling, headers, etc.)
   - Set up error handling for failed extractions
   - Define content storage or forwarding destinations

**Use Cases**

- **Document digitization**: Convert legacy PDF documents to editable Markdown format
- **Content management**: Prepare documents for CMS integration or static site generators
- **Knowledge base creation**: Transform PDF manuals and guides into searchable Markdown content
- **Academic research**: Convert research papers and publications for analysis and citation
- **Technical documentation**: Process PDF specifications and manuals for developer documentation

**Key Features**

- Fully automated PDF to Markdown conversion
- Intelligent content structure preservation
- Error handling and retry mechanisms
- Status monitoring with real-time feedback
- Scalable processing for batch operations

**Requirements**

- LlamaIndex Cloud API key
- n8n instance (v0.200.0 or higher recommended)
- Internet connectivity for API access

**Support**

For issues related to the LlamaIndex API, consult the official LlamaIndex documentation. For n8n-specific questions, refer to the n8n community forum.
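The status-verification branching in step 3 can be sketched as a small routing function. The field name `status` and the values `SUCCESS`/`ERROR`/`PENDING` are illustrative assumptions for the sketch; match them to the actual payload your LlamaIndex Cloud endpoint returns.

```python
def route_job(job):
    """Map a parsing-job payload to a workflow branch.

    Returns 'extract' (proceed to content extraction), 'fail' (route to
    error handling), or 'wait' (loop back through the polling delay).
    Unknown or missing statuses are treated as still pending.
    """
    status = job.get("status", "PENDING").upper()
    if status == "SUCCESS":
        return "extract"
    if status in ("ERROR", "CANCELLED"):
        return "fail"
    return "wait"
```

Treating unknown statuses as "wait" (rather than failing) keeps the loop safe against transient or undocumented states, while the max-retry cap from step 3 still prevents an infinite loop.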
by Sk developer
🎵 Spotify to MP3 → Upload to Google Drive

Automate the process of converting Spotify track URLs into MP3 files, uploading them to Google Drive, and instantly generating shareable links, all triggered by a simple form.

✅ What This Workflow Does

1. Accepts a Spotify URL from a form.
2. Sends the URL to the Spotify Downloader MP3 API on RapidAPI.
3. Waits briefly for conversion.
4. Downloads the resulting MP3 file.
5. Uploads it to Google Drive.
6. Sets public sharing permissions for easy access.

🧩 Workflow Structure

| Step | Node Name | Description |
|------|-----------|-------------|
| 1 | On form submission | Collects the Spotify track URL via an n8n Form Trigger node. |
| 2 | Spotify Rapid API | Calls the Spotify Downloader MP3 API to generate the MP3 download link. |
| 3 | Wait | Ensures the download link is processed before proceeding. |
| 4 | Downloader | Downloads the MP3 using the generated link. |
| 5 | Upload MP3 to Google Drive | Uploads the file using Google Drive credentials. |
| 6 | Update Permission | Makes the uploaded file publicly accessible via a shareable link. |

🔧 Requirements

- n8n instance (self-hosted or cloud)
- RapidAPI account & subscription to the Spotify Downloader MP3 API
- Google Cloud service account with Drive API access
- Active Google Drive (root or a specified folder)

🚀 How to Use

1. Set up Google API credentials in n8n.
2. Subscribe to the Spotify Downloader MP3 API on RapidAPI.
3. Insert your RapidAPI key into the HTTP Request node.
4. Deploy the workflow and access the webhook form URL.
5. Submit a Spotify URL; the MP3 gets downloaded, uploaded, and shared.
🎯 Use Cases

- 🎧 Music collectors automating downloads
- 🧑‍🏫 Teachers creating music-based lessons
- 🎙 Podcasters pulling music samples
- 📥 Anyone who needs quick Spotify → MP3 conversion

🛠 Tech Stack

- **n8n**: Visual workflow automation
- **RapidAPI**: Spotify Downloader MP3 API
- **Google Drive**: File storage and sharing
- **Form Trigger**: Input collection interface
- **HTTP Request node**: Handles API communication

🔐 Notes on Security

- Do not expose your `x-rapidapi-key` publicly.
- Use environment variables or n8n credentials for secure storage.
- Adjust sharing permissions (reader, writer, or restricted) per your needs.

🔗 API Reference

🎵 Spotify Downloader MP3 API – skdeveloper

📦 Tags

spotify mp3 google-drive automation rapidapi n8n music
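Step 6 ("Update Permission") can be sketched in Python to show what the node sends to Google Drive. The permission body follows the Drive v3 `permissions.create` shape, and the link format is the standard Drive file-view URL; verify both against the Google Drive API docs before relying on them:

```python
def share_payload(role="reader"):
    """Drive v3 permission body that makes a file accessible to anyone
    with the link; pass role='writer' for editable sharing."""
    return {"role": role, "type": "anyone"}

def shareable_link(file_id):
    """Direct view link for an uploaded Drive file, given its file ID."""
    return f"https://drive.google.com/file/d/{file_id}/view"
```

In the workflow, the `file_id` comes from the upload node's response, so the link can be returned to the form submitter in a final node.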
by Tony Ciencia
**Overview**

This template provides an automatic backup solution for all your n8n workflows, saving them directly to Google Drive. It is designed for freelancers, agencies, and businesses that want to keep their automations safe, versioned, and always recoverable.

**Why Backups Matter**

- **Disaster recovery**: Restore workflows quickly if your instance fails.
- **Version control**: Track workflow changes over time.
- **Collaboration**: Share workflow JSON files easily with teammates.

**How it Works**

1. Fetches the complete list of workflows from your n8n instance via the API.
2. Downloads each workflow in JSON format.
3. Converts the data into a file with a unique name (workflow name + ID).
4. Uploads all files to a chosen Google Drive folder.
5. Can be run manually or on an automatic schedule (daily, weekly, etc.).

**Requirements**

- An active n8n instance with API access enabled
- API credentials for n8n (API key or basic auth)
- A Google account with access to Google Drive
- Google Drive credentials connected in n8n

**Setup Instructions**

1. Connect your n8n API (authenticate your instance).
2. Connect your Google Drive account.
3. Select or create the Drive folder where backups will be stored.
4. Customize the Schedule Trigger to define the backup frequency.
5. Run the workflow once to confirm files are stored correctly.

**Customization Options**

- **Frequency**: Set daily, weekly, or monthly backups.
- **File naming**: Adjust the filename expression (e.g., `{{workflowName}}-{{workflowId}}-{{date}}.json`).
- **Folder location**: Store backups in separate Google Drive folders per project or client.

**Target Audience**

This template is ideal for:

- Freelancers managing multiple client automations.
- Agencies delivering automation services.
- Teams that rely on n8n for mission-critical workflows.

It reduces risk, saves time, and ensures you never lose your work.

⏱ Estimated setup time: 5-10 minutes.
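The filename expression `{{workflowName}}-{{workflowId}}-{{date}}.json` can be sketched as a plain function. The ISO date format and the character-sanitization rule are assumptions for this sketch (workflow names may contain characters that are awkward in filenames):

```python
import datetime
import re

def backup_filename(workflow_name, workflow_id, when=None):
    """Build a unique, filesystem-safe backup name:
    <sanitized name>-<id>-<yyyy-mm-dd>.json"""
    when = when or datetime.date.today()
    # Replace runs of unsafe characters with '_' and trim stray underscores.
    safe = re.sub(r"[^A-Za-z0-9_-]+", "_", workflow_name).strip("_")
    return f"{safe}-{workflow_id}-{when.isoformat()}.json"
```

Including the workflow ID keeps two workflows with the same display name from overwriting each other's backups, and the date suffix gives you one versioned file per run.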
by Gegenfeld
AI Image Generator Workflow

This workflow lets you automatically generate AI images with the APImage API 🡥, download the generated image, and upload it to any service you want (e.g., Google Drive, Notion, social media, etc.).

🧩 Nodes Overview

**1. Generate Image (Trigger)**

This node contains the following fields:

- **Image Prompt**: text input
- **Dimensions**: Square, Landscape, Portrait
- **AI Model**: Basic, Premium

It acts as the entry point to your workflow: it collects the input and sends it to the APImage API node. Note: you can swap this node with any other node that lets you define the parameters shown above.

**2. APImage API (HTTP Request)**

This node sends a POST request to `https://apimage.org/api/ai-image-generate`. The request body is dynamically filled with values from the first node:

```json
{
  "prompt": "{{ $json['Describe the image you want'] }}",
  "dimensions": "{{ $json['Dimensions'] }}",
  "model": "{{ $json['AI Model'] }}"
}
```

✅ Make sure to set your API key in the Authorization header like this: `Bearer YOUR_API_KEY`

🔐 You can find your API key in your APImage Dashboard 🡥

**3. Download Image (HTTP Request)**

Once the image is generated, this node downloads the image file using the URL returned by the API: `{{ $json.images[0] }}`. The image is stored in the output field `generated_image`.

**4. Upload to Google Drive**

This node takes the image from the `generated_image` field and uploads it to your connected Google Drive. 📁 You can configure a different target folder, or replace this node with Dropbox, WordPress, Notion, Shopify, or any other destination. Make sure to pass the correct filename and file field, as defined in the Download Image node. Set up Google Drive credentials 🡥

✨ How To Get Started

1. Double-click the APImage API node.
2. Replace `YOUR_API_KEY` with your actual key (keep the `Bearer` prefix).
3. Open the Generate Image node and test the form.
🔗 Open the Dashboard 🡥

🔧 How to Customize

- Replace the Form Trigger with another node if you're collecting data elsewhere (e.g., via Airtable, Notion, a webhook, or a database).
- Modify the upload node if you'd like to send the image to other tools such as Slack, Notion, email, or an S3 bucket.

📚 API Docs & Resources

- APImage API Docs 🡥
- n8n Documentation 🡥

🖇️ Node Connections

Generate Image → APImage API → Download Image → Upload to Google Drive

✅ This template is ideal for:

- Content creators automating media generation
- SaaS integrations for AI tools
- Text-to-image pipelines
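The mapping from the trigger form to the APImage request body can be sketched in Python. The dictionary keys mirror the n8n expressions shown in the APImage API node above; the functions themselves are illustrative helpers, not part of the template:

```python
def build_payload(form):
    """Map the Generate Image form fields to the APImage request body."""
    return {
        "prompt": form["Describe the image you want"],
        "dimensions": form["Dimensions"],  # Square | Landscape | Portrait
        "model": form["AI Model"],         # Basic | Premium
    }

def auth_header(api_key):
    """Authorization header expected by the APImage endpoint."""
    return {"Authorization": f"Bearer {api_key}"}
```

Building the body in one place makes it easy to validate the Dimensions and AI Model values before the HTTP Request node fires, instead of discovering a typo via an API error.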