by Davide
This low-code automation lets any eCommerce store visitor upload a photo of themselves and virtually "try on" a garment in just a few clicks. With this workflow, merchants on WooCommerce, PrestaShop, Shopify, and other platforms can offer a cutting-edge virtual try-on feature with minimal development effort, enhancing customer engagement and reducing product returns.

**Key Advantages**
- **Zero-Coding, Visual Setup**: Build end-to-end e-commerce features with drag-and-drop nodes instead of custom backend code.
- **Asynchronous, Scalable Processing**: A non-blocking "Wait" + "If" loop handles multi-second AI jobs gracefully, freeing the workflow for other tasks.
- **Dynamic Inputs & URLs**: Query strings (e.g. ?Product=IMAGE_URL) let you embed the form on any product page and pass the garment image on the fly.
- **Seamless User Experience**: An instant pop-up within your storefront and an automatic redirect to the generated mock-up keep shoppers engaged without page reloads.
- **Easy Credential Management**: API keys, FTP credentials, and webhook IDs are all stored securely in n8n's credential manager.

**How It Works**
1. **Form Submission**: A user submits a form with their name, an image of themselves ("Me"), and a hidden product image URL ("Product"). The form is triggered via the On form submission node, which collects the input data.
2. **Image Upload**: The uploaded "Me" image is sent to an FTP server for temporary storage using the FTP node. The filename includes a timestamp to ensure uniqueness.
3. **Virtual Try-On Request**: The Create Image node sends a POST request to the Fal.run API, providing the uploaded human image URL (from FTP) and the product image URL (from the hidden form field). This generates the virtual try-on result.
4. **Result Processing**: The workflow checks the status of the image generation (Get status node) in a loop, with a 10-second wait between checks, until it is marked "COMPLETED". Once ready, the final image URL is fetched (Get Url image node) and shown to the user via a redirect (Form node).
5. **User Experience**: The user is redirected to the generated try-on image, completing the process.

**Set Up Steps**
1. **API Key Setup**: Create an account and obtain an API key. Configure the Create Image node with HTTP Header Authentication: Name: Authorization, Value: Key YOURAPIKEY.
2. **FTP/S3 Configuration**: Set up an FTP server or S3 bucket to temporarily store uploaded user images. Configure the FTP node with your FTP credentials and storage path.
3. **Ecommerce Integration**: On your WooCommerce site, add a "Try On" button that opens the form in a pop-up and dynamically passes the product image URL as a query parameter. Example: https://URL_N8N/form/ca1c314d-46c6-4eeb-b6a5-359XXXXXX?Product=IMAGE_URL
4. **Testing**: Verify the workflow by submitting a test form and ensuring the virtual try-on image is generated and displayed correctly.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
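For reference, the try-on request in step 3 of "How It Works" amounts to a single authenticated POST. The sketch below is a minimal illustration: the model path (`fal-ai/fashion-tryon`) and the body field names (`human_image_url`, `garment_image_url`) are assumptions, not confirmed by this template, so check the Create Image node and your Fal.run model's documentation for the exact endpoint and parameters. Only the Authorization header format (`Key YOURAPIKEY`) comes from the setup steps above.

```javascript
// Minimal sketch of the Create Image request (n8n Code node / Node.js 18+).
// Endpoint path and body field names are assumptions; verify against your Fal.run model docs.
const response = await fetch("https://fal.run/fal-ai/fashion-tryon", { // hypothetical model path
  method: "POST",
  headers: {
    Authorization: "Key YOURAPIKEY",      // header format from the setup steps above
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    human_image_url: "https://your-ftp-host/uploads/me-1700000000.jpg", // uploaded "Me" photo (assumed field name)
    garment_image_url: "https://your-store.com/products/shirt.jpg",     // hidden "Product" field (assumed field name)
  }),
});

const result = await response.json(); // typically returns a request id to poll in the Get status loop
return { json: result };
```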
by David Ashby
🛠️ MailerLite Tool MCP Server

Complete MCP server exposing all MailerLite Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every MailerLite Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n MailerLite Tool node with full error handling

📋 Available Operations (4 total)
Every possible MailerLite Tool operation is included:
🔧 Subscriber (4 operations)
• Create a subscriber
• Get a subscriber
• Get many subscribers
• Update a subscriber

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native MailerLite Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every MailerLite Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
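As a small illustration of what a pre-built parameter looks like, a field on one of the tool nodes typically carries an n8n expression like the one below. This is a generic sketch only: the key name and description are hypothetical, and it assumes the standard $fromAI(key, description, type) signature; the imported workflow already contains its own placeholders.

```
{{ $fromAI('subscriber_email', 'Email address of the subscriber to create', 'string') }}
```

At call time the connected AI agent fills in the value, so no manual parameter mapping is needed.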
by Solomon
Based on Jonathan's work. Check out his templates.

How it works
This workflow backs up your workflows to GitHub. It uses the n8n API node to export all workflows, then loops over the data and checks GitHub for an existing file named after each workflow's ID. Once checked, it will:
- update the file on GitHub if it exists;
- create a new file if it doesn't exist;
- ignore it if the content is the same.

Who is this for?
People who want to back up their workflows outside the server for safety purposes, or to migrate to another server.

Check out my other templates 👉 https://n8n.io/creators/solomon/
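The create / update / ignore decision boils down to comparing the exported workflow JSON with whatever is already stored in the repository. Below is a minimal sketch of that comparison with hypothetical property names; the actual template implements it with GitHub and If nodes rather than a single Code node.

```javascript
// Sketch of the per-workflow decision logic (hypothetical property names;
// the real template uses GitHub and If nodes instead of one Code node).
const exported = JSON.stringify($json.workflow, null, 2); // workflow exported via the n8n API node
const existing = $json.githubFileContent;                 // file content fetched from GitHub, or undefined

let action;
if (existing === undefined) {
  action = "create";   // no file for this workflow ID yet
} else if (existing === exported) {
  action = "ignore";   // unchanged, nothing to commit
} else {
  action = "update";   // content differs, push a new version
}

return { json: { action } };
```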
by Guido X Jansen
Introduction
**Manual LinkedIn data collection is time-consuming, error-prone, and results in inconsistent data quality across CRM/database records.** This workflow is great for organizations that struggle with:
- Incomplete contact records with only LinkedIn URLs but missing profile details
- Hours spent manually copying LinkedIn information into databases
- Inconsistent data formats due to copy-paste from LinkedIn (emojis, styled text, special characters)
- Outdated profile information that doesn't reflect current roles/companies
- No systematic way to enrich contacts at scale

Primary Users
- Sales & Marketing Teams
- Event Organizers & Conference Managers (for event materials)
- Recruitment & HR Professionals
- CRM Administrators

Specific Problems Addressed
- Data Completeness: Automatically fills missing profile fields (headline, bio, skills, experience)
- Data Quality: Sanitizes problematic characters that break databases/exports
- Time Efficiency: Reduces hours of manual data entry to automated monthly updates
- Error Handling: Gracefully manages invalid/deleted LinkedIn profiles
- Scalability: Processes multiple profiles in batch without manual intervention
- Standardization: Ensures consistent data format across all records

Cost
Each URL scraped by Apify costs $0.01 to get all the data above. Apify charges per scrape, regardless of how much data or how many fields you extract/use.

Setup Instructions

Prerequisites
- n8n Instance: Access to a running n8n instance (self-hosted or cloud)
- NocoDB Account: Database with a table containing LinkedIn URLs
- Apify Account: Free or paid account for LinkedIn scraping

Required fields in the NocoDB table
Input: a single LinkedIn URL (NocoDB field name: LinkedIn)
Output: first/last/full name, e-mail, bio, headline, profile pic URL, current role, country, skills, current employer, employer URL, experiences (all previous jobs), personal website, publications (articles)
NocoDB field names:
- linkedin_full_name
- linkedin_first_name
- linkedin_headline
- linkedin_email
- linkedin_bio
- linkedin_profile_pic
- linkedin_current_role
- linkedin_current_company
- linkedin_country
- linkedin_skills
- linkedin_company_website
- linkedin_experiences
- linkedin_personal_website
- linkedin_publications
- linkedin_scrape_error_reason
- linkedin_scrape_last_attempt
- linkedin_scrape_status
- linkedin_last_modified
Technically you also need an Id field, but that is always there, so no need to add it :)

n8n Setup
1. Import the Workflow
- Copy the workflow JSON from the template
- In n8n, click "Add workflow" → "Import from JSON"
- Paste the workflow and click "Import"
2. Configure NocoDB Connection
- Click on any NocoDB node in the workflow
- Add new credentials → "NocoDB Token account"
- Enter your NocoDB API token (found in NocoDB → User Settings → API Tokens)
- Update the projectId and table parameters in all NocoDB nodes
3. Set Up Apify Integration
- Create an Apify account at apify.com
- Generate an API token (Settings → Integrations → API)
- In the workflow, update the Apify token in the "Get Scraper Results" node
- Configure HTTP Query Auth credentials with your token
4. Map Your Database Fields
- Review the "Transform & Sanitize Data" node (a sanitization sketch appears at the end of this section)
- Update the field mappings to match your NocoDB table structure
- Ensure these fields exist in your table: LinkedIn (URL field); linkedin_headline, linkedin_full_name, linkedin_bio, etc.; linkedin_scrape_status, linkedin_last_modified
5. Configure the Filter
- In the "Get Guests with LinkedIn" node, adjust the filter to match your requirements
- Default: (LinkedIn,isnot,null)~and(linkedin_headline,is,null)
6. Test the Workflow
- Click "Execute Workflow" with the Manual Trigger
- Monitor the execution for any errors
- Verify that data is properly updated in NocoDB
7. Activate the Automated Schedule
- Configure the Schedule Trigger node (default: monthly)
- Toggle the workflow to "Active"
- Monitor executions in the n8n dashboard

Customization Options
1. Data Source Modifications
- Different Database: Replace NocoDB nodes with Airtable, Google Sheets, or PostgreSQL
- Multiple Tables: Add parallel branches to process different contact tables
- Custom Filters: Modify the WHERE clause to target specific record subsets
2. Enrichment Fields
- Add Fields: Include additional LinkedIn data like education, certifications, or recommendations
- Remove Fields: Simplify by removing unnecessary fields (publications, skills)
- Custom Transformations: Add business logic for field calculations or formatting
3. Scheduling Options
- Frequency: Change from monthly to daily, weekly, or hourly
- Time-based: Set specific times for different timezones
- Event-triggered: Replace with a webhook trigger for on-demand processing
4. Error Handling Enhancement
- Notifications: Add email/Slack nodes to alert on failures
- Retry Logic: Implement wait-and-retry for temporary failures
- Logging: Add database logging for audit trails
5. Data Quality Rules
- Validation: Add IF nodes to validate data before updates
- Duplicate Detection: Check for existing records before creating new ones
- Data Standardization: Add custom sanitization rules for industry-specific needs
6. Integration Extensions
- CRM Sync: Add nodes to push data to Salesforce, HubSpot, or Pipedrive
- AI Enhancement: Use OpenAI to summarize bios or extract key skills
- Image Processing: Download and store profile pictures locally
7. Performance Optimization
- Batch Size: Adjust the number of profiles processed per run
- Rate Limiting: Add delays between API calls to avoid limits
- Parallel Processing: Split large datasets across multiple workflow executions
8. Compliance Additions
- GDPR Compliance: Add consent checking before processing
- Data Retention: Implement automatic cleanup of old records
- Audit Logging: Track who accessed what data and when

These customizations allow the workflow to adapt from simple contact enrichment to complex data pipeline scenarios across various industries and use cases.
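For reference, the character clean-up mentioned under "Map Your Database Fields" can be pictured as a small Code-node function. This is only a sketch of the idea, with assumed input field names and rules; the actual "Transform & Sanitize Data" node in the template defines its own mappings.

```javascript
// Sketch of the kind of sanitization the "Transform & Sanitize Data" node performs.
// Field names and rules here are assumptions for illustration only.
function sanitize(text) {
  if (!text) return "";
  return text
    .normalize("NFKC")                                  // fold styled Unicode into plain characters
    .replace(/[\p{Extended_Pictographic}\uFE0F]/gu, "") // strip emoji and variation selectors
    .replace(/[\u0000-\u001F\u007F]/g, " ")             // drop control characters that break exports
    .replace(/\s+/g, " ")                               // collapse repeated whitespace
    .trim();
}

return $input.all().map(item => ({
  json: {
    ...item.json,
    linkedin_headline: sanitize(item.json.headline),
    linkedin_bio: sanitize(item.json.summary), // assumed Apify field name
  },
}));
```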
by Zacharia Kimotho
This workflow makes it easier to keep track of the stock market and get an email with a summary of the daily highlights: what happened, key insights, and trends.

Setup Guide
1. Schedule Trigger: define the schedule (days, times, intervals).
2. Stock list: replace the sample stock data with your desired stock list (ticker, name, etc.) in JSON format.
3. Split Out: splits the fields to produce a clean list of the stocks to monitor.
4. set keyword node: extracts the stock ticker from each item and sets it to the keyword property.
5. Financial times scraper: triggers the Bright Data Datasets API to scrape financial data (see the sketch after this section). Set the node as below:
   - Method: POST
   - URL: https://api.brightdata.com/datasets/v3/trigger
   - Query Parameters: dataset_id (replace with your Bright Data dataset ID), include_errors: true, type: discover_new, discover_by: keyword
   - Headers: Authorization: Bearer YOUR_BRIGHTDATA_API_KEY (replace with your Bright Data API key)
   - Body (JSON): ={{ $('set keyword').all().map(item => item.json) }}
   - Execute Once: checked
6. Get progress node: checks whether the Bright Data scraping job is complete or still running. Setup:
   - URL: https://api.brightdata.com/datasets/v3/progress/{{ $json.snapshot_id }}
   - Headers: Authorization: Bearer YOUR_BRIGHTDATA_API_KEY (replace with your Bright Data API key)
7. Get snapshot + data: retrieves the scraped data from the Bright Data API. Pass the request as:
   - URL: https://api.brightdata.com/datasets/v3/snapshot/{{ $json.snapshot_id }}
   - Query Parameters: format: json
   - Headers: Authorization: Bearer YOUR_BRIGHTDATA_API_KEY (replace with your Bright Data API key)
8. Aggregate: combines the data from each stock item into a single object.
9. Update to sheet: adds all items to the Google Sheet. Make a copy of the sheet before you map the data.
10. create summary node: generates a summary of the scraped stock data using the Google Gemini AI model and notifies you via Gmail. Setup:
    - Prompt Type: define
    - Text: customize the prompt to define the AI's role, input format, tasks, output format (HTML email), and constraints.
11. Google Sheets: appends the scraped data to the Google Sheet. This should be set to auto-map so it adjusts to the results returned by the request.

Important Notes:
- Remember to replace placeholder values (API keys, dataset IDs, email addresses, Google Sheet IDs) with your actual values.
- Review and customize the AI prompt for the "create summary" node to achieve the desired email summary output.
- Consider adding error handling for a more robust workflow.
- Monitor API usage to avoid rate limits.
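To make the Bright Data trigger call in step 5 concrete, the HTTP Request node is roughly equivalent to the fetch call below. This is a sketch assembled from the parameters listed above; the dataset ID and the keyword payload are placeholders, and response handling depends on your Bright Data dataset.

```javascript
// Rough equivalent of the "Financial times scraper" HTTP Request node.
// dataset_id and the keyword payload are placeholders; replace them with your own values.
const keywords = [{ keyword: "AAPL" }, { keyword: "MSFT" }]; // built by the "set keyword" node

const url = new URL("https://api.brightdata.com/datasets/v3/trigger");
url.searchParams.set("dataset_id", "YOUR_DATASET_ID");
url.searchParams.set("include_errors", "true");
url.searchParams.set("type", "discover_new");
url.searchParams.set("discover_by", "keyword");

const response = await fetch(url, {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_BRIGHTDATA_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify(keywords),
});

const { snapshot_id } = await response.json(); // poll /progress/{snapshot_id}, then fetch /snapshot/{snapshot_id}
return { json: { snapshot_id } };
```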
by Leonard
Open Deep Research - AI-Powered Autonomous Research Workflow

Description
This workflow automates deep research by leveraging AI-driven search queries, web scraping, content analysis, and structured reporting. It enables autonomous research with iterative refinement, allowing users to collect, analyze, and summarize high-quality information efficiently.

How it works
🔹 User Input: The user submits a research topic via a chat message.
🧠 AI Query Generation: A Basic LLM node generates up to four refined search queries to retrieve relevant information.
🔎 SerpAPI Google Search: The workflow loops through each generated query and retrieves the top search results using the SerpAPI API.
📄 Jina AI Web Scraping: Extracts and summarizes webpage content from the URLs obtained via SerpAPI.
📊 AI-Powered Content Evaluation: An AI Agent evaluates the relevance and credibility of the extracted content.
🔁 Iterative Search Refinement: If the AI finds insufficient or low-quality information, it generates new search queries to improve the results.
📜 Final Report Generation: The AI compiles a structured markdown report, including sources with citations.

Set Up Instructions
🚀 Estimated setup time: ~10-15 minutes
✅ **Required API Keys:**
- SerpAPI → for Google Search results
- Jina AI → for text extraction
- OpenRouter → for AI-driven query generation and summarization
⚙️ **n8n Components Used:**
- AI Agents with memory buffering for iterative research
- Loops to process multiple search queries efficiently
- HTTP Requests for direct API interactions with SerpAPI and Jina AI
📝 **Recommended Enhancements:**
- Add sticky notes in n8n to explain each step for new users
- Implement a Google Drive or Notion integration to save reports automatically

🎯 Ideal for:
✔️ Researchers & Analysts - automate background research
✔️ Journalists - quickly gather reliable sources
✔️ Developers - learn how to integrate multiple AI APIs into n8n
✔️ Students - speed up literature reviews

🔗 Completely free and open-source! 🚀
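The two HTTP Request steps can be pictured with the sketch below. It assumes the public SerpAPI search endpoint and the Jina AI Reader endpoint (r.jina.ai); the template's own nodes may use different parameters, so treat this as an illustration rather than the exact node configuration.

```javascript
// Sketch of one search-and-scrape iteration (assumed public endpoints, not the template's exact settings).
const query = "n8n autonomous research workflows"; // one of the AI-generated search queries

// 1) SerpAPI Google Search for the generated query
const searchUrl = `https://serpapi.com/search.json?q=${encodeURIComponent(query)}&api_key=YOUR_SERPAPI_KEY`;
const search = await (await fetch(searchUrl)).json();
const urls = (search.organic_results ?? []).slice(0, 3).map(r => r.link);

// 2) Jina AI Reader to extract readable text from each result URL
const pages = [];
for (const url of urls) {
  const text = await (await fetch(`https://r.jina.ai/${url}`, {
    headers: { Authorization: "Bearer YOUR_JINA_API_KEY" }, // optional; raises rate limits
  })).text();
  pages.push({ url, text });
}

return pages.map(p => ({ json: p })); // handed to the AI Agent for relevance and credibility evaluation
```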
by Lakshit Ukani
One-way sync between Telegram, Notion, Google Drive, and Google Sheets

Who is this for?
This workflow is perfect for productivity-focused teams, remote workers, virtual assistants, and digital knowledge managers who receive documents, images, or notes through Telegram and want to automatically organize and store them in Notion, Google Drive, and Google Sheets, without any manual work.

What problem is this workflow solving?
Manually managing Telegram messages and media across different tools like Notion, Drive, and Sheets can be tedious. This workflow automates the classification and storage of incoming Telegram content, whether it's a text note, an image, or a document. It saves time, reduces human error, and ensures that media is stored in the right place with metadata tracking.

What this workflow does
- **Triggers on a new Telegram message** using the Telegram Trigger node.
- **Classifies the message type** using a Switch node:
  - Text messages are appended to a Notion block.
  - Images are converted to base64, uploaded to imgbb, and then added to Notion as toggle-image blocks (see the sketch after this section).
  - Documents are downloaded, uploaded to Google Drive, and the metadata is logged in Google Sheets.
- **Sends a completion confirmation** back to the original Telegram chat.

Setup
- Telegram Bot: set up a bot and get the API token.
- Notion Integration: share access to your target Notion page/block. Use the Notion API credentials and the block ID where content should be appended.
- Google Drive & Sheets: connect the relevant accounts and select the destination folder and spreadsheet.
- imgbb API: obtain a free API key from imgbb.
- Replace placeholder credential IDs and asset URLs as needed in the imported workflow.

How to customize this workflow to your needs
- **Change Storage Locations**: Update the Notion block ID or Google Drive folder ID. Switch the Google Sheet to log to a different file or sheet.
- **Add More Filters**: Use additional Switch rules to handle other Telegram message types (like videos or voice messages).
- **Modify Response Message**: Personalize the Telegram confirmation text based on the file type or sender.
- **Use a different image hosting service** if you don't want to use imgbb.
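The image branch (base64 → imgbb → Notion) can be sketched as below. It assumes imgbb's public upload endpoint and a base64-encoded image already prepared by the previous node; the property names in the real workflow may differ.

```javascript
// Sketch of the image branch: upload a base64 image to imgbb, then use the returned URL in Notion.
// Assumes the public imgbb upload endpoint; property names are illustrative only.
const base64Image = $json.imageBase64; // produced by the preceding conversion step (assumed property name)

const form = new URLSearchParams();
form.set("key", "YOUR_IMGBB_API_KEY");
form.set("image", base64Image);

const upload = await (await fetch("https://api.imgbb.com/1/upload", {
  method: "POST",
  body: form,
})).json();

const imageUrl = upload.data?.url; // hosted image URL to embed in the Notion toggle-image block
return { json: { imageUrl } };
```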
by David Ashby
Complete MCP server exposing all Cloudflare Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Cloudflare Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Cloudflare Tool node with full error handling

📋 Available Operations (4 total)
Every possible Cloudflare Tool operation is included:
🔧 Zonecertificate (4 operations)
• Delete a certificate
• Get a certificate
• Get many certificates
• Upload a certificate

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Cloudflare Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Cloudflare Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing all Beeminder Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Beeminder Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Beeminder Tool node with full error handling

📋 Available Operations (4 total)
Every possible Beeminder Tool operation is included:
🔧 Datapoint (4 operations)
• Create datapoint for goal
• Delete a datapoint
• Get many datapoints for a goal
• Update a datapoint

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Beeminder Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Beeminder Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Jaruphat J.
⚠️ Important Disclaimer: This template is only compatible with a self-hosted n8n instance using a community node.

Who is this for?
This workflow is ideal for digital content creators, marketers, social media managers, and automation enthusiasts who want to produce fully automated vertical video content featuring inspirational or motivational quotes. Specifically tailored for the Thai language, it demonstrates the integration of AI-generated imagery, video, ambient sound, and visually appealing quote overlays.

What problem is this workflow solving?
Manually creating high-quality, vertically formatted quote videos is repetitive, time-consuming, and involves multiple tedious steps like selecting suitable visuals, editing audio tracks, and correctly overlaying text. Additionally, manual uploading to platforms like YouTube and maintaining accurate content records are prone to errors and inefficiencies.

What this workflow does:
- Fetches a quote, author, and scenic background description from a Google Sheet.
- Automatically generates a vertical background image using the Flux AI (txt2img) API.
- Transforms the AI-generated image into a subtly animated, cinematic vertical video using the Kling video-generation API.
- Generates an immersive ambient background sound using ElevenLabs' sound generation API.
- Dynamically overlays the selected Thai-language quote and author text onto the generated video using FFmpeg, ensuring visually appealing typography (e.g., the Kanit font).
- Automatically uploads the final video to YouTube.
- Updates the resulting YouTube video URL back to the Google Sheet, keeping your content records current and well organized.

Setup Requirements:
- This workflow requires a self-hosted n8n instance, as the execution of FFmpeg commands is not supported on n8n Cloud.
- Ensure FFmpeg is installed on your self-hosted environment.
- API keys and accounts set up for Flux, Kling, ElevenLabs, Google Sheets, Google Drive, and YouTube.

Google Sheets Setup:
Your Google Sheet must include these columns:
- **Index**: unique identifier for each quote
- **Quote (Thai)**: quote text in Thai (or your chosen language)
- **Pen Name (Thai)**: author or pen name of the quote's creator
- **Background (EN)**: short English description of the scene (e.g., "sunrise over mountains")
- **Prompt (EN)**: detailed English prompt describing the image/video scene (e.g., "peaceful sunrise with misty mountains")
- **Background Image**: URL of the AI-generated image (updated automatically)
- **Background Video**: URL of the generated video (updated automatically)
- **Music Background**: URL of the generated ambient audio (updated automatically)
- **Video Status**: YouTube URL (updated automatically after upload)
A ready-to-use Google Sheets template is provided [here (provide your actual link)]. To help you get started quickly, you can use this template spreadsheet.

Next steps:
- Authenticate Google Sheets, Google Drive, the YouTube API, Flux AI, the Kling API, and the ElevenLabs API within n8n.
- Ensure FFmpeg supports fonts compatible with your chosen language (for Thai, the "Kanit" font is recommended).
- Prepare your Google Sheet with the desired quotes, authors, and image/video prompts.

How to customize this workflow to your needs:
- **Fonts:** Adjust the font type, size, color, and positioning within the provided FFmpeg commands in the workflow's code nodes (a sketch appears after this section). Verify that the selected fonts properly support your target language.
- **Media Customization:** Customize the scene descriptions in your Google Sheet to change the image/video backgrounds automatically generated by AI.
- **Quote Management:** Easily manage, add, or update quotes and associated details directly via Google Sheets, without workflow modifications.
- **Audio Ambiance:** Customize or adjust the ambient sound prompt for ElevenLabs within the workflow's HTTP Request node to match your video's desired mood.

Benefits of using AI-generated content and localized fonts:
Leveraging AI-generated visual and audio elements along with localized fonts greatly enhances audience engagement by creating visually appealing, professional-quality content tailored specifically for your target audience. This automated workflow drastically reduces production time and manual effort, enabling rapid, consistent content creation optimized for platforms such as YouTube Shorts, Instagram Reels, and TikTok.
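As an illustration of the text-overlay step, a Code node might assemble an FFmpeg drawtext filter along these lines before handing it to an Execute Command node. This is a sketch with assumed file paths, font locations, and positioning values; the template ships its own FFmpeg commands, so use this only to see which parts to adjust (font, size, color, position).

```javascript
// Sketch of building an FFmpeg text-overlay command (paths, font files, and positions are
// assumptions; the template's own code nodes contain the real commands).
const quote = $json["Quote (Thai)"];
const author = $json["Pen Name (Thai)"];

const escape = (s) => s.replace(/([\\:'])/g, "\\$1"); // escape characters drawtext treats specially

const drawQuote =
  `drawtext=fontfile=/usr/share/fonts/truetype/kanit/Kanit-Medium.ttf:` +
  `text='${escape(quote)}':fontcolor=white:fontsize=54:x=(w-text_w)/2:y=(h-text_h)/2`;
const drawAuthor =
  `drawtext=fontfile=/usr/share/fonts/truetype/kanit/Kanit-Light.ttf:` +
  `text='- ${escape(author)}':fontcolor=white:fontsize=36:x=(w-text_w)/2:y=(h/2)+120`;

const command =
  `ffmpeg -y -i background_video.mp4 -i ambient_sound.mp3 ` +
  `-vf "${drawQuote},${drawAuthor}" ` +
  `-map 0:v -map 1:a -shortest final_video.mp4`;

return { json: { command } }; // pass to an Execute Command node
```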
by Obsidi8n
How it works:
- Send notes from Obsidian via Webhook to start the audio conversion.
- OpenAI converts your text to natural-sounding audio and generates episode descriptions.
- Audio files are stored in Cloudinary and automatically attached to your notes in Obsidian.
- A professional podcast feed is generated, compatible with all major podcast platforms (Apple, Spotify, Google).

Set up steps:
1. Install and configure the Post Webhook Plugin in Obsidian.
2. Set up Custom Auth credentials in n8n for Cloudinary using the following JSON:

```json
{
  "name": "Cloudinary API",
  "type": "httpHeaderAuth",
  "authParameter": {
    "type": "header",
    "key": "Authorization",
    "value": "Basic {{Buffer.from('your_api_key:your_api_secret').toString('base64')}}"
  }
}
```

3. Configure the podcast feed metadata (title, author, cover image, etc.).

Note: The second flow is a generic Podcast Feed module that can be reused in any "[...]-to-Podcast" workflow. It generates a standard RSS feed from Google Sheets data and podcast metadata, making it compatible with all major podcast platforms.
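To give an idea of what the generic Podcast Feed module produces, a single episode entry in an RSS 2.0 feed with an audio enclosure looks roughly like the sketch below. The field values are placeholders; the actual feed is assembled from your Google Sheets rows and the podcast metadata you configure.

```javascript
// Sketch of one RSS 2.0 episode entry as the Podcast Feed module might emit it.
// Values are placeholders; the real feed is built from Google Sheets data and your podcast metadata.
const episode = {
  title: "My Obsidian Note, Narrated",
  description: "Episode description generated by OpenAI.",
  audioUrl: "https://res.cloudinary.com/your-cloud/video/upload/episode-001.mp3",
  pubDate: new Date().toUTCString(),
  lengthBytes: 1234567, // size of the audio file in bytes
};

const item = `
  <item>
    <title>${episode.title}</title>
    <description>${episode.description}</description>
    <enclosure url="${episode.audioUrl}" length="${episode.lengthBytes}" type="audio/mpeg"/>
    <pubDate>${episode.pubDate}</pubDate>
    <guid isPermaLink="false">${episode.audioUrl}</guid>
  </item>`;

return { json: { item } }; // concatenated with the channel metadata to form the full feed
```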
by RealSimple Solutions
Who Is This For?
This workflow is designed for AI engineers, automation specialists, and content creators who need a scalable system to dynamically manage prompts stored in GitHub. It eliminates manual updates, enforces required-variable checks, and ensures that AI interactions always receive fully processed prompts. 🚀

What Problem Does This Solve?
Manually managing AI prompts can be inefficient and error-prone. This workflow:
✅ Fetches dynamic prompts from GitHub
✅ Auto-populates placeholders with values from the setVars node
✅ Ensures all required variables are present before execution
✅ Processes the formatted prompt through an AI agent

🛠 How This Workflow Works
This workflow consists of three key branches, ensuring smooth prompt retrieval, variable validation, and AI processing.

1️⃣ Retrieve the Prompt from GitHub (HTTP Request → Extract from File → SetPrompt)
- The workflow starts manually or via an external trigger.
- It fetches a text-based prompt stored in a GitHub repository.
- The Extract from File node retrieves the content from the GitHub file.
- The SetPrompt node stores the prompt, making it accessible for processing.
📌 Note: The prompt must contain variables in n8n expression format (e.g., {{ $json.company }}) so they can be dynamically replaced.

2️⃣ Extract & Auto-Populate Variables (Check All Prompt Vars → Replace Variables)
- A Code node scans the prompt for placeholders in the n8n expression format ({{ $json.variableName }}); a sketch of this check appears at the end of this section.
- The workflow compares the required variables against the setVars node:
  ✅ If all variables are present, it proceeds to variable replacement.
  ❌ If any variables are missing, the workflow stops and returns an error listing them.
- The Replace Variables node replaces all placeholders with values from setVars.
📌 Example of a properly formatted GitHub prompt:
Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.
This ensures seamless replacement when processed in n8n.

3️⃣ AI Processing & Output (AI Agent → Prompt Output)
- The Set Completed Prompt node stores the final, processed prompt.
- The AI Agent node (Ollama Chat Model) processes the prompt.
- The Prompt Output node returns the fully formatted response.
📌 Optional: Modify this to use OpenAI, Claude, or other AI models.

⚠️ Error Handling: Missing Variables
If a required variable is missing, the workflow stops execution and provides an error message:
⚠️ Missing Required Variables: ["launch_date"]
This ensures no incomplete prompts are sent to AI agents.

✅ Example Use Case
📜 GitHub prompt file (using n8n expressions):
Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.
🔹 Variables in the setVars node:
{ "company": "PropTechPro", "features": "AI-powered Property Management", "launch_date": "March 15, 2025" }
✅ Successful output:
Hello PropTechPro, your product AI-powered Property Management launches on March 15, 2025.
🚨 Error output (if launch_date is missing):
⚠️ Missing Required Variables: ["launch_date"]

🔧 Setup Instructions
1️⃣ Connect Your GitHub Repository
- Store your prompt in a public or private GitHub repo.
- The workflow will fetch the raw file using the GitHub API.
2️⃣ Configure the SetVars Node
- Define the required variables in the SetVars node.
- Make sure the variable names match those used in the prompt.
3️⃣ Test & Run
- Click Test Workflow to execute.
- If variables are missing, it will show an error.
- If everything is correct, it will output the fully formatted prompt.

⚡ How to Customize This Workflow
💡 Need CRM or Database Integration?
Connect the setVars node to an Airtable, Google Sheets, or HubSpot API to pull variables dynamically.
💡 Want to Modify the AI Model?
Replace the Ollama Chat Model with OpenAI, Claude, or a custom LLM endpoint.

📌 Why Use This Workflow?
✅ No Manual Updates Required – fetches prompts dynamically from GitHub.
✅ Prevents Broken Prompts – ensures required variables exist before execution.
✅ Works for Any Use Case – handles AI chat prompts, marketing messages, and chatbot scripts.
✅ Compatible with All n8n Deployments – works on Cloud, Self-Hosted, and Desktop versions.
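For reference, the placeholder check performed by the Check All Prompt Vars Code node can be sketched as follows. This is an illustrative implementation, not the template's exact code; it assumes the prompt text and the setVars values are available on the incoming item under hypothetical property names.

```javascript
// Sketch of the Check All Prompt Vars logic (illustrative, not the template's exact code).
const prompt = $json.prompt;   // prompt text from the SetPrompt node (assumed property name)
const vars = $json.vars ?? {}; // values defined in the setVars node (assumed property name)

// Collect every {{ $json.variableName }} placeholder in the prompt
const required = [...prompt.matchAll(/\{\{\s*\$json\.(\w+)\s*\}\}/g)].map(m => m[1]);
const missing = [...new Set(required)].filter(name => vars[name] === undefined);

if (missing.length > 0) {
  throw new Error(`⚠️ Missing Required Variables: ${JSON.stringify(missing)}`);
}

// Replace each placeholder with its value from setVars
const completed = prompt.replace(/\{\{\s*\$json\.(\w+)\s*\}\}/g, (_, name) => vars[name]);
return { json: { completed } };
```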