by Krupal Patel
🔧 Workflow Summary

This system automates LinkedIn lead generation and enrichment in six clear stages:

1. Lead Collection (via Apollo.io): Automatically pulls leads based on keywords, roles, or industries using Apollo's API. Captures name, job title, company, and LinkedIn profile URL. You can kick off the workflow via form, webhook, WhatsApp, Telegram, or any other custom trigger that passes search parameters.
2. LinkedIn Username Extraction: Extracts usernames from LinkedIn profile URLs using a script step (see the sketch at the end of this section). These usernames are required for further enrichment via RapidAPI.
3. Email Retrieval (via Apollo.io User ID): Fetches the verified work email using the Apollo User ID. Email validity is double-checked with www.mails.so, which filters out undeliverable or inactive emails by checking MX records and deliverability.
4. Profile Summary (via LinkedIn API on RapidAPI): Enriches lead data by pulling bio/summary details to understand each lead's background and expertise.
5. Activity Insights (Posts & Reposts): Collects recent posts or reposts to help craft personalised messages based on what leads are currently engaging with.
6. Leads Sheet Update: All data is written to a Google Sheet. New columns are populated dynamically without erasing existing data.

✅ Smart Retry Logic

Each workflow is equipped with a fail-safe system:
- Tracks status per row: ✅ done, ❌ failed, ⏳ pending.
- Failed rows are automatically retried after a custom delay (e.g., 2 weeks).
- Ensures minimal drop-offs and complete data coverage.

📊 Google Sheets Setup

Make a copy of the following:
- Template 1: Apollo Leads Scraper & Enrichment
- Template 2: Final Enriched Leads

The system appends data (emails, bios, activity) step by step.

🔐 API Credentials Needed

1. Apollo API: Sign up and generate an API key at the Apollo Developer Portal. Be sure to enable the "Master API Key" toggle so the same key works for all endpoints.
2. LinkedIn Data API (via RapidAPI): Subscribe at RapidAPI - LinkedIn Data and use your key in the x-rapidapi-key header.
3. Mails.so API: Get your API key from the mails.so dashboard.

🛠️ Troubleshooting – LinkedIn Lead Machine

✅ Common Mistakes & Fixes

1. API keys not working: Make sure the API keys for Apollo, RapidAPI, and mails.so are correct. Apollo's "Master API Key" must be enabled. Keys should be saved as Generic Credentials in n8n.
2. Leads not found: Check whether the search query (keyword/job title) is too narrow. Apollo may return empty results if the filters are incorrect.
3. LinkedIn URLs missing or invalid: Ensure Apollo is returning valid LinkedIn URLs. Improper URLs will cause the username extraction and enrichment steps to fail.
4. Emails not coming through: Apollo may not have verified emails for all leads, and mails.so may reject invalid or expired email addresses.
5. Google Sheet not updating: Make sure the Google Sheet is shared with the right Google account (the one linked to n8n). Check that the column names match and data isn't blocked by formatting.
6. Status columns not changing: Each row must have done, failed, or pending in the status column. If the status doesn't update, the retry logic won't trigger.
7. RapidAPI not returning data: Double-check that the username is present and valid, and make sure your RapidAPI plan is active and within its limits.
8. Workflow not running: Check that the trigger node (form, webhook, etc.) is connected and active, and make sure you're passing the required inputs (keyword, role, etc.).

Need help? Contact www.KrupalPatel.com for support and custom workflow development.
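For reference, the username extraction in stage 2 can be done in a few lines inside an n8n Code node. A minimal sketch, assuming the incoming items carry a `linkedin_url` field and the result should land in `linkedin_username` (rename both to match your own columns):

```javascript
// n8n Code node ("Run Once for All Items"): pull the username segment
// out of each LinkedIn profile URL, e.g.
// https://www.linkedin.com/in/jane-doe-123/ -> jane-doe-123
for (const item of $input.all()) {
  const url = item.json.linkedin_url ?? '';
  const match = url.match(/linkedin\.com\/in\/([^/?#]+)/i);
  item.json.linkedin_username = match ? decodeURIComponent(match[1]) : null;
}
return $input.all();
```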
by Lucas Peyrin
How it works

This workflow automates the process of checking for and applying updates to a self-hosted n8n instance running on Docker. It runs on a schedule, checks for new versions, summarizes the release notes with AI, and asks for your approval via Telegram before updating.

1. Scheduled Check: The workflow runs hourly, triggered by a Schedule node.
2. Version Discovery: It first confirms it's running in a Docker environment. It uses SSH to connect to the host machine and inspects the running n8n container to find its current version tag (e.g., latest or next). It then queries the Docker Hub API to compare the image digest (a unique ID for an image version) of the running version against the latest available version for that tag.
3. Update Detection: If the digests do not match, a new image has been pushed for your version tag (e.g., a new latest image is available) and an update is needed. (A sketch of this digest comparison follows this section.)
4. AI-Powered Release Notes: It fetches the official release notes for the new version from the GitHub API. An AI model (LLM) summarizes these technical notes into a concise, human-readable overview of the key features and fixes.
5. Manual Approval: It sends a message to a Telegram chat with the AI-generated summary and two buttons: "✅ Update" and "❌ Ignore". The workflow then pauses and waits for your response.
6. Execute Update: If you approve the update, the workflow uses SSH to run a docker compose command on your server, which pulls the new image, stops the old containers, and starts the new ones.

Set up steps

Setup time: ~5-10 minutes

1. SSH Credentials: Go to Credentials and create a new SSH credential with the username, host, and password/private key for the server where your n8n Docker instance is running. Select this credential in the Get n8n Current Version and Update Docker nodes.
2. Telegram Bot Credentials: Create a Telegram bot and get its API token. Go to Credentials and create a new Telegram credential with your bot's token. Select this credential in the Send a text message node.
3. AI Model Credentials: Ensure you have credentials for an AI provider (like Google AI, OpenAI, etc.) set up. Select your desired credential in the Google Gemini Chat Model node (or replace it with your preferred LLM node).
4. Configure Paths and Commands: Open the Docker Path node. Set the docker_path to the absolute path of your docker-compose.yml file on the server (e.g., /root/n8n). If you use workers, adjust the worker_command to include the correct --scale argument for your setup. If not, you can leave it blank.
5. Set Your Chat ID: Open the Approve Update Telegram node and enter your personal Telegram chat ID in the Chat ID field. This ensures the approval message is sent to you.
6. Activate the workflow. It will now check for updates every hour.

To enable fully automatic updates (without manual approval): delete the nodes from Get n8n Releases to Approved ? and connect the Needs Update ? node directly to the Update Docker node.
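The digest comparison in step 2 can be reproduced outside n8n. A minimal sketch, assuming Node 18+ (run as an .mjs file); the top-level `digest` field on Docker Hub's tag endpoint, and obtaining the local digest via `docker inspect`, are assumptions to verify against your setup:

```javascript
// Sketch: compare the running image's digest with Docker Hub's.
// The local digest would come from something like
//   docker inspect --format '{{index .RepoDigests 0}}' <image>
// run over SSH, then passed in as the first argument.
const tag = 'latest'; // or 'next'
const localDigest = process.argv[2]; // e.g. "sha256:abc..."

const res = await fetch(
  `https://hub.docker.com/v2/repositories/n8nio/n8n/tags/${tag}`
);
const hub = await res.json();
const remoteDigest = hub.digest; // field name assumed from Docker Hub's v2 tags API

if (remoteDigest && remoteDigest !== localDigest) {
  console.log(`Update available: ${localDigest} -> ${remoteDigest}`);
} else {
  console.log('Already up to date.');
}
```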
by Falk
How it works

- Collects articles from your preferred RSS feeds.
- Rates and tags each article using an AI model (e.g., QWEN 14B-s4), filtering for relevance and quality.
- Summarizes high-rated articles with a language model (e.g., Gemma3 4B) for quick, digestible reading.
- Checks for duplicates to avoid sending the same article twice (see the sketch after this section).
- Formats and sends the top articles as an HTML newsletter via Gmail, using OAuth2 authentication.
- Stores records in a Postgres database, tracking which articles have been sent and their ratings.

Requirements

- Postgres account
- AI models (use Ollama if you work locally; in the cloud, swap the Ollama node for your preferred model node)
- RSS feeds of your choice
- Google OAuth2, if you want to use Gmail

Recommendations

Use the local n8n version for this workflow. More information is available here: https://github.com/falks-ai-workbench/n8n_newsletter
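The duplicate check can be a simple keyed lookup in Postgres. A minimal sketch, assuming a `sent_articles` table keyed on the article URL; the table and column names are illustrative, not the workflow's actual schema:

```javascript
// Sketch: skip articles that were already sent, keyed on the article URL.
// Assumed table: sent_articles(url TEXT PRIMARY KEY, rating INT, sent_at TIMESTAMPTZ).
import pg from 'pg';

const db = new pg.Client({ connectionString: process.env.DATABASE_URL });
await db.connect();

async function isDuplicate(url) {
  const { rows } = await db.query(
    'SELECT 1 FROM sent_articles WHERE url = $1 LIMIT 1',
    [url]
  );
  return rows.length > 0;
}

async function markSent(url, rating) {
  // ON CONFLICT keeps the insert idempotent if the workflow retries.
  await db.query(
    `INSERT INTO sent_articles (url, rating, sent_at)
     VALUES ($1, $2, NOW())
     ON CONFLICT (url) DO NOTHING`,
    [url, rating]
  );
}
```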
by Fabrizio Terzi
AI-Driven Handbook Generator with Multi-Agent Orchestration (Pyragogy AI Village)

This n8n workflow is a modular, multi-agent AI orchestration system designed for the collaborative generation of Markdown-based handbooks. Inspired by peer learning and open publishing workflows, it simulates a content pipeline where specialized AI agents act in defined roles, enabling true AI–human co-creation and iterative refinement.

This project is a core component of Pyragogy, an open framework dedicated to ethical cognitive co-creation, peer AI–human learning, and human-in-the-loop automation for open knowledge systems. It implements the master orchestration architecture for the Pyragogy AI Village, managing a complex sequence of AI agents to process input, perform review, synthesis, and archiving, with a crucial human oversight step for final approval.

How It Works: A Deep Dive into the Workflow's Architecture

The workflow orchestrates a sophisticated content generation and review process, ideal for creating AI-driven knowledge bases or handbooks with human oversight.

1. **Webhook Trigger & Input:** The process begins when the workflow receives a JSON input via a **Webhook** (specifically at /webhook/pyragogy/process). This input typically includes details like the handbook's title, initial text, and relevant tags.
2. **Database Verification:** It first verifies the connection to a **PostgreSQL database** to ensure data persistence.
3. **Meta-Orchestrator:** A powerful **Meta-Orchestrator** (powered by gpt-4o from OpenAI) analyzes the initial request. Its role is to dynamically determine and activate the optimal sequence of specialized AI agents required to fulfill the input, ensuring tasks are dynamically routed and assigned based on each agent's responsibility.
4. **Agent Execution & Iteration:** Each activated agent executes its step using OpenAI or custom endpoints. This involves:
   - Content Generation: Agents like the Summarizer and the Synthesizer generate new content or refine existing text.
   - Peer Review Board: A crucial aspect is the Peer Review Board, comprised of AI agents like the Peer Reviewer, the Sensemaking Agent, and the Prompt Engineer. This board evaluates the output for quality, coherence, and accuracy.
   - Reprocessing & Redrafting: If the review agents flag a major_issue, they trigger redrafting loops by generating specific feedback for the Synthesizer. This mechanism ensures iterative refinement until the content meets the required standards.
5. **Human-in-the-Loop (HITL) Review:** For final approval, particularly for the Archivist agent's output, a **human review process** is initiated. An email is sent to a human reviewer, prompting them to approve, reject, or comment via a "Wait for Webhook" node. This ensures **human oversight** and quality control.
6. **Content Persistence & Versioning:** If the content is approved by the human reviewer:
   - It's saved to a PostgreSQL database (specifically to the handbook_entries and agent_contributions tables).
   - Optionally, the content can be committed to a GitHub repository for version control, provided the necessary environment variables are configured.
7. **Notifications:** The final output and the sequence of executed agents can be sent as a notification to **Slack**, if configured.

Observe the dynamic loop: orchestrate → assign → generate → review (AI/human) → store.

Included AI Agents

This workflow leverages a suite of specialized AI agents, each with a distinct role in the content pipeline:

- **Meta-Orchestrator:** Determines the optimal sequence of agents to execute based on the input.
- **Summarizer Agent:** Summarizes text into key points (e.g., 3 key points).
- **Synthesizer Agent:** Synthesizes new text and effectively incorporates reprocessing feedback from review agents.
- **Peer Reviewer Agent:** Reviews generated text, highlighting strengths, weaknesses, and suggestions, and raises major_issue flags.
- **Sensemaking Agent:** Analyzes input within existing context, identifying patterns, gaps, and areas for improvement.
- **Prompt Engineer Agent:** Refines or generates prompts for subsequent agents, optimizing their output.
- **Onboarding/Explainer Agent:** Provides explanations of the process or offers guidance to users.
- **Archivist Agent:** Prepares content for the handbook, manages the human review process, and handles archiving to the database and GitHub.

Setup Steps & Prerequisites

To get this powerful workflow up and running, follow these steps:

1. Import the Workflow: Import the pyragogy_master_workflow.json (or generate-collaborative-handbooks-with-gpt4o-multi-agent-orchestration-human-review.json) into your n8n instance.
2. Connect Credentials:
   - Postgres: Set up a Postgres Pyragogy DB credential (ID: pyragogy-postgres).
   - OpenAI: Configure an OpenAI Pyragogy credential (ID: pyragogy-openai) for all OpenAI agents. GPT-4o is highly suggested for optimal performance.
   - Email Send: Set up a configured email credential (e.g., for sending human review requests).
3. Define Environment Variables: Define essential environment variables (an .env.template is included in the repository). These include:
   - API base for OpenAI.
   - Database connection details.
   - (Optional) GitHub: For content persistence and versioning, configure GITHUB_ACCESS_TOKEN, GITHUB_REPOSITORY_OWNER, and GITHUB_REPOSITORY_NAME.
   - (Optional) Slack: For notifications, configure SLACK_WEBHOOK_URL.
4. Send a sample payload to your webhook URL (/webhook/pyragogy/process); a sketch of sending this request from code follows this section:

   {
     "title": "History of Peer Learning",
     "text": "Peer learning is an educational approach where students learn from and with each other...",
     "tags": ["education", "pedagogy"],
     "requireHitl": true
   }

Ideal For

This workflow is perfectly suited for:
- Educators and researchers exploring AI-assisted publishing and co-authoring with AI.
- Knowledge teams looking to automate content pipelines for internal or external documentation.
- Anyone building collaborative Markdown-driven tools or AI-powered knowledge bases.

Documentation & Contributions: An Open Source and Collaborative Project

This workflow is an open-source project and community-driven. Its development is transparent and open to everyone. We warmly invite you to:

- **Review it:** Contribute your analysis, identify potential improvements, or report issues.
- **Remix it:** Adapt it to your specific needs, integrate new features, or modify it for a different use case.
- **Improve it:** Propose and implement changes that enhance its efficiency, robustness, or capabilities.
- **Share it back:** Return your contributions to the community, either through pull requests or by sharing your implementations.

Every contribution is welcome and valued! All relevant information for verification, improvement, and collaboration can be found in the official repository: 🔗 GitHub – pyragogy-handbook-n8n-workflow
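For reference, the sample payload above can be sent from a short Node 18+ script instead of a form or curl; only the host is a placeholder for your n8n instance:

```javascript
// Sketch: trigger the Pyragogy webhook with the documented sample payload.
const res = await fetch('https://your-n8n-host/webhook/pyragogy/process', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    title: 'History of Peer Learning',
    text: 'Peer learning is an educational approach where students learn from and with each other...',
    tags: ['education', 'pedagogy'],
    requireHitl: true, // request the human-in-the-loop review step
  }),
});
console.log(res.status, await res.text());
```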
by Ludwig
How it works:

This workflow automates tagging for WordPress posts using AI:

1. Fetch blog post content and metadata.
2. Generate contextually relevant tags using AI.
3. Verify existing tags in WordPress and create new ones if necessary (see the sketch after this section).
4. Automatically update posts with accurate and optimized tags.

Set up steps:

Estimated time: ~15 minutes.

1. Configure the workflow with your WordPress API credentials.
2. Connect your content source (e.g., RSS feed or manual input).
3. Adjust tag formatting preferences in the workflow settings.
4. Run the workflow to ensure proper tag creation and assignment.

This workflow is perfect for marketers and content managers looking to streamline their content categorization and improve SEO efficiency.
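Step 3 maps onto two standard WordPress REST API calls (`GET` and `POST` on `/wp/v2/tags`). A minimal sketch, assuming application-password authentication; the site URL and environment variable names are placeholders:

```javascript
// Sketch: resolve a tag name to a WordPress tag ID, creating it if missing.
const SITE = 'https://example.com';
const auth =
  'Basic ' +
  Buffer.from(
    `${process.env.WP_USER}:${process.env.WP_APP_PASSWORD}`
  ).toString('base64');

async function ensureTag(name) {
  // Look for an existing tag with this name first.
  const found = await fetch(
    `${SITE}/wp-json/wp/v2/tags?search=${encodeURIComponent(name)}`,
    { headers: { Authorization: auth } }
  ).then((r) => r.json());

  const exact = found.find((t) => t.name.toLowerCase() === name.toLowerCase());
  if (exact) return exact.id;

  // Otherwise create it.
  const created = await fetch(`${SITE}/wp-json/wp/v2/tags`, {
    method: 'POST',
    headers: { Authorization: auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({ name }),
  }).then((r) => r.json());
  return created.id;
}
```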
by Francis Njenga
Workflow Documentation: HR Job Posting and Evaluation with AI

Detailed Description

The HR Job Posting and Evaluation with AI workflow is designed to streamline and enhance recruitment for technical roles, such as Automation Specialists. By automating key stages in the hiring process, this workflow ensures a seamless experience for both candidates and HR teams. From collecting applications to evaluating candidates using AI and scheduling interviews, this workflow provides an end-to-end solution for recruitment challenges.

Who is this for?

This workflow is ideal for:
- **HR Professionals**: Managing multiple job postings and candidates efficiently.
- **Recruitment Teams**: Handling large volumes of applications for technical positions.
- **Hiring Managers**: Ensuring structured and objective candidate evaluations.

What problem does this workflow solve?

- **Time-Consuming Processes**: Automates repetitive tasks like data entry, CV management, and scheduling.
- **Fair Candidate Evaluation**: Leverages AI to provide objective insights based on resumes and job descriptions.
- **Streamlined Communication**: Ensures timely and personalized candidate interactions, improving their experience.

What this workflow does

This workflow automates the following steps:
1. Form Submission: Collects candidate information via a structured application form.
2. Data Storage: Stores applicant details in Airtable for centralized tracking.
3. CV Management: Automatically uploads resumes to Google Drive for easy access and organization.
4. AI-Powered Candidate Evaluation: Scores candidates based on their resumes and job descriptions using OpenAI, providing actionable insights (see the sketch after this section).
5. Interview Scheduling: Automates scheduling based on candidate and interviewer availability.
6. Communication: Sends customized emails to candidates for interview invitations and feedback.

Setup

Prerequisites

To use this workflow, you'll need:
- **n8n Account**: To create and run the workflow.
- **Airtable Account**: For managing applicant data.
- **Google Drive Account**: For storing candidate CVs.
- **OpenAI API Key**: For AI-powered candidate scoring.
- **SMTP Email Account**: For sending candidate communications.

Setup Process

1. Airtable Configuration: Create a base in Airtable with tables for Applicants and Job Positions.
2. Google Drive Setup: Create a folder for CV storage and ensure you have write permissions.
3. Integrate Airtable in n8n: Use the Airtable API key to connect Airtable to n8n.
4. Integrate Google Drive in n8n: Authorize Google Drive to enable CV storage automation.
5. OpenAI Integration: Add your OpenAI API key to n8n for candidate scoring.
6. Email Configuration: Set up your SMTP email account in n8n for sending notifications and invitations.

How to customize this workflow

Tailor the workflow to fit your unique recruitment needs:
- Edit Job Descriptions: Adjust the form parameters to match the specific role and qualifications.
- Refine AI Evaluation Criteria: Modify OpenAI prompts to reflect the skills and competencies for the desired position.
- Personalize Email Templates: Update email content to match your organization's tone and branding.
- Add New Features: Incorporate additional steps like feedback collection or integration with other HR tools.

Conclusion

The HR Job Posting and Evaluation with AI workflow simplifies and automates the recruitment process, enabling HR teams to focus on engaging with candidates rather than handling administrative tasks. With its powerful integrations and customization options, this workflow helps organizations hire efficiently while improving the candidate experience.
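The AI evaluation step boils down to one scoring call. A minimal sketch against the OpenAI chat completions API; the prompt wording and the returned JSON shape are illustrative, not the workflow's actual prompt:

```javascript
// Sketch: score a candidate's resume against a job description with OpenAI.
// Assumes Node 18+ and OPENAI_API_KEY in the environment.
async function scoreCandidate(resumeText, jobDescription) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [
        {
          role: 'system',
          content:
            'You are a recruitment assistant. Score the candidate from 0-100 ' +
            'against the job description and justify the score in two sentences. ' +
            'Reply as JSON: {"score": number, "rationale": string}.',
        },
        {
          role: 'user',
          content: `Job description:\n${jobDescription}\n\nResume:\n${resumeText}`,
        },
      ],
      response_format: { type: 'json_object' },
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content);
}
```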
by Joseph LePage
Multi-AI Agent Chatbot for Postgres/Supabase Databases and QuickChart Generation

Who is this for?

This workflow is ideal for data analysts, developers, and business intelligence teams who need an AI-powered chatbot to query Postgres/Supabase databases and generate dynamic charts for data visualization.

What problem does this solve?

It simplifies data exploration by combining conversational AI with database querying and chart generation. Users can interact with their database using natural language, retrieve insights, and visualize data without manual SQL queries or chart configuration.

What this workflow does

1. AI-Powered Chat Interface:
   - Accepts natural language prompts to query databases or generate charts.
   - Routes user requests through a tool agent system to determine the appropriate action (query or chart).
2. Database Querying:
   - Executes SQL queries on Postgres/Supabase databases based on user input.
   - Retrieves schema information, table definitions, and specific data records.
3. Dynamic Chart Generation:
   - Uses QuickChart to create bar charts, line charts, or other visualizations from database records (see the sketch after this section).
   - Outputs a shareable chart URL or JSON configuration for further customization.
4. Memory Integration:
   - Maintains chat history using Postgres memory nodes, enabling context-aware interactions.

(Workflow diagram showcasing AI agents, database querying, and chart generation paths.)

Setup

Prerequisites:
- A Postgres-compatible database (e.g., Supabase).
- API credentials for OpenAI.

Configuration Steps:
1. Add your database connection credentials in the Postgres nodes.
2. Set up OpenAI credentials for GPT-4o-mini in the language model nodes.
3. Adjust the QuickChart schema in the "QuickChart Object Schema" node to fit your use case.

Testing:
- Trigger the chat workflow via the "When chat message received" node.
- Test with prompts like "Generate a bar chart of sales data" or "Show me all users in the database."

How to customize this workflow

- Modify AI prompts
- Add chart types
- Integrate other tools
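The chart step relies on QuickChart's URL-based API: a Chart.js config goes in the `c` query parameter and QuickChart renders it. A minimal sketch with illustrative rows standing in for real query results:

```javascript
// Sketch: turn query results into a shareable QuickChart URL.
const rows = [
  { month: 'Jan', sales: 120 },
  { month: 'Feb', sales: 95 },
  { month: 'Mar', sales: 160 },
];

const config = {
  type: 'bar',
  data: {
    labels: rows.map((r) => r.month),
    datasets: [{ label: 'Sales', data: rows.map((r) => r.sales) }],
  },
};

const url = `https://quickchart.io/chart?c=${encodeURIComponent(
  JSON.stringify(config)
)}`;
console.log(url); // open in a browser or embed in the chat reply
```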
by Zacharia Kimotho
This workflow automates sentiment analysis of Reddit posts related to Apple's WWDC25 event. It extracts data, categorizes posts, analyzes the sentiment of comments, and updates a Google Sheet with the results.

Prerequisites

- Bright Data account: You need a Bright Data account to scrape Reddit data. Ensure you have the correct permissions to use their API. https://brightdata.com/
- Google Sheets API credentials: Enable the Google Sheets API in your Google Cloud project and create credentials (OAuth 2.0 Client IDs).
- Google Gemini API credentials: You need a Gemini API key to run the sentiment analysis. Ensure you have the correct permissions to use their API. https://ai.google.dev/ (You can use any other model of choice.)

Setup

1. Import the Workflow: Import the provided JSON workflow into your n8n instance.
2. Configure Bright Data credentials: In the 'scrap reddit' and 'get status' nodes, find the Authorization field under Header Parameters and replace Bearer 1234 with your Bright Data API key. Apply this to every node that uses your Bright Data API key.
3. Set up the Google Sheets API credentials: In the 'Append Sentiments' node, connect your Google Sheets account through OAuth 2.0 credentials.
4. Configure the Google Gemini credentials: In the 'Sentiment Analysis per comment' node, connect your Google AI account through the API credentials.
5. Configure additional parameters:
   - In the 'scrap reddit' node, modify the JSON body to adjust the search term, date, or sort method.
   - In the 'Wait' node, alter the 'Amount' to adjust the polling interval for scraping status; it is set to 15 seconds by default (see the polling sketch after this section).
   - In the 'Text Classifier' node, customize the categories and descriptions to suit your sentiment analysis needs. Review categories such as 'WWDC events' to ensure relevancy.
   - In the 'Sentiment Analysis per comment' node, modify the system prompt template to improve context.

Customization Options

- Bright Data API parameters to adjust scraping behavior.
- Wait node duration to optimize polling.
- Text Classifier categories and descriptions.
- Sentiment Analysis system prompt.

Use Case Examples

- **Brand Monitoring:** Track public sentiment towards Apple during and after the WWDC25 event.
- **Product Feedback Analysis:** Gather insights into user reactions to new product announcements.
- **Competitive Analysis:** Compare sentiment towards Apple's announcements versus competitors'.
- **Event Impact Assessment:** Measure the overall impact of the WWDC25 event on various aspects of Apple's business.

Target Audiences

Marketing professionals in the tech industry, brand managers, product managers, market research analysts, and social media managers.

Troubleshooting

- Workflow fails to start: Check that all necessary credentials (Bright Data and Google Sheets API) are correctly configured and that the Bright Data API key is valid.
- Data scraping fails: Verify the Bright Data API key, ensure the dataset ID is correct, and inspect the Bright Data dashboard for any issues with the scraping job.
- Sentiment analysis is inaccurate: Refine the categories and descriptions in the 'Text Classifier' node. Check that you have the correct Google Gemini API key, as the original is a placeholder.
- Google Sheets are not updating: Ensure the Google Sheets API credentials have the necessary permissions to write to the specified spreadsheet and sheet. Check API usage limits.
- Workflow does not produce the correct output: Check the data connections by clicking on them and inspecting which data is being produced. Check all formulas for errors.

Happy productivity!
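The 'get status' and 'Wait' nodes together implement a classic poll-until-ready loop. A minimal sketch of that pattern; the endpoint URL and response fields here are placeholders, so check Bright Data's documentation for the exact snapshot-status API of your dataset:

```javascript
// Sketch: poll a scrape-status endpoint every 15 seconds until it is ready.
const API_KEY = process.env.BRIGHTDATA_API_KEY;

async function waitForSnapshot(statusUrl, intervalMs = 15_000, maxTries = 40) {
  for (let i = 0; i < maxTries; i++) {
    const res = await fetch(statusUrl, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const body = await res.json();
    if (body.status === 'ready') return body; // scraping finished
    if (body.status === 'failed') throw new Error('Scrape job failed');
    await new Promise((r) => setTimeout(r, intervalMs)); // same role as the Wait node
  }
  throw new Error('Timed out waiting for the snapshot');
}
```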
by Jimleuk
This n8n template lets you summarize team member activity on Slack for the past week and generates a report.

For remote teams, chat is a crucial communication tool to ensure work gets done, but with so many conversations happening at once and in multiple threads, ideas, information, and decisions usually live in the moment, get lost just as quickly, and are altogether forgotten by the weekend! Using this template, this doesn't have to be the case. Have AI crawl through last week's activity, summarize all threads, and generate a casual and snappy report to bring the team back into focus for the current week. A project manager's dream!

How it works

1. A scheduled trigger is set to run every Monday at 6am to gather all team channel messages within the last week.
2. Message threads are grouped by user and data-mined for replies (see the sketch after this section).
3. An AI analyses the combined raw messages to pull out interesting observations and highlights.
4. Each user's summarized threads are then combined and passed to another AI agent to generate a higher-level overview of their week. These are referred to as the individual reports.
5. Next, all individual reports are summarized together into a team weekly report. This allows understanding of group and similar activities.
6. Finally, the team weekly report is posted back to the channel. The timing is important, as it should be the first message of the week, ready for the team to glance over coffee.

How to use

- Works best per project and where most of the comms happen on a single channel. Avoid combining channels; instead, duplicate this workflow for additional channels.
- You may need to filter for specific team members if you want specific team updates.
- Customise the report to suit your organisation, team, or channel. You may prefer to be more formal if clients or external stakeholders are also present.

Requirements

- Slack for chat platform
- Gemini for LLM (or switch for other models)

Customising this workflow

- If the Slack channel is busy enough already, consider posting the final report to email.
- Pull in project metrics to include in your report. As extra context, it may be interesting to tie the messages to production performance.
- Use an AI agent to query a knowledgebase or tickets relevant to the messages. This may be useful for attaching links or references to add context.
- Channel not so busy, or way too busy for one week? Play with the scheduled trigger and set an interval that works for your team.
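The group-by-user step is a plain reduce over the fetched messages. A minimal sketch; apart from Slack's `user` and `thread_ts` fields, the sample data is illustrative:

```javascript
// Sketch: group a week's Slack messages by user before summarization.
// `messages` mimics the shape returned by Slack's conversations.history API.
const messages = [
  { user: 'U01ABC', text: 'Shipped the billing fix', thread_ts: '1718000000.000100' },
  { user: 'U02DEF', text: 'Review on the API draft?', thread_ts: '1718000500.000200' },
  { user: 'U01ABC', text: 'Starting on the migration next', thread_ts: '1718001000.000300' },
];

// Reduce into { userId: [their messages...] } so each user's activity
// can be summarized independently.
const byUser = messages.reduce((acc, msg) => {
  (acc[msg.user] ??= []).push(msg);
  return acc;
}, {});

for (const [user, msgs] of Object.entries(byUser)) {
  console.log(user, `${msgs.length} message(s) this week`);
}
```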
by Angel Menendez
Submission Overview for Voiceflow Demo Workflow

View the YouTube video for this workflow here.

Who is this for?

This workflow is ideal for businesses and developers using Voiceflow to power AI voice chatbots. It benefits teams that want to enhance chatbot functionality through integrations with platforms like Zendesk, Google Calendar, and Airtable.

What problem is this workflow solving?

The workflow addresses the need for seamless integration of chatbot interactions with backend systems. It automates customer service tasks such as ticket creation, meeting scheduling, and data reporting, reducing manual effort and enhancing efficiency.

What does this workflow do?

- **Customer Lookup:** Checks the database for existing customers and returns relevant details or a "NOT_FOUND" status (see the sketch after this section).
- **Zendesk Ticket Creation:** Automates the creation of support tickets for customer issues.
- **Meeting Scheduling:** Integrates with Google Calendar to provide availability and schedule meetings.
- **Transcript Reporting:** Aggregates interaction data and sends it to Airtable for analysis by the product team.

Setup

1. Configure your Voiceflow chatbot to connect to this workflow via a webhook.
2. Set up the required integrations:
   - Zendesk API: For ticket creation.
   - Google Calendar API: For scheduling.
   - Airtable API: For storing transcripts.
3. Customize the workflow's nodes to match your use case, such as database fields or API endpoints.
4. Deploy the workflow on your n8n instance and test the integrations.

How to customize this workflow to your needs

- Adjust database queries to match your customer data schema.
- Modify the Zendesk ticket payload to include additional fields or custom formats.
- Update Google Calendar configurations for different scheduling requirements.
- Add or remove Airtable fields based on the product team's analysis needs.

This template adheres to n8n's submission guidelines, ensuring clarity, relevance, and broad applicability for users in customer service, product development, and automation.
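The customer-lookup branch returns either the record or a NOT_FOUND status for Voiceflow to route on. A minimal sketch with an in-memory array standing in for the workflow's real database query:

```javascript
// Sketch: look up a customer and return a status Voiceflow can branch on.
const customers = [
  { email: 'jane@example.com', name: 'Jane Doe', plan: 'pro' },
];

function lookupCustomer(email) {
  const match = customers.find(
    (c) => c.email.toLowerCase() === email.toLowerCase()
  );
  return match
    ? { status: 'FOUND', customer: match }
    : { status: 'NOT_FOUND' }; // Voiceflow routes on this value
}

console.log(lookupCustomer('jane@example.com'));
console.log(lookupCustomer('nobody@example.com'));
```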
by Mohan Gopal
🎥 AI Tour Video Generator with GPT-4o, RunwayML & ElevenLabs for Social Media

This n8n workflow generates 20-second faceless videos for social media by combining AI-generated images, audio, and video clips for a given tour destination. The output is a ready-to-publish video file, which can be pushed to social platforms and logged in a tracking sheet.

⚙️ Workflow Overview

This system is divided into 4 main sections:

1. 🧠 Generate Image Prompts
2. 🎨 Generate Media (Images, Videos, Audio)
3. 🛠️ Render & Upload
4. 📈 Future Enhancements

🔌 Integration Setup Table

| Integration | Service Used | Setup Instruction |
|-------------|--------------|-------------------|
| OpenAI | GPT-4o (Prompt Generation) | Get API key and configure in n8n |
| Google Sheet | Idea I/O tracking | Connect Google account with OAuth/credentials in n8n |
| Piapia API | AI Image Generation | Sign up at piapia.ai and get API key |
| Runway API | AI Video Generation | Register at runwayml.com for access |
| ElevenLabs | AI Voice Generation | Sign up at elevenlabs.io for API key |
| CreateMate API | Render Final Video | Visit createmate.ai to access API |
| Google Drive | Upload/Share Final Video | Use the n8n Google Drive node to configure credentials |

✅ Required Services & Tools

Ensure you have active accounts with the following tools and services:

- ✅ OpenAI (GPT-4o + Embeddings)
- ✅ Google Sheets (for destination ideas and tracking)
- ✅ Piapia API (Image generation)
- ✅ RunwayML API (Video generation)
- ✅ ElevenLabs API (Voiceover TTS)
- ✅ Google Drive (Storage & Sharing)
- ✅ CreateMate (Video Rendering)
- ✅ Social Media Scheduler (optional: Zapier, Buffer, Make.com)

🧠 1. Generate Image Prompts

> Purpose: Prepares the content idea and generates visual prompts.

| Step | Node Name | Function |
|------|-----------|----------|
| 🔁 Trigger | Schedule or Manual | Starts the workflow |
| 📥 Grab Idea | Read Sheet | Pulls destination idea from Google Sheet |
| ✍️ Set Content | Manual Input | Adds structure/narrative to the idea |
| 🔀 Split | Split Out | Breaks input into chunks |
| 🤖 GPT Agent | Image Prompt Agent | Uses GPT-4o to generate creative image prompts |
| 🧹 Clean | Remove \n | Cleans up formatting |
| 📌 Save | Set Prompts | Finalizes prompts for next stage |

🖼️ 2. Generate Media

🎨 Generate Images

| Step | Function |
|------|----------|
| Generate Image | Calls Piapia API with AI-generated prompts |
| Wait | Adds delay for rendering (90 sec) |
| Get Images | Retrieves final images for video |

🎥 Generate Videos

| Step | Function |
|------|----------|
| Generate Video | Calls RunwayML to generate video clips from the prompts |
| Wait | 2-minute delay to allow video generation |
| Get Videos | Fetches completed video clips |

🔊 Generate Audio

| Step | Function |
|------|----------|
| Update Status | Logs progress in Google Sheet |
| Sound Agent | Gemini or GPT generates narration text |
| Set Audio | Formats narration for voice synthesis |
| Generate Audio | Uses ElevenLabs for realistic voiceover (see the sketch after this section) |
| Upload to Drive | Saves final audio to Google Drive |
| Share File | Creates sharable URL for audio file |

🛠️ 3. Render & Upload

> Purpose: Combines all elements (image, video, audio) into a single output and prepares for social media.

| Step | Function |
|------|----------|
| Merge | Combines images, videos, and audio |
| Split Out Parts | Breaks content for rendering |
| Render Video | Uses CreateMate to render the final 20-second video |
| Wait | Short delay to complete rendering |
| Download Video | Saves output video locally or on Drive |
| Update Sheet | Logs final video URL/status in Google Sheet |
| Social Upload | (Coming soon) Post to Instagram, YouTube Shorts, TikTok, etc. |

🧩 Pre-Conditions

Before running the workflow:

- ✅ Google Sheet should be created with destination ideas
- ✅ All API keys must be configured in n8n
- ✅ Google Drive folder must exist for output videos
- ✅ Sufficient credit/quota must be available on AI platforms
- ✅ Internet access must be stable for external API calls

🚀 Outcome

- A polished 20-second travel destination video
- Combines AI visuals, short clips, and AI narration
- Ready for instant social media upload
- **Fully automated** from idea to video file

🧠 Tech Stack Summary

| Component | Tools Used |
|-----------|------------|
| Language Model | GPT-4o (OpenAI), Gemini (Google) |
| Image Generator | Piapia API |
| Video Generator | RunwayML |
| Audio Generator | ElevenLabs |
| Storage | Google Drive |
| Video Composer | CreateMate API |
| Orchestration | n8n |

📈 Future Enhancements

✅ Smart Enhancements

- Dynamic hashtags & captions via AI
- Auto-post to TikTok, Instagram, YouTube via Buffer/Zapier
- Scene detection + matching B-roll
- Multilingual narration (e.g., Arabic, French, Malay)
- A/B testing of video versions to analyze performance

🧪 Testing Add-ons

- Add preview screen before upload
- Error tracking & retry flow
- Manual override before publishing

🧰 Customization Guide

| Element | How to Customize |
|---------|------------------|
| ✏️ Prompt Format | Change structure inside Set Content or Prompt Agent |
| 🌍 Destination Ideas | Modify Google Sheet for different destinations/categories |
| 🎨 Image Style | Customize prompt to Piapia (e.g., "in Pixar style", "3D render") |
| 🎙️ Voiceover Script | Adjust tone/structure in the Sound Agent |
| 📆 Posting Schedule | Use Zapier/Buffer for timed posting |
| 🎯 Target Duration | Adjust number of clips or frame duration |

🙌 Community Value

This workflow is ideal for:

- 📸 Travel content creators
- 🌍 Destination marketers
- 🏛️ Tourism boards
- 🧳 Travel SMEs looking for automation

Feel free to fork, remix, or request a JSON export in the comments below!
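The "Generate Audio" step corresponds to a single ElevenLabs text-to-speech request. A minimal sketch, assuming Node 18+; the voice ID, model choice, and narration text are placeholders you would pick in your ElevenLabs dashboard:

```javascript
// Sketch: call ElevenLabs TTS and save the returned MP3 narration.
import { writeFile } from 'node:fs/promises';

const VOICE_ID = 'your-voice-id'; // placeholder

const res = await fetch(
  `https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`,
  {
    method: 'POST',
    headers: {
      'xi-api-key': process.env.ELEVENLABS_API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      text: 'Welcome to Kyoto, where ancient temples meet neon nights...',
      model_id: 'eleven_multilingual_v2', // assumed model; pick yours
    }),
  }
);

await writeFile('narration.mp3', Buffer.from(await res.arrayBuffer()));
console.log('Saved narration.mp3');
```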
by Amjid Ali
Overview

This workflow template automates lead management and customer inquiry processing by integrating ERPNext, AI agents, and email notifications. It streamlines the process of capturing leads, analyzing inquiries, and generating actionable responses. The workflow uses ERPNext to capture inquiries, analyzes them with AI, and notifies the appropriate team or individual, all while maintaining a professional approach.

What This Template Does

1. ERPNext Webhook Integration: Captures leads and inquiries through ERPNext webhooks. Triggers the workflow when a new lead is created.
2. AI-Powered Inquiry Analysis: Uses AI to extract key details from lead notes (e.g., customer name, organization, inquiry summary). Classifies inquiries as valid or invalid based on relevance to products, services, or solutions.
3. Contact Assignment: Matches inquiries to the appropriate contact(s) using a Google Sheets database or ERPNext contact information. Handles multiple contacts if required.
4. Email Notifications: Generates professional email notifications for valid inquiries. Sends emails to the appropriate contact(s) with inquiry details and action steps.
5. Invalid Lead Handling: Identifies invalid inquiries (e.g., unrelated to products or services) and flags them for follow-up or dismissal.
6. Custom Email Formatting: Converts plain text into professionally formatted HTML emails (see the sketch at the end of this section). Ensures that communication is clear, concise, and visually appealing.

How It Works

Step 1: Capture Lead Data
- **Webhook in ERPNext:** Create a webhook in ERPNext for the "Lead" DocType. Set the trigger to on_insert to capture new leads in real time.
- **Lead Details:** The workflow fetches lead details, including notes, contact information, and the source of the lead.

Step 2: Validate and Analyze Inquiry
- **AI Agent for Analysis:** An AI agent analyzes the lead notes to extract key details and classify the inquiry as valid or invalid. The analysis includes checking the relevance of the inquiry to products, services, or solutions offered by the company.
- **Invalid Leads:** If the inquiry is invalid, the workflow flags it and stops further processing.

Step 3: Assign Contact(s)
- **Google Sheets Integration:** Uses a Google Sheets database to map products, services, or solutions to responsible contacts. Ensures that inquiries are directed to the right person or team.
- **Multiple Contacts:** Handles cases where multiple contacts are responsible for a particular product or service.

Step 4: Generate and Send Email Notifications
- **AI-Generated Emails:** The workflow generates a professional email summarizing the inquiry. Emails include details like customer name, organization, inquiry summary, and action steps.
- **Custom HTML Formatting:** Emails are converted to HTML for a polished and professional appearance.
- **Send Notifications:** Sends email notifications through Microsoft Outlook or another configured email client. Optionally, notifies via WhatsApp or SMS for urgent inquiries.

Step 5: Post-Inquiry Actions
- **ERPNext Record Updates:** Updates the lead record in ERPNext with relevant details, including inquiry status and contact information.

Setup Instructions

Prerequisites

1. ERPNext: A configured ERPNext instance with lead data and a webhook for the "Lead" DocType.
2. Google Sheets: A sheet mapping products, services, or solutions to responsible contacts.
3. AI Integration: Credentials for OpenAI or other supported AI platforms.
4. Email Client: Credentials for Microsoft Outlook or another email client.
Step-by-Step Setup

1. ERPNext Configuration: Create a webhook for the "Lead" DocType in ERPNext. Test the webhook with sample data to ensure proper integration.
2. Workflow Import: Import the workflow template into n8n. Configure nodes with your API credentials for ERPNext, Google Sheets, and AI tools.
3. Google Sheets Integration: Prepare a Google Sheet with columns for product, service, or solution and the responsible contact(s). Link the sheet to the workflow.
4. AI Agent Configuration: Customize the AI agent's prompts to align with your business's products and services. Adjust criteria for valid and invalid inquiries as needed.
5. Email Setup: Configure the email client node with your email service credentials. Customize the email template for your organization.
6. Testing: Run the workflow with sample leads to validate the entire process. Check email notifications, contact assignments, and record updates in ERPNext.

Dos and Don'ts

Dos:
- **Test Thoroughly:** Test the workflow with various scenarios before deploying in production.
- **Secure Credentials:** Keep API and email credentials secure to avoid unauthorized access.
- **Customize Prompts:** Tailor AI prompts to match your business needs and language style.
- **Use Professional Email Templates:** Ensure emails are clear and well formatted.

Don'ts:
- **Skip Validation:** Always validate inquiry data to avoid sending irrelevant notifications.
- **Overload the Workflow:** Avoid adding unnecessary nodes that can slow down processing.
- **Ignore Errors:** Monitor logs and address errors promptly for a smooth workflow.

Resources

- GET n8n Now
- N8N COURSE
- n8n Book
- **YouTube Tutorial:** Watch the full step-by-step tutorial on setting up this workflow: SyncBricks YouTube Channel
- **Courses and Training:** Learn more about ERPNext and AI automation through my comprehensive courses: SyncBricks LMS
- **Support and Contact:**
  - Email: amjid@amjidali.com
  - Website: SyncBricks
  - LinkedIn: Amjid Ali
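The custom email formatting step can be a small template function. A minimal sketch; the field names and styling are illustrative, not the template the workflow actually ships with:

```javascript
// Sketch: turn the AI's plain-text inquiry summary into simple HTML email markup.
function toHtmlEmail({ customerName, organization, summary, actionSteps }) {
  const steps = actionSteps.map((s) => `<li>${s}</li>`).join('');
  return `
    <div style="font-family: Arial, sans-serif; max-width: 600px;">
      <h2 style="color: #2c3e50;">New Inquiry: ${organization}</h2>
      <p><strong>Contact:</strong> ${customerName}</p>
      <p>${summary}</p>
      <h3>Suggested next steps</h3>
      <ul>${steps}</ul>
      <hr>
      <p style="color: #888; font-size: 12px;">Generated automatically from an ERPNext lead.</p>
    </div>`;
}

console.log(
  toHtmlEmail({
    customerName: 'Jane Doe',
    organization: 'Acme Corp',
    summary: 'Interested in an ERPNext implementation for a 50-user team.',
    actionSteps: ['Confirm scope on a call', 'Send a preliminary quote'],
  })
);
```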