Workflow Templates
Discover and use pre-built workflows to automate your tasks
1728 templates found
by Nick Saraev
Website AI Agent with Calendar Integration

Categories: AI Agents, Website Integration, Calendar Automation

This workflow creates a complete website AI agent that can be embedded on any website with just a few lines of code. The agent handles customer inquiries, provides business information, and automatically books meetings by checking calendar availability in real time. Built for simplicity and business practicality, this system proves that effective AI agents don't need to be overcomplicated.

**Benefits**

- **Universal Website Integration** - Works with WordPress, Webflow, Squarespace, custom sites, or any platform that accepts HTML
- **Intelligent Calendar Management** - Checks availability and books meetings automatically without double-booking
- **Business-Ready Conversations** - Trained with specific business context and maintains professional, helpful interactions
- **Real-Time Functionality** - All changes to the n8n workflow are immediately reflected on your live website
- **No Technical Complexity** - Simple architecture that prioritizes reliability and consistent outputs over flashy features
- **Customizable Branding** - Easy to modify appearance, messages, and behavior to match your brand

**How It Works**

Embedded Chat Interface:
- Generates embeddable HTML code that creates a chat widget on any website
- Provides both hosted and embedded modes for different use cases
- Handles all communication between website visitors and the AI system

Intelligent Conversation Management:
- Uses sophisticated system prompts to maintain context about your business
- Handles common inquiries about services, pricing, and company information
- Gracefully redirects off-topic conversations back to business matters

Smart Calendar Integration:
- Connects to Google Calendar to check real-time availability
- Automatically suggests meeting times based on your schedule
- Collects all necessary information (name, email, preferred time) before booking

Meeting Booking Process:
- Validates meeting requests against existing calendar entries
- Confirms all details with users before creating calendar events
- Sends automatic invitations with proper timezone handling

**Required Setup Configuration**

System Message Requirements: Your AI agent needs a comprehensive system message that includes:

- **Business Identity:** Company name, services, location, timezone
- **Business Context:** What you offer, pricing information, key differentiators
- **Conversation Rules:** How to handle inquiries, booking procedures, moderation guidelines
- **Personality Instructions:** Tone of voice, response style, conversation length preferences

Example System Message Structure:

> You are a helpful, intelligent website chatbot for [Company Name], a [business type]. The current date is [dynamic date]. You are in the [timezone] timezone.
>
> Business Context: We offer [services] with [key benefits]. Our pricing is [pricing structure]. We work with [target customers].
>
> Your task is answering questions about the business & booking meetings. For meetings: use the calendar function to check availability, collect name/email/preferred time, and confirm details.
>
> Rules: Keep responses short and conversational. Stay focused on business topics. Always confirm the timezone when discussing meeting times.

Google Calendar Setup:
- Enable the Google Calendar API in Google Cloud Console
- Create OAuth2 credentials for n8n
- Connect your business calendar in the Google Calendar nodes
- Set the correct timezone in both nodes to match your business location

Website Integration:
- Switch the chat trigger to "embedded" mode
- Copy the provided CDN embed code (see the sketch after this section)
- Paste the code into your website's HTML (before the closing body tag)
- Replace the webhook URL with your production URL

**Business Use Cases**

- **Service Businesses** - Automate initial consultations and lead qualification
- **Agencies** - Handle project inquiries and schedule discovery calls
- **Consultants** - Streamline the booking process for potential clients
- **E-commerce** - Provide product support and schedule demos
- **Any Business** - Replace contact forms with intelligent conversation

**Revenue Potential**

This system can replace expensive chatbot services that cost $100-500/month. The automated booking feature alone typically increases meeting conversion rates by 40-60% compared to traditional contact forms.

- Difficulty Level: Beginner
- Estimated Build Time: 15-20 minutes
- Monthly Operating Cost: ~$10 (OpenAI API usage)

**Watch My 13-Minute Build**

Want to see exactly how I built this from scratch? I walk through the complete setup process in real time, including all the configuration, testing, and website integration.

🎥 See My Complete Build Process: "How to Build a Website AI Agent in 13 Min (Free N8N Template)"

This step-by-step tutorial shows you my exact process for creating business-ready AI agents that actually make money, not just impressive demos.

**Set Up Steps**

Basic Agent Configuration:
- Create a new n8n workflow with an AI Agent node
- Connect the OpenAI Chat Model with your API credentials
- Add Window Buffer Memory for conversation context

System Message Setup:
- Configure detailed business context and operating instructions
- Set timezone and personality parameters for consistent responses
- Define conversation rules and moderation guidelines

Google Calendar Integration:
- Set up Google Calendar credentials through Google Cloud Console
- Configure the "Get All Events" tool for availability checking
- Set up the "Create Event" tool for automated booking

Website Embedding:
- Switch the chat trigger to "embedded" mode for website integration
- Copy the provided CDN embed code
- Paste the code into your website's HTML with your webhook URL

Customization Options:
- Modify initial messages and branding in the embed code
- Adjust colors and styling using CSS variables
- Configure timezone settings to match your business location

Testing & Optimization:
- Test complete conversation flows from inquiry to booking
- Verify the calendar integration works correctly with your timezone
- Optimize system prompts based on actual user interactions

**Advanced Features**

Extend this system with additional capabilities:

- **CRM Integration** - Automatically add leads to your sales pipeline
- **Multi-language Support** - Handle conversations in different languages
- **Custom Business Logic** - Add specific qualification questions or routing
- **Analytics Tracking** - Monitor conversation patterns and conversion rates

**Check Out My Channel**

For more practical automation systems that generate real business value, check out my YouTube channel where I share the exact strategies I used to scale my automation agency to $72K/month.
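Returning to the website-embedding step: here is a minimal sketch of what an embedded-mode snippet can look like. It follows the pattern of n8n's public @n8n/chat widget; the webhook URL and the initial message are placeholders you would swap for your own values.

```html
<!-- Minimal embed sketch: load the n8n chat widget from a CDN
     and point it at your production Chat Trigger webhook. -->
<link href="https://cdn.jsdelivr.net/npm/@n8n/chat/dist/style.css" rel="stylesheet" />
<script type="module">
  import { createChat } from 'https://cdn.jsdelivr.net/npm/@n8n/chat/dist/chat.bundle.es.js';

  createChat({
    webhookUrl: 'https://YOUR-N8N-INSTANCE/webhook/YOUR-ID/chat', // production URL from the chat trigger
    initialMessages: ['Hi! How can I help you today?'],           // placeholder branding
  });
</script>
```

Because the widget only needs this one snippet, any platform that lets you paste HTML (WordPress, Webflow, Squarespace, or a custom site) can host the agent unchanged.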
by Oneclick AI Squad
This n8n template demonstrates how to create a comprehensive voice-powered restaurant assistant that handles table reservations, food orders, and restaurant information requests through natural language processing. The system uses VAPI for voice interaction and PostgreSQL for data management, making it perfect for restaurants looking to automate customer service with voice AI technology.

**Good to know**

- Voice processing requires an active VAPI subscription with per-minute billing
- Database operations are handled in real time with immediate confirmations
- The system can handle multiple simultaneous voice requests
- All customer data is stored securely in PostgreSQL with proper indexing

**How it works**

Table Booking & Order Handling Workflow:
- Voice requests are captured through VAPI triggers when customers make booking or ordering requests
- The system processes natural language commands and extracts relevant details (party size, time, food items)
- Customer data is immediately saved to the bookings and orders tables in PostgreSQL
- Voice confirmations are sent back through VAPI with booking details and estimated wait times
- All transactions are logged with timestamps for restaurant management tracking

Restaurant Info Provider Workflow:
- Info requests trigger when customers ask about hours, menu, location, or services
- Restaurant details are retrieved from the restaurant_info table containing current information
- Wait nodes ensure proper data loading before voice response generation
- Structured restaurant information is delivered via VAPI in a natural, conversational format

**Database Schema**

Bookings Table:
- booking_id (PRIMARY KEY) - Unique identifier for each reservation
- customer_name - Customer's full name
- phone_number - Contact number for confirmation
- party_size - Number of guests
- booking_date - Requested reservation date
- booking_time - Requested time slot
- special_requests - Dietary restrictions or special occasions
- status - Booking status (confirmed, pending, cancelled)
- created_at - Timestamp of booking creation

Orders Table:
- order_id (PRIMARY KEY) - Unique order identifier
- customer_name - Customer's name
- phone_number - Contact for order updates
- order_items - JSON array of food items and quantities
- total_amount - Calculated order total
- order_type - Delivery, pickup, or dine-in
- special_instructions - Cooking preferences or allergies
- status - Order status (received, preparing, ready, delivered)
- created_at - Order timestamp

Restaurant_Info Table:
- info_id (PRIMARY KEY) - Information entry identifier
- category - Type of info (hours, menu, location, contact)
- title - Information title
- description - Detailed information content
- is_active - Whether info is currently valid
- updated_at - Last modification timestamp

**How to use**

- The manual trigger can be replaced with webhook triggers for integration with existing restaurant systems
- Import the workflow into your n8n instance and configure VAPI credentials
- Set up the PostgreSQL database with the required tables using the schema provided above
- Configure restaurant information in the restaurant_info table
- Test voice commands such as "Book a table for 4 people at 7 PM" or "What are your opening hours?"
- Customize voice responses in the VAPI nodes to match your restaurant's tone and branding
- The system can handle multiple concurrent voice requests and scales with your restaurant's needs

**Requirements**

- VAPI account for voice processing and natural language understanding
- PostgreSQL database for storing booking, order, and restaurant information
- n8n instance with database and VAPI integrations enabled

**Customising this workflow**

- Voice AI automation can be adapted for various restaurant types - from quick service to fine dining establishments
- Try popular use cases such as multi-location booking management, dietary restriction handling, or integration with existing POS systems
- The workflow can be extended to include payment processing, SMS notifications, and third-party delivery platform integration
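As a concrete illustration of the booking flow, here is a sketch of an n8n Code node that could sit between the VAPI trigger and the Postgres insert. The incoming field names are assumptions about what your VAPI trigger extracts; the output keys match the bookings schema above.

```javascript
// Sketch: normalize an extracted booking request before the Postgres
// node writes it to the bookings table. Input field names are
// hypothetical; map them to whatever your VAPI trigger provides.
const req = $input.first().json;

return [{
  json: {
    customer_name: (req.customer_name ?? '').trim(),
    phone_number: (req.phone_number ?? '').replace(/[^\d+]/g, ''), // keep digits and "+"
    party_size: Math.max(1, parseInt(req.party_size, 10) || 1),
    booking_date: req.booking_date,       // e.g. "2025-07-01"
    booking_time: req.booking_time,       // e.g. "19:00"
    special_requests: req.special_requests ?? null,
    status: 'pending',                    // confirmed later in the voice flow
  },
}];
```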
by Calistus Christian
**How it works**

- Webhook → urlscan.io → GPT-4o mini → Gmail
- Payload example: { "url": "https://example.com" }
- urlscan.io returns a Scan ID and the raw JSON.
- The AI node classifies the scan as malicious / suspicious / benign, assigns a 1-10 risk score, and writes a two-sentence summary.
- Gmail sends an alert that includes the URL, Scan ID, AI verdict, screenshot link, and full report link.

**Set-up steps (~5 min)**

- Create three credentials in n8n:
  - urlscan.io API key
  - OpenAI API key (GPT-4o mini access)
  - Gmail OAuth (or SMTP)
- Replace those fields in the nodes, or reference env vars like {{ $env.OPENAI_API_KEY }}.
- Switch the Webhook to Production and copy the live URL.
- Test with:

```
curl -X POST <your-webhook-url> \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://example.com" }'
```
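For context, the urlscan.io submission that the workflow's HTTP node performs looks roughly like this in plain JavaScript; the visibility setting is an assumption and depends on your urlscan plan.

```javascript
// Sketch: submit a URL to urlscan.io and capture the Scan ID (uuid)
// that later appears in the Gmail alert.
const res = await fetch('https://urlscan.io/api/v1/scan/', {
  method: 'POST',
  headers: {
    'API-Key': process.env.URLSCAN_API_KEY, // same credential as in n8n
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ url: 'https://example.com', visibility: 'unlisted' }),
});

const { uuid } = await res.json(); // the Scan ID; the full report lives at:
console.log(`https://urlscan.io/result/${uuid}/`);
```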
by Ria
This workflow demonstrates how to use the getWorkflowStaticData() function to set any type of variable that persists across workflow executions.

https://docs.n8n.io/code/cookbook/builtin/get-workflow-static-data/

This can be useful, for example, when working with access tokens that expire after a certain time period. Using static data we can keep a record of that access token and its expiry time, and build our workflow logic around it.

**Important**

Static data only persists across production executions, i.e. executions triggered by Webhooks or Schedule Triggers (not manual executions!). For this, the workflow has to be activated.

**Setup**

- Configure the HTTP Request node to fetch an access token from your API (optional)
- Activate the workflow
- Test the workflow with the webhook production link
- You can inspect the stored static data in the individual executions

**Feedback**

If you found this useful or want to report some missing information - I'd be happy to hear from you at ria@n8n.io
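A minimal sketch of the token-caching pattern inside an n8n Code node (the token field names are assumptions about your auth endpoint's response):

```javascript
// Runs in a Code node; static data persists only across production executions.
const staticData = $getWorkflowStaticData('global');

// Reuse the cached token while it is still valid.
if (staticData.token && Date.now() < staticData.expiresAt) {
  return [{ json: { token: staticData.token, cached: true } }];
}

// Otherwise store a freshly fetched token ($json shape is an assumption
// about what your HTTP Request node returned).
staticData.token = $json.access_token;
staticData.expiresAt = Date.now() + ($json.expires_in ?? 3600) * 1000;
return [{ json: { token: staticData.token, cached: false } }];
```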
by Cameron Wills
**Who is this for?**

Content creators, social media managers, digital marketers, and researchers who need to download original TikTok videos without watermarks for analysis, repurposing, or archiving purposes.

**What problem does this workflow solve?**

Downloading TikTok videos without watermarks typically requires using questionable third-party websites that may have limitations, ads, or privacy concerns. This workflow provides a clean, automated solution that can be integrated into your own systems and processes.

**What this workflow does**

This workflow automates the process of downloading TikTok videos without watermarks in three simple steps:

1. Fetch the TikTok video page by providing the video URL
2. Extract the raw video URL from the page's HTML data
3. Download the original video file without a watermark
4. (Optional) Upload to Google Drive with public sharing link generation

The workflow uses web scraping techniques to extract the original video source directly from TikTok's own servers, maintaining the highest possible quality without any added watermarks or branding.

**Setup (Est. time: 5-10 minutes)**

Before getting started, you'll need:

- An n8n installation
- The URL of a TikTok video you want to download
- (Optional) Google Drive API enabled in Google Cloud Console, with OAuth Client ID and Client Secret credentials, if you want to use the upload feature

**How to customize this workflow to your needs**

- Replace the example TikTok URL with your desired video links
- Modify the file naming convention for downloaded videos
- Integrate with other nodes to process videos after downloading
- Create a webhook to trigger the workflow from external applications
- Set up a schedule to regularly download videos from specific accounts

This workflow can be extended to support various use cases like trending content analysis, competitor research, creating compilation videos, or building a content library for inspiration. It provides a foundation that can be customized to fit into larger automated workflows for content creation and social media management.
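To make step 2 concrete, here is a sketch of the kind of extraction logic involved. TikTok's markup changes frequently, so the script-tag id and the JSON path below are assumptions you will almost certainly need to adapt to the current page structure.

```javascript
// Hypothetical n8n Code node: pull the embedded page state out of the
// fetched HTML and read a candidate video URL from it.
const html = $input.first().json.data; // HTML returned by the HTTP Request node

const match = html.match(
  /<script id="__UNIVERSAL_DATA_FOR_REHYDRATION__"[^>]*>(.*?)<\/script>/s
);
if (!match) throw new Error('Embedded page data not found - markup may have changed');

const state = JSON.parse(match[1]);
// Hypothetical path; inspect the parsed object to find the real one.
const videoUrl = state?.__DEFAULT_SCOPE__?.['webapp.video-detail']
  ?.itemInfo?.itemStruct?.video?.playAddr;

return [{ json: { videoUrl } }];
```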
by Jonathan | NEX
**Supercharge Your Security Operations for Free**

Stop wasting time manually investigating suspicious IP addresses. This workflow template is your launchpad to automating real-time IP cybersecurity analysis using the NixGuard platform, which you can use for free.

This is the first of a two-part system designed to integrate seamlessly into your existing security stack, especially with Wazuh. It calls our main workflow, Automate IP Reputation Checks and Get AI Risk Summaries from NixGuard, to do the heavy lifting.

**What This Workflow Unlocks for You**

- **Free AI-Powered Risk Summaries:** Don't just get data; get answers. NixGuard provides a clear, human-readable summary of why an IP is considered risky.
- **Automated IP Reputation Checks:** Programmatically check any IP against a vast array of threat intelligence sources.
- **A Foundation for Your SOC Automation:** Use the results to trigger your incident response process. The template includes a pre-built example of how to send a detailed alert to Slack, which you can easily adapt for Jira, TheHive, or any other tool.

**How the Two-Workflow System Works**

This "dispatcher" workflow is designed for flexibility. It holds your API key and input, then calls the main analysis workflow. This allows you to easily create multiple triggers (e.g., one for Slack bots, one for webhooks) without duplicating the core logic.

**Critical Setup Instructions**

1. Get the Main Workflow: First, add the main analysis engine to your n8n instance from the community page: NixGuard Analysis Workflow.
2. Add Your Free API Key: In this workflow, click the blue Set API Key & Initial Prompt node. Paste your free NixGuard API key into the apiKey value field.
3. Connect the Workflows: Click the purple Execute NixGuard & Wazuh Workflow node. In the parameters, use the dropdown to select the main analysis workflow you added in Step 1.

Ready to automate your threat intelligence? Get your free API key and learn more at:

🔗 Learn more about NixGuard: thenex.world
🔗 Get started with a free security subscription: thenex.world/security/subscribe

Tags: Free, IP Analysis, NixGuard, Wazuh, Security, Automation, AI, Cybersecurity, Threat Intelligence, SOC, Incident Response, IP Reputation, DevSecOps, API
by Guillaume Duvernay
**Description**

This template provides a simple and powerful backend for adding speech-to-text capabilities to any application. It creates a dedicated webhook that receives an audio file, transcribes it using OpenAI's gpt-4o-mini transcription model, and returns the clean text. To help you get started immediately, you'll find a complete, ready-to-use HTML code example right inside the workflow in a sticky note. This code creates a functional recording interface you can use for testing or as a foundation for your own design.

**Who is this for?**

- **Developers:** Quickly add a transcription feature to your application by calling this webhook from your existing frontend or backend code.
- **No-code/low-code builders:** Embed a functional audio recorder and transcription service into your projects by using the example code found inside the workflow.
- **API enthusiasts:** A lean, practical example of how to use n8n to wrap a service like OpenAI into your own secure and scalable API endpoint.

**What problem does this solve?**

- **Provides a ready-made API:** Instantly gives you a secure webhook to handle audio file uploads and transcription processing without any server setup.
- **Decouples frontend from backend:** Your application only needs to know about one simple webhook URL, allowing you to change the backend logic in n8n without touching your app's code.
- **Offers a clear implementation pattern:** The included example code provides a working demonstration of how to send an audio file from a browser and handle the response - a pattern you can replicate in any framework.

**How it works**

This solution works by defining a clear API contract between your application (the client) and the n8n workflow (the backend).

The client-side technique (a minimal sketch of this call follows this section):
- Your application's interface records or selects an audio file.
- It then makes a POST request to the n8n webhook URL, sending the audio file as multipart/form-data.
- It waits for the response from the webhook, parses the JSON body, and extracts the value of the Transcript key.
- You can see this exact pattern in action in the example code provided in the workflow's sticky note.

The n8n workflow (backend):
- The Webhook node catches the incoming POST request and grabs the audio file.
- The HTTP Request node sends this file to the OpenAI API.
- The Set node isolates the transcript text from the API's response.
- The Respond to Webhook node sends a clean JSON object ({"Transcript": "your text here..."}) back to your application.

**Setup**

Configure the n8n workflow:
- In the Transcribe with OpenAI node, add your OpenAI API credentials.
- Activate the workflow to enable the endpoint.
- Click the "Copy" button on the Webhook node to get your unique Production Webhook URL.

Integrate with the frontend:
- Inside the workflow, find the sticky note labeled "Example Frontend Code Below".
- Copy the complete HTML from the note below it.
- ⚠️ Important: In the code you just copied, find the line const WEBHOOK_URL = 'YOUR WEBHOOK URL'; and replace the placeholder with the Production Webhook URL from n8n.
- Save the code as an HTML file and open it in your browser to test.

**Taking it further**

- **Save transcripts:** Add an Airtable or Google Sheets node to log every transcript that comes through the workflow.
- **Error handling:** Enhance the workflow to catch potential errors from the OpenAI API and respond with a clear error message.
- **Analyze the transcript:** Add a Language Model node after the transcription step to summarize the text, classify its sentiment, or extract key entities before sending the response.
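The client-side call boils down to a few lines. This sketch assumes the Webhook node reads the upload from a form field named file; match that to your node's configuration.

```javascript
// Minimal client sketch: POST a recorded audio blob to the n8n webhook
// and read the transcript from the JSON response.
async function transcribe(audioBlob, webhookUrl) {
  const form = new FormData();
  form.append('file', audioBlob, 'recording.webm'); // field name is an assumption

  const res = await fetch(webhookUrl, { method: 'POST', body: form });
  if (!res.ok) throw new Error(`Transcription failed: ${res.status}`);

  const data = await res.json();
  return data.Transcript; // key set by the Respond to Webhook node
}
```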
by Agent Studio
**Overview**

This workflow answers user requests sent via Mac Shortcuts. Several Shortcuts call the same webhook, each with a query and a type of query.

The types of query are:
- Translate to English
- Translate to Spanish
- Correct grammar (without changing the actual content)
- Make content shorter
- Make content longer

**How it works**

1. Select a text you are writing
2. Launch the shortcut
3. The text is sent to the webhook
4. Depending on the type of request, a different prompt is used
5. Each request is sent to an OpenAI node
6. The workflow responds to the request with the response from GPT
7. The Shortcut replaces the selected text with the new one

For a demo and setup instructions:

**How to use it**

1. Activate the workflow
2. Download this Shortcut template
3. Install the shortcut
4. In step 2 of the shortcut, change the URL to that of the Webhook
5. In Shortcut details, "Add Keyboard Shortcut" with the key you want to use to launch the shortcut
6. Go to Settings > Advanced and check "Allow running scripts"

You are ready to use the shortcut. Select a text and hit the keyboard shortcut you just defined.
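For reference, each Shortcut's request boils down to something like the following. The field names (type, query) and the webhook path are assumptions to adapt to your own Webhook node.

```javascript
// Sketch of the call a Shortcut makes to the webhook. The response body
// is the rewritten text, which the Shortcut pastes over the selection.
const res = await fetch('https://your-n8n.example.com/webhook/mac-shortcuts', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    type: 'translate_to_english', // one of the five request types above
    query: 'Bonjour tout le monde',
  }),
});
console.log(await res.text()); // "Hello everyone"
```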
by InfraNodus
**Teach your AI agent HOW to think, not WHAT to think**

This workflow demonstrates how you can build an AI agent in n8n that uses the reasoning logic you define. An LLM learns a way of thinking, which you can then apply to multiple problems:

- Make an AI chatbot that knows how to convince anybody using the "Getting to Yes" method
- Build an LLM workflow that uses Ray Dalio's principles to spot investment opportunities
- Create an AI agent crew of interdisciplinary thinkers: e.g. a specialist in psychology who gives advice on education programmes

**How it works**

This template uses the n8n AI Agent node as an orchestrating agent that has access to a certain reasoning logic defined by an InfraNodus knowledge graph. This graph contains a list of reasoning rules (an ontology), which is extracted to provide advice that is relevant to the original prompt. It uses GraphRAG under the hood to traverse the parts of the graph relevant to the query. This advice and the reasoning logic extracted are then used by the AI agent to generate a response that is relevant to the user's query but that applies the reasoning logic provided through the graph.

Here's a description, step by step:

1. The user submits a question using the AI chatbot (n8n interface - in this case, a web form that can be embedded into any website, or a webhook that can be connected to a Telegram / WhatsApp bot).
2. The AI Agent node accesses the reasoning-logic InfraNodus HTTP nodes. The description of the AI agent and the description of the reasoning InfraNodus node give the agent an understanding of how to rephrase the original question to retrieve relevant reasoning logic.
3. The request is sent to the InfraNodus node. It provides a response that contains the reasoning logic needed to answer the question.
4. This reasoning logic is then sent back to an LLM along with the original query to produce the response.

InfraNodus uses GraphRAG under the hood:
- convert the user query into a graph
- find the overlap with the reasoning graph (using n=1 or more hops to include more relations)
- use similarity search to get additional parts of the graph
- generate a response based on this intersection as well as the context provided
- provide information about the underlying structure

**How to use**

You need an InfraNodus account to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for the reasoning logic. Use the AI ontology creator to generate an ontology for a certain topic or text using AI, then augment it with your own data. See our help article on creating ontologies for detailed instructions.
4. For each graph, go to the workflow and paste the name of the graph into the name field of the request JSON body.
5. Change the system prompt in the AI Agent node to reflect the nature of your reasoning logic. For instance, if it's an expert in interactions, you specify that; if it's a psychology expert, you need to specify that as well.
6. Change the description of the reasoning node (HTTP tool). Use the InfraNodus summary and Project Notes > RAG prompt buttons to generate a description for the reasoning logic, which you can then reuse in your workflow.
7. Add the LLM key to the OpenAI node (or the model of your choice) and launch the workflow.

**Requirements**

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key

**Customizing this workflow**

You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/21429518472988-Using-Knowledge-Graphs-as-Reasoning-Experts

Also check out the video tutorial with a demo.
by Niklas Hatje
**Use Case**

In most companies, employees have a lot of great ideas. That was the same for us at n8n. We wanted to make it as easy as possible for everyone to add their ideas to some formatted database - it should be somewhere where everyone is all the time and can add a new idea without much extra effort. Since we're using Slack, this seemed to be the perfect place to easily add ideas and collect them in Notion.

**What this workflow does**

This workflow waits for a webhook call within Slack, which gets fired when users use the /idea command on a bot that you will create as part of this template. It then checks the command, adds the idea to Notion, and notifies the user about the newly added idea as you can see below.

**Creating your Slack bot**

1. Visit https://api.slack.com/apps, click on New App and choose a name and workspace.
2. Click on OAuth & Permissions and scroll down to Scopes -> Bot Token Scopes.
3. Add the chat:write scope.
4. Head over to Slash Commands and click on Create New Command.
5. Use /idea as the command.
6. Copy the test URL from the Webhook node into Request URL.
7. Add whatever feels best to the description and usage hint.
8. Go to Install App and click install.

**Setup**

1. Add a database in Notion with the columns Name and Creator.
2. Add your Notion credentials and add the integration to your Notion page.
3. Fill the setup node below.
4. Create your Slack app (see the other sticky).
5. Click Test workflow and use the /idea command in Slack.
6. Activate the workflow and exchange the Request URL with the production URL from the webhook.

**How to adjust it to your needs**

- You can adjust the table in Notion and, for example, add different types of ideas or areas that they impact
- You might want to add different templates in Notion to make it easier for users to fill their ideas with details
- Rename the Slack command as it works best for you

**How to enhance this workflow**

At n8n we use this workflow in combination with some others. E.g. we have the following things on top:

- We additionally have a /bug Slack command that adds a new bug to Linear. Here we're using AI to classify the bugs and move them to the right team. (see this template and this template)
- We also added other types, like /pain, to be less solution-driven
- To make it easier for everyone to give input, we added a Votes column that allows everyone to vote on ideas/pain points in the list
- We're also running a workflow once a week that highlights the most popular new ideas and the most active voters (see here)
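As context for the webhook step above: Slack posts slash-command calls as form-encoded fields such as command, text, and user_name. Here is a sketch of how a Code node could map that payload to the Notion columns; the exact shape of the parsed body is an assumption to verify against a test execution.

```javascript
// Sketch: extract the idea and its creator from a Slack slash-command call.
const body = $input.first().json.body; // parsed form fields from the Webhook node

if (body.command !== '/idea') {
  return []; // ignore any other command routed to this webhook
}

return [{
  json: {
    idea: body.text,         // goes to the Name column in Notion
    creator: body.user_name, // goes to the Creator column
  },
}];
```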
by Jaruphat J.
⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package and custom command execution. Make sure to install the required dependencies locally.

**Who is this for?**

This template is for developers, operations teams, and automation builders in Thailand (or any Thai-speaking environment) who regularly process PDFs or scanned documents in Thai and want to extract structured text into a Google Sheet.

It is ideal for:
- Local government document processing
- Thai-language enterprise paperwork
- AI automation pipelines requiring Thai OCR

**What problem does this solve?**

Typhoon OCR is one of the most accurate OCR tools for Thai text. However, integrating it into an end-to-end workflow usually requires manual scripting and data wrangling. This template solves that by:
- Running Typhoon OCR on PDF files
- Using AI to extract structured data fields
- Automatically storing results in Google Sheets

**What this workflow does**

1. Trigger: Run manually or from any automation source
2. Read Files: Load local PDF files from a doc/ folder
3. Execute Command: Run Typhoon OCR on each file using a Python command
4. LLM Extraction: Send the OCR markdown to an AI model (e.g., GPT-4 or OpenRouter) to extract fields
5. Code Node: Parse the LLM output as JSON (see the sketch after this entry)
6. Google Sheets: Append structured data into a spreadsheet

**Setup**

1. Install requirements:
   - Python 3.10+
   - typhoon-ocr: pip install typhoon-ocr
   - Poppler, installed and added to the system PATH (needed for pdftoppm, pdfinfo)
2. Create folders: Create a folder called doc in the same directory where n8n runs (or mount it via Docker).
3. Google Sheet: Create a Google Sheet with the following column headers:

| book_id | date | subject | detail | signed_by | signed_by2 | contact | download_url |
| ------- | ---- | ------- | ------ | --------- | ---------- | ------- | ------------ |

   You can use this example Google Sheet as a reference.
4. API keys: Export your TYPHOON_OCR_API_KEY and OPENAI_API_KEY in your environment (or set them inside the command string in the Execute Command node).

**How to customize this workflow**

- Replace the LLM provider in the Basic LLM Chain node (currently supports OpenRouter)
- Change the output fields to match your data structure (adjust the prompt and the Google Sheet headers)
- Add trigger nodes (e.g., Dropbox Upload, Webhook) to automate input

**About Typhoon OCR**

Typhoon is a multilingual LLM and toolkit optimized for Thai NLP. It includes typhoon-ocr, a Python OCR library designed for Thai-centric documents. It is open source, highly accurate, and works well in automation pipelines. Perfect for government paperwork, PDF reports, and multilingual documents in Southeast Asia.
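Here is a sketch of what the Code node in step 5 typically does: strip any markdown fences the model wraps around its answer, then parse the JSON. The input property name (text) is an assumption about the LLM node's output.

````javascript
// Sketch: parse the LLM's answer into the fields appended to the sheet.
const raw = $input.first().json.text ?? '';
const cleaned = raw.replace(/`{3}(?:json)?/g, '').trim(); // drop ```json fences

let fields;
try {
  fields = JSON.parse(cleaned);
} catch (err) {
  throw new Error(`LLM did not return valid JSON: ${err.message}`);
}

return [{ json: fields }]; // keys should match the Google Sheet headers above
````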
by Yang
**What this workflow does**

This workflow automatically turns new technical video uploads into short, engaging Facebook post drafts - complete with a suggested image - and saves the results to Google Sheets for quick review or publishing. It's designed to help you repurpose tutorial or demo videos into ready-to-use social content without any manual writing or design effort.

**What problem is this workflow solving?**

Manually writing Facebook posts for every new tutorial or product video takes time, especially when you want them to be engaging and consistent. This workflow solves that by using AI to watch for new videos, extract meaningful insights, and write posts and create visuals automatically - saving hours of work.

**Who is this for?**

This workflow is ideal for:
- Content creators uploading tutorial videos
- Marketing teams working with how-to or product videos
- Agencies and automation pros building scalable social workflows for clients

**How it works**

1. Trigger: Starts when a new video is uploaded to a specific Google Drive folder.
2. Download & Convert: Downloads the video and converts it to base64.
3. Extract Insights: Dumpling AI analyzes the video and extracts structured insights such as topic, tools mentioned, and key steps.
4. Generate Post: GPT-4o creates a short, friendly Facebook post using those insights, along with an image prompt.
5. Create Visual: Dumpling AI generates an image using the prompt.
6. Save to Sheet: The Facebook post and image URL are saved to a Google Sheet.

**Setup**

1. Create a Google Sheet to store the posts and images.
2. Connect your Google Drive, Google Sheets, Dumpling AI, and OpenAI credentials in n8n.
3. Update the workflow with:
   - Your Google Drive folder ID
   - Your target Google Sheet ID
4. (Optional) Edit the prompt used in the GPT node if you want a different tone, style, or structure for the post.

**How to customize the workflow**

- **Change the platform:** Replace "Facebook" in the prompt with LinkedIn, Instagram, or another platform.
- **Use a different image tool:** You can swap Dumpling AI for any other image generation API (e.g. DALL·E, or Midjourney via webhook).
- **Add auto-publishing:** Add a Facebook or social media module to publish the generated post directly instead of just saving to Google Sheets.
- **Tag videos by content type:** Use AI to classify videos into categories and store them in separate tabs or sheets.
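For step 2 of "How it works", here is a sketch of the conversion in an n8n Code node. It assumes the video arrives in the default binary property data; adjust the property name if your download node uses a different one.

```javascript
// Sketch: read the downloaded video's binary data and emit it as base64
// so a later HTTP node can send it in a JSON body to the analysis API.
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data'); // 'data' is n8n's default binary property

return [{ json: { videoBase64: buffer.toString('base64') } }];
```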