by Rahul Joshi
Description
Automatically compare candidate resumes to job descriptions (PDFs) from Google Drive, generate a 0–100 fit score with gap analysis, and update Google Sheets—powered by Azure OpenAI (GPT-4o-mini). Fast, consistent screening with saved reports in Drive. 📈📄

What This Template Does
- Fetches job descriptions and resumes (PDF) from Google Drive. 📥
- Extracts clean text from both PDFs for analysis. 🧼
- Generates an AI evaluation (score, must-have gaps, nice-to-have bonuses, summary). 🤝
- Parses the AI output to structured JSON (see the sketch at the end of this description). 🧩
- Delivers a saved text report in Drive and updates a Google Sheet. 🗂️

Key Benefits
- Saves time with automated, consistent scoring. ⏱️
- Clear gap analysis for quick decisions. 🔍
- Audit-ready reports stored in Drive. 🧾
- Centralized tracking in Google Sheets. 📊
- No-code operation after initial setup. 🧑‍💻

Features
- Google Drive search and download for JDs and resumes. 📂
- PDF-to-text extraction for reliable parsing. 📝
- Azure OpenAI (GPT-4o-mini) comparison and scoring. 🤖
- Robust JSON parsing and error handling. 🛡️
- Automatic report creation in Drive. 💾
- Append or update candidate data in Google Sheets. 📑

Requirements
- n8n instance (cloud or self-hosted).
- Google Drive credentials in n8n with access to JD and resume folders (e.g., “JD store”, “Resume_store”).
- Azure OpenAI access with a deployed GPT-4o-mini model and credentials in n8n.
- Google Sheets credentials in n8n to append or update candidate rows.
- PDFs for job descriptions and resumes stored in the designated Drive folders.

Target Audience
- Talent acquisition and HR operations teams. 🧠
- Recruiters (in-house and agencies). 🧑‍💼
- Hiring managers seeking consistent shortlisting. 🧭
- Ops teams standardizing candidate evaluation records. 🗃️

Step-by-Step Setup Instructions
1. Connect Google Drive and Google Sheets credentials in n8n and verify folder access. 🔑
2. Add Azure OpenAI credentials and select GPT-4o-mini in the AI node. 🧠
3. Import the workflow and assign credentials to all nodes (Drive, AI, Sheets). 📦
4. Set folder references for JDs (“JD store”) and resumes (“Resume_store”). 📁
5. Run once to validate extraction, scoring, report creation, and sheet updates. ✅
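As a rough illustration of the JSON-parsing step, here is a minimal sketch of how the AI evaluation might be validated before it is written to the report and the sheet. The field names (score, mustHaveGaps, niceToHaveBonuses, summary) are assumptions for illustration, not the template's exact schema.

```typescript
// Hypothetical shape of the AI evaluation (illustrative field names, not the template's schema).
interface FitEvaluation {
  score: number;               // 0–100 fit score
  mustHaveGaps: string[];      // required skills missing from the resume
  niceToHaveBonuses: string[]; // optional skills the candidate does have
  summary: string;             // short narrative for the saved report
}

// Parse the model's reply defensively: strip stray code-fence characters, then validate the score.
function parseEvaluation(raw: string): FitEvaluation {
  const cleaned = raw.replace(/`/g, "").trim().replace(/^json\s*/i, "");
  const data = JSON.parse(cleaned) as FitEvaluation;
  if (typeof data.score !== "number" || data.score < 0 || data.score > 100) {
    throw new Error(`Unexpected score value: ${data.score}`);
  }
  return data;
}
```

In the workflow itself, a failed parse would typically route to the error-handling branch rather than throwing, so one malformed response does not stop the whole screening run.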
by Avkash Kakdiya
How it works
This workflow captures idea submissions from a webhook and enriches them using AI. It extracts key fields like Title, Tags, Submitted By, and Created date in IST format. The cleaned data is stored in a Notion database for centralized tracking. Finally, a confirmation message is posted in Slack to notify the team.

Step-by-step
1. Capture and process submission
- **Webhook** – Receives idea submissions with text and user ID.
- **AI Agent & OpenAI Model** – Enrich and structure the input into Title, Tags, Submitted By, and Created fields.
- **Code** – Extracts clean data, formats tags, and prepares the entry for Notion (see the sketch below).
2. Store in Notion
- **Add to Notion** – Creates a new database entry with mapped fields: Title, Submitted By, Tags, Created.
3. Notify in Slack
- **Send Confirmation (Slack)** – Posts a confirmation message with the submitted idea title.

Why use this?
- Centralizes idea collection directly into Notion for better organization.
- Eliminates manual formatting with AI-powered data structuring.
- Ensures consistency in tags, submitter info, and timestamps.
- Provides instant team-wide visibility via Slack notifications.
- Saves time while keeping idea management streamlined and transparent.
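To illustrate what the Code step does, here is a minimal sketch of the cleanup logic written as a plain function rather than the template's actual node code; the input field names (title, tags, submittedBy) are assumptions.

```typescript
// Illustrative cleanup step before the Notion write. Field names are assumptions.
interface IdeaEntry {
  title: string;
  tags: string[];
  submittedBy: string;
  created: string; // timestamp rendered in IST
}

function prepareForNotion(aiOutput: { title: string; tags: string; submittedBy: string }): IdeaEntry {
  return {
    title: aiOutput.title.trim(),
    // Normalize comma-separated tags into a clean, deduplicated array.
    tags: [...new Set(aiOutput.tags.split(",").map((t) => t.trim().toLowerCase()).filter(Boolean))],
    submittedBy: aiOutput.submittedBy.trim(),
    // Render the current time in IST (Asia/Kolkata) for the Created field.
    created: new Date().toLocaleString("en-IN", { timeZone: "Asia/Kolkata" }) + " IST",
  };
}
```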
by Yusuke Yamamoto
This n8n template demonstrates a multi-modal AI recipe assistant that suggests delicious recipes based on user input, delivered via Telegram. The workflow can uniquely handle two types of input: a photo of your ingredients or a simple text list. Use cases are many: Get instant dinner ideas by taking a photo of your fridge contents, reduce food waste by finding recipes for leftover ingredients, or create a fun and interactive service for a cooking community or food delivery app!

Good to know
This workflow uses two different AI models (one for vision, one for text generation), so costs will be incurred for each execution. See OpenRouter Pricing or your chosen model provider's pricing page for updated info. The AI prompts are in English, but the final recipe output is configured to be in Japanese. You can easily change the language by editing the prompt in the Recipe Generator node.

How it works
1. The workflow starts when a user sends a message or an image to your bot on Telegram via the Telegram Trigger.
2. An IF node intelligently checks if the input is text or an image.
3. If an image is sent, the AI Vision Agent analyzes it to identify ingredients. A Structured Output Parser then forces this data into a clean JSON list.
4. If text is sent, a Set node directly prepares the user's text as the ingredient list.
5. Both paths converge, providing a standardized ingredient list to the Recipe Generator agent. This AI acts as a professional chef to create three detailed recipes.
6. Crucially, a second Structured Output Parser takes the AI's creative text and formats it into a reliable JSON structure (with name, difficulty, instructions, etc.). This ensures the output is always predictable and easy to work with.
7. A final Set node uses a JavaScript expression to transform the structured recipe data into a beautiful, emoji-rich, and easy-to-read message (see the sketch below).
8. The formatted recipe suggestions are sent back to the user on Telegram.

How to use
- Configure the Telegram Trigger with your own bot's API credentials.
- Add your AI provider credentials in the OpenAI Vision Model and OpenAI Recipe Model nodes (this template uses OpenRouter, but it can be swapped for a direct OpenAI connection).

Requirements
- A Telegram account and a bot token.
- An AI provider account that supports vision and text models, such as OpenRouter or OpenAI.

Customising this workflow
- Modify the prompt in the Recipe Generator to include dietary restrictions (e.g., "vegan," "gluten-free") or to change the number of recipes suggested.
- Swap the Telegram nodes for Discord, Slack, or a Webhook to integrate this recipe bot into a different platform or your own application.
- Connect to a recipe database API to supplement the AI's suggestions with existing recipes.
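As an illustration of steps 6 and 7 (structured output plus message formatting), here is a minimal sketch under assumed field names; the template's actual parser schema and Set-node expression may differ.

```typescript
// Assumed recipe schema produced by the second Structured Output Parser (illustrative names).
interface Recipe {
  name: string;
  difficulty: string;     // e.g. "easy" | "medium" | "hard"
  ingredients: string[];
  instructions: string[];
}

// Turn the structured recipes into a readable, emoji-rich Telegram message.
function formatRecipes(recipes: Recipe[]): string {
  return recipes
    .map((r, i) =>
      [
        `🍳 ${i + 1}. ${r.name} (${r.difficulty})`,
        `🧺 Ingredients: ${r.ingredients.join(", ")}`,
        ...r.instructions.map((step, n) => `  ${n + 1}) ${step}`),
      ].join("\n")
    )
    .join("\n\n");
}
```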
by Guillaume Duvernay
Create truly authoritative articles that blend your unique, internal expertise with the latest, most relevant information from the web. This template orchestrates an advanced "hybrid research" content process that delivers unparalleled depth and credibility. Instead of a simple prompt, this workflow first uses an AI planner to deconstruct your topic into key questions. Then, for each question, it performs a dual-source query: it searches your trusted Lookio knowledge base for internal facts and simultaneously uses Linkup to pull fresh insights and sources from the live web. This comprehensive "super-brief" is then handed to a powerful AI writer to compose a high-quality article, complete with citations from both your own documents and external web pages.

👥 Who is this for?
- **Content Marketers & SEO Specialists:** Scale the creation of authoritative content that is both grounded in your brand's facts and enriched with timely, external sources for maximum credibility.
- **Technical Writers & Subject Matter Experts:** Transform complex internal documentation into rich, public-facing articles by supplementing your core knowledge with external context and recent data.
- **Marketing Agencies:** Deliver exceptional, well-researched articles for clients by connecting the workflow to their internal materials (via Lookio) and the broader web (via Linkup) in one automated process.

💡 What problem does this solve?
- **The Best of Both Worlds:** Combines the factual reliability of your own knowledge base with the timeliness and breadth of a web search, resulting in articles with unmatched depth.
- **Minimizes AI "Hallucinations":** Grounds the AI writer in two distinct sets of factual, source-based information—your internal documents and credible web pages—dramatically reducing the risk of invented facts.
- **Maximizes Credibility:** Automates the inclusion of source links from *both* your internal knowledge base and external websites, boosting reader trust and demonstrating thorough research.
- **Ensures Comprehensive Coverage:** The AI-powered "topic breakdown" ensures a logical structure, while the dual-source research for each point guarantees no stone is left unturned.
- **Fully Automates an Expert Workflow:** Mimics the entire process of an expert research team (outline, internal review, external research, consolidation, writing) in a single, scalable workflow.

⚙️ How it works
This workflow orchestrates a sophisticated, multi-step "Plan, Dual-Research, Write" process:
1. Plan (Decomposition): You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner," breaking down the main topic into an array of logical sub-questions.
2. Dual Research (Knowledge Base + Web Search): The workflow loops through each sub-question and performs two research actions in parallel:
   - It queries your Lookio assistant to retrieve relevant information and source links from your uploaded documents.
   - It queries Linkup to perform a targeted web search, gathering up-to-date insights and their source URLs.
3. Consolidate (Brief Creation): All the retrieved information—internal and external—is compiled into a single, comprehensive research brief for each sub-question (a sketch of this step appears below).
4. Write (Final Generation): The complete, source-rich brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article based only on the provided research and integrate all source links as hyperlinks.
🛠️ Setup
1. Set up your Lookio assistant: Sign up at Lookio, upload your documents to create a knowledge base, and create a new assistant. In the Query Lookio Assistant node, paste your Assistant ID in the body and add your Lookio API Key for authentication (we recommend a Bearer Token credential).
2. Connect your Linkup account: In the Query Linkup for AI web-search node, add your Linkup API key for authentication (we recommend a Bearer Token credential). Linkup's free plan is very generous.
3. Connect your AI provider: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes.
4. Activate the workflow: Toggle the workflow to "Active" and use the built-in form to generate your first hybrid-research article!

🚀 Taking it further
- **Automate Publishing:** Connect the final *Article result* node to a *Webflow* or *WordPress* node to automatically create draft posts in your CMS.
- **Generate Content in Bulk:** Replace the *Form Trigger* with an *Airtable* or *Google Sheet* trigger to generate a batch of articles from your content calendar.
- **Customize the Writing Style:** Tweak the system prompt in the final *New content - Generate the AI output* node to match your brand's tone of voice, prioritize internal vs. external sources, or add SEO keywords.
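As a rough sketch of the consolidation step, the function below shows one way the internal (Lookio) and external (Linkup) findings for a sub-question could be merged into a citable brief section. The field names and structure are assumptions, not the template's actual data shape.

```typescript
// Illustrative consolidation step: structure and field names are assumptions about the two research results.
interface SourcedAnswer {
  answer: string;
  sources: { title: string; url: string }[];
}

// Merge the knowledge-base and web findings for one sub-question into a single brief
// section that the writer model can cite from.
function buildBriefSection(question: string, internal: SourcedAnswer, web: SourcedAnswer): string {
  const cite = (s: { title: string; url: string }) => `- ${s.title}: ${s.url}`;
  return [
    `## ${question}`,
    `Internal knowledge base findings:\n${internal.answer}`,
    internal.sources.map(cite).join("\n"),
    `Web research findings:\n${web.answer}`,
    web.sources.map(cite).join("\n"),
  ].join("\n\n");
}
```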
by Robert Breen
This workflow fetches deals and their notes from Pipedrive, cleans up stage IDs into names, aggregates the information, and uses OpenAI to generate a daily summary of your funnel.

⚙️ Setup Instructions
1️⃣ Set Up OpenAI Connection
- Go to OpenAI Platform
- Navigate to OpenAI Billing
- Add funds to your billing account
- Copy your API key into the OpenAI credentials in n8n
2️⃣ Connect Pipedrive
- In Pipedrive → Personal preferences → API → copy your API token. URL shortcut: https://{your-company}.pipedrive.com/settings/personal/api
- In n8n → Credentials → New → Pipedrive API
- Company domain: {your-company} (the subdomain in your Pipedrive URL)
- API Token: paste the token from step 1 → Save
- In the Pipedrive nodes, select your Pipedrive credential and (optionally) set filters (e.g., owner, label, created time).

🧠 How It Works
- **Trigger**: Workflow runs on manual execution (can be scheduled).
- **Get many deals**: Pulls all deals from your Pipedrive.
- **Code node**: Maps stage_id numbers into friendly stage names (Prospecting, Qualified, Proposal Sent, etc.); see the sketch below.
- **Get many notes**: Fetches notes attached to each deal.
- **Combine Notes**: Groups notes by deal, concatenates content, and keeps deal titles.
- **Set Field Names**: Normalizes the fields for summarization.
- **Aggregate for Agent**: Collects data into one object.
- **Turn Objects to Text**: Prepares text data for AI.
- **OpenAI Chat Model + Summarize Agent**: Generates a daily natural-language summary of deals and their current stage.

💬 Example Prompts
- “Summarize today’s deal activity.”
- “Which deals are still in negotiation?”
- “What updates were added to closed-won deals this week?”

📬 Contact
Need help extending this (e.g., send summaries by Slack/Email, or auto-create tasks in Pipedrive)?
📧 rbreen@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
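The Code node's stage mapping amounts to a simple lookup table. Here is a minimal sketch of that idea; the stage IDs and names below are placeholders, since every Pipedrive pipeline uses its own IDs.

```typescript
// Sketch of the stage-mapping step. Replace the IDs and names with those from your own pipeline.
const STAGE_NAMES: Record<number, string> = {
  1: "Prospecting",
  2: "Qualified",
  3: "Proposal Sent",
  4: "Negotiation",
  5: "Closed Won",
};

interface Deal {
  id: number;
  title: string;
  stage_id: number;
}

// Attach a human-readable stage name to each deal, falling back gracefully for unknown IDs.
function withStageName(deal: Deal): Deal & { stage_name: string } {
  return { ...deal, stage_name: STAGE_NAMES[deal.stage_id] ?? `Unknown stage (${deal.stage_id})` };
}
```

In the actual workflow this logic runs inside the n8n Code node, applied to each item returned by the Get many deals node before the notes are merged in.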
by Shinji Watanabe
Who’s it for
Learners, teachers, and content creators who track German vocabulary in Google Sheets and want automatic enrichment with synonyms, example sentences, and basic lexical info—without copy-and-paste.

How it works / What it does
When a new row is added to your sheet (column vocabulary), the workflow looks up the word in OpenThesaurus and checks if any entries are found. If so, an LLM generates a strict JSON object containing: natural_sentence (a clear German example), part_of_speech, translation_ja (concise Japanese gloss), and level (CEFR estimate). The JSON is parsed and written back to the same row, keeping your spreadsheet the single source of truth (see the sketch below). If no entry is found, the workflow writes a helpful “not found” note.

How to set up
1. Connect Google Sheets and select your spreadsheet/tab. Confirm a vocabulary column exists.
2. Configure OpenThesaurus (no API key required).
3. Add your LLM credentials and keep the prompt’s “JSON only” constraint.
4. Rename nodes clearly and add a yellow sticky note with this description.

Requirements
- Access to Google Sheets
- LLM credentials (e.g., OpenAI)
- A tab containing a vocabulary column

How to customize the workflow
- Adjust the If condition (e.g., require terms.length > 1 or fall back to the headword).
- Tweak the LLM prompt for tone, length, or level policy.
- Map extra fields in the Set node; add columns for difficulty tags or usage notes.
- Follow security best practices (no hardcoded secrets in HTTP nodes).
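For clarity, here is a minimal sketch of how the strict JSON output could be parsed and sanity-checked before the row update. The field names follow the description above, but the validation logic is illustrative, not the template's exact code.

```typescript
// Assumed shape of the "JSON only" response the prompt enforces.
interface VocabEnrichment {
  natural_sentence: string; // clear German example sentence
  part_of_speech: string;
  translation_ja: string;   // concise Japanese gloss
  level: string;            // CEFR estimate, e.g. "B1"
}

// Parse and sanity-check the LLM output before writing it back to the sheet row.
function parseEnrichment(raw: string): VocabEnrichment {
  const cleaned = raw.replace(/`/g, "").trim().replace(/^json\s*/i, "");
  const data = JSON.parse(cleaned) as VocabEnrichment;
  const cefr = ["A1", "A2", "B1", "B2", "C1", "C2"];
  if (!cefr.includes(data.level)) {
    data.level = "unknown"; // keep the row update going instead of failing the run
  }
  return data;
}
```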
by Rahul Joshi
Description
Automate your financial reporting by pulling charge and refund data from Stripe, calculating key revenue and risk metrics, and delivering professional reports directly into Slack. This workflow runs on a monthly or quarterly schedule, processes Stripe data into insights, and formats a rich Slack message with revenue breakdowns, top customers, refund analysis, and payment method insights. 📊💰💬

What This Template Does
- Runs automatically on a monthly (1st day) or quarterly schedule (every 3 months) at 9 AM. ⏱️
- Fetches Stripe charges and refunds for the reporting period. 💳
- Merges charge and refund data for a unified dataset. 🔄
- Calculates financial metrics: total revenue, net revenue, average transaction value, refund rate (see the sketch at the end of this description). 📈
- Estimates growth metrics: Monthly Recurring Revenue (MRR) and Annual Recurring Revenue (ARR). 🚀
- Identifies top 3 customers by revenue. 🏆
- Breaks down payment methods used (e.g., Visa, Mastercard, etc.). 💳
- Performs risk analysis on transactions by Stripe’s risk scores. ⚠️
- Analyzes refund reasons and generates insights. 🔄
- Formats all results into a clear, structured Slack message with sections for finance, growth, risk, and customers. 💬

Key Benefits
- Eliminates manual Stripe report exports. ⚡
- Ensures timely financial reporting (monthly or quarterly). 📅
- Provides instant visibility of revenue, refunds, and risks in Slack. 📲
- Surfaces top customers and payment methods for strategic insights. 🏅
- Helps finance and ops teams catch anomalies early (high refunds or risky transactions). 🛡️
- Keeps leadership and teams aligned with automated reporting. 👩‍💻👨‍💻

Features
- Schedule Triggers – Automates reporting on monthly or quarterly cycles.
- Stripe Charges & Refunds – Pulls transaction and refund data directly from Stripe API.
- Merge Node – Combines charges and refunds into a single dataset.
- Custom Code Metrics – Calculates revenue, net revenue, refund rates, and growth metrics.
- Top Customer Analysis – Highlights top revenue-generating customers.
- Payment Breakdown – Shows revenue split by card brand/payment method.
- Refund Analysis – Summarizes refund reasons and rates.
- Risk Analysis – Categorizes payments by low, medium, or high risk scores.
- Slack Integration – Delivers insights in a professional report format.

Requirements
- n8n instance (cloud or self-hosted).
- Stripe API credentials with read access to charges and refunds.
- Slack Bot token with chat:write permission.

Target Audience
- Finance teams needing automated recurring Stripe reports. 💼
- SaaS companies monitoring MRR, ARR, and refunds. 🚀
- Founders/Execs who want financial dashboards in Slack. 👩‍💼
- Operations teams tracking risk and refund trends. 🛠️
- Remote teams relying on Slack for reporting. 🌍

Step-by-Step Setup Instructions
1. Connect your Stripe API credentials in n8n. 🔑
2. Connect your Slack API credentials and select your target channel. 💬
3. Adjust the schedule triggers (monthly/quarterly) if needed. ⏱️
4. Customize the Slack message formatting if you want branding or tone changes. 🎨
5. Test the workflow with sample data to confirm financial metrics. ✅
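To make the metric definitions concrete, here is a simplified sketch of the kind of math the Custom Code Metrics step performs. Amounts are assumed to be in cents as Stripe returns them, and the MRR/ARR figures here are rough period-based estimates; the template's own formulas may differ.

```typescript
// Simplified metric math over the merged Stripe dataset (amounts in cents).
interface Charge { amount: number; paid: boolean; }
interface Refund { amount: number; }

function computeMetrics(charges: Charge[], refunds: Refund[], monthsInPeriod: number) {
  const paid = charges.filter((c) => c.paid);
  const totalRevenue = paid.reduce((sum, c) => sum + c.amount, 0) / 100;
  const totalRefunds = refunds.reduce((sum, r) => sum + r.amount, 0) / 100;
  const netRevenue = totalRevenue - totalRefunds;
  return {
    totalRevenue,
    netRevenue,
    averageTransaction: paid.length ? totalRevenue / paid.length : 0,
    refundRate: totalRevenue ? (totalRefunds / totalRevenue) * 100 : 0, // percent
    estimatedMRR: netRevenue / monthsInPeriod,                          // 1 for monthly, 3 for quarterly runs
    estimatedARR: (netRevenue / monthsInPeriod) * 12,
  };
}
```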
by jellyfish
Overview
This is a powerful n8n "meta-workflow" that acts as a Supervisor. Through a simple Telegram bot, you can dynamically create, manage, and delete countless independent, AI-driven market monitoring agents (Watchdogs). This template is a perfect implementation of the "Workflowception" (workflow managing workflows) concept in n8n, showcasing how to achieve ultimate automation by leveraging the n8n API.

How It Works
Telegram Bot Interface: Execute all operations by sending commands to your own Telegram bot, which controls the Watchdog workflows created below:
- /add SYMBOL INTERVAL PROMPT: Add a new monitoring task.
- /delete SYMBOL: Delete an existing monitoring task.
- /list: List all currently running monitoring tasks.
- /help: Get help information.

Dynamic Workflow Management: Upon receiving an /add command, the Supervisor system reads a "Watchdog" template, fills in your provided parameters (like trading pair and time interval), and then automatically creates a brand new, independent workflow via the n8n API and activates it (see the sketch below).

Persistent Storage: All monitoring tasks are stored in a PostgreSQL database, ensuring your configurations are safe even if n8n restarts. The ID of each newly created workflow is also written back to the database to facilitate future deletion operations.

AI-Powered Analysis: Each created "Watchdog" workflow runs on schedule. It fetches the latest candlestick chart by calling a self-hosted tradingview-snapshot service. This service, available at https://github.com/0xcathiefish/tradingview-snapshot, works by simulating a login to your account and then using TradingView's official snapshot feature to generate an unrestricted, high-quality chart image. An example of a generated snapshot can be seen here: https://s3.tradingview.com/snapshots/u/uvxylM1Z.png. To use this, download the Docker image from the packages in the GitHub repository mentioned above and run it as a container. The n8n workflow then communicates directly with this container via an HTTP API to request and receive the chart snapshot. After obtaining the image, the workflow calls a multimodal AI model (Gemini). It sends both the chart image and your custom text-based conditions (e.g., "breakout above previous high on high volume" or "break below 4-hour MA20") to the AI for analysis, enabling truly intelligent chart interpretation and alert triggering.

Key Features
- Workflowception: A prime example of one workflow using an API to create, activate, and delete other workflows.
- Full Control via Telegram: Manage your monitoring bots from anywhere, anytime, without needing to log into the n8n interface.
- AI Visual Analysis: Move beyond simple price alerts. Let an AI "read" the charts for you to enable complex, pattern-based, and indicator-based intelligent alerts.
- Persistent & Extensible: Built on PostgreSQL for stability and reliability. You can easily add more custom commands.
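For orientation, here is a minimal sketch of what the /add path boils down to: creating a workflow from the filled-in Watchdog template and activating it through the n8n public API. The helper below is illustrative, not the template's actual node code.

```typescript
// Sketch of the /add command's core action, using the n8n public REST API.
async function createAndActivateWatchdog(
  baseUrl: string,      // e.g. "https://n8n.example.com"
  apiKey: string,
  workflowJson: object, // Watchdog template with SYMBOL / INTERVAL / PROMPT substituted in
): Promise<string> {
  const headers = { "X-N8N-API-KEY": apiKey, "Content-Type": "application/json" };

  // 1. Create the new workflow from the template.
  const createRes = await fetch(`${baseUrl}/api/v1/workflows`, {
    method: "POST",
    headers,
    body: JSON.stringify(workflowJson),
  });
  const { id } = await createRes.json();

  // 2. Activate it so its schedule trigger starts running.
  await fetch(`${baseUrl}/api/v1/workflows/${id}/activate`, { method: "POST", headers });

  return id; // the Supervisor stores this ID in PostgreSQL so /delete can remove the workflow later
}
```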
by Dataki
BigQuery RAG with OpenAI Embeddings
This workflow demonstrates how to use Retrieval-Augmented Generation (RAG) with BigQuery and OpenAI. By default, you cannot directly use OpenAI Cloud Models within BigQuery.

Try it
This template comes with access to a **public BigQuery table** that stores part of the n8n documentation (about nodes and triggers), allowing you to try the workflow right away: n8n-docs-rag.n8n_docs.n8n_docs_embeddings

⚠️ **Important:** BigQuery uses the **requester pays** model. The table is small (~40 MB), and BigQuery provides **1 TB of free processing per month**. Running 3–4 queries for testing should remain within the free tier, unless your project has already consumed its quota. More info here: BigQuery Pricing

Why this workflow?
Many organizations already use BigQuery to store enterprise data, and OpenAI for LLM use cases. When it comes to RAG, the common approach is to rely on dedicated vector databases such as Qdrant, Pinecone, Weaviate, or PostgreSQL with pgvector. Those are good choices, but in cases where an organization already uses and is familiar with BigQuery, it can be more efficient to leverage its built-in vector capabilities for RAG. Then comes the question of the LLM. If OpenAI is the chosen provider, teams are often frustrated that it is not directly compatible with BigQuery. This workflow solves that limitation.

Prerequisites
To use this workflow, you will need:
- A good understanding of BigQuery and its vector capabilities
- A BigQuery table containing documents and an embeddings column
- The embeddings column must be of type FLOAT and mode REPEATED (to store arrays)
- A data pipeline that generates embeddings with the OpenAI API and stores them in BigQuery
This template comes with a public table that stores part of the n8n documentation (about nodes and triggers), so you can try it out: n8n-docs-rag.n8n_docs.n8n_docs_embeddings

How it works
The system consists of two workflows:
- **Main workflow** → Hosts the AI Agent, which connects to a subworkflow for RAG
- **Subworkflow** → Queries the BigQuery vector table (see the sketch below). The retrieved documents are then used by the AI Agent to generate an answer for the user.
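As a sketch of what the subworkflow's retrieval query could look like, the snippet below builds a cosine-distance query against the embeddings column. The column names (content, embeddings) are assumptions; adjust them to the schema of the table you point the workflow at.

```typescript
// Builds an illustrative retrieval SQL statement for the BigQuery subworkflow.
function buildVectorSearchSql(table: string, topK = 5): string {
  return `
    SELECT content,
           ML.DISTANCE(embeddings, @queryEmbedding, 'COSINE') AS distance
    FROM \`${table}\`
    ORDER BY distance ASC
    LIMIT ${topK}
  `;
}

// @queryEmbedding is the OpenAI embedding of the user's question, passed to BigQuery
// as an ARRAY<FLOAT64> query parameter when the statement is executed.
```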
by Julien DEL RIO
Who's it for
This template is designed for content creators, podcasters, businesses, and researchers who need to transcribe long audio recordings that exceed OpenAI Whisper's 25 MB file size limit (~20 minutes of audio).

How it works
This workflow combines n8n, FileFlows, and OpenAI Whisper API to transcribe audio files of any length:
1. User uploads an MP3 file through a web form and provides an email address
2. n8n splits the file into 4 MiB chunks and uploads them to FileFlows (see the sketch below)
3. FileFlows uses FFmpeg to segment the audio into 15-minute chunks (safely under the 25 MB API limit)
4. Each segment is transcribed using OpenAI's Whisper API (configured for French by default)
5. All transcriptions are merged into a single text file
6. The complete transcription is automatically emailed to the user
Processing time: Typically 10-15 minutes for a 1-hour audio file.

Requirements
- n8n instance (self-hosted or cloud)
- FileFlows with Docker and FFmpeg installed
- OpenAI API key (Whisper API access)
- Gmail account for email delivery
- Network access between n8n and FileFlows

Setup
Complete setup instructions, including FileFlows workflow import, credentials configuration, and storage setup, are provided in the workflow's sticky notes.

Cost
OpenAI Whisper API: $0.006 per minute. A 1-hour recording costs approximately $0.36.
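As an illustration of the upload step, here is a minimal sketch of splitting a file buffer into 4 MiB chunks before sending them to FileFlows; the actual chunked-upload mechanics in the template may differ.

```typescript
const CHUNK_SIZE = 4 * 1024 * 1024; // 4 MiB

// Split the uploaded MP3's buffer into fixed-size pieces for transfer to FileFlows.
function splitIntoChunks(file: Buffer): Buffer[] {
  const chunks: Buffer[] = [];
  for (let offset = 0; offset < file.length; offset += CHUNK_SIZE) {
    chunks.push(file.subarray(offset, Math.min(offset + CHUNK_SIZE, file.length)));
  }
  return chunks;
}
```

Note that this chunking only concerns the network transfer; the audio itself is later re-segmented by FFmpeg into 15-minute parts so each piece stays under Whisper's 25 MB limit.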
by Rohit Dabra
WooCommerce AI Agent — n8n Workflow (Overview)

Description: Turn your WooCommerce store into a conversational AI assistant — create products, place orders, run reports and manage coupons using natural language via n8n + an MCP Server.

Key features
- Natural-language commands mapped to WooCommerce actions (products, orders, reports, coupons).
- Structured JSON outputs + lightweight mapping to avoid schema errors (see the sketch below).
- Calls routed through your MCP Server for secure, auditable tool execution.
- Minimal user prompts — agent auto-fetches context and asks only when necessary.
- Extensible: add new tools or customize prompts/mappings easily.

Demo of the workflow: Youtube Video

🚀 Setup Guide: WooCommerce + AI Agent Workflow in n8n
1. Prerequisites
- Running n8n instance
- WooCommerce store with REST API keys
- OpenAI API key
- MCP server (production URL)
2. Import Workflow
- Open n8n dashboard
- Go to Workflows → Import
- Upload/paste the workflow JSON
- Save as WooCommerce AI Agent
3. Configure Credentials
- OpenAI: Create new credential → OpenAI API, add your API key → Save & test
- WooCommerce: Create new credential → WooCommerce API, enter Base URL, Consumer Key & Secret → Save & test
- MCP Client: In the MCP Client node, set Server URL to your MCP server production URL and add authentication if required
4. Test Workflow
- Open workflow in editor
- Run a sample request (e.g., create a test product)
- Verify product appears in WooCommerce
5. Activate Workflow
- Once tested, click Activate in n8n
- Workflow is now live 🎉
6. Troubleshooting
- **Schema errors** → Ensure fields match WooCommerce node requirements
- **Connection issues** → Re-check credentials and MCP URL
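To show what the "lightweight mapping" idea looks like in practice, here is a small sketch that maps a hypothetical agent output onto the product fields the WooCommerce REST API expects. The agent-side field names (productName, price, summary) are assumptions for illustration.

```typescript
// Map the agent's structured output onto a WooCommerce product-creation payload.
interface AgentProduct {
  productName: string;
  price: number;
  summary: string;
}

function toWooCommerceProduct(p: AgentProduct) {
  return {
    name: p.productName,
    type: "simple",
    regular_price: p.price.toFixed(2), // WooCommerce expects prices as strings
    description: p.summary,
  };
}
```

Keeping this mapping explicit is what prevents the schema errors mentioned in the troubleshooting section: the agent can phrase things however it likes, while the payload sent to WooCommerce always has the expected keys and types.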
by Connor Provines
Analyze email performance and optimize campaigns with AI using SendGrid and Airtable

This n8n template creates an automated feedback loop that pulls email metrics from SendGrid weekly, tracks performance in Airtable, analyzes trends across the last 4 weeks, and generates specific recommendations for your next campaign. The system learns what works and provides data-driven insights directly to your email creation process.

Who's it for
Email marketers and growth teams who want to continuously improve campaign performance without manual analysis. Perfect for businesses running regular email campaigns who need actionable insights based on real data rather than guesswork.

Good to know
- After 4-6 weeks, expect 15-30% improvement in primary metrics
- Requires at least 2 weeks of historical data to generate meaningful analysis
- System improves over time as it learns from your audience
- Implementation time: ~1 hour total

How it works
1. Schedule trigger runs weekly (typically Monday mornings)
2. Pulls previous week's email statistics from SendGrid (delivered, opens, clicks, rates)
3. Updates the previous week's record in Airtable with actual performance data (see the sketch below)
4. GPT-4 analyzes trends across the last 4 weeks, identifying patterns and opportunities
5. Creates a new Airtable record for the upcoming week with specific recommendations: what to test, how to change it, expected outcome, and confidence level
6. Your email creation workflow pulls these recommendations when generating new campaigns
7. After sending, the actual email content is saved back to Airtable to close the loop

How to set up
1. Create Airtable base: Make a table called "Email Campaign Performance" with fields for week_ending, delivered, unique_opens, unique_clicks, open_rate, ctr, decision, test_variable, test_hypothesis, confidence_level, test_directive, implementation_instruction, subject_line_used, email_body, icp, use_case, baseline_performance, success_metric, target_improvement
2. Configure SendGrid: Add API key to the "SendGrid Data Pull" node and test connection
3. Set up Airtable credentials: Add Personal Access Token and select your base/table in all Airtable nodes
4. Add OpenAI credentials: Configure GPT-4 API key in the "Previous Week Analysis" node
5. Test with sample data: Manually add 2-3 weeks of data to Airtable, or run if you have historical data
6. Schedule weekly runs: Set workflow to trigger every Monday at 9 AM (or after your weekly campaign sends)
7. Integrate with email creation: Add an Airtable search node to your email workflow to retrieve current recommendations, and an update node to save what was sent

Requirements
- SendGrid account with API access (or similar ESP with statistics API)
- Airtable account with Personal Access Token
- OpenAI API access (GPT-4)

Customizing this workflow
- **Use a different email platform**: Replace the SendGrid node with Mailchimp, Brevo, or any ESP that provides a statistics API—adjust field mappings accordingly
- **Add more metrics**: Extend Airtable fields to track bounce rate, unsubscribe rate, spam complaints, or revenue attribution
- **Change analysis frequency**: Adjust the schedule trigger for bi-weekly or monthly analysis instead of weekly
- **Swap AI models**: Replace GPT-4 with Claude or Gemini in the analysis node
- **Multi-campaign tracking**: Duplicate the workflow for different campaign types (newsletters, promotions, onboarding) with separate Airtable tables
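As a concrete example of the rate fields written back to Airtable each week, here is a simplified sketch of the calculation. The SendGrid stat names are simplified, while the output keys follow the table columns listed above.

```typescript
// Weekly rate calculation for the Airtable record update.
interface WeeklyStats {
  delivered: number;
  unique_opens: number;
  unique_clicks: number;
}

function toAirtableRecord(weekEnding: string, s: WeeklyStats) {
  return {
    week_ending: weekEnding,
    delivered: s.delivered,
    unique_opens: s.unique_opens,
    unique_clicks: s.unique_clicks,
    open_rate: s.delivered ? +((100 * s.unique_opens) / s.delivered).toFixed(2) : 0, // percent
    ctr: s.delivered ? +((100 * s.unique_clicks) / s.delivered).toFixed(2) : 0,      // percent
  };
}
```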