by Stephan Koning
Recruiter Mirror is a proof‑of‑concept ATS analysis tool for SDRs/BDRs. It compares your LinkedIn profile or CV against job descriptions and returns recruiter‑ready insights: it highlights strengths, flags missing keywords, and generates actionable optimization tips. Designed as a practical proof of concept for breaking into tech sales, it shows how automation and AI prompts can turn a LinkedIn profile into a recruiter‑ready magnet.

The workflow runs: Webhook → LinkedIn CV/JD fetch → GhostGenius API → n8n parsing/transform → Groq LLM → Output to Webhook.

🔧 Tools & APIs Required

1. n8n (Automation Platform)
Either n8n Cloud or a self‑hosted n8n instance. Used to orchestrate the workflow, manage nodes, and handle credentials securely.

2. Webhook Node (Form Intake)
Captures the LinkedIn profile (LinkedIn_CV) and job posting (LinkedIn_JD) links submitted by the user. Acts as the starting point for the workflow.

3. GhostGenius API
Endpoints used:
/v2/profile → Scrapes and returns structured CV/LinkedIn data.
/v2/job → Scrapes and returns structured job description data.
**Auth**: Requires valid credentials (e.g., API key / header auth).

4. Groq LLM API (via n8n node)
**Model used**: moonshotai/kimi-k2-instruct (via the Groq Chat Model node).
**Purpose**: Runs the ATS Recruiter Check, comparing the CV JSON against the JD JSON, then outputs structured JSON per the ATS schema.
**Auth**: Groq account + saved API credentials in n8n.

5. Code Node (JavaScript Transformation)
Parses Groq's JSON output safely (JSON.parse). Generates clean, recruiter‑ready HTML summaries with structured sections: Status, Reasoning, Recommendation, Matched keywords / Missing keywords, Optimization tips.

6. n8n Native Nodes
**Set & Aggregate nodes** → Rebuild structured CV & JD objects.
**Merge node** → Combines CV data with the job description for comparison.
**If node** → Validates the LinkedIn URL before processing (falls back to error messaging).
**Respond to Webhook node** → Sends back the final recruiter‑ready insights as JSON (or HTML).

⚠️ Important Notes
**Credentials**: Store API keys and auth headers securely in the n8n Credentials Manager (never hardcode them inside nodes).
**Proof of concept**: This workflow demonstrates feasibility but is **not production‑ready** (consider scraping stability, LinkedIn's terms of use, and API limits before any real deployment).
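A minimal sketch of what the Code node's safe-parse-and-format step might look like. The field names (status, reasoning, matched_keywords, etc.) are assumptions based on the sections listed above; adjust them to match your actual ATS schema.

```javascript
// n8n Code node sketch: safely parse the Groq output and build an HTML summary.
// Field names below are assumptions; align them with your ATS schema.
function buildSummary(rawLlmOutput) {
  let result;
  try {
    result = JSON.parse(rawLlmOutput);
  } catch (e) {
    // Fall back to an error payload instead of crashing the workflow run.
    return { html: '<p>Could not parse LLM output.</p>', error: e.message };
  }
  const list = (items) => (items || []).map((i) => `<li>${i}</li>`).join('');
  const html = [
    `<h2>Status: ${result.status}</h2>`,
    `<p><strong>Reasoning:</strong> ${result.reasoning}</p>`,
    `<p><strong>Recommendation:</strong> ${result.recommendation}</p>`,
    `<h3>Matched keywords</h3><ul>${list(result.matched_keywords)}</ul>`,
    `<h3>Missing keywords</h3><ul>${list(result.missing_keywords)}</ul>`,
    `<h3>Optimization tips</h3><ul>${list(result.optimization_tips)}</ul>`,
  ].join('\n');
  return { html };
}
```

The try/catch matters because LLM output occasionally arrives malformed; returning an error payload lets the Respond to Webhook node surface a useful message instead of failing the execution.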
by Dahiana
Description

**Who's it for:** Content creators, marketers, and businesses who publish on both YouTube and blog platforms.

**What it does:** Monitors your YouTube channel for new videos and automatically creates SEO-optimized blog posts using AI, then publishes to WordPress or Webflow.

**How it works:**
RSS Feed Trigger polls for new YouTube videos at a configurable interval
Extracts video metadata (title, description, thumbnail)
YouTube node extracts the full description for extra context
Uses OpenAI (you can choose any model) to generate a 600-800 word blog post
Publishes to WordPress and/or Webflow with error handling
Sends notifications to Telegram if publishing fails

**Requirements:**
YouTube channel ID (avoid tutorial channels for better results)
OpenAI API key (or similar)
WordPress or Webflow credentials
Telegram bot (optional, for error notifications)

**Setup steps:**
Replace YOUR_CHANNEL_ID in the RSS Feed Trigger
Add OpenAI credentials in the AI generation node
Configure WordPress and/or Webflow credentials
Add a Telegram bot for error notifications (optional). If you set up Telegram, you need to enter your channel ID.
Test with a manual execution first

**Customization:**
Modify the AI prompt for different content styles
Adjust the polling frequency (30-60 minutes recommended)
Add more CMS platforms
Add content verification (e.g., is the content longer than 600 characters? If not, regenerate or improve it)
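The content-verification customization could be sketched as a small Code node placed before the publish step. The field name `content` is an assumption; match it to whatever your AI generation node actually outputs.

```javascript
// n8n Code node sketch for the suggested content-verification step.
// `content` is an assumed field name; adjust to your AI node's output.
const MIN_LENGTH = 600; // minimum characters before publishing

function verifyContent(item) {
  const content = (item.content || '').trim();
  return {
    ...item,
    contentOk: content.length >= MIN_LENGTH, // route on this in an If node
    contentLength: content.length,
  };
}
```

An If node downstream can then route `contentOk = false` items back to the AI node for regeneration instead of publishing thin content.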
by takuma
Who's it for

This template is for home cooks, small restaurant owners, or anyone who wants to streamline their meal planning, ingredient cost tracking, leftover management, nutritional analysis, and social media promotion. It's ideal for those looking to optimize their kitchen operations, reduce food waste, maintain a healthy diet, and efficiently share their culinary creations.

How it works / What it does

This advanced workflow acts as a comprehensive culinary assistant. Triggered by a new menu item, it performs several key functions:

**Cost and Ingredient Tracking:** A "Menu Agent" uses AI to analyze your input (e.g., a recipe or dish) and extract a detailed list of ingredients, their associated costs, unit prices, and total cost, then logs this into a Google Sheet as a "Recipe List."
**Leftover Management:** A "Leftovers Agent" identifies any unused ingredients from your planned dish and suggests three new recipes to utilize them, helping to minimize food waste. This information is also recorded in a Google Sheet.
**Nutritional Diary:** A "Nutritionist Agent" generates a diary-style entry with dietary advice based on the meal, highlighting key nutrients and offering personalized suggestions. This entry is appended to a "Diary" Google Sheet.
**Social Media Promotion:** A "Post Agent" takes the nutritional diary entry and transforms it into an engaging social media post (specifically for X/Twitter in this template), which is then sent as a direct message, ready for you to share with your followers.

How to set up

Webhook Trigger: The workflow starts with a Webhook. Copy the webhook URL from the "Webhook" node. You will send your menu item input to this URL.

Google Sheets Integration: You need to set up a Google Sheets credential for your n8n instance. Create a Google Sheet document (e.g., "Recipe List"). Within this document, create three sheets:
"Recipe": This sheet will store your menu items, ingredients, costs, etc. Ensure it has columns for Date, Item, Ingredients, Ingredient Cost, Unit Price, Quantity, Total Cost, and Leftover Ingredients.
"leftovers": This sheet will store suggested recipes for leftover ingredients. Ensure it has columns for Date and Ingredients.
"diary": This sheet will store your nutritional diary entries. Ensure it has a column for Diary.
In the "Append row in sheet", "Append row in sheet1", and "Append row in sheet2" nodes, replace the Document ID with the ID of your Google Sheet. For "Sheet Name," ensure you select the correct sheet (e.g., "レシピ", "diary", "leftovers") from the dropdown.

OpenRouter Chat Model: Set up your OpenRouter credentials in the "OpenRouter Chat Model" nodes. You will need your OpenRouter API key.

Twitter Integration: Set up your Twitter credentials for the "Create Direct Message" node. In the "Create Direct Message" node, specify the User (username) to whom the direct message should be sent. This is typically your own Twitter handle or a test account.

Requirements

An n8n instance.
A Google account with Google Sheets enabled.
An OpenRouter API key.
A Twitter (X) account with developer access to send Direct Messages.

How to customize the workflow

**Input Data:** The initial input to the "Webhook" node is expected to be the name of a dish or recipe. You can modify the "Menu Agent" to accept more detailed inputs if needed.
**Google Sheets Structure:** Adjust the column mappings in the Google Sheets nodes if your spreadsheet column headers differ.
**AI Agent Prompts:** Customize the System Message in each AI Agent node (Menu Agent, Leftovers Agent, Nutritionist Agent, Post Agent) to refine their behavior and the kind of output they generate. For example, you could ask the Nutritionist Agent to focus on specific dietary needs.
**Social Media Platform:** The "Create Direct Message" node is configured for Twitter. You can swap this with another social media node (e.g., Mastodon, Discord) if you prefer to post elsewhere, remembering to adjust the "Post Agent" system message accordingly.
**Output Parser:** The "Structured Output Parser" is configured for a specific JSON structure. If you change the "Menu Agent" to output a different structure, you'll need to update this parser.
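To make the Structured Output Parser note concrete, here is a hypothetical example of the shape the Menu Agent might return, plus a tiny validator you could run in a Code node before appending to Sheets. The actual schema lives in the "Structured Output Parser" node, so treat every field name here as an assumption.

```javascript
// Hypothetical Menu Agent output; the real schema is defined in the
// "Structured Output Parser" node, so adjust names to match.
const exampleOutput = {
  date: '2024-05-01',
  item: 'Chicken curry',
  ingredients: [
    { name: 'chicken thigh', quantity: '400g', unitPrice: 1.2, cost: 4.8 },
    { name: 'coconut milk', quantity: '1 can', unitPrice: 1.5, cost: 1.5 },
  ],
  totalCost: 6.3,
  leftoverIngredients: ['coconut milk'],
};

// Minimal validation before the Google Sheets append, to catch parser drift.
function isValidMenuOutput(o) {
  return (
    typeof o.item === 'string' &&
    Array.isArray(o.ingredients) &&
    o.ingredients.every((i) => typeof i.name === 'string' && typeof i.cost === 'number') &&
    typeof o.totalCost === 'number'
  );
}
```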
by Jimleuk
Generating contextual summaries is a token-intensive approach to RAG embeddings that can quickly rack up costs if your inference provider charges by token usage. Featherless.ai is an inference provider with a different pricing model: they charge a flat subscription fee (starting from $10) and allow unlimited token usage instead. If you're typically spending over $10-$25 a month, you may find Featherless a cheaper and more manageable option for your projects or team. For this template, Featherless's unlimited token usage is well suited to generating contextual summaries at high volume for the majority of RAG workloads.

LLM: moonshotai/Kimi-K2-Instruct
Embeddings: models/gemini-embedding-001

How it works

A large document is imported into the workflow using the HTTP node and its text extracted via the Extract from File node. For this demonstration, the UK Highway Code is used as an example.
Each page is processed individually and a contextual summary is generated for it. Contextual summary generation takes the current page together with the preceding and following pages and summarises the contents of the current page.
This summary is then converted to embeddings using the gemini-embedding-001 model. Note: we use an HTTP Request node to call the Gemini embedding API because, at the time of writing, n8n does not support the new API's schema.
These embeddings are then stored in a Qdrant collection, which can be retrieved via an agent/MCP server or another workflow.

How to use

Replace the large document import with your own source of documents, such as Google Drive or an internal repo.
Replace the manual trigger if you want the workflow to run as soon as documents become available. If you're using Google Drive, check out my Push notifications for Google Drive template.
Expand and/or tune embedding strategies to suit your data. You may want to additionally embed the content itself and perform multi-stage queries using both.

Requirements

Featherless.ai account and API key
Gemini account and API key for embeddings
Qdrant vector store

Customising this workflow

Sparse vectors were not included in this template due to scope, but they should be the next step to getting the most out of contextual retrieval.
Be sure to explore other models on the Featherless.ai platform, or host your own custom/fine-tuned models.
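Since the template calls the Gemini embedding API through a raw HTTP Request node, the request an n8n node would send looks roughly like this. This is a sketch of the `embedContent` REST shape; verify field names and the auth header against the current Gemini API reference before relying on it.

```javascript
// Build the request for the Gemini embedding endpoint. Sketch only:
// confirm the exact schema against the current Gemini API docs.
function buildEmbedRequest(text, apiKey) {
  return {
    method: 'POST',
    url:
      'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent',
    headers: {
      'Content-Type': 'application/json',
      'x-goog-api-key': apiKey, // or pass the key as a query parameter
    },
    body: {
      model: 'models/gemini-embedding-001',
      content: { parts: [{ text }] },
    },
  };
}
```

In n8n you would map `url`, `headers`, and `body` onto the HTTP Request node's fields, with the API key stored in a credential rather than inline.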
by Budi SJ
Automated Brand DNA Generator Using JotForm, Google Search, AI Extraction & Notion

The Brand DNA Generator workflow automatically scans and analyzes online content to build a company's Brand DNA profile. It starts with input from a form, then crawls the company's website and Google search results to gather relevant information. Using AI-powered extraction, the system identifies insights such as value propositions, ideal customer profiles (ICP), pain points, proof points, brand tone, and more. All results are neatly formatted and automatically saved to a Notion database as a structured Brand DNA report, eliminating the need for manual research.

🛠️ Key Features
Automated data capture: collects company data directly from form submissions and Google search results.
AI-powered insight extraction: uses LLMs to extract and summarize brand-related information from website content.
Fetches clean text from multiple web pages using HTTP requests and a content extractor.
Merges extracted data from multiple sources into a single Brand DNA JSON structure.
Automatically creates a new page in Notion with formatted sections (headings, paragraphs, and bullet points).
Handles parsing failures and processes multiple pages efficiently in batches.

🔧 Requirements
JotForm API key, to capture company data from form submissions.
SerpAPI key, to perform automated Google searches.
OpenRouter (or another LLM) API key, for AI-based language understanding and information extraction.
Notion integration token & database ID, to save the final Brand DNA report to Notion.

🧩 Setup Instructions
Connect your JotForm account and select the form containing the fields Company Name and Company Website.
Add your SerpAPI key.
Configure the AI model using OpenRouter or your preferred LLM provider.
Enter your Notion credentials and specify the databaseId in the Create a Database Page node.
Customize the prompt in the Information Extractor node to modify the tone or structure of the AI analysis (optional).
Activate the workflow, then submit data through the JotForm to test automatic generation and the Notion integration.

💡 Final Output
A complete Brand DNA report containing:
Company Description
Ideal Customer Profile
Pain Points
Value Proposition
Proof Points
Brand Tone
Suggested Keywords
All generated automatically from the company's online presence and stored in Notion with no manual input required.
by Naitik Joshi
🚀 AI-Powered LinkedIn Post Generator with Automated Image Creation

📋 Overview
Transform any topic into professional LinkedIn posts with AI-generated content and custom images! This workflow automates the entire process from topic input to published LinkedIn post, including professional image generation using Google's Imagen 4 API.

✨ Key Features
🤖 AI Content Generation: Uses Google Gemini to create engaging LinkedIn posts
🎨 Professional Image Creation: Automatically generates images using Google Imagen 4
📱 Direct LinkedIn Publishing: Posts content and images directly to your LinkedIn feed
🔄 Form-Based Input: Simple web form to submit topics
📝 Content Formatting: Converts markdown to LinkedIn-friendly format

🔧 What This Workflow Does
📝 Form Submission: User submits a topic through a web form
🗺️ Data Mapping: Maps the topic for AI processing
🧠 AI Content Generation: Google Gemini creates post content and an image prompt
🎯 Content Normalization: Cleans and formats the AI output
🖼️ Image Generation: Creates professional images using Google Imagen 4
📤 LinkedIn Registration: Registers the image upload with the LinkedIn API
🔄 Binary Conversion: Converts the base64 image to a binary buffer
⬆️ Image Upload: Uploads the image to LinkedIn
📋 Content Curation: Converts markdown to LinkedIn format
⏳ Processing Wait: Ensures the image is fully processed
🚀 Post Publishing: Publishes the complete post to LinkedIn

🛠️ Prerequisites & Setup

🔑 Required Credentials

1. LinkedIn OAuth 2.0 Setup 🔗
You'll need to create a LinkedIn app with the following OAuth 2.0 scopes:
✅ openid - Use your name and photo
✅ profile - Use your name and photo
✅ w_member_social - Create, modify, and delete posts, comments, and reactions on your behalf
✅ email - Use the primary email address associated with your LinkedIn account
Steps to get LinkedIn credentials:
Go to the LinkedIn Developer Portal
Create a new app or use an existing one
Configure OAuth 2.0 settings with the scopes above
Get your access token from the authentication flow

2. Google Cloud Platform Setup ☁️
Required GCP services to enable:
🎯 Vertex AI API - For Imagen 4 image generation
🔐 Cloud Resource Manager API - For project management
🛡️ IAM Service Account Credentials API - For authentication
Steps to get a GCP token:
Install the Google Cloud SDK
Authenticate: gcloud auth login
Set project: gcloud config set project YOUR_PROJECT_ID
Get access token: gcloud auth print-access-token
> 💡 Note: The access token expires after 1 hour. For production use, consider using service account credentials.

🔧 n8n Node Credentials Setup
LinkedIn OAuth2 API: Configure with your LinkedIn app credentials
HTTP Bearer Auth (LinkedIn): Use your LinkedIn access token
HTTP Bearer Auth (Google Cloud): Use your GCP access token
Google Gemini API: Configure with your Google AI API key

📊 Workflow Structure
graph LR
A[📝 Form Trigger] --> B[🗺️ Mapper]
B --> C[🤖 AI Agent]
C --> D[🎯 Normalizer]
D --> E[🖼️ Text to Image]
E --> F[📤 Register Upload]
F --> G[🔄 Binary Converter]
G --> H[⬆️ Upload Image]
H --> I[📋 Content Curator]
I --> J[⏳ Wait]
J --> K[🚀 Publish to LinkedIn]

🎨 Image Generation Details
The workflow uses Google Imagen 4 with these parameters:
📐 Aspect Ratio: 1:1 (perfect for LinkedIn)
🎯 Sample Count: 1 image generated
🛡️ Safety Setting: Block few (content filtering)
💧 Watermark: Enabled
🌍 Language: Auto-detect

📝 Content Processing
The AI generates content in this JSON structure:
{
  "post_content": {
    "text": "Your engaging LinkedIn post content with hashtags"
  },
  "image_prompt": {
    "description": "Professional image generation prompt"
  }
}

🔄 LinkedIn API Integration
Image upload process:
Register Upload: Creates an upload session with LinkedIn
Binary Upload: Uploads the image as binary data
Post Creation: Creates the post with text and an image reference
API endpoints used:
📤 POST /v2/assets?action=registerUpload - Register image upload
📝 POST /v2/ugcPosts - Create LinkedIn post

⚠️ Important Notes
🕐 Rate Limits: LinkedIn has API rate limits - monitor your usage
⏱️ Processing Time: Image generation can take 10-30 seconds
🔄 Token Refresh: GCP tokens expire hourly in development
📏 Content Length: LinkedIn posts have character limits
🖼️ Image Size: Generated images are optimized for LinkedIn

🚀 Getting Started
Import the workflow into your n8n instance
Configure all credentials as described above
Enable the required GCP services in your project
Test the form trigger with a sample topic
Monitor the execution for any errors
Adjust the AI prompt if needed for your content style

🛠️ Customization Options
🎨 Modify image style in the system prompt
📝 Adjust content tone in the AI agent configuration
🔄 Change wait time between upload and publish
🎯 Add content filters for brand compliance
📊 Include analytics tracking for post performance

💡 Tips for Best Results
🎯 Be specific with your topic inputs
🏢 Use professional language for business content
🔍 Review generated content before publishing
📈 Monitor engagement to refine your prompts
🔄 Test thoroughly before production use

🐛 Troubleshooting
Common issues:
❌ "Invalid credentials": Check token expiration
❌ "Image upload failed": Verify LinkedIn API permissions
❌ "Content generation error": Check Gemini API quota
❌ "Post creation failed": Ensure a proper wait time after the image upload

📚 Additional Resources
📖 LinkedIn Marketing API Documentation
🤖 Google Vertex AI Imagen Documentation
🔧 n8n Documentation
🚀 Google Gemini API Guide

💬 Need Help? Join the n8n community forum or check the troubleshooting section above!
🌟 Found this useful? Give it a star and share your improvements with the community!
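The binary-conversion step can be done in an n8n Code node roughly like this. The assumption is that the Imagen response delivers the image as a base64 string; verify the exact field name (and MIME type) against your actual API output.

```javascript
// n8n Code node sketch: turn the base64 image returned by Imagen into a
// binary buffer that the LinkedIn upload request can send as the body.
// The 'image/png' MIME type is an assumption; check the Imagen response.
function base64ToBinary(b64) {
  const buffer = Buffer.from(b64, 'base64');
  return {
    buffer,
    mimeType: 'image/png',
    sizeBytes: buffer.length,
  };
}
```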
by noda
Price Anomaly Detection & News Alert (Marketstack + HN + DeepL + Slack)

Overview
This workflow monitors a stock's closing price via Marketstack. It computes a 20-day moving average and standard deviation (±2σ). If the latest close is outside ±2σ, it flags an anomaly, fetches related headlines from Hacker News, translates them to Japanese with DeepL, and posts both the original and translated text to Slack. When no anomaly is detected, it sends a concise "normal" report.

How it works
1) Daily trigger at 09:00 JST
2) Marketstack: fetch EOD data
3) Code: compute mean/σ and classify (normal/high/low)
4) IF: anomaly? → yes = news path / no = normal report
5) Hacker News: search related items
6) DeepL: translate EN → JA
7) Slack: send bilingual notification

Requirements
Marketstack API key
DeepL API key
Slack OAuth2 (bot token / channel permission)

Notes
Edit the ticker in Get Stock Data.
Adjust N (days) and k (sigma multiplier) in Calculate Deviation.
Keep credentials out of HTTP nodes (use n8n Credentials).
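The mean/σ classification in step 3 can be sketched as a Code node like this, with N and k mirroring the parameters mentioned in the Notes (the exact node logic in the template may differ):

```javascript
// Sketch of the "Calculate Deviation" step: compute the N-day mean and
// standard deviation of closing prices and classify the latest close.
function classifyClose(closes, N = 20, k = 2) {
  const window = closes.slice(-N);
  const mean = window.reduce((s, v) => s + v, 0) / window.length;
  const variance =
    window.reduce((s, v) => s + (v - mean) ** 2, 0) / window.length;
  const sigma = Math.sqrt(variance);
  const latest = closes[closes.length - 1];
  let status = 'normal'; // within mean ± k·sigma
  if (latest > mean + k * sigma) status = 'high';
  else if (latest < mean - k * sigma) status = 'low';
  return { latest, mean, sigma, status };
}
```

The downstream IF node then routes on `status !== 'normal'` to decide between the news path and the normal report.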
by Rahul Joshi
Description
Automatically detect customer churn risks from Zendesk tickets, log them into Google Sheets for tracking, and send instant Slack alerts to your customer success team. This workflow helps you spot unhappy customers early and take proactive action to reduce churn. 🚨📊💬

What This Template Does
Fetches Zendesk tickets daily on schedule (8:00 PM). ⏰
Processes and formats ticket data into clean JSON (priority, age, urgency). 🧠
Identifies churn risks based on negative satisfaction ratings. ⚠️
Logs churn risk tickets into Google Sheets for analysis and reporting. 📈
Sends formatted Slack alerts with ticket details to the CS team channel. 📢

Key Benefits
Detects unhappy customers before they churn. 🚨
Centralized churn tracking for reporting and team reviews. 🧾
Proactive alerts to reduce response delays. ⏱️
Clean, structured ticket data for analytics and filtering. 🔄
Strengthens customer success strategy with real-time visibility. 🌐

Features
Schedule Trigger – Runs every weekday at 8:00 PM. 🗓️
Zendesk Integration – Fetches all tickets automatically. 🎫
Smart Data Processing – Adds ticket age, urgency, and priority mapping. 🧮
Churn Risk Filter – Flags tickets with negative satisfaction scores. 🚩
Google Sheets Logging – Saves churn risk details with metadata. 📊
Slack Alerts – Sends formatted messages with ID, subject, rating, and action steps. 💬

Requirements
n8n instance (cloud or self-hosted).
Zendesk API credentials with ticket read access.
Google Sheets OAuth2 credentials with write permissions.
Slack Bot API credentials with channel posting permissions.
Pre-configured Google Sheet for churn risk logging.

Target Audience
Customer Success teams monitoring churn risk. 👩💻
SaaS companies tracking customer health. 🚀
Support managers who want proactive churn alerts. 🛠️
SMBs improving retention through automation. 🏢
Remote CS teams needing instant notifications. 🌐

Step-by-Step Setup Instructions
Connect your Zendesk, Google Sheets, and Slack credentials in n8n. 🔑
Update the Schedule Trigger (default: daily at 8:00 PM) if needed. ⏰
Replace the Google Sheet ID with your churn risk tracking sheet. 📊
Confirm the Slack channel ID for alerts (default: zendesk-churn-alerts). 💬
Adjust the churn filter logic (default: satisfaction_score = "bad"). 🎯
Run a test to fetch Zendesk tickets and validate the Sheets + Slack outputs. ✅
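The data-processing and churn-filter steps could be sketched as a single Code node. The ticket field names here loosely follow the Zendesk ticket shape (`satisfaction_rating.score`, `created_at`, `priority`); verify them against your actual Zendesk API response before reusing.

```javascript
// Sketch: enrich a Zendesk ticket with age/urgency and flag churn risk.
// Field names are assumptions; check your real Zendesk payload.
function processTicket(ticket, now = new Date()) {
  const ageDays = Math.floor(
    (now - new Date(ticket.created_at)) / (1000 * 60 * 60 * 24)
  );
  const priorityRank = { urgent: 4, high: 3, normal: 2, low: 1 };
  return {
    id: ticket.id,
    subject: ticket.subject,
    ageDays,
    priorityRank: priorityRank[ticket.priority] || 0,
    churnRisk: ticket.satisfaction_rating?.score === 'bad', // default filter
  };
}
```

Items with `churnRisk = true` would then flow to the Google Sheets append and Slack alert branches.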
by Sk developer
🚀 All-In-One Video Downloader to Google Drive (via RapidAPI Best All-In-One Video Downloader)

Description:
This n8n workflow automates the process of downloading videos from any supported platform (like LinkedIn, Facebook, or Instagram) using the RapidAPI Best All-In-One Video Downloader. It then uploads the video to your Google Drive and shares it publicly, while logging any failures in Google Sheets for tracking.

📦 Node-by-Node Breakdown

| 🧩 Node Name | 📝 One-Line Explanation |
|---|---|
| On form submission | Triggers the workflow when a user submits a video URL through a web form. |
| All in one video downloader | Sends a POST request to the RapidAPI Best All-In-One Video Downloader to fetch downloadable video links. |
| If | Checks whether the API response includes an error and routes accordingly. |
| Download mp4 | Downloads the video using the direct media URL received from the API. |
| Upload To Google Drive | Uploads the MP4 file to a designated folder in your Google Drive. |
| Google Drive Set Permission | Makes the uploaded file publicly shareable with a viewable link. |
| Wait | Adds a short delay before logging errors to prevent duplicate entries. |
| Google Sheets Append Row | Logs failed download attempts with the original URL and status as N/A. |

✅ Benefits of This Flow
🔁 End-to-End Automation: From user input to shareable video link, with no manual steps required.
🌐 Supports Multiple Platforms: The RapidAPI Best All-In-One Video Downloader supports sites like Instagram, Facebook, Twitter, LinkedIn, and more.
⚠️ Smart Error Handling: Automatically logs failed download attempts into Google Sheets for retry or audit.
☁️ Cloud Ready: Videos are stored in Google Drive with instant public access.
📊 Trackability: Logs failures, timestamps, and source URLs for easy debugging or analytics.
🧩 Modular Setup: Easily expand this in n8n to include Slack notifications, email alerts, or tagging.

🔁 Use Cases
🎬 Social Media Video Archiving: Download and store content (Reels, posts, stories) into Drive for future use.
🧑🏫 Educational Sharing: Teachers can collect useful videos and share links with students.
📚 Content Curation: Bloggers or content managers can create a media archive from multiple platforms.
🤝 Team Automation: Teams submit links, and the workflow handles download + Drive share link generation.
📉 Error Tracking for Ops: Failed URLs are tracked in Google Sheets for retry, monitoring, or debugging.

🧠 Final Thoughts
This workflow leverages the power of n8n and the RapidAPI Best All-In-One Video Downloader to create a fully automated pipeline for capturing video content from across the web. It's ideal for educators, marketers, content curators, or developers who want to streamline video storage and access using Google Drive.

🔑 How to Get an API Key from RapidAPI Best All-In-One Video Downloader
Follow these steps to get your API key and start using it in your workflow:
Visit the API page 👉 Click here to open Best All-In-One Video Downloader on RapidAPI
Log in or sign up: Use your Google, GitHub, or email account to sign in. If you're new, complete a quick sign-up.
Subscribe to a pricing plan: Go to the Pricing tab on the API page. Select a plan (free or paid, depending on your needs). Click Subscribe.
Access your API key: Navigate to the Endpoints tab. Look for the X-RapidAPI-Key under Request Headers. Copy the value shown; this is your API key.
Use the key in your workflow: In your n8n workflow (HTTP Request node), replace:
"x-rapidapi-key": "your key"
with:
"x-rapidapi-key": "YOUR_ACTUAL_API_KEY"
✅ You're now ready to use the Best All-In-One Video Downloader with your automated workflows!
by Sabrina Ramonov 🍄
Description
This AI Agent Carousel Maker uses ChatGPT and Blotato to write, generate, and auto-post social media carousels to 5 social platforms: Instagram, TikTok, Facebook, Twitter, and Pinterest. Simply chat with the AI agent, confirm which prebuilt viral carousel template you want to use, then the AI agent populates the template with your personalized information and quotes, and posts to social media on autopilot.

Who Is This For?
This is perfect for entrepreneurs, small businesses, content creators, digital marketing agencies, social media marketing agencies, and influencers.

How It Works
1. Chat: AI Agent Carousel Maker
Chat with the AI agent about your desired carousel
Confirm the quotes and carousel template to use
2. Carousel Generation
The AI agent calls the corresponding Blotato tool to generate the carousel
Wait and fetch the completed carousel
3. Publish to Social Media via Blotato
Choose your social accounts
Either post immediately or schedule for later

Setup
Sign up for OpenAI API access and create a credential
Sign up for Blotato.com
Generate a Blotato API key by going to Settings > API > Generate API Key (paid feature only)
Create a Blotato credential
If you're using n8n, ensure you have "Verified Community Nodes" enabled in your n8n Admin Panel. Then, install the "Blotato" verified community node.
Click "Open chat" to test the workflow
Complete the SETUP sticky notes in BROWN in this template

Tips & Tricks
AFTER your first successful run, open each carousel template tool call (i.e. the pink nodes attached to AI Agent Carousel Maker) and tweak the parameters, but DO NOT edit the "quotes" parameter unless you're an n8n expert.
When adding a new template, DO NOT duplicate an existing node. Instead, click "+ Tool" > Blotato Tool > Video > Create > select new template. This ensures template parameters are correctly loaded.
While testing: enable only 1 social platform, and deactivate the rest. Add the optional parameter "scheduledTime" so that you don't accidentally post to social media.
Check your content calendar here: https://my.blotato.com/queue/schedules

📄 Documentation
Full Tutorial

Troubleshooting
Check the Blotato API Dashboard and logs to review requests, responses, and errors. Verify template parameters and n8n node configuration if runs fail. You can also:
View all video/carousel templates available
Check how your carousels look

Need Help?
In the Blotato web app, click the orange button in the bottom right corner. This opens the Support messenger where I help answer technical questions.
Connect with me: Linkedin | Youtube
by Zain Khan
AI-Powered Customer Feedback: Triage and Insight-Driven Chat

This n8n workflow creates a two-phase system for handling customer feedback received via a Jotform submission. The first agent quickly triages the issue, and the second agent engages in a persistent, conversational exchange over email to collect the information necessary for a resolution.

Phase 1: Triage and Initial Action (AI Agent)
This phase is triggered by a new submission on the Jotform. The goal is to immediately categorize the feedback and take the appropriate initial action.
Jotform Trigger: The workflow starts instantly when a user submits your designated feedback form.
AI Agent (Triage): This agent (powered by Google Gemini) is tasked with two primary jobs:
Sentiment Analysis and Response Drafting: It reads the feedback (q6_typeA6) and the user's name (q3_name.first).
If positive: It uses the Send a message in Gmail tool to send a concise, appreciative thank-you note.
If negative: It uses the Send a message in Gmail tool to send an initial, empathetic response acknowledging the issue and stating that a team member will follow up with questions. It also uses the Create an issue in Jira Software tool to log the bug or issue immediately.
Data Structuring: It uses the Structured Output Parser to extract key data points, most importantly the threadId of the initial email, which is crucial for the follow-up conversation agent.

Phase 2: Conversational Insight Gathering (AI Agent (Chat))
This phase takes over for all negative feedback, engaging the customer in a back-and-forth exchange to collect essential details required for the development or support team.
Gmail Trigger: This node is set to poll for new, unread emails (which are expected to be replies from the customer).
Simple Memory: This node is vital for the conversational aspect. It is configured to use the unique email threadId as its session key, allowing the AI Agent to remember the entire history of the conversation (previous questions asked and details provided) across multiple emails.
AI Agent (Chat): This is the second agent and the core of the conversational process.
Role: It acts as a dedicated feedback assistant.
Goal: Its instruction is to reply and ask for specific, missing information needed for the ticket, such as: what device they were using, whether they know the steps to reproduce the issue, and to confirm that the team will send a free coupon for credits as a thank you for their help.
Tool: It uses the Reply to a message in Gmail tool to continue the conversation directly within the original email thread.
Resolution: The agent is trained to look for confirmation that all necessary information has been provided. Once it determines the issue details are complete, it will send a final thank-you email and automatically use the Jira tool to summarize and update the existing Jira issue with the new insights, closing the loop on the data collection process.

Requirements
To implement and run this automated customer feedback workflow, the following accounts and credentials are required:

1. Automation Platform
**n8n Instance:** A running instance of n8n (Cloud or self-hosted) to host and execute the workflow. Sign up for n8n using: https://n8n.partnerlinks.io/pe6gzwqi3rqw

2. Service Credentials
You must set up and connect the following credentials within your n8n instance:
**Google Gemini API Key:** Required to power both AI Agent nodes for sentiment analysis and conversational follow-up.
**Gmail OAuth2/API Key:** Required for: the Send a message in Gmail tool (for initial replies), the Gmail Trigger (to detect new replies), and the Reply to a message in Gmail tool (for the ongoing conversation).
**Jotform API Key:** Required for the Jotform Trigger node to instantly receive and process new form submissions. Sign up for Jotform using: https://www.jotform.com/?partner=zainurrehman
**Jira Software Credentials:** Required for the Create an issue in Jira Software tool (for the first agent) and the Jira Tool (for the second agent to update the ticket).

3. External Configurations
**Jotform Setup:** A live Jotform must be configured with specific fields to capture the user's name, email, and the feedback text.
**Jira Setup:** You need a designated Jira Project and a defined Issue Type for the workflow to create and update tickets.
by Omer Fayyaz
This workflow automates multi-platform content creation by transforming form submissions into tailored blog posts, LinkedIn posts, and Facebook posts using AI and web research.

**What Makes This Different:**

- **Multi-Platform Content Generation** - Creates optimized content for blog, LinkedIn, and Facebook simultaneously
- **AI-Powered Content Adaptation** - Uses OpenAI to tailor content for each platform's unique audience and format
- **Web Research Integration** - Leverages the Tavily API to gather relevant, up-to-date information on any topic
- **Form-Based Input** - Simple form interface for content subject and target audience specification
- **Automated Workflow** - End-to-end automation from form submission to content delivery
- **Slack Integration** - Delivers all generated content via Slack notification for easy review and sharing

**Key Benefits of Automated Content Creation:**

- **Time Efficiency** - Generates three different content pieces in one workflow execution
- **Platform Optimization** - Each content piece is specifically crafted for its intended platform
- **Research-Backed Content** - Incorporates current web information for accurate, relevant content
- **Consistent Brand Voice** - AI ensures consistent tone and messaging across all platforms
- **Scalable Content Production** - Handles multiple content requests without manual intervention
- **Centralized Delivery** - All content delivered to one location for easy management

**Who's it for**

This template is designed for content marketers, social media managers, small business owners, marketing agencies, and content creators who need to produce consistent, high-quality content across multiple platforms. It's perfect for businesses that want to streamline their content creation process, maintain a consistent brand voice, and leverage AI to create platform-specific content that resonates with their target audience.

**How it works / What it does**

This workflow creates an automated content creation system that transforms form submissions into multi-platform content.
The system:

1. **Receives form submissions** with content subject and target audience through the n8n form trigger
2. **Extracts search parameters** from form data to prepare for web research
3. **Searches the web** using the Tavily API to gather relevant, current information on the topic
4. **Processes search results** by splitting and aggregating content for AI processing
5. **Generates platform-specific content** using OpenAI agents for LinkedIn, Facebook, and blog formats
6. **Aggregates all content** into a single output with all three platform versions
7. **Sends a Slack notification** with all generated content for review and distribution

**Key Innovation: Multi-Platform AI Content Generation** - Unlike traditional content tools that create one piece of content, this system automatically generates three different versions optimized for each platform's unique audience, format requirements, and engagement patterns, all based on current web research and AI-powered adaptation.

**How to set up**

**1. Configure Form Trigger**

- Set up the n8n form trigger with "Content Subject" and "Target Audience" fields
- Configure form settings and validation rules
- Test form submission functionality
- Ensure proper data flow to subsequent nodes

**2. Configure OpenAI API**

- Set up OpenAI API credentials in n8n
- Ensure proper API access and quota limits
- Configure the OpenAI Chat Model node for content generation
- Test AI model connectivity and response quality

**3. Configure Tavily API**

- **Get your API key**: Sign up at tavily.com and obtain your API key from the dashboard
- **Add the API key to the workflow**: In the "Search Web" HTTP Request node, replace "ADD YOU API KEY HERE" with your actual Tavily API key
- **Example configuration**:

```json
{
  "api_key": "your-actual-api-key-here",
  "query": "{{ $json.query.replace(/\"/g, '\\\"') }}",
  "search_depth": "basic",
  "include_answer": true,
  "topic": "news",
  "include_raw_content": true,
  "max_results": 3
}
```

- **Configure search parameters**: Ensure max_results is set to 3 and search_depth to "basic" for optimal performance
- **Test API connectivity**: Run a test execution to verify search results are returned correctly

**4. Configure Slack Integration**

- Set up Slack API credentials in n8n
- Configure the Slack channel ID for content delivery
- Set up proper message formatting for content display
- Test Slack notification delivery

**5. Test the Complete Workflow**

- Submit a test form with a sample content subject and target audience
- Verify the web search returns relevant results
- Check that the AI generates appropriate content for all three platforms
- Confirm the Slack notification contains all generated content

**Requirements**

- **n8n instance** with form trigger and HTTP request capabilities
- **OpenAI API** access for AI-powered content generation
- **Tavily API** credentials for web search functionality
- **Slack workspace** with API access for content delivery
- **Active internet connection** for real-time API interactions

**How to customize the workflow**

**Modify Content Generation Parameters**

- Adjust the number of web search results (currently set to 3)
- Add more search depth options (basic, advanced, comprehensive)
- Implement content length controls for different platforms
- Add content tone and style preferences

**Enhance AI Capabilities**

- Customize AI prompts for specific industries or niches
- Add support for multiple languages
- Implement brand voice consistency across all platforms
- Add content quality scoring and optimization

**Expand Content Sources**

- Integrate with additional research APIs (Google Search, Bing, etc.)
- Add support for internal knowledge base integration
- Implement trending topic detection
- Add competitor content analysis

**Improve Content Delivery**

- Add email notifications alongside Slack
- Implement content scheduling capabilities
- Add content approval workflows
- Implement content performance tracking

**Business Features**

- Add content analytics and performance metrics
- Implement A/B testing for different content versions
- Add content calendar integration
- Implement team collaboration features

**Key Features**

- **Multi-platform content generation** - Creates optimized content for blog, LinkedIn, and Facebook
- **AI-powered content adaptation** - Tailors content for each platform's unique requirements
- **Web research integration** - Incorporates current, relevant information from web searches
- **Form-based input** - Simple interface for content subject and target audience specification
- **Automated workflow** - End-to-end automation from form submission to content delivery
- **Platform-specific optimization** - Each content piece follows platform best practices
- **Slack integration** - Centralized delivery of all generated content
- **Scalable content production** - Handles multiple content requests efficiently

**Technical Architecture Highlights**

**AI-Powered Content Generation**

- **OpenAI integration** - Advanced language model for content creation
- **Platform-specific prompts** - Tailored AI instructions for each social platform
- **Content optimization** - AI ensures platform-appropriate formatting and tone
- **Quality consistency** - Maintains brand voice across all generated content

**Web Research Integration**

- **Tavily API** - Comprehensive web search with content extraction
- **Real-time data** - Access to current, relevant information
- **Content aggregation** - Combines multiple sources for comprehensive coverage
- **Search optimization** - Efficient query construction for better results

**Form-Based Input System**

- **n8n form trigger** - Simple, user-friendly input interface
- **Data validation** - Ensures required fields are properly filled
- **Parameter extraction** - Converts form data to search and generation parameters
- **Error handling** - Graceful handling of incomplete or invalid inputs

**Multi-Platform Output**

- **LinkedIn optimization** - Professional tone with industry-specific formatting
- **Facebook adaptation** - Engaging, shareable content with appropriate length
- **Blog formatting** - Comprehensive, SEO-friendly long-form content
- **Unified delivery** - All content delivered through a single Slack notification

**Use Cases**

- **Content marketing agencies** needing efficient multi-platform content creation
- **Small businesses** requiring a consistent social media presence across platforms
- **Marketing teams** looking to streamline content production workflows
- **Solo entrepreneurs** needing professional content without hiring writers
- **E-commerce brands** requiring product-focused content for multiple channels
- **Professional services** needing thought leadership content across platforms
- **Event organizers** requiring promotional content for different social channels
- **Educational institutions** needing content for student engagement and recruitment

**Business Value**

- **Time Efficiency** - Reduces content creation time from hours to minutes
- **Cost Savings** - Eliminates the need for multiple content creators or agencies
- **Consistency** - Maintains brand voice and messaging across all platforms
- **Scalability** - Handles unlimited content requests without additional resources
- **Quality Assurance** - AI ensures professional-quality content every time
- **Multi-Platform Reach** - Maximizes content distribution across key social channels
- **Research Integration** - Incorporates current information for relevant, timely content

This template revolutionizes content creation by combining AI-powered writing with real-time web research, creating an automated system that produces high-quality, platform-optimized content for blog, LinkedIn, and Facebook from a simple form submission.
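The quote-escaping expression in the Tavily configuration above exists because the user's query is interpolated into a raw JSON body. A minimal sketch of the same request body built with `JSON.stringify`, which handles escaping automatically (the `buildTavilyBody` helper is illustrative, not the node's actual implementation; only the field names come from the Tavily example above):

```javascript
// Build the Tavily search request body programmatically instead of
// hand-escaping quotes inside a JSON template string.
function buildTavilyBody(apiKey, query) {
  return JSON.stringify({
    api_key: apiKey,          // assumption: passed in, never hardcoded
    query,                    // JSON.stringify escapes quotes/newlines for us
    search_depth: "basic",
    include_answer: true,
    topic: "news",
    include_raw_content: true,
    max_results: 3,
  });
}

// Queries containing double quotes survive the round trip intact:
const body = buildTavilyBody("your-actual-api-key-here", 'AI "content" trends');
console.log(JSON.parse(body).query); // AI "content" trends
```

In n8n this logic could live in a Code node feeding the "Search Web" HTTP Request node, replacing the manual `.replace(/\"/g, '\\\"')` expression.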