by Kev
**Important:** This workflow uses the Autype and SerpAPI Official community nodes and requires a self-hosted n8n instance.

Submit a simple form with your product name, industry, and description. The workflow automatically researches your market via Google Trends and Google Search (SerpAPI), conducts deep analysis with Perplexity AI (via OpenRouter), writes a structured report with Anthropic Claude (via OpenRouter), and renders a professionally styled PDF using Autype Extended Markdown. No manual competitor input is required -- everything is discovered automatically.

## Who is this for?

Product managers, startup founders, strategists, and consultants who need quick market research reports for investor decks, board meetings, competitive positioning, or strategic planning. Instead of spending hours compiling data from multiple sources, this workflow automates the entire research-to-PDF pipeline from a single form submission.

**Concrete example:** A SaaS startup preparing for a Series A fundraise needs a market research report on the document automation space. They fill in their product name and industry, describe their product, and submit the form. In under two minutes they get a polished PDF with current market trends, auto-discovered competitor comparisons, SWOT analysis, and strategic recommendations -- ready to attach to their pitch deck.

## What this workflow does

When a user submits the form, the workflow sends parallel requests to Google Trends (12-month interest data) and Google Search (competitor discovery) via SerpAPI, and downloads Autype's extended markdown syntax reference. All data is merged and passed to an AI Research Agent powered by Perplexity Sonar Pro (via OpenRouter) for deep market and competitor analysis with real-time web citations. The research output is then handed to an AI Report Writer (Anthropic Claude via OpenRouter) that writes a structured market research report in Autype Extended Markdown. The markdown is rendered to a styled PDF via Autype's Render from Markdown operation, and the final report is saved to Google Drive.

## How it works

1. **Market Research Form** -- An n8n Form Trigger collects product name, industry, product description, and report language.
2. **Google Trends** -- The SerpAPI Official node fetches 12 months of search interest data for the industry.
3. **Search Competitors** -- SerpAPI Google Search automatically discovers competitors and market leaders.
4. **Download Markdown Syntax** -- Fetches Autype's extended markdown syntax reference so the report writer knows all formatting options.
5. **Prepare Research Context** -- A Code node merges trends data, competitor search results, and the syntax reference into a single context.
6. **AI Research Agent** -- An AI Agent with OpenRouter (Perplexity Sonar Pro) conducts deep market research: market overview, competitor profiles, trends, and product positioning.
7. **Prepare Report Input** -- A Code node combines the research output with the markdown syntax reference and form data.
8. **AI Report Writer** -- An AI Agent (Anthropic Claude via OpenRouter) writes the final report in Autype Extended Markdown. The prompt includes a title page template.
9. **Prepare Render Payload** -- A Code node cleans the AI output and sets the title and filename.
10. **Render Report PDF** -- Autype renders the extended markdown to a professionally styled PDF with Open Sans font, heading hierarchy (28/22/18pt), automatic page breaks before h1/h2, a chart color palette, a header with company name and logo, a footer with page numbers, and generous spacing.
11. **Save Report to Drive** -- The PDF is uploaded to Google Drive.

## Setup

1. Install the community nodes via Settings > Community Nodes: n8n-nodes-autype and n8n-nodes-serpapi.
2. Create an Autype API credential with your API key from app.autype.com (see API Keys in Settings).
3. Create a SerpAPI credential with your API key from serpapi.com (free tier: 100 searches/month).
4. Create two OpenRouter API credentials with your key(s) from openrouter.ai. One is used for Perplexity Sonar Pro (research), the other for Anthropic Claude (report writing). You can use the same API key for both.
5. Create a Google Drive OAuth2 credential and connect your Google account.
6. Import this workflow and assign your credentials to each node.
7. Set YOUR_FOLDER_ID in the "Save Report to Drive" node to your target Google Drive folder.
8. Activate the workflow and open the form URL to generate a report.

Note: You need a self-hosted n8n instance to use community nodes.

## Requirements

- Self-hosted n8n instance (community nodes are not available on n8n Cloud)
- Autype account with API key (free tier available)
- n8n-nodes-autype community node installed
- n8n-nodes-serpapi community node installed (verified)
- OpenRouter API key (for the Perplexity Sonar Pro and Anthropic Claude models)
- SerpAPI account (free tier: 100 searches/month)
- Google Drive account with OAuth2 credentials (optional; can be replaced with another output)

## How to customize

- **Add more data sources:** Insert additional HTTP Request or SerpAPI nodes before the merge to pull from Google News, Google Scholar, or other engines.
- **Use a different research model:** Swap the OpenRouter Perplexity model for any other OpenRouter model (e.g. Gemini) or replace the sub-node entirely.
- **Use a different report writer:** Swap the Anthropic Claude model for OpenAI, Google Gemini, or any other OpenRouter-compatible model.
- **Customize the header/footer:** Edit the defaults JSON in the Render Report PDF node to change the company name, logo URL, or footer text.
- **Customize the title page:** Edit the title page template in the AI Report Writer's user prompt to change the logo, layout, or metadata fields.
- **Change the report structure:** Edit the system prompt in the AI Report Writer node to add or remove sections, change the tone, or adjust the word count.
- **Customize PDF styling:** Edit the defaults JSON in the Render Report PDF node to change fonts, colors, spacing, and heading styles. See the Autype defaults schema for all options.
- **Generate DOCX instead of PDF:** Change the output format in the Render Report PDF node from PDF to DOCX.
- **Schedule automatic reports:** Add a Schedule Trigger alongside the Form Trigger for recurring market monitoring.
- **Change the output destination:** Replace the Google Drive node with Email (SMTP), S3, Slack, or any other n8n output node.
- **Add more languages:** Edit the dropdown options in the Market Research Form node.
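The Prepare Research Context step can be sketched as a small merge function in the style of an n8n Code node. All field names here (`organic_results`, `interest_over_time`, the form field names) are illustrative assumptions about the SerpAPI output and form data, not the template's actual code:

```javascript
// Hypothetical sketch of the "Prepare Research Context" Code node:
// merges trends data, competitor search results, and the syntax
// reference into one text context for the AI Research Agent.
function prepareResearchContext(trendsJson, searchJson, syntaxRef, form) {
  // Turn the top organic results into a competitor candidate list.
  const competitors = (searchJson.organic_results || [])
    .slice(0, 10)
    .map(r => `- ${r.title}: ${r.link}`)
    .join('\n');
  return [
    `Product: ${form.productName} (${form.industry})`,
    `Description: ${form.description}`,
    `12-month trend data: ${JSON.stringify(trendsJson.interest_over_time || {})}`,
    `Top search results (potential competitors):\n${competitors}`,
    `Markdown syntax reference:\n${syntaxRef}`,
  ].join('\n\n');
}

// Example with minimal stand-in data:
const context = prepareResearchContext(
  { interest_over_time: { timeline_data: [] } },
  { organic_results: [{ title: 'Acme Docs', link: 'https://example.com' }] },
  '# Extended Markdown',
  { productName: 'DocGen', industry: 'Document automation', description: 'PDF reports' }
);
```

In the real workflow the same shape of string would be passed to the AI Research Agent node as its input context.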
by giangxai
## Overview

This workflow automatically creates short-form AI videos using Sora 2 Cameos, powered by n8n and AI agents. It connects viral content collection, AI script and prompt generation, video rendering, video merging, and multi-platform publishing into a single fully automated system. Once configured, the workflow runs end to end without manual editing or intervention.

The workflow is designed for creators, marketers, and affiliate builders who want to scale faceless or avatar-style AI videos consistently using viral ideas and automated publishing.

## What can this workflow do?

- Automatically collect viral content ideas from external sources
- Analyze viral ideas and generate structured Sora 2-ready prompts
- Create AI videos using pre-selected Sora 2 Cameos characters
- Merge multiple AI video clips into a single final video
- Publish videos automatically to TikTok, Facebook, and Instagram
- Track publishing status, successes, and errors in Google Sheets

This workflow reduces manual video production work while keeping the process structured, scalable, and repeatable.

## How it works

The workflow starts by automatically collecting viral content ideas on a schedule and storing them in Google Sheets as a content backlog. Before prompt generation, a Cameos character is selected and configured manually on Sora2.com; this selected avatar is used consistently across all generated videos.

Each viral idea is then analyzed to extract hooks, themes, and video direction. An AI Agent generates structured scripts and Sora 2-ready prompts based on both the viral content and the selected Cameos character. These prompts are sent to the Sora 2 Cameos video generation API to render short video clips. The workflow monitors rendering status and retries if needed. Once all clips are ready, they are automatically merged into a single final video. The finished video is published to social platforms such as TikTok, Facebook, and Instagram. Publishing results and errors are logged back to Google Sheets for monitoring and optimization.

## Setup steps

1. Connect an AI model (Gemini or a compatible LLM) for script and prompt generation
2. Select and configure a Cameos character on Sora2.com
3. Add Sora 2 Cameos API credentials for AI video generation
4. Configure the video merge step to combine multiple clips into one final video
5. Connect social publishing APIs (TikTok, Facebook, Instagram)
6. Connect Google Sheets for content intake and status tracking

Once configured, the workflow runs automatically on a schedule without manual input.

## Documentation

For a full walkthrough, optimization tips, and scaling strategies, watch the detailed tutorial on YouTube.
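The "monitors rendering status and retries if needed" step is a classic poll-with-retry loop. A minimal sketch, assuming a stand-in `fetchStatus` function for the Sora 2 status endpoint (the real workflow does this with HTTP Request and Wait nodes rather than code):

```javascript
// Poll a render job until it completes, fails, or times out.
// fetchStatus is a hypothetical stand-in returning 'processing',
// 'completed', or 'failed' -- not the actual Sora 2 API shape.
async function waitForRender(fetchStatus, { maxAttempts = 10, delayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status === 'completed') return 'completed';
    if (status === 'failed') throw new Error('render failed');
    // Wait before the next poll to avoid hammering the API.
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error('timed out waiting for render');
}
```

In n8n the same effect comes from a Wait node feeding back into an HTTP Request status check until the clip is ready.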
by Davide
This workflow creates an AI-powered Telegram bot that allows users to generate and modify images using Grok Imagine models via the Kie.ai API.

## Key Advantages

1. ✅ **Multi-modal input support** -- Users can interact using text, voice, or images, which makes the bot highly flexible and user-friendly.
2. ✅ **Intelligent AI orchestration** -- Instead of calling APIs directly, the workflow uses an AI agent that understands intent, enhances prompts, and chooses the correct tool automatically. This dramatically improves output quality.
3. ✅ **Fully automated image pipeline** -- From user input to final image delivery (upload, processing, generation, result retrieval, delivery via Telegram), everything is automated end-to-end.
4. ✅ **Asynchronous and scalable architecture** -- Webhook callbacks, Wait nodes, and task polling prevent timeouts and support longer image generation tasks.
5. ✅ **Secure access control** -- Telegram ID validation ensures only authorized users can access the workflow.
6. ✅ **Modular and extendable design** -- A tool-based architecture, separate image generation workflows, and clear orchestration logic make it easy to extend with video generation, style presets, advanced editing tools, or multi-user support.
7. ✅ **Production-ready structure** -- Error handling guidelines, structured system prompts, memory handling, and a clear separation of concerns make it suitable for creative agencies, AI SaaS products, marketing automation, and Telegram-based AI services.

## How it works

This workflow creates a Telegram bot that uses AI to generate and transform images through Grok Imagine models, with support for text, voice, and image inputs.

1. **Telegram input handling:** Users interact with the bot by sending messages, voice notes, or images. The workflow authenticates users based on their Telegram ID.
2. **Input processing:**
   - Text messages → sent directly to the AI agent
   - Voice messages → transcribed using OpenAI Whisper, then converted to text
   - Images → downloaded from Telegram, uploaded to an FTP server (BunnyCDN), and a public URL is generated
3. **AI agent decision making:** The "Grok Imagine Agent" (powered by the Grok 4.1 Fast model) analyzes user input and determines whether to generate a new image from a text description (text-to-image) or transform an existing image using a prompt (image-to-image).
4. **Tool execution:** The agent calls specialized workflow tools that trigger image generation via the Kie.ai API -- text-to-image uses the "grok-imagine/text-to-image" model, image-to-image uses the "grok-imagine/image-to-image" model.
5. **Async processing:** The workflow uses Wait nodes to handle asynchronous image generation, polling Kie.ai for results via task IDs.
6. **Result delivery:** Once images are generated, they are sent back to the user through Telegram messages.

## Setup Steps

1. **Telegram configuration:**
   - Create a Telegram bot via BotFather to get a bot token
   - Add your Telegram user ID in the "Code" node (replace XXX)
   - Configure Telegram credentials in n8n with your bot token
2. **API credentials:**
   - OpenRouter: Sign up at OpenRouter.ai and get an API key for Grok 4.1 Fast access
   - Kie.ai: Register at Kie.ai for a free API key to access the image generation models
   - OpenAI: Set up an OpenAI API key for voice transcription (Whisper model)
3. **FTP server setup:**
   - Configure an FTP server (BunnyCDN recommended) for image hosting
   - Update the FTP credentials in n8n
   - Set the public URL path in the "Set Image Url" node (replace XXX)
4. **Workflow configuration:**
   - Import the JSON workflow into n8n
   - Update all credential references to match your accounts
   - Verify webhook URLs are properly configured for callback handling
   - Test the workflow and activate it when ready
5. **Optional customizations:**
   - Adjust the system prompt in the "Grok Imagine Agent" node for different behavior
   - Modify image aspect ratios or other parameters in the HTTP Request nodes
   - Add additional tools for more functionality

👉 Subscribe to my new YouTube channel, where I share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
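The Telegram ID validation done in the "Code" node (the "replace XXX" step above) boils down to an allow-list check. A minimal sketch, where `ALLOWED_IDS` stands in for the template's XXX placeholder:

```javascript
// Access control: only listed Telegram user IDs may use the bot.
// 123456789 is a placeholder -- put your own user ID(s) here.
const ALLOWED_IDS = [123456789];

// `message` mirrors the Telegram update's message object,
// where message.from.id is the sender's numeric user ID.
function isAuthorized(message) {
  return ALLOWED_IDS.includes(message.from?.id);
}
```

In the workflow, unauthorized senders are simply not routed onward to the AI agent.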
by Joe V
# 🔧 Setup Guide - Hiring Bot Workflow

## 📋 Prerequisites

Before importing this workflow, make sure you have:

- ✅ n8n instance (cloud or self-hosted)
- ✅ Telegram bot token (from @BotFather)
- ✅ OpenAI API key (with GPT-4 Vision access)
- ✅ Gmail account (with OAuth setup)
- ✅ Google Drive (to store your resume)
- ✅ Redis instance (free tier available at Redis Cloud)

## 🚀 Step-by-Step Setup

### 1️⃣ Upload Your Resume to OpenAI

First, upload your resume to OpenAI's Files API:

```bash
curl https://api.openai.com/v1/files \
  -H "Authorization: Bearer YOUR_OPENAI_API_KEY" \
  -F purpose="assistants" \
  -F file="@/path/to/your/resume.pdf"
```

**Important:** Save the `file_id` from the response (it looks like `file-xxxxxxxxxxxxx`).

Alternative: use the OpenAI Playground, or Python:

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")
with open("resume.pdf", "rb") as file:
    response = client.files.create(file=file, purpose="assistants")
print(f"File ID: {response.id}")
```

### 2️⃣ Upload Your Resume to Google Drive

1. Go to Google Drive
2. Upload your resume PDF
3. Right-click → "Get link" → copy the file ID from the URL
   - URL format: `https://drive.google.com/file/d/FILE_ID_HERE/view`
   - Example ID: `1h79U8IFtI2dp_OBtnyhdGaarWpKb9qq9`

### 3️⃣ Create a Telegram Bot

1. Open Telegram and message @BotFather
2. Send `/newbot`
3. Choose a name and username
4. Save the bot token (it looks like `123456789:ABCdefGHIjklMNOpqrsTUVwxyz`)
5. (Optional) Set bot commands:
   - `/start` - Start the bot
   - `/help` - Get help

### 4️⃣ Set Up Redis

**Option A: Redis Cloud (recommended - free tier)**

1. Go to Redis Cloud
2. Create a free account
3. Create a database
4. Note the host, port, and password

**Option B: Local Redis**

```bash
# Docker
docker run -d -p 6379:6379 redis:latest

# Or via package manager
sudo apt-get install redis-server
```

### 5️⃣ Import the Workflow to n8n

1. Open n8n
2. Click "+" → "Import from File"
3. Select `Hiring_Bot_Anonymized.json`
4. The workflow will import with placeholder values

### 6️⃣ Configure Credentials

**A. Telegram Bot Credentials**
1. In n8n, go to Credentials → Create New
2. Select "Telegram API"
3. Enter your bot token from Step 3
4. Test & save

**B. OpenAI API Credentials**
1. Go to Credentials → Create New
2. Select "OpenAI API"
3. Enter your OpenAI API key
4. Test & save

**C. Redis Credentials**
1. Go to Credentials → Create New
2. Select "Redis"
3. Enter: host (your Redis host), port (6379 by default), password (your Redis password)
4. Test & save

**D. Gmail Credentials**
1. Go to Credentials → Create New
2. Select "Gmail OAuth2 API"
3. Follow the OAuth setup flow and authorize n8n to access Gmail
4. Test & save

**E. Google Drive Credentials**
1. Go to Credentials → Create New
2. Select "Google Drive OAuth2 API"
3. Follow the OAuth setup flow and authorize n8n to access Drive
4. Test & save

### 7️⃣ Update Node Values

**A. Update the OpenAI File ID in the "PayloadForReply" Node**

1. Double-click the "PayloadForReply" node
2. Find this line in the code:

```javascript
const resumeFileId = "YOUR_OPENAI_FILE_ID_HERE";
```

3. Replace it with your actual OpenAI file ID from Step 1:

```javascript
const resumeFileId = "file-xxxxxxxxxxxxx";
```

4. Save the node

**B. Update the Google Drive File ID (Both "Download Resume" Nodes)**

There are TWO nodes that need updating.

Node 1: "Download Resume"
1. Double-click the node
2. In the "File ID" field, click "Expression"
3. Replace `YOUR_GOOGLE_DRIVE_FILE_ID` with your actual ID
4. Update "Cached Result Name" to your resume filename
5. Save

Node 2: "Download Resume1" (same process)
1. Double-click the node
2. Update the File ID
3. Update the filename
4. Save

### 8️⃣ Assign Credentials to Nodes

After importing, assign your credentials to each node.

Nodes that need credentials:

| Node Name | Credential Type |
|-----------|-----------------|
| Telegram Trigger | Telegram API |
| Generating Reply | OpenAI API |
| Store AI Reply | Redis |
| GetValues | Redis |
| Download Resume | Google Drive OAuth2 |
| Download Resume1 | Google Drive OAuth2 |
| Schedule Email | Gmail OAuth2 |
| SendConfirmation | Telegram API |
| Send a message | Telegram API |
| Edit a text message | Telegram API |
| Send a text message | Telegram API |
| Send a chat action | Telegram API |

How to assign: click on each node, select your saved credential in the "Credentials" section, and save the node.

## 🧪 Testing the Workflow

### 1️⃣ Activate the Workflow

1. Click the "Active" toggle in the top-right
2. The workflow should now be listening for Telegram messages

### 2️⃣ Test with a Job Post

1. Find a job post online (LinkedIn, Indeed, etc.)
2. Take a screenshot
3. Send it to your Telegram bot
4. The bot should respond with:
   - "Analyzing job post..." (typing indicator)
   - A full email draft with a confirmation button

### 3️⃣ Test Email Sending

1. Click the "Send The Email" button
2. Check Gmail to verify the email was sent
3. Check that the resume was attached

## 🐛 Troubleshooting

**Issue: "No binary image found"**
- Make sure you're sending an image file, not a document

**Issue: "Invalid resume file_id"**
- Check the OpenAI file_id format (it starts with `file-`)
- Verify the file was uploaded successfully
- Make sure you updated the code in the PayloadForReply node

**Issue: "Failed to parse model JSON"**
- Check your OpenAI API quota/limits
- Verify the model name is correct (gpt-5.2)
- Check whether the image is readable

**Issue: Gmail not sending**
- Re-authenticate Gmail OAuth
- Check Gmail permissions
- Verify the "attachments" field is set to "Resume"

**Issue: Redis connection failed**
- Test the Redis connection in credentials
- Check firewall rules
- Verify host/port/password

**Issue: Telegram webhook not working**
- Deactivate and reactivate the workflow
- Check that the Telegram bot token is valid
- Make sure the bot is not blocked

## 🔐 Security Best Practices

- **Never share your credentials** - keep API keys private
- **Use environment variables** in n8n for sensitive data
- **Set up a Redis password** - don't use default settings
- **Limit OAuth scopes** - only grant necessary permissions
- **Rotate API keys** regularly
- **Monitor usage** - check for unexpected API calls

## 🎨 Customization Ideas

**Change the AI model** - in the PayloadForReply node, update:

```javascript
const MODEL = "gpt-5.2"; // Change to gpt-4, claude-3-opus, etc.
```

**Adjust email length** - modify the system prompt:

```
// From:
Email body: ~120–180 words unless INSIGHTS specify otherwise.
// To:
Email body: ~100–150 words for concise applications.
```

**Add more languages** - update the language detection logic in the system prompt to support more languages.

**Custom job filtering** - edit the system prompt to target specific roles:

```
// From:
Only pick ONE job offer to process — the one most clearly related to Data roles
// To:
Only pick ONE job offer to process — the one most clearly related to [YOUR FIELD]
```

**Add follow-up reminders** - add a "Wait" node after the email sends to schedule a reminder after 7 days.

## 📊 Workflow Structure

```
Telegram Input
  ↓
Switch (Route by type)
  ↓
├─ New Job Post
│    ↓
│  Send Chat Action (typing...)
│    ↓
│  PayloadForReply (Build AI request)
│    ↓
│  Generating Reply (Call OpenAI)
│    ↓
│  FormatAiReply (Parse JSON)
│    ↓
│  Store AI Reply (Redis cache)
│    ↓
│  SendConfirmation (Show preview)
│
└─ Callback (User clicked "Send")
     ↓
   GetValues (Fetch from Redis)
     ↓
   Format Response
     ↓
   Download Resume (from Drive)
     ↓
   ├─ Path A: Immediate Send
   │    ↓
   │  Send Confirmation Message
   │    ↓
   │  Edit Message (update status)
   │
   └─ Path B: Scheduled Send
        ↓
      Wait (10 seconds)
        ↓
      Download Resume Again
        ↓
      Schedule Email (Gmail)
        ↓
      Send Success Message
```

## 💡 Tips for Best Results

- **High-quality resume:** Upload a well-formatted PDF resume
- **Clear screenshots:** Take clear, readable job post screenshots
- **Use captions:** Add instructions via Telegram captions, e.g. "make it more casual" or "send to recruiter@company.com"
- **Review before sending:** Always read the draft before clicking send
- **Update your resume regularly:** Keep your Google Drive resume current
- **Test first:** Try a few test jobs before mass applying

## 🆘 Need Help?

- 📚 n8n Documentation
- 💬 n8n Community Forum
- 📺 n8n YouTube Channel
- 🤖 OpenAI Documentation
- 📱 Telegram Bot API Docs

## 📝 Checklist

Use this checklist to verify your setup:

- [ ] OpenAI resume file uploaded (got `file_id`)
- [ ] Google Drive resume uploaded (got file ID)
- [ ] Telegram bot created (got bot token)
- [ ] Redis instance created (got credentials)
- [ ] All n8n credentials created and tested
- [ ] PayloadForReply node updated with the OpenAI `file_id`
- [ ] Both Download Resume nodes updated with the Drive file ID
- [ ] All nodes have credentials assigned
- [ ] Workflow activated
- [ ] Test message sent successfully
- [ ] Test email received successfully

🎉 You're all set! Start applying to jobs in 10 seconds!

Made with ❤️ and n8n
by Blaine Holt
# WhatsApp Booking Flow | Consultation Scheduling

This n8n template automates appointment booking via WhatsApp Flows with real-time calendar availability, AI-powered intent classification, and CRM synchronization. It transforms manual booking conversations into a seamless self-service experience directly within WhatsApp.

## Who is this for?

- Service businesses wanting WhatsApp-based appointment booking
- Consultants and agencies offering scheduled calls or consultations
- Teams using Airtable for CRM and Google Calendar for scheduling
- Businesses looking to reduce booking friction with conversational commerce

## What problem does this workflow solve?

Appointment booking typically involves back-and-forth messaging to find available times, collect customer details, and confirm bookings. This workflow eliminates that friction by providing an interactive booking flow within WhatsApp that checks real-time calendar availability, collects customer information, and automatically syncs bookings to your CRM and calendar.

## How it works

1. **Webhook entry point:** A single webhook handles both GET (Meta verification) and POST (messages/flows) requests -- required because Meta allows only one webhook per app.
2. **Message routing:** Incoming requests are classified as either regular WhatsApp messages or encrypted Flow requests, then routed accordingly.
3. **WhatsApp Flow handling:** Flow requests are decrypted using RSA/AES-GCM encryption. The workflow handles multiple Flow actions:
   - INIT: Returns consultation types and date constraints
   - SERVICE_SELECTION: Processes service and customer details
   - DATE_TIME_SELECTION: Queries calendar availability for the selected date
   - CONFIRMATION: Displays the booking summary
   - COMPLETE_BOOKING: Finalizes the booking
4. **AI agent:** For regular text messages, an OpenAI-powered agent classifies user intent. When booking intent is detected, it triggers the consultation template with the WhatsApp Flow.
5. **Booking process:** On completion, the workflow:
   - Creates or updates the customer in Airtable
   - Creates a booking record with event details
   - Creates a Google Calendar event
   - Sends a WhatsApp confirmation message

## Good to know

- **Verify token:** The WHATSAPP_VERIFY_TOKEN environment variable is required for Meta webhook verification. Set it to any secure string and use the same value in the Meta Developer Portal.
- **Cloud vs self-hosted:** Cloud n8n instances are easier to configure since they have public URLs. Self-hosted instances require additional setup for public accessibility.
- **Hostinger/Docker setup:** For self-hosted instances, configure public webhook access in your docker-compose.yml or reverse proxy configuration.
- **Meta prerequisites:** You'll need a Facebook account, a Meta Developer App, a WhatsApp Business Account (linked to the Developer App), and a WhatsApp Business phone number.
- **Health checks:** Meta sends periodic ping requests to verify webhook availability; the workflow handles these automatically with a 200 response.
- **Single-webhook restriction:** Meta only allows one webhook URL per WhatsApp app, which is why all message types flow through a single endpoint with routing logic.
- **Encryption:** WhatsApp Flows use end-to-end encryption. The workflow handles RSA decryption (for the AES key exchange) and AES-128-GCM encryption/decryption for Flow data.
## Requirements

- Meta Business Account with WhatsApp Business API access
- WhatsApp Business App in the Meta Developer Portal
- n8n instance (cloud, or self-hosted with a public URL)
- OpenAI API key (for intent classification with GPT-4o)
- Airtable account with a base containing Customers, Services, and Bookings tables
- Google Calendar with OAuth credentials

## Environment Variables

| Variable | Description |
|----------|-------------|
| WHATSAPP_VERIFY_TOKEN | Webhook verification token (must match the value in the Meta Developer Portal) |
| WHATSAPP_PRIVATE_KEY | RSA private key for Flow encryption (PEM format) |
| WHATSAPP_PRIVATE_KEY_PASSPHRASE | Passphrase for the RSA private key |
| GOOGLE_CALENDAR_ID | Calendar ID for availability checks and event creation |

## Setup

1. **Meta Business setup**
   - Create a Meta Developer App at developers.facebook.com
   - Add the WhatsApp product to your app
   - Set up a WhatsApp Business Account
   - Generate a permanent access token
2. **Import the workflow**
   - Import personal-booking-whatsapp-flow.json into n8n
   - Replace the placeholder credential IDs with your actual credentials
3. **Configure credentials**
   - WhatsApp: HTTP Bearer Auth with your access token
   - OpenAI: API key for GPT-4o
   - Airtable: OAuth2 authentication
   - Google Calendar: OAuth2 authentication
4. **Set environment variables**
   - Configure all variables listed above in the n8n settings
5. **Configure the webhook in Meta**
   - Navigate to WhatsApp > Configuration in the Developer Portal
   - Set the webhook URL to your n8n webhook endpoint
   - Enter your verify token
   - Subscribe to the messages webhook field
6. **Create the WhatsApp Flow**
   - In WhatsApp Manager, create a new Flow
   - Use the JSON from whatsapp-flow.json as your Flow definition
   - Publish the Flow and note the Flow ID
7. **Create the message template**
   - Create a template with a Flow button component
   - Link it to your published Flow
   - Submit it for approval

## Airtable Schema

**Customers table:**
- customer_name (text)
- customer_email (email)
- phone_number (text)
- created_at (date)

**Services table:**
- service_name (text)
- service_key (text) - e.g., "30_min", "60_min"
- duration_minutes (number)

**Bookings table:**
- customer_email (text)
- service_type (linked to Services)
- event_date (date)
- event_time (text)
- booking_status (select: Pending, Confirmed, Cancelled)
- calendar_event_id (text)
- created_at (date)

## Customizing this workflow

- **Consultation types:** Modify the INIT handler Code node to add or change consultation options and durations.
- **Business hours:** Adjust the calendar availability logic in the date refresh handler to match your working hours.
- **AI agent prompts:** Customize the system prompt in the AI Agent node to match your business context and available services.
- **Messaging templates:** Create additional WhatsApp templates for different services (quotes, information requests) and add corresponding tools to the AI agent.
- **CRM fields:** Extend the Airtable schema and update the booking creation nodes to capture additional customer data.

Made by www.fenrirlabs.nl
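The calendar availability logic behind DATE_TIME_SELECTION can be sketched as slot generation against busy times. The 9:00-17:00 window, hourly slots, and busy-list format here are illustrative assumptions, not the template's actual handler:

```javascript
// Generate free hourly slots within business hours, skipping times
// already taken by calendar events. `busy` is a list like ['10:00'].
function availableSlots(busy, { open = 9, close = 17, slotMinutes = 60 } = {}) {
  const slots = [];
  for (let h = open; h + slotMinutes / 60 <= close; h += slotMinutes / 60) {
    const label = `${String(h).padStart(2, '0')}:00`;
    if (!busy.includes(label)) slots.push(label);
  }
  return slots;
}
```

The real workflow derives the busy list from Google Calendar events for the selected date before returning slot options to the Flow screen.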
by Incrementors
This workflow automates the complete blog publishing process. It removes manual work from content creation, image generation, category management, and WordPress publishing by using AI and n8n. It helps agencies, SEO teams, and content creators manage blogs at scale. Key Features Scheduled or manual blog publishing Automated topic research and content writing AI-generated featured and in-content images using Ideogram Dynamic WordPress category detection and creation Automatic media upload with SEO-friendly alt text Internal linking using sitemap data Google Sheets logging for published URLs Error notifications for failed executions What This Workflow Does Input Blog topics or keywords stored in Google Sheets Target WordPress site details Publishing rules and schedule Processing Triggers the workflow on a schedule or manual run Fetches blog posting data from Google Sheets Validates active projects or websites Performs topic and SEO research Writes long-form, SEO-optimized blog content Generates image prompts and creates images using Ideogram Uploads images to WordPress with alt text Detects or creates blog categories dynamically Publishes the blog post to WordPress Output Live published blog post URL Updated Google Sheet with publishing details Notification alerts if any step fails Setup Instructions Prerequisites n8n instance (cloud or self-hosted) WordPress site with REST API access Google Sheets access AI model credentials (Google Gemini, OpenAI, or DeepSeek) Ideogram API access Notification service (Discord or Slack) Step 1: Import the Workflow Download or copy the workflow JSON In n8n, go to Workflows → Import from file / JSON Import the workflow Step 2: Configure Credentials Set up the required credentials inside n8n's credential manager: Google Sheets OAuth**: For reading posting data and saving URLs WordPress API**: For publishing posts and uploading media AI Model**: Connect Google Gemini, OpenAI, or DeepSeek Ideogram API**: For AI image generation 
Discord/Slack Webhook**: For error notifications Important: No credentials are hardcoded. All must be connected via n8n's credential manager. Step 3: Configure Google Sheets Prepare a Google Sheet containing: Blog topic or keyword Target website or domain Publishing status fields Domain ID for tracking Update the Sheet ID inside the Get_Post_Data node after import. Step 4: Configure Website Access Update the PBN_Website_Access node with your WordPress site access endpoint or API. This node should return: Complete WordPress URL Basic authentication token Sitemap post URL Step 5: Configure Publishing & Schedule Adjust the Schedule Trigger if auto-publishing is required Modify publishing frequency or time zone Review WordPress post status (draft or publish) Step 6: Test & Activate Add one test row in Google Sheets Run the workflow manually Verify: Content creation Image generation WordPress publishing Sheet updates Activate the workflow Usage Guide Adding New Blog Posts Add a new row in the connected Google Sheet with the required blog topic and website details. The workflow will automatically process and publish the post on the next execution. Understanding the Output After execution, the workflow: Publishes a complete blog post on WordPress Attaches featured and in-content images Assigns the correct category Logs the live URL back to Google Sheets Workflow Node Breakdown Get_Post_Data Fetches blog posting details from Google Sheets based on the current day. It pulls keywords, landing pages, domain IDs, and posting websites. get_client_status Checks the client's project status from the project sheet. It verifies whether the client is active or inactive before proceeding further. This prevents publishing content for paused or stopped clients. PBN_Website_Access Fetches WordPress website access details such as site URL, authentication token, and sitemap URL. These details are required for publishing posts, uploading images, and managing categories. 
Do the Research on the Topic Performs deep SEO research on the target keyword. It analyzes search intent, content gaps, and audience needs. This ensures the generated content is informative, relevant, and SEO-optimized. sitemap_crawl (internal_linking) Crawls the website sitemap to collect internal URLs. These URLs are later used for internal linking inside the blog content. Internal links help improve SEO and site structure. write_content Uses AI to write an 800-1000 word SEO-optimized blog article based on research data. The content includes proper HTML formatting, internal links, and anchor keyword placement. extract_title_body Separates the H1 title from the blog body content for proper WordPress publishing format. classify_category Automatically determines the most suitable category for the blog post by analyzing the blog title and content context. This keeps the website's category structure clean and relevant. get_category & create_category Checks if the determined category exists in WordPress. If not, it creates a new category automatically. generate_image_prompt Analyzes the blog content and generates AI prompts for creating relevant images including thumbnail and in-content images. Thumbnail Image Generator & Blog Image Generator Generate high-quality images using Ideogram API based on AI-generated prompts. Images are created with proper resolution and rendering settings. Thumbnail Uploading & Blog Image Uploading Upload generated images to WordPress media library and retrieve media IDs for post attachment. Add Alt Text in Images Adds SEO-friendly alt text to uploaded images to improve accessibility and search engine optimization. Blog and Photo Merge Merges the generated images into the blog content at appropriate positions within the article. publish_blog Publishes the complete blog post to WordPress with title, content, category, featured image, and publish status. 
**save_live_url** — Saves the live published blog URL back into Google Sheets along with the keyword, website URL, and a timestamp for tracking and reporting.

**If Error Existed Then Get Notified** — Sends instant Discord or Slack notifications when any error occurs during workflow execution, so no failure goes unnoticed.

**Customization Options**
- Change blog length or tone in the content generation node
- Modify image style or resolution in the Ideogram nodes
- Add multi-site publishing using Switch nodes
- Replace the notification channel (Discord, Slack, or email)
- Extend the workflow to social media posting

**Troubleshooting**
- Blog not published: check WordPress credentials and REST API permissions.
- Images not generated: verify Ideogram API credentials and prompt formatting.
- Sheet not updating: ensure the correct Sheet ID and OAuth permissions.
- Workflow stopped: review execution logs and error notification messages.

**Use Cases**
- SEO blog automation for agencies
- Content publishing for niche websites
- Scalable blog management
- AI-assisted content operations
- Hands-free WordPress publishing

**Final Notes**

This workflow is designed to be reusable, scalable, and creator-friendly. It follows n8n best practices, avoids hardcoded credentials, and is suitable for public sharing as a workflow template. For questions or support, contact info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Roshan Ramani
**Who's it for**

This workflow is ideal for:
- Content creators who want to replicate successful LinkedIn strategies
- Social media managers monitoring competitor content performance
- Marketing teams analyzing trending topics in their industry
- Personal brands looking to create data-driven content
- Agencies managing multiple LinkedIn accounts

**What it does**

This workflow automates the entire LinkedIn content lifecycle: it scrapes viral posts from target accounts, analyzes engagement patterns, identifies trending topics, generates original AI-powered content based on those trends, creates accompanying images, and automatically publishes to your LinkedIn profile or company page.

**How it works**

*Phase 1: Data Collection (runs every 12 hours)*
1. A scheduler triggers the workflow twice daily
2. Fetches LinkedIn profile URLs from Google Sheets
3. Processes profiles in batches of 3 to respect API limits
4. Uses the Apify API to scrape recent posts from each profile
5. Adds 3-second delays between requests to avoid rate limiting
6. Filters for high-engagement posts (20+ likes, comments, or reposts)
7. Saves viral posts to Google Sheets with full metadata

*Phase 2: Content Generation (triggered by new data)*
1. Monitors Google Sheets for new viral posts every minute
2. Filters posts published within the last 3 days that haven't been analyzed
3. Aggregates trending content into a single dataset
4. Analyzes patterns with Google Gemini AI to identify common themes and topics, engagement triggers and hooks, successful content structures, and trending hashtags and formats
5. Generates an original LinkedIn post with proper formatting
6. Creates an AI image prompt optimized for minimal text
7. Generates a professional image using Google Imagen
8. Publishes the complete post to your LinkedIn account
9. Marks analyzed posts as complete to prevent duplication

**Setup steps**

1. Configure Google Sheets
   - Create a new Google Sheet with two tabs: "usernames & links" (add the LinkedIn profile URLs you want to monitor) and "scrape data" (leave empty; auto-populated by the workflow)
   - Connect your Google Sheets credentials in both nodes
   - Replace all instances of YOUR_GOOGLE_SHEET_ID with your actual sheet ID
   - Replace SHEET_GID values with your actual sheet GIDs
2. Set up the Apify API
   - Sign up for an Apify account and get an API token
   - Replace YOUR_APIFY_API_TOKEN in the "Scrape LinkedIn Posts API" node
   - Note: Apify has a free tier with limited requests
3. Configure Google Gemini credentials
   - Obtain Google PaLM API credentials
   - Add them to both the "Google Gemini Chat Model" and "Generate an image" nodes
4. Set up LinkedIn publishing
   - Connect your LinkedIn credentials in the "Publish to LinkedIn" node
   - If posting as an organization, replace YOUR_LINKEDIN_ORGANIZATION_ID with your company page ID
   - If posting as an individual, change the "postAs" parameter to "person"
5. Configure scheduling
   - Default schedule: every 12 hours
   - Adjust the "LinkedIn Content Automation Scheduler" trigger if needed
   - Consider your API rate limits when changing the frequency
6. Test the workflow
   - Manually trigger Phase 1 to scrape posts
   - Verify data appears in the Google Sheets "scrape data" tab
   - Wait for the Phase 2 trigger or activate it manually
   - Check that content is generated and published correctly
   - Verify posts are marked as analyzed in Google Sheets

**Requirements**
- Google Sheets API access (free)
- Google Sheets Trigger OAuth2 (free)
- Apify API token (free tier available; $49/month for more)
- Google PaLM/Gemini API key (pay-per-use pricing)
- LinkedIn OAuth credentials (free)

**How to customize**

*Adjust scraping targets:*
- Add more LinkedIn profile URLs to your Google Sheet
- Change the batch size in "Process Profiles in Batches" (default: 3)
- Modify the post limit per profile in the Apify API call (default: 1 post)

*Modify engagement filters:*
- Edit the "Filter High-Engagement Posts" node thresholds (default: 20+ likes OR 20+ comments OR 20+ reposts)
- Adjust based on your niche's typical engagement rates
- Add additional criteria such as views or impressions

*Customize the content analysis window:*
- Change "Filter Recent Posts (3 Days)" to analyze a different timeframe
- Options: 24 hours for fast-moving trends, 7 days for broader patterns
- Balance recency against data volume

*Refine AI content generation:*
- Edit the system prompt in the "LinkedIn Content Strategy AI" node
- Adjust content length, tone, or style preferences
- Add industry-specific guidelines, brand voice requirements, or a different hashtag strategy

*Customize image generation:*
- Edit the image prompt structure in the AI prompt
- Change the visual style, colors, or composition to match brand guidelines
- Modify dimensions or aspect ratios

*Change the posting schedule:*
- Adjust the "LinkedIn Content Automation Scheduler" frequency
- Consider optimal posting times for your audience and coordinate with other marketing activities

*Enhance data collection:*
- Increase posts per profile in the Apify settings
- Add more profile URLs to monitor and implement competitor tracking
- Track additional metrics such as impressions or click-through rates
*Add notifications:*
- Connect Slack/Email nodes after successful posts
- Set up alerts for high-performing content
- Create reports of analyzed trends
- Monitor API usage and errors
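The two filters described above (Phase 1's 20+ engagement threshold and Phase 2's 3-day recency window) can be sketched as plain Code-node logic. Field names like `likes`, `reposts`, and `postedAt` are assumptions; map them to your Apify scraper's actual output schema:

```javascript
// Illustrative sketch of the engagement and recency filters.
// Field names are assumptions about the Apify scraper's output.
const ENGAGEMENT_THRESHOLD = 20;
const WINDOW_DAYS = 3;

function isHighEngagement(post) {
  // Keep a post if ANY metric clears the threshold (20+ likes OR
  // comments OR reposts, matching the workflow's default).
  return (
    post.likes >= ENGAGEMENT_THRESHOLD ||
    post.comments >= ENGAGEMENT_THRESHOLD ||
    post.reposts >= ENGAGEMENT_THRESHOLD
  );
}

function isRecent(post, now = Date.now()) {
  // True if the post was published within the last WINDOW_DAYS days.
  const ageMs = now - new Date(post.postedAt).getTime();
  return ageMs >= 0 && ageMs <= WINDOW_DAYS * 24 * 60 * 60 * 1000;
}

// Example: keep only viral posts from the last 3 days.
const posts = [
  { likes: 45, comments: 3, reposts: 1, postedAt: new Date().toISOString() },
  { likes: 5,  comments: 2, reposts: 0, postedAt: new Date().toISOString() },
];
const viral = posts.filter(p => isHighEngagement(p) && isRecent(p));
console.log(viral.length); // 1
```

Tuning the two constants is the code-level equivalent of the "Modify engagement filters" and "Customize the content analysis window" steps.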
by khaled
**What Problem Does It Solve?**

Business owners, managers, and accountants waste valuable time manually entering daily expenses, supplier payments, and employee advances into Odoo. Getting quick balance reports usually requires logging into the ERP, navigating multiple menus, and generating complex reports. Managing post-dated checks often relies on manual tracking, leading to missed due dates.

This workflow solves these problems by:
- Allowing users to record financial transactions simply by sending a natural-language message (e.g., via Telegram or Botpress)
- Automatically fetching real-time account balances and supplier statements and returning them instantly in the chat
- Setting up automated calendar reminders for post-dated check due dates
- Handling the entire double-entry accounting process in the background without human intervention

**How to Configure It**

*Chat Platform Setup*
- Add the webhook URL from this workflow to your Telegram bot, Botpress, or preferred chat interface.

*Odoo Setup*
- Connect your Odoo credentials in n8n.
- Open the "Build [X] Entry" code nodes and replace the placeholder journal_id and currency_id with your actual Odoo system IDs.

*AI Setup*
- Add your OpenAI API key (or swap the node for Google Gemini/Anthropic).
- Open the "AI Financial Agent" node and update the # ACCOUNT MAPPING section with your specific Odoo Chart of Accounts codes.

*Calendar Setup (Optional)*
- Connect your Google Calendar credentials if you want the workflow to automatically schedule reminders for check due dates.

**How It Works**
- A webhook catches the new text message from your chat platform.
- An AI agent analyzes the Arabic natural language and extracts the intent, amount, date, check details, and specific account categories.
- Routing:
  - For expenses, payments, or advances: the workflow searches Odoo for the correct IDs, builds a balanced double-entry journal record, creates it, and posts it.
  - For post-dated checks: extracts the due date and creates a Google Calendar event before posting the entry to Odoo.
  - For balance inquiries: fetches the relevant ledger lines, calculates total debits/credits, and formats a clean Arabic text summary.
- A success confirmation or the requested financial report is instantly sent back to the user in the chat.

**Customization Ideas**
- Expand the AI prompt and routing switch to handle customer invoices or internal petty-cash transfers.
- Add an approval step (e.g., a Slack/Email button) before the workflow officially posts large transactions in Odoo.
- Change the AI prompt to support multiple languages or different regional dialects.
- Log a backup of all financial chat requests to Google Sheets or a Notion database for auditing.

**For more info**

Contact Me
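The "Build [X] Entry" code nodes assemble a balanced double-entry record before it is created and posted in Odoo. A minimal sketch of such a payload using Odoo's standard `account.move` shape; the journal and account IDs below are placeholders you must replace with your own, and the exact fields your Odoo version expects may differ:

```javascript
// Sketch of a balanced journal-entry payload for Odoo's account.move
// model. journal_id and account IDs are placeholders. Odoo rejects an
// entry unless total debits equal total credits across its lines.
function buildExpenseEntry({ amount, label, journalId, expenseAccountId, cashAccountId }) {
  return {
    move_type: 'entry',
    journal_id: journalId,
    line_ids: [
      // Odoo one2many command: [0, 0, values] creates a new line.
      [0, 0, { account_id: expenseAccountId, name: label, debit: amount, credit: 0 }],
      [0, 0, { account_id: cashAccountId,    name: label, debit: 0, credit: amount }],
    ],
  };
}

const entry = buildExpenseEntry({
  amount: 150.0,
  label: 'Office supplies',
  journalId: 1,          // replace with your journal ID
  expenseAccountId: 601, // replace with your expense account ID
  cashAccountId: 101,    // replace with your cash/bank account ID
});

// Sanity check: the entry balances (debits === credits).
const debits  = entry.line_ids.reduce((s, [, , l]) => s + l.debit, 0);
const credits = entry.line_ids.reduce((s, [, , l]) => s + l.credit, 0);
console.log(debits === credits); // true
```

In the workflow, the account IDs come from the Odoo search step and the amount/label from the AI agent's extraction; the payload is then passed to Odoo's create and post calls.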
by Yash Choudhary
**Problem**

🚨 It is difficult to manually track changing flight prices and quickly identify the best time to book a ticket. Many travelers miss deals or spend too much time monitoring fares for their specific routes and travel dates.

**Prerequisites**
- An active SerpAPI account (for flight search API access)
- A Gmail or other email service account (for email alerts)

**Who this helps**
- Frequent flyers wanting to book flights at the lowest price
- Budget travelers planning trips in advance
- Corporate travelers managing travel expenses
- Travel agencies monitoring deals for clients

**Step-by-step workflow** (takes 5-10 minutes to set up)
1. Set your preferred flight route and travel date
2. Choose the price alert threshold
3. Automatically monitor flight prices at your selected interval
4. Get notified by email when a price drop is detected

**Sample Query**

Input:
- Origin: "JFK" (New York)
- Destination: "SEA" (Seattle)
- Outbound Date: "2025-09-06"
- Price Threshold: $250
- Notification Email: your@email.com

Output: If a flight from JFK to SEA on 2025-09-06 drops to $250 or below, you'll receive an email notification: "Hi! The flight price to Seattle just dropped to $242. Book your ticket now!"
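The alert decision itself is a simple comparison: take the cheapest fare returned by the flight search and check it against your threshold. A sketch of that step (the `flights` shape here is an assumption; map it to the actual fields in SerpAPI's Google Flights response):

```javascript
// Sketch of the price-check step: find the cheapest returned fare and
// decide whether an alert email should be sent. The input shape is an
// assumption about the flight-search results, not SerpAPI's exact schema.
function shouldAlert(flights, threshold) {
  const cheapest = Math.min(...flights.map(f => f.price));
  return { alert: cheapest <= threshold, cheapest };
}

// Example: the JFK → SEA sample query with a $250 threshold.
const { alert, cheapest } = shouldAlert(
  [{ price: 310 }, { price: 242 }, { price: 275 }],
  250
);
console.log(alert, cheapest); // true 242
```

When `alert` is true, the workflow's email node fires with `cheapest` interpolated into the message (the "$242" in the sample notification above).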
by Yaron Been
This workflow provides automated access to the Settyan Flash V2.0.0 Beta.4 AI model through the Replicate API. It saves you time by eliminating the need to interact with the model manually and provides seamless integration of generation tasks into your n8n automation workflows.

**Overview**

The workflow handles the complete generation process with the Settyan Flash V2.0.0 Beta.4 model: API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation.

Model description: advanced AI model for automated processing and generation tasks.

**Key Capabilities**
- Specialized AI model with unique capabilities
- Advanced processing and generation features
- Custom AI-powered automation tools

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Settyan/flash-v2.0.0-beta.4 AI model
- **Settyan Flash V2.0.0 Beta.4**: The core AI model for generation tasks
- **Built-in error handling**: Automatic retry logic and comprehensive error management

**How to Install**
1. Import the workflow: download the .json file and import it into your n8n instance
2. Configure the Replicate API: add your Replicate API token to the 'Set API Token' node
3. Customize parameters: adjust the model parameters in the 'Set Other Parameters' node
4. Test the workflow: run it with your desired inputs
5. Integrate: connect this workflow to your existing automation pipelines

**Use Cases**
- Specialized processing: handle specific AI tasks and workflows
- Custom automation: implement unique business logic and processing
- Data processing: transform and analyze various types of data
- AI integration: add AI capabilities to existing systems and workflows

**Connect with Me**
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Replicate API: https://replicate.com (sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
by Yaron Been
**Description**

This workflow monitors Bitcoin prices across multiple exchanges and sends you alerts when significant price drops occur, helping crypto traders and investors identify buying opportunities without constantly watching the markets. It uses Bright Data to scrape real-time price data and can be configured to notify you through various channels.

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping cryptocurrency exchange data without getting blocked
- **Notification services**: Email, SMS, Telegram, or other messaging platforms

**How to Install**
1. Import the workflow: download the .json file and import it into your n8n instance
2. Configure Bright Data: add your Bright Data credentials to the Bright Data node
3. Set up notifications: configure your preferred notification method
4. Customize: set your price thresholds, monitoring frequency, and which exchanges to track

**Use Cases**
- Crypto traders: get notified of buying opportunities during price dips
- Investors: monitor your crypto investments and make informed decisions
- Financial analysts: track Bitcoin price movements for market analysis

**Connect with Me**
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

#n8n #automation #bitcoin #cryptocurrency #brightdata #pricealerts #cryptotrading #bitcoinalerts #cryptoalerts #cryptomonitoring #n8nworkflow #workflow #nocode #cryptoinvesting #bitcoinprice #cryptomarket #tradingalerts #cryptotools #bitcointrading #pricemonitoring #cryptoautomation #bitcoininvestment #cryptotracker #marketalerts #tradingopportunities #cryptoprices
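A "significant price drop" boils down to a percentage comparison between the current price and a recent reference price. A minimal sketch of such a detector; the 5% default and the choice of a 24h-ago reference are illustrative assumptions, not the workflow's fixed values:

```javascript
// Illustrative drop detector: alert when the current price has fallen
// at least `thresholdPct` percent below a reference price (e.g. the
// price 24 hours ago, as scraped by Bright Data).
function priceDrop(referencePrice, currentPrice, thresholdPct = 5) {
  const dropPct = ((referencePrice - currentPrice) / referencePrice) * 100;
  return {
    dropPct: Math.round(dropPct * 100) / 100, // rounded to 2 decimals
    alert: dropPct >= thresholdPct,
  };
}

// Example: BTC falls from $65,000 to $60,450 — a 7% drop.
const { dropPct, alert } = priceDrop(65000, 60450);
console.log(dropPct, alert); // 7 true
```

In the workflow, `alert === true` would route the item to the configured notification node (email, SMS, or Telegram).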
by JustinLee
This workflow demonstrates a simple Retrieval-Augmented Generation (RAG) pipeline in n8n, split into two main sections:

🔹 **Part 1: Load Data into Vector Store**
- Reads files from disk (or Google Drive)
- Splits content into manageable chunks using a recursive text splitter
- Generates embeddings using the Cohere Embedding API
- Stores the vectors in an In-Memory Vector Store (for simplicity; can be replaced with Pinecone, Qdrant, etc.)

🔹 **Part 2: Chat with the Vector Store**
- Takes user input from a chat UI or trigger node
- Embeds the query using the same Cohere embedding model
- Retrieves similar chunks from the vector store via similarity search
- Uses a Groq-hosted LLM to generate a final answer based on the retrieved context

🛠️ **Technologies Used**
- 📦 Cohere Embedding API
- ⚡ Groq LLM for fast inference
- 🧠 n8n for orchestrating and visualizing the flow
- 🧲 In-Memory Vector Store (for prototyping)

🧪 **Usage**
1. Upload or point to your source documents
2. Embed them and populate the vector store
3. Ask questions through the chat trigger node
4. Receive context-aware responses based on the retrieved content
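The similarity search at the heart of Part 2 reduces to cosine similarity between the query embedding and each stored chunk embedding. A minimal sketch of an in-memory store, with toy 3-dimensional vectors standing in for real Cohere embeddings (which have on the order of a thousand dimensions):

```javascript
// Minimal in-memory vector search: rank stored chunks by cosine
// similarity to the query embedding and return the top k. Toy 3-d
// vectors are used for illustration in place of real embeddings.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na  += a[i] * a[i];
    nb  += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(store, queryEmbedding, k = 2) {
  return store
    .map(item => ({ ...item, score: cosine(item.embedding, queryEmbedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Example: three stored chunks, one query vector.
const store = [
  { text: 'n8n is a workflow tool', embedding: [0.9, 0.1, 0.0] },
  { text: 'Groq serves fast LLMs',  embedding: [0.1, 0.9, 0.0] },
  { text: 'RAG retrieves context',  embedding: [0.7, 0.3, 0.1] },
];
const results = topK(store, [1, 0, 0]);
console.log(results[0].text); // "n8n is a workflow tool"
```

The top-k chunk texts are what gets concatenated into the Groq LLM's prompt as context, which is why the query must be embedded with the same Cohere model as the documents.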