by Hardikkumar
This workflow automates the entire process of creating SEO-optimized meta titles and descriptions. It analyzes your webpage, spies on top-ranking competitors for the same keywords, and then uses a multi-step AI process to generate compelling, length-constrained meta tags.

## 🤖 How It Works

This workflow operates in a three-phase process for each URL you provide:

**Phase 1: Self-Analysis.** When you add a URL to a Google Sheet with the status "New", the workflow scrapes your page's content. The first AI then performs a deep analysis to identify the page's primary keyword, semantic keyword cluster, search intent, and target audience.

**Phase 2: Competitor Intelligence.** The workflow takes your primary keyword and performs a live Google search. A custom code block intelligently filters the search results to identify true competitors (a sketch of such a filter appears at the end of this description). A second AI analyzes their meta titles and descriptions to find common patterns and successful strategies.

**Phase 3: Master Generation & Update.** The final AI synthesizes all gathered intelligence (your page's data and the competitors' winning patterns) to generate a new, optimized meta title and description. It then writes this new data back to your Google Sheet and updates the status to "Generated".

## ⚙️ Setup Instructions

You should be able to set up this workflow in about 10-15 minutes ⏱️.

### 🔑 Prerequisites

You will need the following accounts and API keys:

- A Google Account with access to Google Sheets.
- A Google AI / Gemini API key.
- A SerpApi key for Google search data.
- A ScrapingDog API key for reliable website scraping.

### 🛠️ Configuration

1. **Google Sheet Setup:** Create a new Google Sheet. The workflow requires the following columns: URL, Status, Current Meta Title, Current Meta Description, Generated Meta Title, Generated Meta Description, and Ranking Factor.
2. **Add Credentials:**
   - **Google Sheets Nodes:** Connect your Google account credentials to the Google Sheets Trigger & Google Sheets nodes.
   - **Google Gemini Nodes:** Add your Google Gemini API key to the credentials for all three Google Gemini Chat Model nodes.
   - **Scrape Website Node:** In this HTTP Request node, go to Query Parameters and replace <your-api-key> with your ScrapingDog API key.
   - **Google SERP Node:** In this HTTP Request node, go to Query Parameters and replace <your-api-key> with your SerpApi API key.
3. **Configure Google Sheets Nodes:**
   - Copy the Document ID from your Google Sheet's URL.
   - Paste this ID into the "Document ID" field in the following nodes: Google Sheets Trigger, Get row(s) in sheet1, and Update row in sheet.
   - In each of those nodes, select the correct sheet name from the "Sheet Name" dropdown.

### ✅ Activate Workflow

Save and activate the workflow. To run it, simply add a new row to your Google Sheet containing the URL you want to process and set the "Status" column to New.
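As a rough illustration of the competitor-filtering step, a minimal n8n Code node might drop your own domain and thin results before passing competitors on for pattern analysis. The field names (organic_results, link, title, snippet) follow SerpApi's typical response shape and myDomain is a placeholder, so treat this as a hypothetical sketch, not the exact code shipped in the workflow.

```javascript
// Hypothetical n8n Code node: filter SERP results down to true competitors.
// Assumes the previous node returned a SerpApi-style payload with an
// `organic_results` array; adjust field names to your actual response.
const myDomain = 'example.com'; // placeholder: your own site's domain

const results = $input.first().json.organic_results || [];

const competitors = results
  // Drop results pointing back at our own site.
  .filter(r => r.link && !r.link.includes(myDomain))
  // Keep only entries that actually have a title and snippet to analyze.
  .filter(r => r.title && r.snippet)
  // Keep the top competitors for the meta-pattern analysis.
  .slice(0, 5)
  .map(r => ({ json: { title: r.title, link: r.link, snippet: r.snippet } }));

return competitors;
```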
by Joseph LePage
# Empower Your AI Chatbot with Long-Term Memory and Dynamic Tool Routing

This n8n workflow equips your AI agent with long-term memory and a dynamic tools router, enabling it to provide intelligent, context-aware responses while managing tasks across multiple tools. By combining persistent memory and modular task routing, this workflow makes your AI smarter, more efficient, and highly adaptable.

## 👥 Who Is This For?

- **AI Developers & Automation Enthusiasts:** Integrate advanced AI features like long-term memory and task routing without coding expertise.
- **Businesses & Teams:** Automate tasks while maintaining personalized, context-aware interactions.
- **Customer Support Teams:** Improve user experience with chatbots that remember past interactions.
- **Marketers & Content Creators:** Streamline communication across platforms like Gmail and Telegram.
- **AI Researchers:** Experiment with persistent memory and multi-tool integration.

## 🚀 What Problem Does This Solve?

This workflow simplifies the creation of intelligent AI systems that retain memory, manage tasks dynamically, and automate notifications across tools like Gmail and Telegram, saving time and improving efficiency.

## 🛠️ What This Workflow Does

- **Save & Retrieve Memories:** Uses Google Docs for long-term storage to recall past interactions or user preferences (see the sketch after this description).
- **Dynamic Task Routing:** Routes tasks to the right tools (e.g., saving/retrieving memories or sending notifications).
- **AI-Powered Context Understanding:** Combines OpenAI GPT-based short-term memory with long-term memory for smarter responses.
- **Multi-Channel Notifications:** Sends updates via Gmail or Telegram.

## 🔧 Setup

1. **API Credentials:** Connect to OpenAI (AI processing), Google Docs (memory storage), and Gmail/Telegram (notifications).
2. **Customize Parameters:** Adjust the AI agent's system message for your use case, and define task-routing rules in the tools router node.
3. **Test & Deploy:** Verify memory saving/retrieval, task routing, and notification delivery.

## 💡 How to Customize

- Modify the system message in the OpenAI node to tailor your agent's behavior.
- Add or adjust routing rules for additional tools.
- Update notification settings to match your communication preferences.
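To make the memory mechanics concrete, here is a minimal, hypothetical sketch of how a Code node might format a memory entry before a Google Docs node appends it to the long-term memory document. The entry structure (timestamp plus summary line) is an assumption for illustration, not the exact format used in the template.

```javascript
// Hypothetical n8n Code node: format a memory entry before appending it
// to the Google Docs long-term memory document.
const memory = $input.first().json;

// Timestamped, single-block entry so later retrieval can parse entries apart.
const entry = [
  `--- Memory saved ${new Date().toISOString()} ---`,
  `User: ${memory.user ?? 'unknown'}`,
  `Summary: ${memory.summary ?? ''}`,
].join('\n');

// The Google Docs node downstream can insert `text` at the end of the doc.
return [{ json: { text: entry } }];
```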
by Jitesh Dugar
## Overview

Advanced AI-powered stock analysis workflow that combines multi-timeframe technical analysis with real-time news sentiment to generate actionable BUY/SELL/HOLD recommendations. Uses sophisticated algorithms to process price data, news sentiment, and market context for informed trading decisions.

## Core Features

### Multi-Timeframe Technical Analysis

- **4-Hour Charts** - Intraday trend analysis and entry timing
- **Daily Charts** - Primary trend identification and key levels
- **Weekly Charts** - Long-term context and major trend direction
- **Moving Average Analysis** - 5, 10, and 20-period trend indicators
- **Support/Resistance Levels** - Dynamic price level identification
- **Volume Analysis** - Trading activity and momentum confirmation

### AI-Powered News Sentiment Analysis

- **Real-Time News Processing** - Latest market-moving headlines
- **Sentiment Scoring** - Numerical sentiment rating (-1 to +1 scale)
- **Impact Assessment** - News relevance to stock performance
- **Multi-Source Analysis** - Comprehensive news coverage evaluation
- **Context-Aware Processing** - Financial market-specific sentiment analysis

### Intelligent Recommendation Engine

- **Professional Trading Logic** - Multi-timeframe alignment analysis
- **Risk/Reward Calculations** - Minimum 1:2 ratio requirements (illustrated in the sketch after this description)
- **Entry/Exit Price Targets** - Specific actionable price levels
- **Stop-Loss Recommendations** - Risk management guidelines
- **Confidence Scoring** - Recommendation strength assessment

## Technical Capabilities

### Data Sources & APIs

- **TwelveData API** - Professional-grade price and volume data
- **NewsAPI Integration** - Comprehensive news coverage
- **Perplexity AI** - Additional sentiment context and analysis
- **Chart-Img API** - Visual chart generation for analysis
- **Real-Time Processing** - Live market data integration

### AI Models & Analysis

- **GPT-4 Integration** - Advanced natural language processing
- **Custom Sentiment Engine** - Financial market-tuned sentiment analysis
- **Multi-Model Approach** - Cross-validation of recommendations
- **Algorithmic Trading Logic** - Professional-grade decision frameworks

### Visual Analysis Tools

- **Interactive Charts** - TradingView-style chart generation
- **Technical Indicators** - Visual representation of analysis
- **Dark Theme Support** - Professional trading interface
- **Multiple Timeframes** - Comprehensive visual analysis

## Use Cases & Applications

### Individual Traders

- **Day Trading Signals** - Short-term entry/exit recommendations
- **Swing Trading Analysis** - Multi-day position guidance
- **Risk Management** - Stop-loss and position sizing advice
- **Market Timing** - Optimal entry point identification

### Investment Research

- **Due Diligence** - Comprehensive stock analysis
- **Sentiment Monitoring** - News impact assessment
- **Technical Screening** - Multi-criteria stock evaluation
- **Portfolio Optimization** - Individual stock recommendations

### Automated Trading Systems

- **Signal Generation** - Systematic buy/sell/hold alerts
- **Risk Controls** - Automated stop-loss calculations
- **Multi-Asset Analysis** - Scalable across stock universe
- **Backtesting Support** - Historical recommendation validation

### Financial Advisors & Analysts

- **Client Reporting** - Professional analysis documentation
- **Research Automation** - Streamlined analysis workflow
- **Decision Support** - Data-driven recommendation framework
- **Market Commentary** - AI-generated insights and rationale

## Key Benefits

### Professional-Grade Analysis

- **Institutional Quality** - Bank-level analytical frameworks
- **Multi-Dimensional** - Technical + fundamental + sentiment analysis
- **Real-Time Processing** - Live market data integration
- **Objective Decision Making** - Removes emotional bias from analysis

### Time Efficiency

- **Instant Analysis** - Seconds vs hours of manual research
- **Automated Processing** - Continuous market monitoring
- **Scalable Operations** - Analyze multiple stocks simultaneously
- **24/7 Availability** - Round-the-clock market analysis

### Risk Management

- **Built-in Stop Losses** - Automatic risk level calculation
- **Position Sizing** - Risk-appropriate recommendation sizing
- **Multi-Timeframe Validation** - Reduces false signals
- **Conservative Approach** - Defaults to HOLD when uncertain

## Setup Requirements

### API Keys Needed

1. TwelveData API - Free tier available at twelvedata.com
2. NewsAPI Key - Free tier available at newsapi.org
3. OpenAI API - For GPT-4 analysis capabilities
4. Perplexity API - Additional sentiment analysis
5. Chart-Img API - Optional chart visualization (chart-img.com)

### Configuration Steps

1. **API Integration** - Add your API keys to the respective nodes
2. **Symbol Format** - Supports company names or stock symbols
3. **Risk Parameters** - Customize stop-loss and target calculations
4. **Notification Setup** - Configure alert delivery methods
5. **Testing & Validation** - Verify API connections and data flow

## Advanced Features

### Natural Language Processing

- **Company Name Recognition** - Automatic symbol conversion
- **Context Understanding** - Market-aware news interpretation
- **Multi-Language Support** - Global news source analysis
- **Entity Extraction** - Key information identification

### Error Handling & Reliability

- **API Failure Recovery** - Graceful degradation strategies
- **Data Validation** - Input/output quality checks
- **Rate Limit Management** - Automatic throttling controls
- **Backup Data Sources** - Redundant information feeds

### Customization Options

- **Timeframe Selection** - Adjustable analysis periods
- **Risk Tolerance** - Configurable risk/reward ratios
- **Sentiment Weighting** - Balance technical vs fundamental analysis
- **Alert Thresholds** - Custom trigger conditions

## Important Disclaimers

This tool provides educational and informational analysis only. All trading decisions should:

- Consider your personal risk tolerance and financial situation
- Be validated with additional research and professional advice
- Account for market volatility and potential losses
- Follow proper risk management principles

## Performance Optimization

### Speed Enhancements

- **Parallel Processing** - Simultaneous data retrieval
- **Caching Strategies** - Reduced API call frequency
- **Efficient Algorithms** - Optimized calculation methods
- **Memory Management** - Scalable resource usage

### Accuracy Improvements

- **Multi-Source Validation** - Cross-reference data points
- **Historical Backtesting** - Performance validation
- **Continuous Learning** - Algorithm refinement
- **Market Adaptation** - Evolving analysis criteria

Transform your investment research with AI-powered analysis that combines the speed of automation with the depth of professional-grade financial analysis.
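As a rough sketch of the technical logic described above, the snippet below computes simple moving averages and applies the minimum 1:2 risk/reward gate. It is an illustrative assumption of how such rules can be expressed, not the workflow's actual implementation; the closes, entry, stop, and target values are placeholder inputs.

```javascript
// Illustrative sketch: simple moving average and a 1:2 risk/reward gate.
// All inputs are placeholders; the workflow's real logic lives in its AI
// prompts and nodes and may differ.

/** Average of the last `period` closing prices. */
function sma(closes, period) {
  if (closes.length < period) return null;
  const window = closes.slice(-period);
  return window.reduce((sum, c) => sum + c, 0) / period;
}

/** Require reward to be at least twice the risk, per the 1:2 minimum. */
function meetsRiskReward(entry, stop, target) {
  const risk = entry - stop;
  const reward = target - entry;
  return risk > 0 && reward / risk >= 2;
}

// Example: 20 closes trending upward, then a proposed long trade.
const closes = Array.from({ length: 20 }, (_, i) => 100 + i * 0.5);
console.log(sma(closes, 5), sma(closes, 10), sma(closes, 20));
console.log(meetsRiskReward(110, 107, 117)); // risk 3, reward 7 -> true
```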
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

This advanced automation builds a fully autonomous SEO blog writer using n8n, Scrapeless, LLMs, and the Pinecone vector database. It's powered by a Retrieval-Augmented Generation (RAG) system that collects high-performing blog content, stores it in a vector store, and then generates new blog posts based on that knowledge, endlessly.

### Part 1: Build a Knowledge Base from Popular Blogs

- **Scrape existing articles** from a well-established writer (in this case, Mark Manson) using the Scrapeless node.
- **Extract content from blog pages** and store it in **Pinecone**, a powerful vector database that supports similarity search.
- Use **Gemini Embedding 001** or any other supported embedding model to encode blog content into vectors.
- **Result:** You'll have a searchable vector store of expert-level content, ready to be used for content generation and intelligent search.

### Part 2: SERP Analysis & AI Blog Generation

- Use Scrapeless' SERP node to fetch search results based on your keyword and search intent.
- Send the results to an LLM (like Gemini, OpenRouter, or OpenAI) to generate a keyword analysis report in Markdown, then convert it to HTML.
- Extract long-tail keywords, search intent insights, and content angles from this report.
- Feed everything into another LLM with access to your Pinecone-stored knowledge base, and generate a fully SEO-optimized blog post.

## Set up steps

### Prerequisites

- Scrapeless API key
- Pinecone account and index setup
- An embedding model (Gemini, OpenAI, etc.)
- n8n instance with the community node n8n-nodes-scrapeless installed

### Credential Configuration

- Add your Scrapeless and Pinecone credentials in n8n under the "Credentials" tab
- Choose embedding dimensions according to the model you use (e.g., 768 for Gemini Embedding 001); a sketch of the embed-and-store step appears at the end of this description

## Key Highlights

- **Clones a real content creator:** Replicates knowledge and writing style from top-performing blog authors.
- **Auto-scrapes hundreds of blog posts** without being blocked.
- **Stores expert content** in a vector DB to build a reusable knowledge base.
- **Performs real-time SERP analysis** using Scrapeless to fetch and analyze search data.
- **Generates SEO blog drafts** using RAG with detailed keyword intelligence.
- **Output includes:** blog title, HTML summary report, long-tail keywords, and AI-written article body.

## RAG + SEO: The Future of Content Creation

This template combines:

- **AI reasoning** from large language models
- **Reliable data scraping** from Scrapeless
- **Scalable storage** via the Pinecone vector DB
- **Flexible orchestration** using n8n nodes

This is not just an automation; it's a full-stack SEO content machine that enables you to:

- Build a domain-specific knowledge base
- Run intelligent keyword research
- Generate traffic-ready content on autopilot

## 💡 Use Cases

- SaaS content teams cloning competitor success
- Affiliate marketers scaling high-traffic blog production
- Agencies offering automated SEO content services
- AI researchers building personal knowledge bots
- Writers automating first-draft generation with real-world tone
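For readers who want to see what the embed-and-store step amounts to outside n8n, here is a minimal standalone sketch using the official Pinecone and Google Generative AI JavaScript clients. The index name, record ID, and metadata fields are placeholder assumptions, and the n8n nodes handle all of this for you; this only illustrates the 768-dimension embedding flow.

```javascript
// Minimal sketch of the embed-and-store step, outside n8n, using the
// official SDKs. Index name and record fields are placeholder assumptions.
import { GoogleGenerativeAI } from "@google/generative-ai";
import { Pinecone } from "@pinecone-database/pinecone";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });

async function storeBlogPost(id, text, url) {
  // Gemini Embedding 001 returns a 768-dimensional vector, so the Pinecone
  // index must be created with dimension 768.
  const model = genAI.getGenerativeModel({ model: "embedding-001" });
  const { embedding } = await model.embedContent(text);

  const index = pc.index("blog-knowledge"); // placeholder index name
  await index.upsert([
    { id, values: embedding.values, metadata: { url } },
  ]);
}

await storeBlogPost("post-1", "Full text of a scraped blog post...", "https://example.com/post-1");
```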
by Angel Menendez
# n8n Workflow: Automate SIEM Alert Enrichment with MITRE ATT&CK & Qdrant

## Who is this for?

This workflow is ideal for:

- **Cybersecurity teams & SOC analysts** who want to automate **SIEM alert enrichment**.
- **IT security professionals** looking to integrate **MITRE ATT&CK intelligence** into their ticketing system.
- **Organizations using Zendesk for security incidents** who need enhanced **contextual threat data**.
- **Anyone using n8n and Qdrant** to build **AI-powered security workflows**.

## What problem does this workflow solve?

Security teams receive large volumes of raw SIEM alerts that lack actionable context. Investigating every alert manually is time-consuming and can lead to delayed response times. This workflow solves this problem by:

✔ Automatically enriching SIEM alerts with MITRE ATT&CK TTPs.
✔ Tagging & classifying alerts based on known attack techniques.
✔ Providing remediation steps to guide the response team.
✔ Enhancing security tickets in Zendesk with relevant threat intelligence.

## What this workflow does

1️⃣ Ingests SIEM alerts (via chatbot or a ticketing system like Zendesk).
2️⃣ Queries a Qdrant vector store containing MITRE ATT&CK techniques.
3️⃣ Extracts relevant TTPs (Tactics, Techniques, & Procedures) from the alert.
4️⃣ Generates remediation steps using AI-powered enrichment.
5️⃣ Updates Zendesk tickets with threat intelligence & recommended actions.
6️⃣ Provides structured alert data for further automation or reporting.

## Setup Guide

### Prerequisites

- **n8n instance** (Cloud or Self-hosted).
- **Qdrant vector store** with MITRE ATT&CK data embedded.
- **OpenAI API key** (for AI-based threat processing).
- **Zendesk account** (for ticket enrichment, if applicable).

### Resources

- Clean MITRE Data Python Script
- Cleaned MITRE Data
- Full MITRE Data

### Steps to Set Up

1️⃣ **Embed MITRE ATT&CK data into Qdrant.** This workflow pulls MITRE ATT&CK data from Google Drive and loads it into Qdrant. The data is vectorized using OpenAI embeddings for fast retrieval (a sketch of such a query appears at the end of this description).
2️⃣ **Deploy the n8n Chatbot.** The chatbot listens for SIEM alerts and sends them to the AI processing pipeline. Alerts are analyzed using an AI agent trained on MITRE ATT&CK.
3️⃣ **Enrich Zendesk Tickets.** The workflow extracts MITRE ATT&CK techniques from alerts and updates Zendesk tickets with contextual threat intelligence. The remediation steps are included as internal notes for SOC teams.

## How to Customize This Workflow

🔧 **Modify the chatbot trigger:** Adapt the chatbot node to receive alerts from Slack, Microsoft Teams, or any other tool.
🔧 **Change the SIEM input source:** Connect your workflow to Splunk, Elastic SIEM, or Chronicle Security.
🔧 **Customize remediation steps:** Use a custom AI model to tailor remediation responses based on organization-specific security policies.
🔧 **Extend ticketing integration:** Modify the Zendesk node to also work with Jira, ServiceNow, or another ITSM platform.

## Why This Workflow is Powerful

✅ **Saves time:** Automates alert triage & classification.
✅ **Improves security posture:** Helps SOC teams act faster on threats.
✅ **Leverages AI & vector search:** Uses LLM-powered enrichment for real-time context.
✅ **Works across platforms:** Supports n8n Cloud, Self-hosted, and Qdrant.

## 🚀 Get Started Now!

📖 Watch the Setup Video
💬 Have Questions? Join the Discussion in the YouTube Comments!
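To illustrate step 2️⃣, here is a rough standalone sketch of querying a Qdrant collection of embedded MITRE ATT&CK techniques with an alert's text. The collection name and payload fields are assumptions made for illustration; the workflow itself does this through n8n's Qdrant vector store node.

```javascript
// Illustrative sketch: embed an alert and search a Qdrant collection of
// MITRE ATT&CK techniques. Collection/payload names are assumptions.
import OpenAI from "openai";
import { QdrantClient } from "@qdrant/js-client-rest";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const qdrant = new QdrantClient({ url: "http://localhost:6333" });

async function matchTechniques(alertText) {
  // Embed the raw SIEM alert with the same model used to index the data.
  const resp = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumption: match your index's model
    input: alertText,
  });

  // Return the closest ATT&CK techniques with their stored payloads.
  return qdrant.search("mitre-attack", {
    vector: resp.data[0].embedding,
    limit: 3,
    with_payload: true,
  });
}

const hits = await matchTechniques("Multiple failed logins followed by success from new IP");
for (const hit of hits) console.log(hit.score, hit.payload);
```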
by Huzaifa Tahir
## 🎬 What it does

This workflow creates an engaging YouTube Short with a single click, from script to voiceover, to visuals and background music. It combines several AI tools to automate content creation and final video assembly.

## ⚙️ How it works

1. Accepts an input prompt or topic
2. Generates a script using GPT
3. Converts the script to a voiceover using ElevenLabs
4. Generates b-roll style images via Leonardo.Ai
5. Matches background music
6. Assembles a vertical 1080×1920 MP4 video using a JSON render config (a hypothetical example follows below)
7. Optionally uploads to YouTube or saves to Cloudinary

## 🧰 Setup steps

Add your credentials:

- Leonardo API (image generation)
- ElevenLabs (voiceover)
- Cloudinary (upload destination)
- Any GPT-based text generator

Then:

- Drop your audio/music file in the right node
- Replace API expressions with your own credentials

> 🟨 Full step-by-step instructions are in sticky notes inside the workflow.
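The template does not publish its render schema here, so the following is a purely hypothetical sketch of what a vertical-video render config might look like, expressed as a JavaScript object. Every field name (output, tracks, etc.) is invented for illustration; consult the sticky notes in the workflow for the real structure your render service expects.

```javascript
// Hypothetical render config for a 1080x1920 vertical Short. Every field
// name here is an illustrative assumption, not the template's real schema.
const renderConfig = {
  output: { format: "mp4", width: 1080, height: 1920, fps: 30 },
  tracks: [
    // B-roll images shown back to back, each for a few seconds.
    { type: "image", src: "https://example.com/broll-1.png", start: 0, duration: 4 },
    { type: "image", src: "https://example.com/broll-2.png", start: 4, duration: 4 },
    // Voiceover and music layered under the visuals.
    { type: "audio", src: "https://example.com/voiceover.mp3", start: 0 },
    { type: "audio", src: "https://example.com/music.mp3", start: 0, volume: 0.2 },
  ],
};

console.log(JSON.stringify(renderConfig, null, 2));
```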
by David Ashby
Complete MCP server exposing 8 Bulk WHOIS API operations to AI agents.

## ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. **Credentials:** Add Bulk WHOIS API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works

This workflow converts the Bulk WHOIS API into an MCP-compatible interface for AI agents.

• **MCP Trigger:** Serves as your server endpoint for AI agent requests
• **HTTP Request Nodes:** Handle API calls to http://localhost:5000
• **AI Expressions:** Automatically populate parameters via $fromAI() placeholders (see the example expression below)
• **Native Integration:** Returns responses directly to the AI agent

## 📋 Available Operations (8 total)

🔧 **Batch (4 endpoints)**

• GET /batch: Get your batches
• POST /batch: Create a batch. The batch is then processed until all provided items have been completed; it can be fetched at any time to get its current status, optionally with results.
• DELETE /batch/{id}: Delete a batch
• GET /batch/{id}: Get a batch

🔧 **Db (1 endpoint)**

• GET /db: Query domain database

🔧 **Domains (3 endpoints)**

• GET /domains/{domain}/check: Check domain availability
• GET /domains/{domain}/rank: Check domain rank (authority)
• GET /domains/{domain}/whois: WHOIS query for a domain

## 🤖 AI Integration

**Parameter Handling:** AI agents automatically provide values for:

• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

**Response Format:** Native Bulk WHOIS API responses with full data structure
**Error Handling:** Built-in n8n HTTP request error management

## 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:

• **Claude Desktop:** Add MCP server URL to configuration
• **Cursor:** Add MCP server SSE URL to configuration
• **Custom AI Apps:** Use MCP URL as tool endpoint
• **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits

• **Zero Setup:** No parameter mapping or configuration needed
• **AI-Ready:** Built-in $fromAI() expressions for all parameters
• **Production Ready:** Native n8n HTTP request handling and logging
• **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
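For example, the WHOIS endpoint's URL in an HTTP Request node can be populated with an n8n $fromAI() expression like the one below, so the calling agent supplies the domain at run time. The exact node configuration in this template may differ slightly; treat this as a representative sketch.

```
http://localhost:5000/domains/{{ $fromAI('domain', 'The domain name to look up, e.g. example.com', 'string') }}/whois
```

The description string matters: it is what the AI agent reads when deciding what value to pass for the parameter.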
by David Ashby
# 🛠️ SIGNL4 Tool MCP Server

Complete MCP server exposing all SIGNL4 Tool operations to AI agents. Zero configuration needed - all 2 operations pre-built.

## ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

## 🔧 How it Works

• **MCP Trigger:** Serves as your server endpoint for AI agent requests
• **Tool Nodes:** Pre-configured for every SIGNL4 Tool operation
• **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
• **Native Integration:** Uses the official n8n SIGNL4 Tool node with full error handling

## 📋 Available Operations (2 total)

Every possible SIGNL4 Tool operation is included:

🔧 **Alert (2 operations)**

• Send an alert
• Resolve an alert

## 🤖 AI Integration

**Parameter Handling:** AI agents automatically provide values for:

• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

**Response Format:** Native SIGNL4 Tool API responses with full data structure
**Error Handling:** Built-in n8n error management and retry logic

## 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:

• **Claude Desktop:** Add MCP server URL to configuration
• **Custom AI Apps:** Use MCP URL as tool endpoint
• **Other n8n Workflows:** Call MCP tools from any workflow
• **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits

• **Complete Coverage:** Every SIGNL4 Tool operation available
• **Zero Setup:** No parameter mapping or configuration needed
• **AI-Ready:** Built-in $fromAI() expressions for all parameters
• **Production Ready:** Native n8n error handling and logging
• **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Shannon Atkinson
## Template Description

**WDF Top Keywords:** This workflow is designed to streamline keyword research by automating the process of generating, filtering, and analyzing Google and YouTube keyword data. Ensure compliance with local regulations and API terms of service when using this workflow.

## 📌 Purpose

The WDF Top Keywords workflow automates collecting, processing, and managing keyword data for both Google and YouTube platforms. By leveraging multiple data sources and APIs, it ensures an efficient and scalable approach to identifying high-impact keywords for SEO, content creation, and marketing campaigns.

### Key Features

- Automates the generation of keyword suggestions using autocomplete APIs.
- Integrates with NocoDB to store and manage keyword data.
- Filters keywords based on monthly search volume and cost-per-click (CPC).
- Supports bulk import of keyword data into structured databases.
- Outputs both Google and YouTube keyword insights, enabling informed decision-making.

## 🎯 Target Audience

This workflow is ideal for:

- Digital marketers aiming to optimize ad campaigns with data-driven insights.
- SEO specialists looking to identify high-potential keywords efficiently.
- Content creators seeking trending and relevant topics for their platforms.
- Agencies managing keyword research for multiple clients.

## ⚙️ How It Works

1. **Trigger:** The workflow runs on-demand or at scheduled intervals.
2. **Keyword Generation:** Retrieves base keywords from NocoDB, then generates autocomplete suggestions for Google and YouTube.
3. **Data Processing:** Filters and formats keyword data based on specific criteria (e.g., search volume, CPC), and consolidates results for efficient storage and analysis. (A sketch of such a filter appears after this description.)
4. **Storage and Output:** Saves data into structured NocoDB tables for tracking and reuse, and bulk imports monthly search volume statistics for detailed analysis.

## 🛠️ Key APIs and Tools Used

- **NocoDB:** Stores and organizes base and processed keyword data.
- **DataForSEO API:** Provides search volume and keyword performance metrics.
- **Google Autocomplete API:** Suggests relevant Google search terms.
- **YouTube Autocomplete API:** Suggests trending YouTube keywords.
- **Social Flood Docker Instance:** Serves as the local integration hub.

## Setup Instructions

Required tools:

- NocoDB
- n8n
- DataForSEO account
- Social Flood Docker instance

Create the following NocoDB tables:

- Base Keyword Search
- Second Order Google Keywords
- Second Order YouTube Keywords
- Search Volume

This template empowers users to handle complex keyword research tasks effortlessly, saving time and providing actionable insights. Share this template to enhance your workflow efficiency!
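As an illustration of the filtering step, a minimal n8n Code node might keep only keywords above a volume floor and within a CPC budget. The field names (search_volume, cpc) and thresholds are assumptions for illustration; match them to your DataForSEO response and your own criteria.

```javascript
// Hypothetical n8n Code node: filter keyword items by volume and CPC.
// Field names and thresholds are illustrative assumptions.
const MIN_MONTHLY_VOLUME = 500;
const MAX_CPC = 2.0; // in your account's currency

return $input.all().filter(item => {
  const { search_volume, cpc } = item.json;
  return (search_volume ?? 0) >= MIN_MONTHLY_VOLUME
      && (cpc ?? Infinity) <= MAX_CPC;
});
```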
by Onur
# Automated AI Content Creation & Instagram Publishing from Google Sheets

This n8n workflow automates the creation and publishing of social media content directly to Instagram, using ideas stored in a Google Sheet. It leverages AI (Google Gemini and Replicate Flux) to generate concepts, image prompts, captions, and the final image, turning your content plan into reality with minimal manual intervention.

Think of this as the execution engine for your content strategy. It assumes you have a separate process (whether manual entry, another workflow, or a different tool) for populating the Google Sheet with initial post ideas (including Topic, Audience, Voice, and Platform). This workflow takes those ideas and handles the rest, from AI generation to final publication.

## What does this workflow do?

This workflow streamlines the content execution process by:

- **Automatically fetching** unprocessed content ideas from a designated Google Sheet based on a schedule.
- Using Google Gemini to generate a platform-specific content concept (specifically for a 'Single Image' format).
- Generating two distinct AI image prompt options based on the concept using Gemini.
- Writing an engaging, platform-tailored caption (including hashtags) using Gemini, based on the first prompt option.
- Generating a visual image using the first prompt option via the Replicate API (using the Flux model).
- **Publishing** the generated image and caption directly to a connected **Instagram Business account**.
- **Updating the status** in the Google Sheet to mark the idea as completed, preventing reprocessing.

## Who is this for?

- **Social Media Managers & Agencies:** Automate the execution of your content calendar stored in Google Sheets.
- **Marketing Teams:** Streamline content production from planned ideas and ensure consistent posting schedules.
- **Content Creators & Solopreneurs:** Save significant time by automating the generation and publishing process based on your pre-defined ideas.
- **Anyone** using Google Sheets to plan social media content and wanting to automate the creative generation and posting steps with AI.

## Benefits

- **Full Automation:** From fetching planned ideas to Instagram publishing, automate the entire content execution pipeline.
- **AI-Powered Generation:** Leverage Google Gemini for creative concepts, prompts, and captions, and Replicate for image generation based on your initial topic.
- **Content Calendar Execution:** Directly turn your Google Sheet plan into published posts.
- **Time Savings:** Drastically reduce the manual effort involved in creating visuals and text for each planned post.
- **Consistency:** Maintain a regular posting schedule by automatically processing your queue of ideas.
- **Platform-Specific Content:** AI prompts are designed to tailor concepts, prompts, and captions for the platform specified in your sheet (e.g., Instagram or LinkedIn).

## How it Works

1. **Scheduled Trigger:** The workflow starts automatically based on the schedule you set (e.g., every hour, daily).
2. **Fetch Idea:** Reads the next row from your Google Sheet where the 'Status' column indicates it's pending (e.g., '0'). It only fetches one idea per run.
3. **Prepare Inputs:** Extracts Topic, Audience, Voice, and Platform from the sheet data.
4. **AI Concept Generation (Gemini):** Creates a single content concept suitable for a 'Single Image' post on the target platform.
5. **AI Prompt Generation (Gemini):** Develops two detailed, distinct image prompt options based on the concept.
6. **AI Caption Generation (Gemini):** Writes a caption tailored to the platform, using the first image prompt and other context.
7. **Image Generation (Replicate):** Sends the first prompt to the Replicate API (Flux model) to generate the image.
8. **Prepare for Instagram:** Formats the generated image URL and caption.
9. **Publish to Instagram:** Uses the Facebook Graph API in three steps (sketched below): creates a media container by uploading the image URL and caption, waits for Instagram to process the container, then publishes the processed container to your feed.
10. **Update Sheet:** Changes the 'Status' in the Google Sheet for the processed row (e.g., to '1') to mark it as complete.

## n8n Nodes Used

- Schedule Trigger
- Google Sheets (Read & Update operations)
- Set (multiple instances for data preparation)
- Langchain Chain - LLM (multiple instances for Gemini calls)
- Langchain Chat Model - Google Gemini (multiple instances)
- Langchain Output Parser - Structured (multiple instances)
- HTTP Request (for the Replicate API call)
- Wait
- Facebook Graph API (multiple instances for the Instagram publishing steps)

## Prerequisites

- Active n8n instance (Cloud or Self-Hosted).
- **Google Account** with access to Google Sheets.
- **Google Sheets API Credentials (OAuth2):** Configured in n8n.
- A **Google Sheet** structured with columns like Topic, Audience, Voice, Platform, Status (or similar). Ensure your 'pending' and 'completed' statuses are defined (e.g., '0' and '1').
- **Google Cloud Project** with the Vertex AI API enabled.
- **Google Gemini API Credentials:** Configured in n8n (usually via Google Vertex AI credentials).
- **Replicate Account** and API Token.
- **Replicate API Credentials (Header Auth):** Configured in n8n.
- **Facebook Developer Account.**
- **Instagram Business Account** connected to a Facebook Page.
- **Facebook App** with the necessary permissions: instagram_basic, instagram_content_publish, pages_read_engagement, pages_show_list.
- **Facebook Graph API Credentials (OAuth2):** Configured in n8n with the required permissions.

## Setup

1. Import the workflow JSON into your n8n instance.
2. **Configure Schedule Trigger:** Set the desired frequency (e.g., every 30 minutes, every 4 hours) for checking new ideas in the sheet.
3. **Configure Google Sheets Nodes:** Select your Google Sheets OAuth2 credentials for both Google Sheets nodes. In "1. Get Next Post Idea...", enter your Spreadsheet ID and Sheet Name, and verify the Status filter matches your 'pending' value (e.g., 0). In "7. Update Post Status...", enter the same Spreadsheet ID and Sheet Name, and ensure the Matching Columns (e.g., Topic) and the Status value to update match your 'completed' value (e.g., 1).
4. **Configure Google Gemini Nodes:** Select your configured Google Vertex AI / Gemini credentials in all Google Gemini Chat Model nodes.
5. **Configure Replicate Node ("4. Generate Image..."):** Select your Replicate Header Auth credentials. The workflow uses black-forest-labs/flux-1.1-pro-ultra by default; you can change this if needed.
6. **Configure Facebook Graph API Nodes (6a, 6c):** Select your Facebook Graph API OAuth2 credentials. Crucially, update the Instagram Account ID in the Node parameter of both Facebook Graph API nodes (6a and 6c). The template uses a placeholder (17841473009917118); replace this with your actual Instagram Business Account ID.
7. **Adjust Wait Node (6b):** The default wait time might be sufficient, but if you encounter errors during publishing (especially with larger images/videos in the future), you might need to increase the wait duration.
8. Activate the workflow.
9. **Populate your Google Sheet:** Ensure you have rows with your content ideas and the correct 'pending' status (e.g., '0'). The workflow will pick them up on its next scheduled run.
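For reference, the three-step publishing flow in step 9 corresponds roughly to the raw Graph API calls below; the workflow performs these through Facebook Graph API nodes rather than code. The API version and the fixed wait time are assumptions, and IG_USER_ID and ACCESS_TOKEN are placeholders.

```javascript
// Rough sketch of Instagram's container-based publishing flow.
// Version, wait time, and credentials are placeholder assumptions.
const IG_USER_ID = "17841473009917118"; // your Instagram Business Account ID
const ACCESS_TOKEN = process.env.FB_ACCESS_TOKEN;
const GRAPH = "https://graph.facebook.com/v19.0";

async function publishImage(imageUrl, caption) {
  // Step 1: create a media container from the image URL and caption.
  const create = await fetch(`${GRAPH}/${IG_USER_ID}/media`, {
    method: "POST",
    body: new URLSearchParams({ image_url: imageUrl, caption, access_token: ACCESS_TOKEN }),
  }).then(r => r.json());

  // Step 2: give Instagram time to process the container (the workflow
  // uses a Wait node here; polling the container status is more robust).
  await new Promise(resolve => setTimeout(resolve, 30_000));

  // Step 3: publish the processed container to the feed.
  return fetch(`${GRAPH}/${IG_USER_ID}/media_publish`, {
    method: "POST",
    body: new URLSearchParams({ creation_id: create.id, access_token: ACCESS_TOKEN }),
  }).then(r => r.json());
}
```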
This workflow transforms your Google Sheet content plan into a fully automated AI-powered Instagram publishing engine. Start automating your social media presence today!
by Wayne Simpson
Automate your email management with this workflow, designed for freelancers and business professionals who receive high volumes of emails. By leveraging AI-powered categorisation and dynamic email processing, this template helps you organise your inbox and streamline communication for better efficiency and productivity.

Check out the YouTube video for step-by-step set up instructions!

## How it works

1. **Fetch & Filter Emails:** The workflow retrieves emails from your Microsoft Outlook account, filtering out flagged emails and those already categorised.
2. **Content Preparation:** Each email is cleaned up and converted to a structured format using Markdown, making it easier for AI processing.
3. **AI Categorisation:** The content is analysed using an AI model, which categorises the emails into predefined categories (e.g., Action, Junk, Business, SaaS) based on the context and content.
4. **Email Categorisation & Folder Management:** The categorised emails are updated in Microsoft Outlook and moved to respective folders such as "Junk Email" or "Receipts" based on the AI's classification. (A sketch of a category-to-folder mapping follows below.)
5. **Conditional Processing & Final Checks:** Additional checks and conditions ensure that only unread emails are processed, and errors are gracefully managed to maintain workflow stability.

## Set up steps

1. **Connect Microsoft Outlook:** Link your Microsoft Outlook account using the built-in credentials node to enable email fetching, updating, and folder management.
2. **Configure AI Model (Ollama API):** Set up the AI model by connecting to the Ollama API and choosing your desired language model for categorisation.
3. **Modify Email Categories (Optional):** Customise the categories and subcategories within the workflow to suit your unique email management needs.
4. **Set Up Error Handling:** Review the error handling node settings to ensure smooth workflow execution.

This template offers a robust solution for managing and organising your inbox, helping you save time and keep your focus on important emails.
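To illustrate the folder-management step, a hypothetical n8n Code node could map the AI's category to a target Outlook folder like this. The category names come from the template's own examples, but the folder names and the `category` field are assumptions; the template itself does this with Outlook nodes and conditions.

```javascript
// Hypothetical mapping of AI category -> Outlook folder. Folder names and
// the `category` field are illustrative assumptions.
const FOLDER_BY_CATEGORY = {
  Action: "Inbox",        // keep actionable mail front and centre
  Junk: "Junk Email",
  Business: "Business",
  SaaS: "Receipts",       // e.g. subscription invoices and receipts
};

return $input.all().map(item => {
  const category = item.json.category ?? "Action";
  item.json.targetFolder = FOLDER_BY_CATEGORY[category] ?? "Inbox";
  return item;
});
```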
by David Ashby
Complete MCP server exposing 4 AWS Cost and Usage Report Service API operations to AI agents.

## ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. **Credentials:** Add AWS Cost and Usage Report Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works

This workflow converts the AWS Cost and Usage Report Service API into an MCP-compatible interface for AI agents.

• **MCP Trigger:** Serves as your server endpoint for AI agent requests
• **HTTP Request Nodes:** Handle API calls to http://cur.{region}.amazonaws.com
• **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
• **Native Integration:** Returns responses directly to the AI agent

## 📋 Available Operations (4 total)

Each operation is a POST to the service endpoint, selected via the X-Amz-Target header (a standalone SDK example follows at the end of this description):

🔧 **DeleteReportDefinition** (X-Amz-Target: AWSOrigamiServiceGatewayService.DeleteReportDefinition): Deletes the specified report.
🔧 **DescribeReportDefinitions** (X-Amz-Target: AWSOrigamiServiceGatewayService.DescribeReportDefinitions): Lists the AWS Cost and Usage reports available to this account.
🔧 **ModifyReportDefinition** (X-Amz-Target: AWSOrigamiServiceGatewayService.ModifyReportDefinition): Allows you to programmatically update your report preferences.
🔧 **PutReportDefinition** (X-Amz-Target: AWSOrigamiServiceGatewayService.PutReportDefinition): Creates a new report using the description that you provide.

## 🤖 AI Integration

**Parameter Handling:** AI agents automatically provide values for:

• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

**Response Format:** Native AWS Cost and Usage Report Service API responses with full data structure
**Error Handling:** Built-in n8n HTTP request error management

## 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:

• **Claude Desktop:** Add MCP server URL to configuration
• **Cursor:** Add MCP server SSE URL to configuration
• **Custom AI Apps:** Use MCP URL as tool endpoint
• **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits

• **Zero Setup:** No parameter mapping or configuration needed
• **AI-Ready:** Built-in $fromAI() expressions for all parameters
• **Production Ready:** Native n8n HTTP request handling and logging
• **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
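Outside n8n, the same DescribeReportDefinitions call can be made with the AWS SDK for JavaScript v3, which sets the X-Amz-Target header and signs the request for you. This is a general illustration of the underlying API, not part of the workflow itself; the region is an assumption (the CUR API is generally served from us-east-1).

```javascript
// Illustrative standalone call to the same API the MCP server wraps.
// The AWS SDK sets the X-Amz-Target header and signs the request.
import {
  CostAndUsageReportServiceClient,
  DescribeReportDefinitionsCommand,
} from "@aws-sdk/client-cost-and-usage-report-service";

// Credentials come from the environment or your shared AWS config.
const client = new CostAndUsageReportServiceClient({ region: "us-east-1" });

const { ReportDefinitions } = await client.send(
  new DescribeReportDefinitionsCommand({}),
);

for (const report of ReportDefinitions ?? []) {
  console.log(report.ReportName, report.S3Bucket);
}
```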