by Gloria
What if AI didn't just write content—but actually thought about how to write it? This n8n workflow revolutionizes content creation by deploying multiple specialized AI agents that handle every aspect of blog writing—from research to publication-ready posts. The breakthrough? AI that creates its own prompts, dynamically adapting to different topics and business needs without constant human intervention. Plus, it doesn't stop at writing: the system also scans all your website URLs, finds the best internal matches for each topic, and inserts SEO-optimized anchor links automatically.

🤖 The Self-Thinking Content System
🧠 Cognitive Content Creation: Unlike basic AI writing tools, this system thinks through the entire content strategy
🔄 Fully Automated Workflow: Set it up once and produce complete, SEO-optimized blog posts with minimal oversight
🎯 Industry-Adaptive Intelligence: Automatically tailors the content approach based on your specific niche and audience
📊 Data-Driven Decisions: AI agents analyze search intent and the competitive landscape before writing a single word
⚡ Modular AI Architecture: Specialized agents handle different stages of content creation for superior quality

⚙️ Workflow Breakdown
1️⃣ Keyword Research & Content Ideation
Identifies high-value topics with traffic potential
Analyzes search intent to match user expectations
Maps content opportunities your competitors are missing
2️⃣ Dynamic Prompt Generation
Revolutionary feature: AI creates its own writing instructions
Customizes prompts based on topic complexity and audience needs
Eliminates the need for prompt engineering expertise
3️⃣ Modular AI Writing Process
Research Agent gathers comprehensive topic information
Structure Agent creates logical, engaging outlines
Writing Agent produces in-depth, authoritative content
Each agent specializes in its role for maximum quality
4️⃣ Final Refinements
Editing Agent polishes grammar, tone, and readability
SEO Agent optimizes for search engines without sacrificing quality
Fact-checking protocols ensure accuracy and credibility
5️⃣ Publication-Ready Output
Delivers complete, formatted blog posts ready for immediate use
Includes meta descriptions, title suggestions, and header structure
Automatically organizes content in your preferred format
6️⃣ Smart Internal Linking
Collects all URLs from your website automatically
Identifies the best matching pages for each topic
Inserts optimized anchor words with hyperlinks for an SEO boost

🎯 Perfect For:
Content Marketers who need consistent, high-quality output
Business Owners looking to maintain an authoritative blog without the time investment
Digital Agencies scaling content creation for multiple clients
SEO Professionals focusing on quality content that ranks
Solopreneurs who can't afford to spend days writing blog posts

📦 What's Included:
⚙️ Complete n8n Workflow: Pre-configured with all necessary nodes and connections
🌐 Internal Linking Function: Automatically matches your content with existing URLs and inserts SEO-friendly anchor words
🔌 AI Agent Configuration: Ready-to-use AI agent settings for each content creation stage
📊 Airtable Database Structure: Pre-built tables and fields specifically designed for content writing
Requires minimal human editing

🚀 Getting Started:
Import the provided workflow into your n8n instance
Configure your preferred AI provider credentials
Set your content parameters and business information
Trigger the workflow and watch as complete blog posts are generated
Use immediately or make minor adjustments to match your style perfectly

⚡ Technical Requirements:
Access to n8n (self-hosted or cloud)
OpenAI API account (or alternative AI provider)
Basic familiarity with workflow automation
Optional: WordPress, Shopify or similar CMS for direct publishing integration

This template is based on a video by Wayne Eagle.

💡 You can also connect this workflow with my SEO Keyword Research Automation using DataForSEO and Airtable and my SEO-Based Keyword Categorization & Clustering Strategy Workflow with Airtable, both available on my profile, to build a fully automated, end-to-end SEO content machine.

Stop spending countless hours writing blog posts or wasting money on mediocre content writers. Upgrade to an AI system that thinks for itself and produces truly exceptional content at scale.
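As a rough illustration of how the internal linking step could work inside an n8n Code node, the sketch below matches each target keyword against a list of site pages and wraps the first occurrence in an anchor tag. The input field names (draftHtml, sitePages, targetKeywords) are placeholders, not the template's actual schema.

```javascript
// Hypothetical Code-node sketch: naive internal-link insertion.
// Assumes prior nodes supply `draftHtml` (the generated post) and `sitePages`
// (an array of { url, title } objects collected from the site); the names are
// illustrative and not part of the original template.
const { draftHtml, sitePages, targetKeywords } = $json;

function scoreMatch(keyword, page) {
  // Very rough relevance score: count keyword words that appear in the page title/URL.
  const haystack = `${page.title} ${page.url}`.toLowerCase();
  return keyword.toLowerCase().split(/\s+/).filter(w => haystack.includes(w)).length;
}

let html = draftHtml;
for (const keyword of targetKeywords) {
  const best = sitePages
    .map(p => ({ page: p, score: scoreMatch(keyword, p) }))
    .sort((a, b) => b.score - a.score)[0];
  if (!best || best.score === 0) continue;              // no sensible internal match
  const pattern = new RegExp(`\\b${keyword}\\b`, 'i');  // link only the first occurrence
  html = html.replace(pattern, m => `<a href="${best.page.url}">${m}</a>`);
}

return [{ json: { linkedHtml: html } }];
```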
by Trung Tran
Beginner’s Tutorial: Manage Google Cloud Storage Buckets and Objects with n8n

Watch the demo video below:

Who’s it for
Beginners who want to learn how to automate Google Cloud Storage (GCS) operations with n8n.
Developers who want to combine AI image generation with cloud storage management.
Anyone looking for a simple introduction to working with Buckets and Objects in GCS.

How it works / What it does
This workflow demonstrates end-to-end usage of Google Cloud Storage with AI integration:
Trigger: Start manually by clicking Execute Workflow.
Edit Fields: Provide input values (e.g., bucket name or image description).
List Buckets: Retrieve all existing buckets in the project (branch: view only).
Create Bucket: If needed, create a new bucket to store objects.
Prompt Generation Agent: Use an AI model to generate a creative text prompt.
Generate Image: Convert the AI-generated prompt into an image.
Upload Object: Store the generated image as an object in the selected bucket.
Delete Object: Clean up by removing the uploaded object if required.
This shows the full lifecycle: Bucket → Object (Create/Upload/Delete) combined with AI image generation.

How to set up
Trigger the workflow: Use the When clicking Execute workflow node to start manually.
Provide inputs: In Edit Fields, specify details such as bucket name or description text for the image.
List buckets: Use the List Buckets node to see what exists.
Create a bucket: Use Create Bucket if you want a new storage bucket.
Generate prompt & image: The Prompt Generation Agent uses an OpenAI Chat Model to create an image prompt. The Generate an Image node turns this prompt into an actual image.
Upload to bucket: Use Create Object to upload the generated image into your GCS bucket.
Delete object (optional): Use Delete Object to remove the file from the bucket as a cleanup step.

Requirements
An active Google Cloud account with Cloud Storage API enabled.
A Service Account Key (JSON) credential added in n8n for GCS.
An OpenAI API Key configured in n8n for the prompt and image generation nodes.
Basic familiarity with running workflows in n8n.

How to customize the workflow
**Different object types:** Instead of images, upload PDFs, logs, or text files.
**Automatic cleanup:** Skip the delete step if you want objects to persist.
**Schedule trigger:** Replace manual execution with a weekly or daily schedule.
**Dynamic prompts:** Accept user input from a form or webhook to generate images.
**Multi-bucket management:** Extend the logic to manage multiple buckets across projects.
**Notifications:** Add a Slack/Email step after upload to notify your team with the object URL.

✅ By the end of this tutorial, you’ll understand how to:
Work with Buckets (list, create).
Work with Objects (upload, delete).
Integrate AI image generation with Google Cloud Storage.
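For readers who want to see the same Bucket → Object lifecycle as code (for example in a standalone script, outside the n8n nodes this tutorial uses), here is a minimal sketch with the official @google-cloud/storage Node.js client. The bucket name, file paths, and key file are placeholders.

```javascript
// Sketch of the same lifecycle using the official @google-cloud/storage client.
// Bucket/file names are placeholders; auth comes from a service-account key file.
const { Storage } = require('@google-cloud/storage');

const storage = new Storage({ keyFilename: 'service-account.json' });

async function demoLifecycle() {
  // List buckets (the "view only" branch of the workflow)
  const [buckets] = await storage.getBuckets();
  console.log('Existing buckets:', buckets.map(b => b.name));

  // Create a bucket
  const [bucket] = await storage.createBucket('my-demo-bucket-12345');

  // Upload an object (here, a locally generated image)
  await bucket.upload('generated-image.png', { destination: 'images/generated-image.png' });

  // Delete the object again as cleanup
  await bucket.file('images/generated-image.png').delete();
}

demoLifecycle().catch(console.error);
```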
by Rully Saputra
Automated SEO Watchlist: Continuous Audits Powered by Decodo, Gemini and Google Sheets

Automate continuous SEO audits with Decodo and Gemini AI — live data, smart insights, and Google Sheets tracking with team alerts.

Who’s it for
This workflow is designed for SEO specialists, marketing teams, agencies, and website owners who want an effortless, automated way to monitor SEO health. It’s perfect for ongoing audits, content monitoring, and proactive SEO management — without the manual workload.

How it works / What it does
Every five days, the workflow:
Reads a list of URLs from Google Sheets.
Uses Decodo to fetch live on-page data — titles, meta descriptions, headings, schema, links, and Core Web Vitals.
Passes that data to Gemini AI for an advanced SEO analysis and scoring based on key factors (content, metadata, links, speed, and structure).
Parses results via a Structured Output Parser for clean JSON output.
Stores findings in Google Sheets and sends a Telegram alert when the audit completes.

Why Decodo matters
Decodo is the backbone of this workflow. It powers the real-time page inspection, ensuring Gemini AI has complete, accurate data to analyze. Decodo transforms static audits into live, intelligent monitoring — making your SEO insights far more actionable and reliable.

How to set up
Connect your Decodo API credentials.
Add your Google Sheets URL list.
Configure your Telegram bot credentials.
Enable the workflow — it runs automatically every 5 days.

Requirements
Decodo API credentials
Google Sheets OAuth connection
Telegram Bot token
n8n instance (Cloud or Self-hosted)

How to customize the workflow
Change the trigger interval in the Schedule Trigger node.
Modify the SEO Analyzer (LLM Chain) weights for different scoring.
Extend the Store Result node to integrate with dashboards or databases.
Adjust the AI prompt for additional SEO checks (e.g., backlinks, readability, image optimization).

✅ Highlights
Automated SEO auditing
Real-time data from Decodo
Smart analysis powered by Gemini AI
Structured reporting in Google Sheets
Team notifications via Telegram
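To make the scoring idea concrete, here is a small sketch of how per-factor scores from the Structured Output Parser could be combined into one weighted overall score in a Code node. The factor names, weights, and input shape are illustrative assumptions, not the template's actual configuration.

```javascript
// Illustrative post-processing sketch: combine per-factor scores from the
// Structured Output Parser into one weighted SEO score. Factor names and
// weights are assumptions for demonstration, not the template's real values.
const weights = { content: 0.3, metadata: 0.2, links: 0.2, speed: 0.2, structure: 0.1 };

const audit = $json;  // e.g. { url, scores: { content: 82, metadata: 70, ... } }

const overall = Object.entries(weights)
  .reduce((sum, [factor, w]) => sum + w * (audit.scores?.[factor] ?? 0), 0);

return [{ json: { ...audit, overallScore: Math.round(overall) } }];
```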
by Ruben AI
LinkedIn DM Automation

Overview
Effortlessly scale personalized LinkedIn outreach using a no-code automation stack. This template provides a powerful, user-friendly system for harvesting leads from LinkedIn posts and managing outreach—all within Airtable and n8n.

Features & Highlights
**Actionable Input:** Simply enter a LinkedIn post URL to kickstart the engine—no browser scraping or manual work needed.
**Lead Harvesting:** Automatically scrape commenters, likers, and profile data using Unipile’s API access.
**Qualification Hub:** Easily review and qualify leads right inside Airtable using custom filters and statuses.
**Automated Campaign Flow:** n8n handles the sequence—from sending connection requests (adhering to LinkedIn limits) to delivering personalized DMs upon acceptance.
**Unified Dashboard:** Monitor campaign progress, connection status, and messaging performance in real time.
**Flexible & Reusable:** Fully customizable for your own messaging, filters, or UD campaigns—clone, adapt, and deploy.

Why Use This Template?
**Zero-code friendly:** Ideal for entrepreneurs, sales professionals, and growth teams looking for streamlined, scalable outreach.
**Transparent and compliant:** Built with Airtable UI and compliant API integration—no reliance on browser automation or unofficial methods.
**Rapid Deployment:** Clone and launch your automation in under 30 minutes—no dev setup required.

Setup Instructions
Import the template into your n8n workspace.
Connect your Airtable and Unipile credentials.
Configure LinkedIn post input, filters, and DM templates in Airtable.
Run the workflow and monitor results directly from Airtable or n8n.

Use Cases
Capture inbound leads from your viral LinkedIn posts.
Qualify and nurture prospects seamlessly without manual follow-ups.
Scale outreach with precision and personalization.

YouTube Explanation
You can access the video explanation of how to use the workflow: Explanation Video
by Trung Tran
Free PDF Generator in n8n – No External Libraries or Paid Services

> A 100% free n8n workflow for generating professionally formatted PDFs without relying on external libraries or paid converters. It uses OpenAI to create Markdown content, Google Docs to format and convert to PDF, and integrates with Google Drive and Slack for archiving and sharing, ideal for reports, BRDs, proposals, or any document you need directly inside n8n.

Watch the demo video below:

Who’s it for
Teams that need auto-generated documents (reports, guides, checklists) in PDF format.
Operations or enablement teams who want files archived in Google Drive and shared in Slack automatically.
Anyone experimenting with LLM-powered document generation integrated into business workflows.

How it works / What it does
Manual trigger starts the workflow.
LLM generates a sample Markdown document (via OpenAI Chat Model).
Google Drive folder is configured for storage.
Google Doc is created from the generated Markdown content.
Document is exported to PDF using Google Drive. (Sample PDF generated from comprehensive markdown)
PDF is archived in a designated Drive folder.
Archived PDF is downloaded for sharing.
Slack message is sent with the PDF attached.

How to set up
Add nodes in sequence:
Manual Trigger
OpenAI Chat Model (prompt to generate sample Markdown)
Set/Manual input for Google Drive folder ID(s)
HTTP Request or Google Drive Upload (convert to Google Docs)
Google Drive Download (PDF export)
Google Drive Upload (archive PDF)
Google Drive Download (fetch archived file)
Slack Upload (send message with attachment)
Configure credentials for OpenAI, Google Drive, and Slack.
Map output fields:
data.markdown → Google Docs creation
docId → PDF export
fileId → Slack upload
Test run to ensure PDF is generated, archived, and posted to Slack.

Requirements
**Credentials**:
OpenAI API key (or compatible LLM provider)
Google Drive (OAuth2) with read/write permissions
Slack bot token with files:write permission
**Access**:
Write access to target Google Drive folders
Slack bot invited to the target channel

How to customize the workflow
**Change the prompt** in the OpenAI Chat Model to generate different types of content (reports, meeting notes, checklists).
**Automate triggering**:
Replace Manual Trigger with Cron for scheduled document generation.
Use Webhook Trigger to run on-demand from external apps.
**Modify storage logic**:
Save both .md and .pdf versions in Google Drive.
Use separate folders for drafts vs. final versions.
**Enhance distribution**:
Send PDFs to multiple Slack channels or via email.
Integrate with project management tools for automated task creation.
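If you prefer to see the Markdown → Google Doc → PDF conversion as raw API calls (for example inside an HTTP Request node or a standalone script), a rough sketch follows. It assumes a valid OAuth access token and that Drive accepts the source MIME type for import conversion; the token, folder ID, and file name are placeholders.

```javascript
// Sketch of the two Drive REST calls the workflow approximates:
// (1) upload the generated Markdown and ask Drive to convert it to a Google Doc,
// (2) export that Doc as a PDF. Token/folder values are placeholders.
const accessToken = 'YOUR_ACCESS_TOKEN';
const folderId = 'YOUR_FOLDER_ID';
const markdown = '# Sample report\n\nGenerated by the LLM...';

async function markdownToPdf() {
  // 1) Multipart upload with conversion to a native Google Doc
  const metadata = { name: 'Generated report', mimeType: 'application/vnd.google-apps.document', parents: [folderId] };
  const boundary = 'n8n_boundary';
  const body =
    `--${boundary}\r\nContent-Type: application/json; charset=UTF-8\r\n\r\n${JSON.stringify(metadata)}\r\n` +
    `--${boundary}\r\nContent-Type: text/markdown\r\n\r\n${markdown}\r\n--${boundary}--`;

  const upload = await fetch('https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart', {
    method: 'POST',
    headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': `multipart/related; boundary=${boundary}` },
    body,
  });
  const { id } = await upload.json();

  // 2) Export the converted Doc as a PDF
  const pdf = await fetch(`https://www.googleapis.com/drive/v3/files/${id}/export?mimeType=application/pdf`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return Buffer.from(await pdf.arrayBuffer());
}

markdownToPdf().then(bytes => console.log('PDF size:', bytes.length)).catch(console.error);
```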
by Dean Pike
Convert any website into a searchable vector database for AI chatbots. Submit a URL, choose the scraping scope, and this workflow handles everything: scraping, cleaning, chunking, embedding, and storing in Supabase.

What it does
Scrapes websites using Apify (3 modes: full site unlimited, full site limited, single URL)
Cleans content (removes navigation, footer, ads, cookie banners, etc.)
Chunks text (800 chars, markdown-aware)
Generates embeddings (Google Gemini, 768 dimensions)
Stores in a Supabase vector database

Requirements
Apify account + API token
Supabase database with the pgvector extension
Google Gemini API key

Setup
Create a Supabase documents table with an embedding column (vector 768). Run this SQL query in your Supabase project to enable the vector store setup.
Add your Apify API token to all three "Run Apify Scraper" nodes.
Add Supabase and Gemini credentials.
Test with a small site (5-10 pages) or a single page/URL first.

Next steps
Connect your vector store to an AI chatbot for RAG-powered Q&A, or build semantic search features into your apps.

Tip: Start with page limits to test content quality before full-site scraping. Review chunks in Supabase and adjust Apify filters if needed for better vector embeddings.

Sample Outputs
Apify actor "runs" in the Apify Dashboard from this workflow
Supabase documents table with scraped website content ingested in chunks with vector embeddings
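The chunking step is the piece most people end up tuning, so here is a rough, markdown-aware sketch of what an ~800-character chunker can look like in an n8n Code node. The 800-character limit mirrors the description above; the input field name (pageContent) and the exact splitting rules are assumptions, since the workflow's own text splitter may behave differently.

```javascript
// Rough sketch of the chunking step (~800 characters, markdown-aware): split on
// markdown headings first so chunks don't straddle sections, then split long
// sections at paragraph boundaries.
function chunkMarkdown(markdown, maxChars = 800) {
  const sections = markdown.split(/\n(?=#{1,6}\s)/);   // keep headings with their content
  const chunks = [];
  for (const section of sections) {
    if (section.length <= maxChars) { chunks.push(section.trim()); continue; }
    // Fall back to paragraph-level splitting for oversized sections
    let current = '';
    for (const para of section.split(/\n{2,}/)) {
      if ((current + '\n\n' + para).length > maxChars && current) {
        chunks.push(current.trim());
        current = para;
      } else {
        current = current ? current + '\n\n' + para : para;
      }
    }
    if (current.trim()) chunks.push(current.trim());
  }
  return chunks.filter(Boolean);
}

// Assumes the scraped/cleaned text arrives in $json.pageContent
return chunkMarkdown($json.pageContent).map(text => ({ json: { text } }));
```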
by Khairul Muhtadin
This workflow auto-ingests Google Drive documents, parses them with LlamaIndex, and stores Azure OpenAI embeddings in an in-memory vector store—cutting manual update time from ~30 minutes to under 2 minutes per doc.

Why Use This Workflow?
Cost Reduction: removes the need to pay a monthly cloud fee just to store your knowledge base.

Ideal For
**Knowledge Managers / Documentation Teams:** Automatically keep product docs and SOPs in sync when source files change on Google Drive.
**Support Teams:** Ensure the searchable KB is always up-to-date after doc edits, speeding agent onboarding and resolution time.
**Developer / AI Teams:** Populate an in-memory vector store for experiments, rapid prototyping, or local RAG demos.

How It Works
Trigger: Google Drive Trigger watches a specific document or folder for updates.
Data Collection: The updated file is downloaded from Google Drive.
Processing: The file is uploaded to LlamaIndex cloud via an HTTP Request to create a parsing job.
Intelligence Layer: The workflow polls the LlamaIndex job status (Wait + Monitor loop). If the parsing status equals SUCCESS, the result is retrieved as markdown.
Output & Delivery: Parsed markdown is loaded into LangChain's Default Data Loader, passed to Azure OpenAI embeddings (deployment "3small"), then inserted into an in-memory vector store.
Storage & Logging: The vector store holds embeddings in memory (good for prototyping). Optionally persist to an external vector DB for production.

Setup Guide

Prerequisites
| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Import and execute the workflow in your n8n instance |
| Google Drive OAuth2 | Essential | Watch and download documents from Google Drive |
| LlamaIndex Cloud API | Essential | Parse and convert documents to structured markdown |
| Azure OpenAI Account | Essential | Generate embeddings (deployment configured to model name "3small") |
| Persistent Vector DB (e.g., Pinecone) | Optional | Persist embeddings for production-scale search |

Installation Steps
Import the workflow JSON into your n8n instance: open your n8n instance and import the file.
Configure credentials:
Azure OpenAI: provide the endpoint, API key, and deployment name.
LlamaIndex API: create an HTTP Header Auth credential in n8n. Header Name: Authorization. Header Value: Bearer YOUR_API_KEY.
Google Drive OAuth2: create OAuth 2.0 credentials in Google Cloud Console, enable the Drive API, and configure the Google Drive OAuth2 credential in n8n.
Update environment-specific values: replace the workflow's Google Drive fileId with the file or folder ID you want to watch (do not commit public IDs).
Customize settings:
Polling interval (Wait node): adjust for faster or slower job status checks.
Target file or folder: toggled on the Google Drive Trigger node.
Embedding model: change the Azure OpenAI deployment if needed.
Test execution: save changes and trigger a sample file update on Drive. Verify each node runs and the vector store receives embeddings.
Technical Details

Core Nodes
| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Knowledge Base Updated Trigger (Google Drive Trigger) | Triggers on file/folder changes | Set trigger type to specific file or folder; configure OAuth2 credential |
| Download Knowledge Document (Google Drive) | Downloads file binary | Operation: download; ensure OAuth2 credential is selected |
| Parse Document via LlamaIndex (HTTP Request) | Uploads file to LlamaIndex parsing endpoint | POST multipart/form-data to /parsing/upload; use HTTP Header Auth credential |
| Monitor Document Processing (HTTP Request) | Polls parsing job status | GET /parsing/job/{{jobId}}; check status field |
| Check Parsing Completion (If) | Branches on job status | Condition: {{$json.status}} equals SUCCESS |
| Retrieve Parsed Content (HTTP Request) | Fetches parsed markdown result | GET /parsing/job/{{jobId}}/result/markdown |
| Default Data Loader (LangChain) | Loads parsed markdown into document format | Use as document source for embeddings |
| Embeddings Azure OpenAI | Generates embeddings for documents | Credentials: Azure OpenAI; Model/Deployment: 3small |
| Insert Data to Store (vectorStoreInMemory) | Stores documents + embeddings | Use memory store for prototyping; switch to DB for persistence |

Workflow Logic
On Drive change, the file binary is downloaded and sent to LlamaIndex.
Workflow enters a monitor loop: Monitor Document Processing fetches job status, If node checks status. If not SUCCESS, Wait node delays before re-check.
When parsing completes, the workflow retrieves markdown, loads documents, creates embeddings via Azure OpenAI, and inserts data into an in-memory vector store.

Customization Options

Basic Adjustments:
Poll Delay: Set Wait node (default: every minute) to balance speed vs. API quota.
Target Scope: Switch the trigger from a single file to a folder to auto-handle many docs.
Embedding Model: Swap Azure deployment for a different model name as needed.

Advanced Enhancements:
Persistent Vector DB Integration: Replace vectorStoreInMemory with Pinecone or Milvus for production search.
Notification: Add Slack or email nodes to notify when parsing completes or fails.
Summarization: Add an LLM summarization step to generate chunk-level summaries.
Scaling option: Batch uploads and chunking to reduce embedding calls; use a queue (Redis or n8n queue patterns) and horizontal workers for high throughput.
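For readers who want to see the upload → poll → retrieve loop end to end, here is a compact sketch in plain JavaScript that mirrors what the HTTP Request, Wait, and If nodes do. The endpoint paths follow the node table above; the base URL, response field names, and retry limits are assumptions to verify against the LlamaIndex Cloud documentation.

```javascript
// Sketch of the parse-and-poll loop the workflow builds from HTTP Request,
// Wait, and If nodes; base URL and response field names are assumptions.
const BASE = 'https://api.cloud.llamaindex.ai/api';
const headers = { Authorization: `Bearer ${process.env.LLAMAINDEX_API_KEY}` };

async function parseDocument(fileBuffer, fileName) {
  // Upload the file and create a parsing job (POST multipart/form-data to /parsing/upload)
  const form = new FormData();
  form.append('file', new Blob([fileBuffer]), fileName);
  const { id: jobId } = await (await fetch(`${BASE}/parsing/upload`, { method: 'POST', headers, body: form })).json();

  // Poll job status (GET /parsing/job/{jobId}) until SUCCESS, waiting between checks
  for (let attempt = 0; attempt < 30; attempt++) {
    const { status } = await (await fetch(`${BASE}/parsing/job/${jobId}`, { headers })).json();
    if (status === 'SUCCESS') break;
    if (status === 'ERROR') throw new Error(`Parsing job ${jobId} failed`);
    await new Promise(r => setTimeout(r, 60_000));   // ~1 minute, like the Wait node default
  }

  // Retrieve the parsed result as markdown (GET /parsing/job/{jobId}/result/markdown)
  const result = await (await fetch(`${BASE}/parsing/job/${jobId}/result/markdown`, { headers })).json();
  return result.markdown;
}
```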
Performance & Optimization
| Metric | Expected Performance | Optimization Tips |
|--------|----------------------|-------------------|
| Execution time (per doc) | ~10s–2min (depends on file size & LlamaIndex processing) | Chunk large docs; run embeddings in batches |
| API calls (per doc) | 3–8 (upload, poll(s), retrieve, embedding calls) | Increase poll interval; consolidate requests |
| Error handling | Retries via Wait loop and If checks | Add exponential backoff, failure notifications, and retry limits |

Troubleshooting
| Problem | Cause | Solution |
|---------|-------|----------|
| Authentication errors | Invalid/missing credentials | Reconfigure n8n Credentials; do not paste API keys directly into nodes |
| File not found | Incorrect fileId or permissions | Verify Drive fileId and OAuth scopes; share file with the service account if needed |
| Parsing stuck in PENDING | LlamaIndex processing delay or rate limit | Increase Wait node interval, monitor LlamaIndex dashboard, add retry limits |
| Embedding failures | Model/deployment mismatch or quota limits | Confirm Azure deployment name (3small) and subscription quotas |

Created by: khmuhtadin
Category: Knowledge Management
Tags: google-drive, llamaindex, azure-openai, embeddings, knowledge-base, vector-store
Need custom workflows? Contact us
by Frederik Duchi
This n8n template demonstrates how to automatically process feedback on tasks and procedures using an AI agent. Employees provide feedback after completing a task, which is then analyzed by the AI to suggest improvements to the underlying procedures. Improvements can range from updating how a single task is executed to splitting or merging tasks within a procedure. Management then reviews the suggestions and decides whether to implement them. This makes it easy to close the loop between execution, feedback, and continuous process improvement.

Use cases are many:
Marketing (improve the process of approving advertising content)
Finance (optimize the process of expense reimbursement)
Operations (refine the process of equipment maintenance)

Good to know
The automation is based on the Baserow template for handling Standard Operating Procedures. However, it can also be implemented in other databases.
Baserow authentication is done through a database token. Check the documentation on how to create such a token.
Tasks are inserted using the HTTP Request node instead of a dedicated Baserow node. This supports batch import instead of importing records one by one.

Requirements
Baserow account (cloud or self-hosted)
The Baserow template for handling Standard Operating Procedures, or a similar database with the following tables and fields:
Procedures table with general procedure information such as the name or description.
Procedures steps table with all the steps associated with a procedure.
Tasks table that contains the actual tasks based on the procedure steps. It must have a field to capture feedback, and it must have a boolean field to indicate whether the feedback has been processed. This avoids the same feedback being used more than once.
Improvement suggestions table to store the suggestions made by the AI agent.

How it works
**Set table and field ids**
Stores the ids of the involved Baserow database and tables, together with the information to make requests to the Baserow API.
**Feedback processing agent**
The prompt contains a small instruction to check the feedback and suggest improvements to the procedures. The system message is much more extensive, providing as much detail and guidance to the agent as possible. It contains the following sections:
Role: giving the agent a clear professional perspective
Goals: allowing the agent to focus on clarity, efficiency and actionable improvements
Instructions: guiding the agent through a step-by-step flow
Output: showing the agent the expected format and details
Notes: setting guardrails for the agent to make justified and practical suggestions
The agent uses the following nodes:
OpenAI Chat Model (Model): the template uses the gpt-4.1 model from OpenAI by default, but you can replace this with any LLM.
current_procedures (Tool): provides information about all available procedures to the agent
current_procedure steps (Tool): provides information about every step in the procedures to the agent
tasks_feedback (Tool): provides the feedback of the employees to the agent
Required output schema (Output parser): forces the agent to use a JSON schema that matches the Improvement suggestions table structure for the output. This makes it easy to add the suggestions to the database in the next step.
**Create improvement suggestions**
Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to insert multiple records at once in the Improvement suggestions table. The inserted records are the output generated by the AI agent. Check the Baserow API documentation for further details.
**Get non-processed feedback**
Gets all records from the Tasks table that contain feedback but are not yet marked as processed.
**Set feedback to processed**
Updates the boolean field for each task to true to indicate that the feedback has been processed.
**Aggregate records for input**
Aggregates the data from the previous nodes as an array in a property named items. This matches perfectly with the Baserow API for inserting new records in batch.
**Update tasks to processed feedback**
Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to update multiple records at once in the Tasks table. The updated records will have their processed field set to true. Check the Baserow API documentation for further details.

How to use
The Manual Trigger node is provided as an example, but you can replace it with other triggers such as a webhook.
The included Baserow SOP template works perfectly as a base schema to try out this workflow.
Set the corresponding ids in the Configure settings and ids node.
Check that the field names for the filters in the tasks_feedback tool node match the ones in your Tasks table.
Check that the field names for the filters in the Get non-processed feedback node match the ones in your Tasks table.
Check that the property name in the Set feedback to processed node matches the one in your Tasks table.

Customising this workflow
You can add a new workflow that updates the procedures based on the acceptance or rejection by the management.
There is a lot of customization possible in the system prompt. For example: change the goal to prioritize security, cost savings or customer experience.
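To make the /batch/ endpoint usage described above more tangible, here is a hedged sketch of the two Baserow calls (batch insert of suggestions, batch update of processed tasks). The table IDs, field names, and payload values are placeholders; check the Baserow API documentation for the exact request format.

```javascript
// Illustrative sketch of the two Baserow batch calls described under "How it works".
// Table IDs, field names, and row payloads are placeholders.
const BASEROW = 'https://api.baserow.io';
const SUGGESTIONS_TABLE_ID = 123;   // placeholder
const TASKS_TABLE_ID = 456;         // placeholder
const headers = { Authorization: `Token ${process.env.BASEROW_TOKEN}`, 'Content-Type': 'application/json' };

// Insert several AI-generated suggestions at once
await fetch(`${BASEROW}/api/database/rows/table/${SUGGESTIONS_TABLE_ID}/batch/?user_field_names=true`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ items: [
    { Procedure: 'Expense reimbursement', Suggestion: 'Merge steps 2 and 3 to reduce hand-offs' },
  ]}),
});

// Mark the corresponding tasks as processed (PATCH with row ids)
await fetch(`${BASEROW}/api/database/rows/table/${TASKS_TABLE_ID}/batch/?user_field_names=true`, {
  method: 'PATCH',
  headers,
  body: JSON.stringify({ items: [ { id: 42, Processed: true } ] }),
});
```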
by Xiaoyuan Zhang
Description
This workflow transforms your quick event notes into polished LinkedIn posts. Simply send a message via Telegram with your event name and personal notes, and the system will match it with your calendar events and generate a professional LinkedIn post. Even if you don't feel like posting it on LinkedIn, it still serves you, because it saves everything to your database for future reference. In this way you can build a personal library of your professional networking activities and insights!

Who Is This For?
Professional Networkers: Business professionals who attend events regularly and want to share insights on LinkedIn without spending time on content creation.
Event Enthusiasts: Conference attendees, meetup participants, and workshop goers who want to document and share their experiences professionally.
Busy Professionals: Anyone who wants to maintain an active LinkedIn presence but lacks time to craft posts from scratch after events.

What Problem Does This Workflow Solve?
After attending events, I struggle with several challenges:
Time Constraints: Writing thoughtful LinkedIn posts takes time.
Writer's Block: Difficulty transforming my raw notes and experiences into engaging social media content.
Data Organization: Keeping track of event details, personal insights, and networking opportunities in one place.

How It Works
Telegram Input: Send a message to your Telegram bot with the format "Event Name: Your personal notes"
Message Parsing: The system extracts the event name and your personal notes from the message
Calendar Matching: Searches your Google Calendar for events from the past 7 days that match the event name
Data Enrichment: Combines your personal notes with event details (date, location, attendees) from your calendar
AI Content Generation: Uses Claude Opus 4 to transform your notes into a professional LinkedIn post with relevant hashtags
Database Storage: Saves the complete event information and generated LinkedIn post to Supabase
Ready to Post: Provides you with a polished LinkedIn post ready for publication

Setup Instructions
n8n (Cloud or self-hosted)
Telegram Bot (create via @BotFather)
Google Calendar API (OAuth2 credentials)
Anthropic API (Claude access)
Supabase (database and API credentials)

My Supabase table consists of these columns:
Event Date (datetime)
Event Title (text)
Location (text)
Personal Notes (text)
LinkedIn Post (text)
Created Date (datetime)

This workflow transforms the tedious task of creating LinkedIn content into an automated, intelligent system that helps you maintain an active professional presence while building a valuable archive of your networking and learning experiences.
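The Message Parsing step is simple enough to show in full; a minimal Code-node sketch that splits the incoming Telegram text on the first colon is below. The input path ($json.message.text) assumes the standard Telegram trigger payload.

```javascript
// Minimal sketch of the "Message Parsing" step: split the incoming Telegram
// text on the first colon into event name and personal notes.
const text = $json.message.text || '';
const separatorIndex = text.indexOf(':');

if (separatorIndex === -1) {
  // No "Event Name: notes" pattern found; pass it through flagged for review
  return [{ json: { eventName: text.trim(), personalNotes: '', parsed: false } }];
}

return [{ json: {
  eventName: text.slice(0, separatorIndex).trim(),
  personalNotes: text.slice(separatorIndex + 1).trim(),
  parsed: true,
}}];
```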
by Feras Dabour
AI LinkedIn Content Bot with Approval Loop

This n8n workflow transforms your Telegram messenger into a personal assistant for creating and publishing LinkedIn posts. You can simply send an idea as a text or voice message, collaboratively edit the AI's suggestion in a chat, and then publish the finished post directly to LinkedIn just by saying "Okay."

What You'll Need to Get Started
Before you can use this workflow, you'll need a few prerequisites set up. This workflow connects three different services, so you will need API credentials for each:
Telegram Bot API Key: You can get this by talking to the "BotFather" on Telegram. It will guide you through creating your new bot and provide you with the API token. (New Chat with Telegram BotFather)
OpenAI API Key: This is required for the "Speech to Text" and "AI Agent" nodes. You'll need an account with OpenAI to generate this key. [OpenAI API Platform](https://platform.openai.com)
Blotato API Key: This service is used to publish the final post to LinkedIn. You'll need a Blotato account and to connect your LinkedIn profile there to get the key. (Blotato platform for social media publishing)
Once you have these keys, you can add them to the corresponding credentials in your n8n instance.

How the Workflow Operates, Step-by-Step
Here is a detailed breakdown of how the workflow processes your request and handles the publishing.

1. Input & Initial Processing
This phase captures your idea and converts it into usable text.
| Node Name | Role in Workflow |
| :--- | :--- |
| Start: Telegram Message | This Telegram Trigger node initiates the entire process upon receiving any message from you in the bot. |
| Prepare Input | Consolidates the message content, ensuring the AI receives only one clean text input. |
| Check: ist it a Voice? | Checks the incoming message for text. If text is empty, it proceeds to voice handling. |
| Get Voice File | If a voice note is detected, this node downloads the raw audio file from Telegram. |
| Speech to Text | This node uses the OpenAI Whisper API to convert the downloaded audio file into a text string. |

2. AI Core & Iteration Loop
This is the central dialogue system where the AI drafts the content and engages in the feedback loop.
| Node Name | Role in Workflow |
| :--- | :--- |
| AI: Draft & Revise Post | The main logic agent. It analyzes your request, applies the "System Prompt" rules, drafts the post, and handles revisions based on your feedback. |
| OpenAI Chat Model | Defines the large language model (LLM) used for generating and revising the post. |
| Window Buffer Memory | A memory buffer that stores the last turns of the conversation, allowing the AI to maintain context when you request changes (e.g., "Make it shorter"). |
| Check if Approved | This crucial node detects the specific JSON structure the AI outputs only when you provide an approval keyword (like "ok" or "approved"). |
| Post Suggestion Or Ask For Approval | Sends the AI's post draft back to your Telegram chat for review and feedback. |

AI Agent System Prompt (Internal Instructions - English)
The agent operates under a strict prompt that dictates its behavior and formatting (found within the AI: Draft & Revise Post node):

> You are a LinkedIn Content Creator Agent for Telegram.
> Keep the confirmation process, but change the output format as follows:
>
> Your Task
> Analyze the user's message:
>
> * Topic
> * Goal (e.g., reach, show expertise, recruiting, personal branding, leads)
> * Target Audience
> * Tonality (e.g., factual, personal, bold, inspiring)
>
> Create a LinkedIn post as ONE continuous text:
>
> * Strong hook in the first 1–2 lines.
> * Clear main part with added value, story, example, or insight.
> * Optional Call-to-Action (e.g., question to the community, invitation to exchange).
> * Integrate hashtags at the end of the post (5–12 suitable hashtags, mix of niche + somewhat broader).
> * Readable on LinkedIn: short paragraphs, emojis only sparingly.
>
> Present the suggestion to the user in the following format:
>
> Headline: Post Proposal:
> Below that, the complete LinkedIn post (incl. hashtags at the end in the same text).
>
> Ask for feedback:
> For example:
> "Any changes? (Tone, length, formality, personal vs. professional, more technical content, different hashtags?)"
>
> If the user requests changes:
> Adjust the post specifically based on the feedback.
> Again, output only:
> Post Proposal:
> the revised complete post.
>
> If the user says “approved”, “ok”, “sounds good”, or similar:
> Return exclusively this JSON, without additional text, without Markdown:
>
> { "Post": "The final LinkedIn post as one text, including hashtags at the end" }
>
> Important:
>
> * Never output JSON before approval, only normal suggestion text.
> * The final output after approval consists of only one field: Post.

3. Publishing & Status Check
Once approved, the workflow handles the publication and monitors the post's status in real-time.
| Node Name | Role in Workflow |
| :--- | :--- |
| Approval: Extract Final Post Text | Parses the incoming JSON, extracting only the clean text ready for publishing. |
| Create post with Blotato | Uses the Blotato API to upload the finalized content to your connected LinkedIn account. |
| Give Blotat 5s :) | A brief pause to allow the publishing service to start processing the request. |
| Check post status | Checks back with Blotato to determine if the post is published, in progress, or failed. |
| Published? | Checks if the status is "published" to send the success message. |
| In Progress? | Checks if the post is still being processed. If so, it loops back to the next wait period. |
| Give Blotat other 5s :) | Pauses the workflow before re-checking the post status, preventing unnecessary API calls. |

4. Final Notification
| Node Name | Role in Workflow |
| :--- | :--- |
| Send a confirmation message | Sends a confirmation message and the direct link to the published LinkedIn post. |
| Send an error message | Sends a notification if the post failed to upload or encountered an error during processing. |

🛠️ Personalizing Your Content Bot
The true power of this n8n workflow lies in its flexibility. You can easily modify key components to match your unique brand voice and technical preferences.

1. Tweak the Content Creator Prompt
The personality, tone, and formatting rules for your LinkedIn content are all defined in the System Prompt.
Where to find it: Inside the AI: Draft & Revise Post node, under the System Message setting.
What to personalize: Adjust the tone, change the formatting rules (e.g., number of hashtags, required emojis), or insert specific details about your industry or target audience.

2. Switch the AI Model or Provider
You can easily swap the language model used for generation.
Where to find it: The OpenAI Chat Model node.
What to personalize:
Model: Swap out the default model for a more powerful or faster alternative (e.g., gpt-4 family, or models from other providers if you change the node).
Provider: You can replace the entire Langchain block (including the AI Model and Window Buffer Memory nodes) with an equivalent block using a different provider's Chat/LLM node (e.g., Anthropic, Cohere, or Google Gemini), provided you set up the corresponding credentials and context flow.

3. Modify Publishing Behavior (Schedule vs. Post)
The final step is currently set to publish immediately, but you might prefer to schedule posts.
Where to find it: The Create post with Blotato node.
What to personalize: Consult the Blotato documentation for alternative operations. Instead of choosing the "Create Post" operation (which often posts immediately), you can typically select a "Schedule Post" or "Add to Queue" operation within the Blotato node. If scheduling, you will need to add a step (e.g., a Set node or another agent prompt) before publishing to calculate and pass a Scheduled Time parameter to the Blotato node.
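As a concrete illustration of the approval handoff described in the node tables above (Check if Approved followed by Approval: Extract Final Post Text), here is a minimal Code-node sketch: it tries to parse the agent output as the approval JSON and otherwise treats the text as a draft still under review. The input field name (output) is an assumption about how the agent node exposes its reply.

```javascript
// Sketch of the approval check: the agent only emits { "Post": "..." } after
// the user approves, so anything that isn't that JSON is still a draft.
const raw = ($json.output || '').trim();

let finalPost = null;
try {
  const parsed = JSON.parse(raw);
  if (parsed && typeof parsed.Post === 'string') finalPost = parsed.Post;
} catch (e) {
  // Not JSON; the agent is still proposing or revising a draft
}

return [{ json: { approved: finalPost !== null, finalPost, draft: finalPost ? null : raw } }];
```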
by vinci-king-01
Social Media Sentiment Analysis Dashboard with AI and Real-time Monitoring

🎯 Target Audience
Social media managers and community managers
Marketing teams monitoring brand reputation
PR professionals tracking public sentiment
Customer service teams identifying trending issues
Business analysts measuring social media ROI
Brand managers protecting brand reputation
Product managers gathering user feedback

🚀 Problem Statement
Manual social media monitoring is overwhelming and often misses critical sentiment shifts or trending topics. This template solves the challenge of automatically collecting, analyzing, and visualizing social media sentiment data across multiple platforms to provide actionable insights for brand management and customer engagement.

🔧 How it Works
This workflow automatically monitors social media platforms using AI-powered sentiment analysis, processes mentions and conversations, and provides real-time insights through a comprehensive dashboard.

Key Components
Scheduled Trigger - Runs the workflow at specified intervals to maintain real-time monitoring
AI-Powered Sentiment Analysis - Uses advanced NLP to analyze sentiment, emotions, and topics
Multi-Platform Integration - Monitors Twitter, Reddit, and other social platforms
Real-time Alerting - Sends notifications for critical sentiment changes or viral content
Dashboard Integration - Stores all data in Google Sheets for comprehensive analysis and reporting

📊 Google Sheets Column Specifications
The template creates the following columns in your Google Sheets:
| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| timestamp | DateTime | When the mention was recorded | "2024-01-15T10:30:00Z" |
| platform | String | Social media platform | "Twitter" |
| username | String | User who posted the content | "@john_doe" |
| content | String | Full text of the post/comment | "Love the new product features!" |
| sentiment_score | Number | Sentiment score (-1 to 1) | 0.85 |
| sentiment_label | String | Sentiment classification | "Positive" |
| emotion | String | Primary emotion detected | "Joy" |
| topics | Array | Key topics identified | ["product", "features"] |
| engagement | Number | Likes, shares, comments | 1250 |
| reach_estimate | Number | Estimated reach | 50000 |
| influence_score | Number | User influence metric | 0.75 |
| alert_priority | String | Alert priority level | "High" |

🛠️ Setup Instructions
Estimated setup time: 20-25 minutes

Prerequisites
n8n instance with community nodes enabled
ScrapeGraphAI API account and credentials
Google Sheets account with API access
Slack workspace for notifications (optional)
Social media API access (Twitter, Reddit, etc.)

Step-by-Step Configuration
1. Install Community Nodes
Install the required community nodes:
npm install n8n-nodes-scrapegraphai
npm install n8n-nodes-slack
2. Configure ScrapeGraphAI Credentials
Navigate to Credentials in your n8n instance
Add new ScrapeGraphAI API credentials
Enter your API key from the ScrapeGraphAI dashboard
Test the connection to ensure it's working
3. Set up Google Sheets Connection
Add Google Sheets OAuth2 credentials
Grant necessary permissions for spreadsheet access
Create a new spreadsheet for sentiment analysis data
Configure the sheet name (default: "Sentiment Analysis")
4. Configure Social Media Monitoring
Update the websiteUrl parameters in the ScrapeGraphAI nodes
Add URLs for the social media platforms you want to monitor
Customize the user prompt to extract specific sentiment data
Set up keywords, hashtags, and brand mentions to track
5. Set up Notification Channels
Configure Slack webhook or API credentials
Set up email service credentials for alerts
Define sentiment thresholds for different alert levels
Test notification delivery
6. Configure Schedule Trigger
Set monitoring frequency (every 15 minutes, hourly, etc.)
Choose appropriate time zones for your business hours
Consider social media platform rate limits
7. Test and Validate
Run the workflow manually to verify all connections
Check Google Sheets for proper data formatting
Test sentiment analysis with sample content

🔄 Workflow Customization Options
Modify Monitoring Targets
Add or remove social media platforms
Change keywords, hashtags, or brand mentions
Adjust monitoring frequency based on platform activity
Extend Sentiment Analysis
Add more sophisticated emotion detection
Implement topic clustering and trend analysis
Include influencer identification and scoring
Customize Alert System
Set different thresholds for different sentiment levels
Create tiered alert systems (info, warning, critical)
Add sentiment trend analysis and predictions
Output Customization
Add data visualization and reporting features
Implement sentiment trend charts and graphs
Create executive dashboards with key metrics
Add competitor sentiment comparison

📈 Use Cases
**Brand Reputation Management**: Monitor and respond to brand mentions
**Crisis Management**: Detect and respond to negative sentiment quickly
**Customer Feedback Analysis**: Understand customer satisfaction and pain points
**Product Launch Monitoring**: Track sentiment around new product releases
**Competitor Analysis**: Monitor competitor sentiment and engagement
**Influencer Identification**: Find and engage with influential users

🚨 Important Notes
Respect social media platforms' terms of service and rate limits
Implement appropriate delays between requests to avoid rate limiting
Regularly review and update your monitoring keywords and parameters
Monitor API usage to manage costs effectively
Keep your credentials secure and rotate them regularly
Consider privacy implications and data protection regulations

🔧 Troubleshooting
Common Issues:
ScrapeGraphAI connection errors: Verify API key and account status
Google Sheets permission errors: Check OAuth2 scope and permissions
Sentiment analysis errors: Review the Code node's JavaScript logic
Rate limiting: Adjust monitoring frequency and implement delays
Alert delivery failures: Check notification service credentials
Support Resources:
ScrapeGraphAI documentation and API reference
n8n community forums for workflow assistance
Google Sheets API documentation for advanced configurations
Social media platform API documentation
Sentiment analysis best practices and guidelines
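If you need to adapt the scoring logic mentioned under Troubleshooting, this is roughly what the per-mention post-processing could look like in a Code node: deriving sentiment_label and alert_priority from sentiment_score before the row is appended to Google Sheets. The thresholds are example values, not the template's configured ones.

```javascript
// Illustrative Code-node logic: derive sentiment_label and alert_priority from
// the sentiment_score before writing the row to Google Sheets. Thresholds are
// example values only.
const item = $json;   // e.g. { platform, username, content, sentiment_score, reach_estimate, ... }

const score = item.sentiment_score ?? 0;
const sentiment_label = score > 0.2 ? 'Positive' : score < -0.2 ? 'Negative' : 'Neutral';

// Escalate when negativity and reach combine
let alert_priority = 'Low';
if (score < -0.5 && (item.reach_estimate ?? 0) > 10000) alert_priority = 'High';
else if (score < -0.2) alert_priority = 'Medium';

return [{ json: { ...item, sentiment_label, alert_priority, timestamp: new Date().toISOString() } }];
```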
by Jose Bossa
Automated Social Media Video Posting

👥 Who's it for
This workflow is perfect for content creators, social media managers, and businesses who want to schedule and automatically post videos 📹 to multiple platforms (Instagram, LinkedIn, TikTok) without manual effort. Save hours every week! ⏰

🤖 What it does
It automatically reads scheduled posts from Google Sheets, checks if it's the right time to post, downloads videos from Google Drive, uploads them to multiple social media platforms simultaneously, updates the posting status, and sends you a Telegram notification with the results. Complete hands-free social media management! 🚀

⚙️ How it works
⏰ Schedule Trigger – Runs twice daily at 9 AM and 9 PM
📊 Google Sheets (Read) – Fetches posts with status "Listo para postear" (Ready to post)
⚙️ Code Node – Converts the trigger time to a readable Spanish format (e.g., "16 de Octubre a las 9 am"); see the sketch at the end of this template
🔍 If Condition – Checks if the current time matches the scheduled post time in the sheet
📝 Format Drive Content – Extracts and organizes post data (Title, Copy, Video URL)
🆔 Social Media Account IDs – Prepares account identifiers (can be customized for specific accounts)
🎬 Upload a video – Posts the video simultaneously to Instagram, LinkedIn, and TikTok using the UploadPost API
📊 Google Sheets (Update) – Changes the post status to "Posteado" (Posted) to avoid duplicates
📱 Telegram Notification – Sends a detailed success report with URLs for each platform

📋 Requirements
**Google Sheets** with your content calendar
**Google Drive** to store your videos
**UploadPost API account** (supports Instagram, LinkedIn, TikTok): Click here 👉 UploadPost
**Telegram Bot** for notifications
**n8n instance** with required node packages

Google Sheets Structure
Your spreadsheet should have these columns:
Title – Post title
Copy – Post caption/description
Video Link – Google Drive download URL
Status – Post status ("Listo para postear" or "Posteado")
Fecha.Hora – Scheduled time (format: "16 de Octubre a las 9 am")
row_number – Auto-generated row identifier

🛠️ How to set up
Create your Google Sheets calendar:
Set up columns as specified above
Use the status "Listo para postear" for scheduled posts
Format dates as "DD de Mes a las HH am/pm" (Spanish format)
Upload videos to Google Drive:
Get shareable download links (format: https://drive.google.com/uc?export=download&id=FILE_ID)
Ensure videos meet platform requirements (duration, format, size)
Configure the UploadPost API:
Create an account and get API credentials
Connect your Instagram, LinkedIn, and TikTok accounts
Add credentials to the "Upload a video" node
Set up Google Sheets credentials:
Connect OAuth2 for both read and update operations
Update documentId with your spreadsheet ID
Verify the sheet name matches (default: "Video")
Configure Telegram notifications:
Create a Telegram bot via @BotFather
Get your chat ID
Add credentials to the "Send a text message" node
Customize posting times:
Modify the Schedule Trigger hours (default: 9 AM and 9 PM)
Times are in the Santiago, Chile timezone (America/Santiago)
Test the workflow:
Create a test entry with the current time
Run manually to verify all connections work
Check Telegram for the success notification
Activate the workflow ✅

🎨 How to customize
**Change posting schedule:** Modify the triggerAtHour values in the Schedule Trigger (add more times if needed)
**Add more platforms:** Extend the platform array in the "Upload a video" node (supports YouTube, Facebook, Twitter)
**Customize notification format:** Edit the Telegram message template to include/exclude information
**Change timezone:** Modify the timeZone parameter in the Code node (default: "America/Santiago")
**Filter by platform:** Add a filter node before upload to post only to specific platforms on certain days
**Add approval workflow:** Insert an approval step before posting using Telegram or Slack
**Multiple accounts per platform:** Modify the "Social Media Account IDs" node to specify different account IDs
**Error handling:** Add error notification paths to alert you if uploads fail
**Batch posting:** Remove the returnFirstMatch option to post multiple videos at once

💡 Pro Tips
**Time format must match exactly** between the Schedule Trigger and Google Sheets for the workflow to trigger
Videos should be optimized for each platform before upload (aspect ratio, length, file size)
Test with a private account first before going live
Keep video files under 100MB for best performance across platforms
Use the row_number column to track and update specific posts
The workflow runs twice daily, so schedule posts accordingly (9 AM or 9 PM slots)

⚠️ Important Notes
Posts marked as "Posteado" won't be processed again (prevents duplicates)
The video must be publicly accessible from the Google Drive link
The UploadPost API has rate limits depending on your plan
Telegram notifications show the success status and post URLs for each platform
The Code node converts times to Spanish format – modify it if you need a different language/format
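To close, here is a rough sketch of what the Code node's date conversion can look like: turning the trigger time into the Spanish "16 de Octubre a las 9 am" format the sheet expects. The timezone default follows the description above; the capitalization and am/pm handling are assumptions based on the example value.

```javascript
// Rough sketch of the Code node's conversion: current time -> "16 de Octubre a las 9 am".
const timeZone = 'America/Santiago';
const now = new Date();

const day = new Intl.DateTimeFormat('es-CL', { timeZone, day: 'numeric' }).format(now);
let month = new Intl.DateTimeFormat('es-CL', { timeZone, month: 'long' }).format(now);
month = month.charAt(0).toUpperCase() + month.slice(1);          // "octubre" -> "Octubre"

const hour24 = Number(new Intl.DateTimeFormat('en-US', { timeZone, hour: 'numeric', hourCycle: 'h23' }).format(now));
const suffix = hour24 < 12 ? 'am' : 'pm';
const hour12 = hour24 % 12 === 0 ? 12 : hour24 % 12;

const fechaHora = `${day} de ${month} a las ${hour12} ${suffix}`;  // e.g. "16 de Octubre a las 9 am"
return [{ json: { fechaHora } }];
```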