by Jitesh Dugar
Tired of juggling maintenance calls, lost requests, and slow vendor responses? This workflow streamlines the entire property maintenance process, from tenant request to vendor dispatch, powered by AI categorization and automated communication. Cut resolution time from 5–7 days to under 24 hours and boost tenant satisfaction by 85% with zero manual follow-up.

**What This Workflow Does**

Transforms chaotic maintenance management into seamless automation:

- 📝 **Captures Requests** – Tenants submit issues via JotForm with unit number, issue description, urgency, and photos.
- 🤖 **AI Categorization** – OpenAI (GPT-4o-mini) analyzes and classifies issues (plumbing, HVAC, electrical, etc.).
- ⚙️ **Smart Prioritization** – Flags emergencies (leaks, electrical failures) and assigns priority.
- 📬 **Vendor Routing** – Routes each issue to the correct contractor or vendor based on the AI category.
- 📧 **Automated Communication** – Sends an acknowledgment to the tenant and a work order to the vendor via Gmail.
- 📊 **Audit Trail Logging** – Optionally logs requests in Google Sheets for performance tracking and reporting.

**Key Features**

- 🧠 **AI-Powered Categorization** – Intelligent issue type and priority detection.
- 🚨 **Emergency Routing** – Automatically escalates critical issues.
- 📤 **Automated Work Orders** – Sends detailed emails with property and tenant info.
- 📈 **Google Sheets Logging** – Transparent audit trail for compliance and analytics.
- 🔄 **End-to-End Automation** – From form submission to vendor dispatch in seconds.
- 💬 **Sticky Notes Included** – Every section annotated for easy understanding.
**Perfect For**

- Property management companies
- Real estate agencies and facility teams
- Smart building operators
- Co-living and rental startups
- Maintenance coordinators managing 50–200+ requests monthly

**What You'll Need**

Required integrations:

- JotForm – Maintenance request form (create your form for free on JotForm using this link)
- OpenAI (GPT-4o-mini) – Categorization and prioritization
- Gmail – Automated email notifications
- (Optional) Google Sheets – Logging and performance tracking

**Quick Start**

1. Import Template – Copy the JSON into n8n and import it.
2. Create JotForm – Include fields: tenant name, email, unit number, issue description, urgency, photo upload.
3. Add Credentials – Configure JotForm, Gmail, and OpenAI credentials.
4. Set Vendor Emails – Update the "Send to Contractor" Gmail node with vendor email addresses.
5. Test Workflow – Submit sample maintenance requests to verify AI categorization and routing.
6. Activate Workflow – Go live and let your tenants submit maintenance issues.

**Expected Results**

- ⏱️ 24-hour average resolution time (vs. 5–7 days).
- 😀 85% higher tenant satisfaction with instant communication.
- 📉 Zero lost requests – every issue is logged automatically.
- 🧠 AI-driven prioritization ensures critical issues are handled first.
- 🕒 10+ hours saved weekly for property managers.

**Pro Tips**

- 🧾 Add Google Sheets logging for a complete audit trail.
- 🔔 Include keywords like "leak," "no power," or "urgent" in AI prompts for faster emergency detection.
- 🧰 Expand the vendor list dynamically using a Google Sheets lookup.
- 🧑‍🔧 Add follow-up automation to verify task completion from vendors.
- 📊 Create dashboards for monthly maintenance insights.

**Learning Resources**

This workflow demonstrates:

- AI categorization using OpenAI's Chat Model (GPT-4o-mini)
- Multi-path routing logic (emergency vs. normal)
- Automated communication via Gmail
- Optional data logging in Google Sheets
- An annotated workflow with Sticky Notes for learning clarity
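The prioritization and routing described above can be sketched in plain code. This is an illustrative approximation, not the template's actual node logic: the real workflow uses an OpenAI node for classification, and the keyword list and vendor map here are hypothetical placeholders.

```javascript
// Emergency keywords mirror the Pro Tips suggestion ("leak", "no power", "urgent").
const EMERGENCY_KEYWORDS = ['leak', 'no power', 'urgent', 'flood', 'sparking'];

// Hypothetical vendor map; in the template, vendor emails live in the
// "Send to Contractor" Gmail node.
const VENDOR_BY_CATEGORY = {
  plumbing: 'plumber@example.com',
  hvac: 'hvac@example.com',
  electrical: 'electrician@example.com',
  general: 'handyman@example.com',
};

// Given an issue description and the AI-assigned category, decide priority
// and the vendor to dispatch to.
function triageRequest(description, category) {
  const text = description.toLowerCase();
  const isEmergency = EMERGENCY_KEYWORDS.some((kw) => text.includes(kw));
  return {
    category,
    priority: isEmergency ? 'emergency' : 'normal',
    vendorEmail: VENDOR_BY_CATEGORY[category] || VENDOR_BY_CATEGORY.general,
  };
}
```

For example, `triageRequest('Water leak under the kitchen sink', 'plumbing')` would flag an emergency and route to the plumbing vendor.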
by Jeff Huera
**Who's it for**

This workflow is perfect for n8n users and teams who want to stay up to date with the latest n8n releases without manually checking GitHub. Get AI-powered summaries of new features and bug fixes delivered straight to your inbox.

**What it does**

This workflow automatically monitors the n8n GitHub releases page and sends you smart email notifications when new updates are published. It fetches release notes, filters them based on your schedule (daily, weekly, etc.), and uses OpenAI to generate concise summaries highlighting the most important bug fixes and features. The summaries are then formatted into a clean HTML email and sent via Gmail.

**How to set up**

1. Configure the Schedule Trigger – set how often you want to check for updates (daily, weekly, etc.)
2. Add OpenAI credentials – connect your OpenAI API key, or use a different LLM
3. Add Gmail credentials – connect your Google account
4. Set the recipient email – update the "To" email address in the Gmail node
5. Activate the workflow and you're done!

**Requirements**

- OpenAI API account (or an alternative LLM)
- Gmail account with n8n credentials configured

**How to customize**

- Adjust the Schedule Trigger to match your preferred notification frequency
- The filtering logic automatically adapts to your schedule (24 hours for daily, 7 days for weekly, etc.)
- Modify the AI prompt to focus on different aspects of the release notes
- Customize the HTML email template to match your preferences
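The schedule-aware filtering can be sketched as follows. This is a minimal sketch, not the template's exact Code node: the lookback window matches the trigger frequency (24 hours for daily, 7 days for weekly), and the `published_at` field name follows the GitHub releases API.

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;
const WINDOW_MS = { daily: DAY_MS, weekly: 7 * DAY_MS };

// Keep only releases published within the window implied by the
// notification frequency; unknown frequencies fall back to daily.
function filterRecentReleases(releases, frequency, now = Date.now()) {
  const windowMs = WINDOW_MS[frequency] || WINDOW_MS.daily;
  return releases.filter(
    (r) => now - new Date(r.published_at).getTime() <= windowMs
  );
}
```

With a weekly schedule, a release from nine days ago is skipped while one from five days ago passes through to the summarization step.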
by Oneclick AI Squad
This automated n8n workflow distributes school notices to stakeholders (students, parents, and staff) via WhatsApp, email, and other channels. It streamlines scheduling, validating, and sending notices while updating the distribution status.

**System Architecture**

Notice Distribution Pipeline:

- **Daily Notice Check - 9 AM**: Triggers the workflow daily at 9 AM via Cron.
- **Read Notices getAll worksheet**: Retrieves notice data from a spreadsheet.

Validation Flow:

- **Validate Notice Data**: Validates and formats notice data.

Distribution Flow:

- **Process Notice Distribution**: Prepares notices for multiple channels.
- **Prepare Email Content**: Generates personalized email content.
- **Send Email Notice**: Delivers emails to recipients.
- **Prepare WhatsApp Content**: Formats notices for WhatsApp.
- **Send WhatsApp Notice**: Sends notices via the WhatsApp Business API.

Status Update:

- **Update Notice Status**: Updates the distribution status in the spreadsheet.

**Implementation Guide**

1. **Import Workflow**: Import the JSON file into n8n.
2. **Configure Cron Node**: Set it to trigger daily at 9 AM (e.g., `0 9 * * *`).
3. **Set Up Credentials**: Configure SMTP and WhatsApp Business API credentials.
4. **Prepare Spreadsheet**: Create a Google Sheet with notice_id, recipient_name, email, phone, notice_text, distribution_date, and status columns.
5. **Test Workflow**: Run it manually to verify notice distribution and status updates.
6. **Adjust Thresholds**: Modify validation rules or content formatting as needed.

**Technical Dependencies**

- **Cron Service**: For scheduling the workflow.
- **Google Sheets API**: For reading and updating notice data.
- **SMTP Service**: For email notifications (e.g., Gmail, Outlook).
- **WhatsApp Business API**: For sending WhatsApp messages.
- **n8n**: For workflow automation and integration.
**Database & Sheet Structure**

Notice Tracking Sheet (e.g., Notices), with columns: notice_id, recipient_name, email, phone, notice_text, distribution_date, status.

Example:

| notice_id | recipient_name | email | phone | notice_text | distribution_date | status |
|-----------|----------------|-------------------|-------------|------------------------------|-------------------|-----------|
| 001 | John Doe | john@example.com | +1234567890 | School closed tomorrow | 2025-08-07 | Pending |
| 002 | Jane Smith | jane@example.com | +0987654321 | Parent-teacher meeting | 2025-08-08 | Sent |

**Customization Possibilities**

- **Adjust Cron Schedule**: Change to hourly or weekly as needed.
- **Add Channels**: Integrate additional notification channels (e.g., Slack, SMS).
- **Customize Content**: Modify email and WhatsApp message templates.
- **Enhance Validation**: Add rules for data validation (e.g., email format).
- **Dashboard Integration**: Connect to a dashboard tool for real-time status tracking.

**Notes**

- The workflow assumes a Google Sheet as the data source. Replace `spreadsheet_id` and `range` with your actual values.
- Ensure the WhatsApp Business API is properly set up with a verified phone number and token.
- Test the workflow with a small dataset to confirm delivery and status updates.
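A possible implementation of the "Validate Notice Data" step for rows with the columns above might look like this. The exact validation rules in the template may differ; this sketch only checks presence and basic email/phone formats.

```javascript
// Validate one sheet row against the schema above and return any errors
// so the workflow can skip or report invalid notices.
function validateNotice(row) {
  const errors = [];
  if (!row.notice_id) errors.push('missing notice_id');
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(row.email || '')) errors.push('invalid email');
  if (!/^\+\d{7,15}$/.test(row.phone || '')) errors.push('invalid phone');
  if (!row.notice_text) errors.push('missing notice_text');
  return { valid: errors.length === 0, errors };
}
```

Run against the first example row, this returns `{ valid: true, errors: [] }`; a row with a malformed email or a phone number missing its `+` prefix is rejected with a list of reasons.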
by Khairul Muhtadin
This workflow auto-ingests Google Drive documents, parses them with LlamaIndex, and stores Azure OpenAI embeddings in an in-memory vector store, cutting manual update time from ~30 minutes to under 2 minutes per doc.

**Why Use This Workflow?**

Cost reduction: eliminates the monthly cloud fee you would otherwise pay just to store knowledge.

**Ideal For**

- **Knowledge Managers / Documentation Teams**: Automatically keep product docs and SOPs in sync when source files change on Google Drive.
- **Support Teams**: Ensure the searchable KB is always up to date after doc edits, speeding agent onboarding and resolution time.
- **Developer / AI Teams**: Populate an in-memory vector store for experiments, rapid prototyping, or local RAG demos.

**How It Works**

1. **Trigger**: A Google Drive Trigger watches a specific document or folder for updates.
2. **Data Collection**: The updated file is downloaded from Google Drive.
3. **Processing**: The file is uploaded to LlamaIndex cloud via an HTTP Request to create a parsing job.
4. **Intelligence Layer**: The workflow polls the LlamaIndex job status (Wait + Monitor loop). If the parsing status equals SUCCESS, the result is retrieved as markdown.
5. **Output & Delivery**: The parsed markdown is loaded into LangChain's Default Data Loader, passed to Azure OpenAI embeddings (deployment "3small"), then inserted into an in-memory vector store.
6. **Storage & Logging**: The vector store holds embeddings in memory (good for prototyping). Optionally persist to an external vector DB for production.
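The Wait + Monitor loop in step 4 can be expressed in plain code. This is a sketch of the control flow only: in the workflow it is built from Wait and If nodes rather than a loop, and `getStatus` stands in for the "Monitor Document Processing" HTTP request (GET /parsing/job/{jobId}).

```javascript
// Poll a parsing job until it reports SUCCESS, failing on ERROR or after
// maxAttempts. `sleep` is injectable so the delay strategy can be swapped.
async function pollUntilDone(
  getStatus,
  {
    intervalMs = 60000, // mirrors the Wait node's default of one minute
    maxAttempts = 30,
    sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms)),
  } = {}
) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status } = await getStatus();
    if (status === 'SUCCESS') return status;
    if (status === 'ERROR') throw new Error('parsing job failed');
    await sleep(intervalMs);
  }
  throw new Error('parsing job timed out');
}
```

Bounding the attempts is the part worth copying: without a retry limit, a stuck PENDING job would poll the LlamaIndex API indefinitely.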
**Setup Guide**

Prerequisites:

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Execute and import the workflow |
| Google Drive OAuth2 | Essential | Watch and download documents from Google Drive |
| LlamaIndex Cloud API | Essential | Parse and convert documents to structured markdown |
| Azure OpenAI Account | Essential | Generate embeddings (deployment configured to model name "3small") |
| Persistent Vector DB (e.g., Pinecone) | Optional | Persist embeddings for production-scale search |

Installation steps:

1. Import the workflow JSON into your n8n instance: open your n8n instance and import the file.
2. Configure credentials:
   - Azure OpenAI: provide the endpoint, API key, and deployment name.
   - LlamaIndex API: create an HTTP Header Auth credential in n8n. Header name: `Authorization`. Header value: `Bearer YOUR_API_KEY`.
   - Google Drive OAuth2: create OAuth 2.0 credentials in Google Cloud Console, enable the Drive API, and configure the Google Drive OAuth2 credential in n8n.
3. Update environment-specific values: replace the workflow's Google Drive fileId with the file ID or folder ID you want to watch (do not commit public IDs).
4. Customize settings:
   - Polling interval (Wait node): adjust for faster or slower job status checks.
   - Target file or folder: toggled on the Google Drive Trigger node.
   - Embedding model: change the Azure OpenAI deployment if needed.
5. Test execution: save changes and trigger a sample file update on Drive. Verify each node runs and the vector store receives embeddings.
**Technical Details**

Core Nodes:

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Knowledge Base Updated Trigger (Google Drive Trigger) | Triggers on file/folder changes | Set trigger type to specific file or folder; configure OAuth2 credential |
| Download Knowledge Document (Google Drive) | Downloads file binary | Operation: download; ensure OAuth2 credential is selected |
| Parse Document via LlamaIndex (HTTP Request) | Uploads file to LlamaIndex parsing endpoint | POST multipart/form-data to /parsing/upload; use HTTP Header Auth credential |
| Monitor Document Processing (HTTP Request) | Polls parsing job status | GET /parsing/job/{{jobId}}; check status field |
| Check Parsing Completion (If) | Branches on job status | Condition: {{$json.status}} equals SUCCESS |
| Retrieve Parsed Content (HTTP Request) | Fetches parsed markdown result | GET /parsing/job/{{jobId}}/result/markdown |
| Default Data Loader (LangChain) | Loads parsed markdown into document format | Use as document source for embeddings |
| Embeddings Azure OpenAI | Generates embeddings for documents | Credentials: Azure OpenAI; Model/Deployment: 3small |
| Insert Data to Store (vectorStoreInMemory) | Stores documents + embeddings | Use memory store for prototyping; switch to DB for persistence |

Workflow logic:

1. On a Drive change, the file binary is downloaded and sent to LlamaIndex.
2. The workflow enters a monitor loop: Monitor Document Processing fetches the job status, and the If node checks it. If the status is not SUCCESS, the Wait node delays before the next check.
3. When parsing completes, the workflow retrieves the markdown, loads the documents, creates embeddings via Azure OpenAI, and inserts the data into the in-memory vector store.

Customization options, basic adjustments:

- Poll delay: set the Wait node (default: every minute) to balance speed vs. API quota.
- Target scope: switch the trigger from a single file to a folder to auto-handle many docs.
- Embedding model: swap the Azure deployment for a different model name as needed.
Advanced enhancements:

- Persistent vector DB integration: replace vectorStoreInMemory with Pinecone or Milvus for production search.
- Notification: add Slack or email nodes to notify when parsing completes or fails.
- Summarization: add an LLM summarization step to generate chunk-level summaries.
- Scaling: batch uploads and chunking to reduce embedding calls; use a queue (Redis or n8n queue patterns) and horizontal workers for high throughput.

**Performance & Optimization**

| Metric | Expected Performance | Optimization Tips |
|--------|----------------------|-------------------|
| Execution time (per doc) | ~10s–2min (depends on file size & LlamaIndex processing) | Chunk large docs; run embeddings in batches |
| API calls (per doc) | 3–8 (upload, poll(s), retrieve, embedding calls) | Increase poll interval; consolidate requests |
| Error handling | Retries via Wait loop and If checks | Add exponential backoff, failure notifications, and retry limits |

**Troubleshooting**

| Problem | Cause | Solution |
|---------|-------|----------|
| Authentication errors | Invalid/missing credentials | Reconfigure n8n credentials; do not paste API keys directly into nodes |
| File not found | Incorrect fileId or permissions | Verify the Drive fileId and OAuth scopes; share the file with the service account if needed |
| Parsing stuck in PENDING | LlamaIndex processing delay or rate limit | Increase the Wait node interval, monitor the LlamaIndex dashboard, add retry limits |
| Embedding failures | Model/deployment mismatch or quota limits | Confirm the Azure deployment name (3small) and subscription quotas |

Created by: khmuhtadin
Category: Knowledge Management
Tags: google-drive, llamaindex, azure-openai, embeddings, knowledge-base, vector-store

Need custom workflows? Contact us
by Automate With Marc
**📥 Invoice Intake & Notification Workflow**

This automated n8n workflow monitors a Google Drive folder for newly uploaded invoice PDFs, extracts essential information (client name, invoice number, amount, due date), logs the data into a Google Sheet for recordkeeping, and sends a formatted Telegram message to notify the billing team.

For step-by-step video builds of workflows like this: https://www.youtube.com/@automatewithmarc

**✅ What This Workflow Does**

- 🕵️ Watches a Google Drive folder for new invoice files
- 📄 Extracts data from PDF invoices using AI (LangChain Information Extractor)
- 📊 Appends extracted data to a structured Google Sheet
- 💬 Notifies the billing team via Telegram with invoice details
- 🤖 Optionally uses the Claude Sonnet AI model to format human-friendly summaries

**⚙️ How It Works – Step by Step**

1. Trigger: the workflow starts when a new PDF invoice is added to a specific Google Drive folder.
2. Download & Parse: the file is downloaded and its content extracted.
3. Data Extraction: an AI-powered extractor pulls invoice details (invoice number, client, date, amount, etc.).
4. Log to Google Sheets: all extracted data is appended to a predefined Google Sheet.
5. AI Notification Formatting: an Anthropic Claude model formats a clear invoice notification message.
6. Telegram Alert: the formatted summary is sent to a Telegram channel or group to alert the billing team.

**🧠 AI & Tools Used**

- Google Drive Trigger & File Download
- PDF Text Extraction node
- LangChain Information Extractor
- Google Sheets node (append data)
- Anthropic Claude (Telegram message formatter)
- Telegram node (send notification)

**🛠️ Setup Instructions**

1. Google Drive: set up OAuth2 credentials and specify the folder ID to watch.
2. Google Sheets: link the workflow to your invoice tracking sheet.
3. Telegram: set up your Telegram bot and obtain the chat ID.
4. Anthropic & OpenAI: add your Claude/OpenAI credentials if formatting is enabled.
**💡 Use Cases**

- Automated bookkeeping and invoice tracking
- Real-time billing alerts for accounting teams
- AI-powered invoice ingestion and summaries
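As a plain-code illustration of the notification step: the template delegates formatting to Claude, but a deterministic fallback could assemble the Telegram alert directly from the extracted fields. The field names here are assumptions, not the template's exact extraction schema.

```javascript
// Build the Telegram alert text from extracted invoice fields.
// Field names (clientName, invoiceNumber, amount, dueDate) are illustrative.
function formatInvoiceAlert(inv) {
  return [
    '🧾 New invoice received',
    `Client: ${inv.clientName}`,
    `Invoice #: ${inv.invoiceNumber}`,
    `Amount: ${inv.amount}`,
    `Due: ${inv.dueDate}`,
  ].join('\n');
}
```

A template string like this is also a reasonable safety net if the AI formatting node is disabled or its credentials are missing.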
by Xiaoyuan Zhang
**Description**

This workflow transforms your quick event notes into polished LinkedIn posts. Simply send a message via Telegram with your event name and personal notes, and the system will match it with your calendar events and generate a professional LinkedIn post. Even if you don't feel like posting it on LinkedIn, the workflow still serves you: it saves everything to your database for future reference. In this way you can build a personal library of your professional networking activities and insights!

**Who Is This For?**

- **Professional Networkers**: Business professionals who attend events regularly and want to share insights on LinkedIn without spending time on content creation.
- **Event Enthusiasts**: Conference attendees, meetup participants, and workshop goers who want to document and share their experiences professionally.
- **Busy Professionals**: Anyone who wants to maintain an active LinkedIn presence but lacks the time to craft posts from scratch after events.

**What Problem Does This Workflow Solve?**

After attending events, I struggle with several challenges:

- **Time Constraints**: Writing thoughtful LinkedIn posts takes time.
- **Writer's Block**: Difficulty transforming my raw notes and experiences into engaging social media content.
- **Data Organization**: Keeping track of event details, personal insights, and networking opportunities in one place.
**How It Works**

1. **Telegram Input**: Send a message to your Telegram bot in the format "Event Name: Your personal notes"
2. **Message Parsing**: The system extracts the event name and your personal notes from the message
3. **Calendar Matching**: Searches your Google Calendar for events from the past 7 days that match the event name
4. **Data Enrichment**: Combines your personal notes with event details (date, location, attendees) from your calendar
5. **AI Content Generation**: Uses Claude Opus 4 to transform your notes into a professional LinkedIn post with relevant hashtags
6. **Database Storage**: Saves the complete event information and generated LinkedIn post to Supabase
7. **Ready to Post**: Provides you with a polished LinkedIn post ready for publication

**Setup Instructions**

- n8n (cloud or self-hosted)
- Telegram bot (create via @BotFather)
- Google Calendar API (OAuth2 credentials)
- Anthropic API (Claude access)
- Supabase (database and API credentials)

My Supabase table consists of these columns:

- Event Date (datetime)
- Event Title (text)
- Location (text)
- Personal Notes (text)
- LinkedIn Post (text)
- Created Date (datetime)

This workflow transforms the tedious task of creating LinkedIn content into an automated, intelligent system that helps you maintain an active professional presence while building a valuable archive of your networking and learning experiences.
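The message-parsing step follows directly from the "Event Name: Your personal notes" format. A minimal sketch (the template's actual expression may differ): split on the first colon only, so the notes themselves can contain colons.

```javascript
// Split "Event Name: Your personal notes" into its two parts.
// Returns null when no colon separator is present.
function parseEventMessage(text) {
  const idx = text.indexOf(':');
  if (idx === -1) return null;
  return {
    eventName: text.slice(0, idx).trim(),
    notes: text.slice(idx + 1).trim(),
  };
}
```

Splitting on the first colon rather than every colon matters in practice; a note like "talk at 9:30" would otherwise be cut in half.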
by Dean Pike
Convert any website into a searchable vector database for AI chatbots. Submit a URL, choose the scraping scope, and this workflow handles everything: scraping, cleaning, chunking, embedding, and storing in Supabase.

**What it does**

- Scrapes websites using Apify (3 modes: full site unlimited, full site limited, single URL)
- Cleans content (removes navigation, footers, ads, cookie banners, etc.)
- Chunks text (800 characters, markdown-aware)
- Generates embeddings (Google Gemini, 768 dimensions)
- Stores everything in a Supabase vector database

**Requirements**

- Apify account + API token
- Supabase database with the pgvector extension
- Google Gemini API key

**Setup**

1. Create a Supabase documents table with an embedding column (vector 768). Run this SQL query in your Supabase project to enable the vector store setup
2. Add your Apify API token to all three "Run Apify Scraper" nodes
3. Add Supabase and Gemini credentials
4. Test with a small site (5–10 pages) or a single page/URL first

**Next steps**

Connect your vector store to an AI chatbot for RAG-powered Q&A, or build semantic search features into your apps.

Tip: Start with page limits to test content quality before full-site scraping. Review chunks in Supabase and adjust Apify filters if needed for better vector embeddings.

**Sample Outputs**

- Apify actor "runs" in the Apify dashboard triggered by this workflow
- Supabase documents table with scraped website content ingested in chunks with vector embeddings
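The markdown-aware, 800-character chunking can be sketched roughly as below. This is an assumption about the splitter's behavior, not the template's exact configuration: split on headings and blank lines first, then pack adjacent pieces into chunks up to the limit.

```javascript
// Split markdown on heading boundaries and blank lines, then greedily pack
// the pieces into chunks of at most maxLen characters.
function chunkMarkdown(text, maxLen = 800) {
  const blocks = text.split(/\n(?=#)|\n\n+/);
  const chunks = [];
  let current = '';
  for (const block of blocks) {
    if (current && current.length + block.length + 2 > maxLen) {
      chunks.push(current);
      current = block;
    } else {
      current = current ? current + '\n\n' + block : block;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Splitting on structural boundaries before packing is what makes the chunking "markdown-aware": a heading and its following paragraph tend to land in the same chunk, which usually embeds better than arbitrary 800-character slices.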
by Frederik Duchi
This n8n template demonstrates how to automatically process feedback on tasks and procedures using an AI agent. Employees provide feedback after completing a task, which the AI then analyzes to suggest improvements to the underlying procedures. An improvement can change how a single task is executed, or split or merge tasks within a procedure. Management then reviews and decides whether to implement those improvements. This makes it easy to close the loop between execution, feedback, and continuous process improvement.

Use cases are many:

- Marketing (improve the process of approving advertising content)
- Finance (optimize the process of expense reimbursement)
- Operations (refine the process of equipment maintenance)

**Good to know**

- The automation is based on the Baserow template for handling Standard Operating Procedures. However, it can also be implemented in other databases.
- Baserow authentication is done through a database token. Check the documentation on how to create such a token.
- Tasks are inserted using the HTTP Request node instead of a dedicated Baserow node. This supports batch import instead of importing records one by one.

**Requirements**

- Baserow account (cloud or self-hosted)
- The Baserow template for handling Standard Operating Procedures, or a similar database with the following tables and fields:
  - Procedures table with general procedure information such as the name or description.
  - Procedures steps table with all the steps associated with a procedure.
  - Tasks table that contains the actual tasks based on the procedure steps. It must have a field to capture feedback, and a boolean field to indicate whether the feedback has been processed. This avoids the same feedback being used repeatedly.
  - Improvement suggestions table to store the suggestions made by the AI agent.
**How it works**

- **Set table and field ids**: Stores the ids of the involved Baserow database and tables, together with the information needed to make requests to the Baserow API.
- **Feedback processing agent**: The prompt contains a small instruction to check the feedback and suggest improvements to the procedures. The system message is much more extensive, providing as much detail and guidance to the agent as possible. It contains the following sections:
  - Role: gives the agent a clear professional perspective.
  - Goals: lets the agent focus on clarity, efficiency, and actionable improvements.
  - Instructions: guides the agent through a step-by-step flow.
  - Output: shows the agent the expected format and details.
  - Notes: sets guardrails so the agent makes justified and practical suggestions.

  The agent uses the following nodes:
  - OpenAI Chat Model (Model): the template uses the gpt-4.1 model from OpenAI by default, but you can replace this with any LLM.
  - current_procedures (Tool): provides information about all available procedures to the agent.
  - current_procedure steps (Tool): provides information about every step in the procedures to the agent.
  - tasks_feedback (Tool): provides the feedback of the employees to the agent.
  - Required output schema (Output parser): forces the agent to output JSON matching the Improvement suggestions table structure, which makes it easy to add the suggestions to the database in the next step.
- **Create improvement suggestions**: Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to insert multiple records at once into the Improvement suggestions table. The inserted records are the output generated by the AI agent. Check the Baserow API documentation for further details.
- **Get non-processed feedback**: Gets all records from the Tasks table that contain feedback but are not yet marked as processed.
- **Set feedback to processed**: Updates the boolean field for each task to true to indicate that the feedback has been processed.
- **Aggregate records for input**: Aggregates the data from the previous nodes into an array in a property named items. This matches exactly what the Baserow API expects for inserting new records in batch.
- **Update tasks to processed feedback**: Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to update multiple records at once in the Tasks table. The updated records will have their processed field set to true. Check the Baserow API documentation for further details.

**How to use**

- The Manual Trigger node is provided as an example, but you can replace it with other triggers such as a webhook.
- The included Baserow SOP template works perfectly as a base schema to try out this workflow.
- Set the corresponding ids in the Configure settings and ids node.
- Check that the field names for the filters in the tasks_feedback tool node match the ones in your Tasks table.
- Check that the field names for the filters in the Get non-processed feedback node match the ones in your Tasks table.
- Check that the property name in the Set feedback to processed node matches the one in your Tasks table.

**Customising this workflow**

- You can add a new workflow that updates the procedures based on acceptance or rejection by management.
- There is a lot of customization possible in the system prompt. For example, change the goal to prioritize security, cost savings, or customer experience.
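The payload the batch-update call sends can be sketched as follows. The `items` array shape follows the Baserow batch endpoint described above; the field name `Processed` is a placeholder for whatever your Tasks table's boolean field is called.

```javascript
// Build the body for PATCH /api/database/rows/table/{table_id}/batch/
// marking each task's processed flag as true.
function buildBatchUpdatePayload(tasks, processedFieldName = 'Processed') {
  return {
    items: tasks.map((task) => ({ id: task.id, [processedFieldName]: true })),
  };
}
```

This is why the Aggregate node matters: Baserow's batch endpoint wants one request with an `items` array, not one request per row, which is also what makes the HTTP Request node preferable to the per-record Baserow node here.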
by Daniel Lianes
**Auto-generate SEO blog posts from Google Trends to WordPress**

This workflow provides complete blog automation from trend detection to publication. It eliminates manual content research, writing, and publishing by using AI agents, Google Trends analysis, and WordPress integration for hands-free blog management that scales your content strategy.

**Overview**

This workflow automatically handles the entire blog creation pipeline using advanced AI coordination and SEO optimization. It manages trend discovery, topic selection, content research, writing, HTML formatting, and WordPress publishing, with built-in internal linking and comprehensive performance tracking.

Core function: autonomous blog generation that transforms trending Google searches into SEO-optimized WordPress posts with zero manual intervention, maintaining consistent publishing schedules while capturing emerging traffic opportunities.

**Key Capabilities**

- **Automated trend detection**: Discovers emerging topics using Google Trends via SerpAPI before they become saturated
- **AI-powered topic selection**: Intelligent evaluation of search volume, user intent, and competition levels
- **Content research automation**: Perplexity API integration for reliable source gathering and fact verification
- **SEO-optimized writing**: AI agents create keyword-focused, engaging content with proper structure
- **Internal linking intelligence**: Automatic cross-linking with existing posts for enhanced SEO authority
- **WordPress publishing**: Direct publication with semantic HTML formatting and complete metadata
- **Performance tracking**: Comprehensive logging in Google Sheets for analytics and optimization

**Tools Used**

- **n8n**: Workflow orchestration platform managing the entire automation pipeline
- **SerpAPI**: Google Trends data access and trend analysis for keyword discovery
- **Perplexity API**: Reliable content research and fact-checking for authoritative sources
- **OpenRouter**: Gateway to multiple AI models for specialized content generation tasks
- **WordPress API**: Direct publishing integration with full metadata and formatting control
- **Google Sheets**: Performance logging, internal link database, and analytics tracking
- **Built-in SEO Logic**: Automated slug generation, meta descriptions, and HTML optimization

**How to Install**

1. Import the workflow: download the JSON file and import it into your n8n instance
2. Configure API access: set up SerpAPI, Perplexity, and OpenRouter credentials in n8n
3. WordPress integration: add WordPress site credentials and enable REST API access
4. Google Sheets setup: create a tracking spreadsheet using the provided template structure
5. Schedule configuration: set the desired publication frequency (daily, weekly, or custom)
6. Content customization: adjust AI prompts and SEO parameters for your niche
7. Test execution: run a manual test to verify all integrations work correctly

**Use Cases**

- **Content marketing automation**: Maintain consistent blog publishing without manual content creation
- **SEO traffic capture**: Generate optimized posts targeting trending keywords before the competition
- **Authority building**: Publish regularly on emerging topics to establish thought leadership
- **Organic growth strategy**: Systematic content creation that builds domain authority over time
- **Content calendar management**: Automated scheduling eliminates manual planning and publishing
- **Internal link building**: Systematic SEO improvement through an intelligent cross-linking strategy

**Setup requirements**

- **SerpAPI account**: For Google Trends data access and trend monitoring capabilities
- **Perplexity API**: Professional content research and reliable source verification
- **OpenRouter account**: Access to GPT-4.1 and other advanced AI models for content generation
- **WordPress site**: With the REST API enabled and proper user permissions configured
- **Google Sheets**: For comprehensive performance tracking and internal link database management

Total setup time: 15–20 minutes once all API accounts are properly configured.
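The automated slug generation mentioned under "Built-in SEO Logic" can be sketched with a generic slugifier. This is not the template's exact code, just a common pattern: lowercase, strip accents, collapse non-alphanumeric runs into hyphens.

```javascript
// Turn an article title into a URL-safe WordPress slug.
function slugify(title) {
  return title
    .toLowerCase()
    .normalize('NFKD')                 // decompose accented characters
    .replace(/[\u0300-\u036f]/g, '')   // drop the combining accent marks
    .replace(/[^a-z0-9]+/g, '-')       // non-alphanumeric runs -> hyphen
    .replace(/^-+|-+$/g, '');          // trim leading/trailing hyphens
}
```

The NFKD step is the easy-to-forget part: without it, a title like "Café" would lose the accented letter entirely instead of mapping it to "cafe".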
How to customize Content Focus: Modify trend detection parameters and keyword filters to target your specific niche. Adjust topic selection criteria based on your content strategy and audience interests. Writing Style: Customize AI writing prompts to match your brand voice, adjust article length requirements, modify tone and complexity, or update HTML template structure for consistent formatting. SEO Strategy: Update internal linking logic for your site structure, modify meta description templates, adjust keyword density parameters, or customize slug generation patterns. Publishing Control: Change automation frequency, add human review checkpoints, integrate with social media platforms, or connect to email marketing systems for content distribution. Performance Optimization: Adjust Google Sheets tracking columns, modify trend analysis parameters, or integrate with analytics platforms for deeper insights. Google Sheets Template The workflow includes a pre-configured Google Sheets template for tracking: Publication dates and performance metrics Target keywords and search volume data Internal link mapping and SEO improvements Content performance analytics WordPress URLs and metadata tracking Template Structure: Date Published | Title | Slug | Target Keyword | WordPress URL | Internal Links Added | Traffic Data Was this helpful? Let me know! I truly hope this automated blog system helps scale your content strategy. Your feedback helps me create better automation resources for the community. Want to take content automation further? If you're looking to optimize your content strategy or need custom automation solutions: Advisory (Discovery Call): Have content goals but unsure how automation can help? Let's explore how AI-powered workflows can transform your content pipeline and drive organic growth. Schedule a Discovery Call Custom Content Automation: Need a tailored solution for your specific content workflow, CMS integration, or multi-platform publishing strategy? 
Let's build the perfect automation for your needs. Book Content Automation Consulting.

**Stay Updated on Automation**

For more content automation strategies, AI workflow tips, and business automation insights, follow me on LinkedIn.

#n8n #automation #wordpress #seo #contentmarketing #ai #blogging #googletrends #serpapi #perplexity #workflow #contentautomation #seooptimization #aiwriting #blogautomation #digitalmarketing #contentcreation #organicgrowth #inboundmarketing #productivity
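For reference, a Code node could assemble one row of the tracking template described above like this. The input field names (`publishedAt`, `keyword`, `internalLinks`, and so on) are illustrative assumptions about the upstream node's output, not the template's actual schema:

```javascript
// One row of the tracking sheet, keyed by the template's column headers.
// Input field names are illustrative assumptions.
function buildTrackingRow(post) {
  return {
    'Date Published': post.publishedAt,
    'Title': post.title,
    'Slug': post.slug,
    'Target Keyword': post.keyword,
    'WordPress URL': post.url,
    'Internal Links Added': post.internalLinks.join(', '),
    'Traffic Data': post.traffic ?? 'pending', // filled in by later runs
  };
}
```

A Google Sheets node in append mode can then take an object shaped like this as the row to add.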
by Ruben AI
**LinkedIn DM Automation**

**Overview**

Effortlessly scale personalized LinkedIn outreach using a no-code automation stack. This template provides a powerful, user-friendly system for harvesting leads from LinkedIn posts and managing outreach—all within Airtable and n8n.

**Features & Highlights**

- **Actionable Input:** Simply enter a LinkedIn post URL to kickstart the engine—no browser scraping or manual work needed.
- **Lead Harvesting:** Automatically scrape commenters, likers, and profile data using Unipile's API access.
- **Qualification Hub:** Easily review and qualify leads right inside Airtable using custom filters and statuses.
- **Automated Campaign Flow:** n8n handles the sequence—from sending connection requests (adhering to LinkedIn limits) to delivering personalized DMs upon acceptance.
- **Unified Dashboard:** Monitor campaign progress, connection status, and messaging performance in real time.
- **Flexible & Reusable:** Fully customizable for your own messaging, filters, or campaigns—clone, adapt, and deploy.

**Why Use This Template?**

- ++Zero-code friendly:++ Ideal for entrepreneurs, sales professionals, and growth teams looking for streamlined, scalable outreach.
- ++Transparent and compliant:++ Built on the Airtable UI and compliant API integration—no reliance on browser automation or unofficial methods.
- ++Rapid deployment:++ Clone and launch your automation in under 30 minutes—no dev setup required.

**Setup Instructions**

1. Import the template into your n8n workspace.
2. Connect your Airtable and Unipile credentials.
3. Configure the LinkedIn post input, filters, and DM templates in Airtable.
4. Run the workflow and monitor results directly from Airtable or n8n.

**Use Cases**

- Capture inbound leads from your viral LinkedIn posts.
- Qualify and nurture prospects seamlessly without manual follow-ups.
- Scale outreach with precision and personalization.

**YouTube Explanation**

You can access the video explanation of how to use the workflow: Explanation Video
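To give a feel for the Qualification Hub, here is a sketch of the kind of filter logic it applies before a lead enters the campaign. The lead fields (`headline`, `connections`, `status`) and the thresholds are assumptions for illustration, not Unipile's or the template's actual schema:

```javascript
// Illustrative qualification filter, the kind of logic the Airtable
// "Qualification Hub" applies. Field names and thresholds are assumptions.
function qualifyLeads(leads, keywords) {
  return leads.filter(lead =>
    lead.status === 'New' &&
    lead.connections >= 100 &&   // skip thin or empty profiles
    keywords.some(k => (lead.headline || '').toLowerCase().includes(k))
  );
}

const leads = [
  { name: 'Alice', headline: 'Head of Growth', connections: 850, status: 'New' },
  { name: 'Bob', headline: 'Student', connections: 40, status: 'New' },
];
console.log(qualifyLeads(leads, ['growth', 'sales']).map(l => l.name));
// → [ 'Alice' ]
```

In the template itself this review happens through Airtable views and status fields rather than code, but the filtering idea is the same.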
by Trung Tran
**Free PDF Generator in n8n – No External Libraries or Paid Services**

> A 100% free n8n workflow for generating professionally formatted PDFs without relying on external libraries or paid converters. It uses OpenAI to create Markdown content, Google Docs to format and convert to PDF, and integrates with Google Drive and Slack for archiving and sharing, ideal for reports, BRDs, proposals, or any document you need directly inside n8n.

Watch the demo video below:

**Who's it for**

- Teams that need auto-generated documents (reports, guides, checklists) in PDF format.
- Operations or enablement teams who want files archived in Google Drive and shared in Slack automatically.
- Anyone experimenting with LLM-powered document generation integrated into business workflows.

**How it works / What it does**

1. A manual trigger starts the workflow.
2. The LLM generates a sample Markdown document (via the OpenAI Chat Model).
3. A Google Drive folder is configured for storage.
4. A Google Doc is created from the generated Markdown content.
5. The document is exported to PDF using Google Drive. (Sample PDF generated from comprehensive Markdown.)
6. The PDF is archived in a designated Drive folder.
7. The archived PDF is downloaded for sharing.
8. A Slack message is sent with the PDF attached.

**How to set up**

1. Add nodes in sequence:
   - Manual Trigger
   - OpenAI Chat Model (prompt to generate sample Markdown)
   - Set/manual input for the Google Drive folder ID(s)
   - HTTP Request or Google Drive Upload (convert to Google Docs)
   - Google Drive Download (PDF export)
   - Google Drive Upload (archive PDF)
   - Google Drive Download (fetch archived file)
   - Slack Upload (send message with attachment)
2. Configure credentials for OpenAI, Google Drive, and Slack.
3. Map output fields:
   - data.markdown → Google Docs creation
   - docId → PDF export
   - fileId → Slack upload
4. Test run to ensure the PDF is generated, archived, and posted to Slack.
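The PDF-export step boils down to a single Google Drive v3 `files.export` call. This sketch builds the request an HTTP Request node would make from the mapped `docId`; authentication is left to the node's Google Drive OAuth2 credential:

```javascript
// Build the Drive v3 "files.export" request that converts a Google Doc
// to PDF. docId is the ID returned by the Google Docs creation step;
// auth comes from the node's Google Drive OAuth2 credential.
function pdfExportRequest(docId) {
  return {
    method: 'GET',
    url: `https://www.googleapis.com/drive/v3/files/${encodeURIComponent(docId)}/export`,
    qs: { mimeType: 'application/pdf' }, // ask Drive to do the conversion
  };
}

console.log(pdfExportRequest('abc123').url);
// → https://www.googleapis.com/drive/v3/files/abc123/export
```

Note that Drive's export endpoint caps exported files at 10 MB, which is plenty for typical reports and proposals.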
**Requirements**

**Credentials:**
- OpenAI API key (or a compatible LLM provider)
- Google Drive (OAuth2) with read/write permissions
- Slack bot token with the files:write permission

**Access:**
- Write access to the target Google Drive folders
- Slack bot invited to the target channel

**How to customize the workflow**

- **Change the prompt** in the OpenAI Chat Model to generate different types of content (reports, meeting notes, checklists).
- **Automate triggering**: Replace the Manual Trigger with Cron for scheduled document generation, or use a Webhook Trigger to run on demand from external apps.
- **Modify storage logic**: Save both .md and .pdf versions in Google Drive, or use separate folders for drafts vs. final versions.
- **Enhance distribution**: Send PDFs to multiple Slack channels or via email, or integrate with project management tools for automated task creation.
by Guillaume Duvernay
Go beyond basic AI-generated text and create articles that are well-researched, comprehensive, and credible. This template automates an advanced content creation process that mimics a professional writing team: it plans, researches, and then writes. Instead of just giving an AI a topic, this workflow first uses an AI "planner" to break the topic down into logical sub-questions. Then, it deploys an AI "researcher" powered by Linkup to search the web for relevant insights and sources for each question. Finally, this complete, sourced research brief is handed to a powerful AI "writer" to compose a high-quality article, complete with hyperlinks back to the original sources.

**Who is this for?**

- **Content marketers & SEO specialists:** Scale the production of well-researched, link-rich articles that are built for authority and performance.
- **Bloggers & thought leaders:** Quickly generate high-quality first drafts on any topic, complete with a list of sources for easy fact-checking and validation.
- **Marketing agencies:** Dramatically improve your content turnaround time by automating the entire research and first-draft process for clients.

**What problem does this solve?**

- **Adds credibility with sources:** Solves one of the biggest challenges of AI content by automatically finding and preparing to include hyperlinks to the web sources used in the research, just as a human writer would.
- **Ensures comprehensive coverage:** The AI-powered "topic breakdown" step prevents superficial content by creating a logical structure for the article and ensuring all key aspects of a topic are researched.
- **Improves content quality and accuracy:** The "research-first" approach provides the final AI writer with a rich brief of specific, up-to-date information, leading to more detailed and factually grounded articles than a simple prompt ever could.
Automates the entire writing workflow:** This isn't just an AI writer; it's an end-to-end system that automates the planning, research, and drafting process, saving you hours of manual work. How it works This workflow orchestrates a multi-step "Plan, Research, Write" process: Plan (Decomposition): You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner," breaking down the main topic into an array of logical sub-questions. Research (Web Search): The workflow then loops through each of these sub-questions. For each one, it uses Linkup to perform a targeted web search, gathering multiple relevant insights and their source URLs. Consolidate (Brief Creation): All the sourced insights from the research phase are compiled into a single, comprehensive research brief. Write (Final Generation): This complete, sourced brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article based only on the provided research and integrate the source links as hyperlinks where appropriate. Setup Connect your Linkup account: In the Query Linkup for insights (HTTP Request) node, add your Linkup API key. We recommend creating a "Generic Credential" of type "Bearer Token" for this. Linkup's free plan is very generous and includes credits for ~1000 searches per month. Connect your AI provider: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes. For cost-efficiency, we recommend a smaller, faster model for Generate research questions and a more powerful, creative model for Generate the AI output. Activate the workflow: Toggle the workflow to "Active" and use the built-in form to enter an article title and guidelines to generate your first draft! 
**Taking it further**

- **Control your sources:** For more brand-aligned or niche content, you can restrict the web search to specific websites by adding site:example.com OR site:anothersite.com to the query in the **Query Linkup for insights** node.
- **Automate publishing:** Connect the final **Article result** node to a **Webflow** or **WordPress** node to automatically create a draft post in your CMS.
- **Generate content in bulk:** Replace the **Form Trigger** with an **Airtable** or **Google Sheet** trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final **Generate the AI output** node to match your brand's specific tone of voice, add SEO keywords, or include calls-to-action.
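Putting the source-control tip into code, a helper could build the Linkup search request with an optional site restriction. The endpoint and body fields (`q`, `depth`, `outputType`) reflect Linkup's public search API as best understood here; verify them against Linkup's current documentation before relying on this sketch:

```javascript
// Build a Linkup search request for one sub-question, optionally
// restricted to specific sites. Endpoint and body fields are assumptions
// based on Linkup's public search API; check their docs before use.
function linkupQuery(subQuestion, allowedSites = []) {
  const restriction = allowedSites.map(s => `site:${s}`).join(' OR ');
  return {
    method: 'POST',
    url: 'https://api.linkup.so/v1/search',
    body: {
      q: restriction ? `${subQuestion} ${restriction}` : subQuestion,
      depth: 'standard',            // or 'deep' for harder questions
      outputType: 'searchResults',  // raw results with source URLs
    },
  };
}

console.log(linkupQuery('How does RAG reduce hallucinations?', ['example.com']).body.q);
// → How does RAG reduce hallucinations? site:example.com
```

The Bearer Token credential set up earlier supplies the Authorization header, so the Code or HTTP Request node only needs to produce this body.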