by GuanNan
Who is this for?
This template is designed for anyone who wants to integrate MCP with their AI Agents. Whether you're a developer, a data analyst, or an automation enthusiast, if you're looking to leverage the power of MCP and Google Calendar in your n8n workflows, this template is for you.

What problem is this workflow solving?
This template caters to MCP beginners seeking a hands-on example and to developers looking to integrate the Google Calendar MCP service. When integrating MCP with Google Calendar, manually updating AI Agents after changes to the Google Calendar tools on the MCP Server is time-consuming and error-prone. This template automates the process, enabling the AI Agent to instantly recognize changes made to Google Calendar tools on the MCP Server. In project management, for example, it ensures that task schedule updates in Google Calendar are automatically detected by the AI Agent. With detailed steps, it simplifies the integration process for all users.

What this workflow does
This workflow focuses on integrating MCP with Google Calendar within n8n. Specifically, it allows you to build an MCP Server and Client using Google Calendar nodes in n8n. Any changes made to the Google Calendar tools on the MCP Server are automatically recognized by the MCP Client in the workflow. This means you can make changes to your Google Calendar (such as adding, deleting, or modifying events) on the MCP Server, and the MCP Client in the n8n workflow will immediately detect these changes without any manual intervention.

Setup Requirements
- An active n8n account.
- Access to the Google Calendar API. You need to enable the Google Calendar API and create the necessary credentials (OAuth 2.0 client ID).
- Basic knowledge of n8n workflows and MCP concepts.

Step-by-step guide
1. Create a new workflow in n8n: Log in to your n8n account and create a new workflow.
2. Add Google Calendar nodes: Search for and add the Google Calendar nodes to your workflow. Configure the nodes with your Google Calendar API credentials.
3. Set up the MCP Server and Client: Use the appropriate nodes in n8n to set up the MCP Server and Client. Connect the Google Calendar nodes to the MCP nodes as required.
4. Test the workflow: Make some changes to your Google Calendar on the MCP Server and check whether the MCP Client in the n8n workflow detects them.

How to customize this workflow to your needs
- **Modify the triggers**: Change the conditions under which the MCP Client detects changes. For example, set it to detect only specific types of events in Google Calendar.
- **Integrate with other services**: Add more nodes to the workflow to integrate with other services, such as sending notifications to Slack or saving data to a database when a change is detected.
by Jimleuk
This n8n template imports purchase order submissions from Outlook and converts attached purchase order forms in XLSX format into structured output.

Data entry jobs with user-submitted XLSX forms are time-consuming, incredibly mundane but necessary tasks which in all likelihood are inherited and critical to business operation. While we could dream of system overhauls and modernisation, the fact is that change is hard. There is another way, however - using n8n and AI! n8n offers an end-to-end solution to parse XLSX form attachments using LLM-powered OCR and send the extracted output to your ERP or elsewhere.

How it works
- An Outlook trigger watches for incoming purchase order forms submitted via a shared inbox.
- The email attachment for the submission is a form in XLSX format - like this one, Purchase Order Example - which is imported into the workflow.
- The 'Extract from File' node is used with a 'Code' node to convert the XLSX file to markdown, so our LLM can understand it.
- The Information Extractor node reads and extracts the relevant purchase order details and line items from the form.
- A simple validation step checks for common errors such as a missing PO number or amounts not matching up. If any are found, a reply to the buyer is automated.
- Once validation passes, a confirmation is sent to the buyer and the structured purchase order output can be sent along to internal systems.

How to use
This template only works if you're expecting and receiving forms in XLSX format. These can be invoices and request forms as well as purchase order forms. Update the Outlook nodes with your email or other emails as required.

What's next?
I've omitted the last steps to send to an ERP or accounting system as this is dependent on your org.

Requirements
- Outlook for emails. Check out how to set up credentials here: https://docs.n8n.io/integrations/builtin/credentials/microsoft
- OpenAI for LLM document understanding and extraction.

Customising the workflow
- This template should work for other Excel files. Some will be more complicated than others, so experiment with different parsers, extraction tools, and strategies.
- Customise the Information Extractor schema to pull out the specific data you need - for example, capture any notes or comments given by the buyer.
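The template's own 'Code' node isn't reproduced here, but a minimal sketch of the XLSX-to-markdown step might look like the following. It assumes the 'Extract from File' node emits one item per spreadsheet row with the column headers as JSON keys; the field layout is an assumption, so adjust it to your actual output.

```javascript
// n8n Code node (Run Once for All Items), JavaScript.
// Assumption: 'Extract from File' outputs one item per spreadsheet row,
// with column headers as JSON keys.
const rows = $input.all().map(item => item.json);

if (rows.length === 0) {
  return [{ json: { markdown: '' } }];
}

// Build a markdown table so the LLM sees a readable grid instead of raw cells.
const headers = Object.keys(rows[0]);
const headerLine = `| ${headers.join(' | ')} |`;
const dividerLine = `| ${headers.map(() => '---').join(' | ')} |`;
const bodyLines = rows.map(
  row => `| ${headers.map(h => String(row[h] ?? '')).join(' | ')} |`
);

const markdown = [headerLine, dividerLine, ...bodyLines].join('\n');

// Pass a single item downstream for the Information Extractor node.
return [{ json: { markdown } }];
```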
by SamirLiu
📝 What this workflow does
Every morning at 8 a.m., this workflow fetches the latest AI-related articles from both GNews and NewsAPI. It merges up to 40 new articles daily, selects the 15 most relevant ones on AI technology and applications, and uses GPT-4.1 to generate concise summaries in accurate Traditional Chinese (while preserving essential English technical terms). Each summary also includes the article link for easy referral. The compiled digest is then posted to your designated Telegram account or group.

👥 Who is this for?
- AI enthusiasts, professionals, and anyone interested in artificial intelligence news
- Individuals and teams wanting a concise daily digest of AI developments in Traditional Chinese
- Telegram users who prefer automated information delivery

🎯 What problem does this workflow solve?
With the rapid evolution of AI technology, it can be overwhelming to keep up with new developments. This workflow addresses information overload by automatically collecting, summarizing, and translating the most important AI news each morning - all delivered conveniently to your chosen Telegram channel or group.

⚙️ Setup
🔑 Add NewsAPI and GNews API keys
- Register for accounts on NewsAPI.org and GNews to obtain your API keys.
- Input your NewsAPI key directly into the Fetch NewsAPI articles node.
- Input your GNews API key into the Fetch GNews articles node.

🤖 Set up your Telegram Bot
- Create a Telegram bot via BotFather and copy the generated bot token.
- In n8n, create Telegram Bot credentials using this token.
- In the Send summary to Telegram node, enter the chat ID of your target user, group, or channel to receive the messages.

🧠 Configure OpenAI credentials
- In n8n, create a new credential using your OpenAI API key.
- Assign this credential to the GPT-4.1 Model node (or equivalent OpenAI/AI nodes).

After completing these steps, your workflow is fully configured to fetch, summarize, and deliver daily AI news to your selected Telegram chat automatically.

🛠️ How to customize this workflow
- 🔍 **Change the topic:** Update the keywords in the NewsAPI and GNews nodes for other subjects (e.g., "blockchain", "quantum computing").
- ⏰ **Adjust delivery time:** Modify the scheduled trigger to your preferred hour.
- ✍️ **Tweak summary style or language:** Refine the prompt in the AI summarizer node for different tones, or translate into other languages as needed.

📦 Dependencies
- NewsAPI account
- GNews account
- Telegram Bot
- OpenAI API access (for GPT-4.1) or a compatible AI model for the Langchain agent
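The merge-and-deduplicate step between the two news sources could be handled by a Code node along these lines; this is a sketch only, and the field names (`title`, `url`, `publishedAt`) are assumptions to align with whatever the Fetch NewsAPI articles and Fetch GNews articles nodes actually return.

```javascript
// n8n Code node (Run Once for All Items), JavaScript.
// Merges items from both fetch nodes and drops duplicates by URL/title.
const articles = $input.all().map(item => item.json);

const seen = new Set();
const merged = [];

for (const article of articles) {
  // Use the URL (falling back to a normalised title) as the dedup key.
  const key = (article.url || article.title || '').trim().toLowerCase();
  if (!key || seen.has(key)) continue;
  seen.add(key);
  merged.push(article);
}

// Keep the newest items first and cap the list at 40 per day, as in the template.
merged.sort((a, b) => new Date(b.publishedAt) - new Date(a.publishedAt));

return merged.slice(0, 40).map(article => ({ json: article }));
```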
by Samir Saci
Tags: Accessibility, SEO, Blogging, Marketing, Automation, AI, Web Auditing

Context
Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. In my personal blog, I share insights on how to use AI, automation, and data analytics to improve logistics, operations, and digital sustainability practices.

> Have you heard about accessibility?

In this workflow, I use n8n to improve the quality of alternative texts for images on my personal website.
📬 For business inquiries, you can connect with me on LinkedIn.

Who is this template for?
This workflow is for:
- **Bloggers** and **website owners** who want to **improve accessibility**
- **SEO professionals** looking to boost page performance
- **Web developers** and **product teams** automating web audits

What does it do?
This n8n workflow:
- 🔍 Downloads the HTML of a blog or web page
- 🖼️ Extracts all `<img>` tags and their `alt` attributes
- 📉 Detects missing or too-short alt texts
- 🤖 Sends those images to GPT-4o (with vision) to generate new alt descriptions
- 📄 Saves the results into a Google Sheet, updating the alt text when needed

How it works
1. Set a page URL using the Set node
2. Download the HTML content
3. Extract each image's src and alt using a Code node
4. Store the results in a Google Sheet
5. Filter images with altLength < 50
6. Send the image URL to GPT-4o
7. Update the Google Sheet with the newly generated newAlt text
The AI alt texts are concise, descriptive, and accessibility-compliant.

What do I need to get started?
You'll need:
- A Google Sheet to store the audit results
- An OpenAI account with GPT-4o access

Follow the Guide!
Follow the sticky notes in the workflow or check my tutorial to configure each node and start using AI to improve the accessibility of your website.
🎥 Watch My Tutorial

Notes
- GPT-generated alt texts are limited to ~125–150 characters for best results
- Use this to comply with WCAG and improve Google indexing
- Easily adapt it to audit multiple domains or e-commerce catalogues
This workflow was built using n8n version 1.85.4
Submitted: April 21, 2025
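The extraction step (item 3 above) could be sketched in a Code node roughly as follows, assuming the previous node returns the page HTML in a field named `data` (an assumption; match it to your HTTP Request node output). The template's actual Code node may differ.

```javascript
// n8n Code node (Run Once for All Items), JavaScript.
// Naive regex extraction of <img> tags; good enough for an audit sketch,
// though a proper HTML parser is more robust on complex markup.
const html = $input.first().json.data || '';

const results = [];
const imgTagRegex = /<img\b[^>]*>/gi;

for (const tag of html.match(imgTagRegex) || []) {
  const srcMatch = tag.match(/src\s*=\s*["']([^"']*)["']/i);
  const altMatch = tag.match(/alt\s*=\s*["']([^"']*)["']/i);
  const src = srcMatch ? srcMatch[1] : '';
  const alt = altMatch ? altMatch[1] : '';
  results.push({
    src,
    alt,
    altLength: alt.length, // used later to filter images with altLength < 50
  });
}

return results.map(row => ({ json: row }));
```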
by Alex Kim
Automatically convert documents from Google Drive into vector embeddings using OpenAI, LangChain, and PGVector - fully automated through n8n.

⚙️ What It Does
This workflow monitors a Google Drive folder for new files, supports multiple file types (PDF, TXT, JSON), and processes them into vector embeddings using OpenAI's text-embedding-3-small model. These embeddings are stored in a Postgres database using the PGVector extension, making them query-ready for semantic search or RAG-based AI agents. After successful processing, files are moved to a separate "vectorized" folder to avoid duplication.

💡 Use Cases
- Powering Retrieval-Augmented Generation (RAG) AI agents
- Semantic search across private documents
- AI assistant knowledge ingestion
- Automated document pipelines for indexing or classification

🧠 Workflow Highlights
- **Trigger Options:** Manual or Scheduled (3 AM daily by default)
- **Supported File Types:** PDF, TXT, JSON
- **Embedding Stack:** LangChain Text Splitter, OpenAI Embeddings, PGVector
- **Deduplication:** Files are moved after processing
- **License:** CC BY-NC-SA 4.0
- **Author:** AlexK1919

🛠 What You'll Need
- **Google Drive OAuth2** credentials (connected to the Search Folder, Download File, and Move File nodes)
- **OpenAI API Key** (used in the Embeddings OpenAI node)
- **Postgres + PGVector** database (connected in the Postgres PGVector Store node)

🔧 Step-by-Step Setup Instructions
1. Create Google OAuth2 credentials in n8n and connect them to all Google Drive nodes.
2. Set your source folder ID in the Search Folder node - this is where incoming files are placed.
3. Set your processed folder ID in the Move File node - files will be moved here after vectorization.
4. Ensure you have a PGVector-enabled Postgres instance and input the table name and collection in the Postgres PGVector Store node.
5. Add your OpenAI credentials to the Embeddings OpenAI node and select text-embedding-3-small.
6. Optional: Activate the Schedule Trigger node to run daily, or configure your own schedule.
7. Run manually by triggering When clicking 'Test workflow' for on-demand ingestion.

🧩 Customization Tips
Want to support more file types or enhance the pipeline?
- **Add new extractors**: Use Extract from File with other formats like DOCX, Markdown, or HTML.
- **Refine logic by file type**: The Switch node routes files to the correct extraction method based on MIME type (application/pdf, text/plain, application/json).
- **Pre-process with OCR**: Add an OCR step before extraction to handle scanned PDFs or images.
- **Add filters**: Enhance the Search Folder or Switch node logic to skip specific files or folders.

📄 License
This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license. Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/
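The Switch node handles the MIME-type routing in the template itself; purely as an illustration, the same routing logic could be expressed in a Code node as sketched below. The field names (`mimeType`, `route`) are assumptions, not the template's actual property names.

```javascript
// n8n Code node (Run Once for Each Item), JavaScript - illustrative only.
// Tags each file with an extraction route based on its MIME type.
const mimeType = $json.mimeType || '';

let route;
if (mimeType === 'application/pdf') {
  route = 'pdf';
} else if (mimeType === 'text/plain') {
  route = 'text';
} else if (mimeType === 'application/json') {
  route = 'json';
} else {
  route = 'skip'; // unsupported types can be filtered out downstream
}

return { json: { ...$json, route } };
```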
by Mario
Dynamically switch between LLMs for AI Agents using LangChain Code

Purpose
This example workflow demonstrates a way to connect multiple LLMs to a single AI Agent/LangChain Node and programmatically use one - or, in this case, loop through them.

What it does
This AI workflow takes in customer complaints and generates a response that is validated before being returned. If the answer was not satisfactory, the response is generated again with a more capable model.

How it works
- A LangChain Code Node allows multiple LLMs to be connected to a single Basic LLM Chain.
- On every call only one LLM is actually connected to the Basic LLM Chain, determined by the index defined in a previous node.
- The AI output is later validated by a Sentiment Analysis node.
- If the result was not satisfactory, it loops back to the beginning and executes the same query with the next available LLM.
- The loop ends either when the result passes the requirements or when all LLMs have been tried.

Setup
Clone the workflow and select the corresponding credentials. You'll need an OpenAI account; alternatively, you can swap the LLM nodes with ones from a different provider, such as Anthropic, after the import.

How to use
Beware that the order of the used LLMs is determined by the order in which they were added to the workflow, not by their position on the canvas. After cloning this workflow into your environment, open the chat and send this example message:

> I really love waiting two weeks just to get a keyboard that doesn’t even work. Great job. Any chance I could actually use the thing I paid for sometime this month?

Most likely you will see that the first validation fails, causing it to loop back to the generation node and try again with the next available LLM. Since AI responses are unpredictable, the results and number of tries will differ for each run.

Disclaimer
Please note that this workflow can only run on self-hosted n8n instances, since it requires the LangChain Code Node.
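The workflow ships with its own LangChain Code Node, which is not reproduced here. Purely for orientation, a 'Supply Data' snippet that selects one of several connected models might look like the sketch below. It assumes `this.getInputConnectionData('ai_languageModel', 0)` exposes the connected sub-nodes and that a hypothetical earlier node named 'Set Index' stores the current attempt index; verify both against the LangChain Code node documentation for your n8n version.

```javascript
// LangChain Code node, 'Supply Data' section (JavaScript) - illustrative sketch only.
// Assumption: getInputConnectionData returns the models wired into the
// ai_languageModel input; with several LLMs connected this is an array.
const connectedModels = await this.getInputConnectionData('ai_languageModel', 0);
const models = Array.isArray(connectedModels) ? connectedModels : [connectedModels];

// Assumption: an earlier node ('Set Index', hypothetical name) stores which model
// to use for the current attempt, starting at 0 and incremented on each loop.
const index = $('Set Index').first().json.index ?? 0;

// Clamp to the number of available models so the final attempt still returns a model.
const safeIndex = Math.min(index, models.length - 1);

// Returning a single model makes the Basic LLM Chain use exactly one LLM per call.
return models[safeIndex];
```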
by Sean Lon
Target Audience
You will find this workflow or template perfect if you are part of an internal talent acquisition team, a recruitment agency, an HR team, or a hiring manager seeking to bulk-automate the initial screening of CVs and resumes - e.g. automatically getting each candidate's shortlisted/rejected result together with its score and rationale. By eliminating manual evaluation and screening, you get a smart AI agent giving you a standardized, efficient, and scalable solution for handling large volumes of applications. With bulk automation, you can focus on strategic decision-making rather than tedious screening tasks, ensuring a faster, more accurate, and fairer hiring process.

Key focus
This workflow focuses on organized file-folder management, trackable candidate CVs, a maintainable job description, and an autonomous AI agent.
- Organized Folder-File Structure – CVs are automatically categorized based on their status, ensuring a structured workflow and easy retrieval.
- Candidate Tracker – A real-time tracking system records the state of each CV, allowing recruiters to monitor shortlisted, rejected, or KIV (Keep in View) candidates.
- AI Agent for Decision Automation – The AI autonomously orchestrates screening decisions, replacing manual LLM configurations with dynamic AI-driven evaluations for scalability and accuracy.
- Maintainable Job Description Management – A structured job description file ensures continuous updates, keeping hiring criteria flexible and aligned with recruitment needs.
- Email Notifications – The system automatically sends receipt confirmations upon processing completion, providing timely updates to recruiters.

Features - Workflow
Automated Resume Screening Workflow: this workflow leverages Groq Llama 4 for intelligent resume analysis, speeding up the screening process by generating a matching score, a result (shortlisted/rejected/KIV), and key insights/rationale on suitability for the provided job description.

Step-by-Step Process:
1. Monitor Google Drive: Listens and checks for new CV resumes in Google Drive.
2. Retrieve Resume: Downloads the CV resumes from Google Drive.
3. Extract Resume Data: Extracts text content from the CV resume PDF files.
4. Extract Job Description Data: Extracts text content from the job description.
5. Analyze with Groq: Generates a matching score based on the job requirements [SCORE: 1-10], provides a decision on job suitability [SHORTLISTED/REJECTED/KIV], and provides actionable insights into job suitability [REASON].
This ensures a fast, efficient, and accurate screening process, eliminating manual evaluation.

Setup Guide
Step-by-Step Instructions
1. Ensure all credentials are ready and set up (Groq, Google Drive, Gmail, Google Sheets, Google Docs). View the official n8n documentation on node setup accordingly. See also the setup notes.
2. Folder & File Setup
   - Create a Google Drive folder like this (view directory example).
   - Create a job description like this (view file example).
   - Configure a tracker like this - Candidate Name, AI Score, AI Verdict, AI Reason (view file example).
3. Adjust the email conversations/report as you like.
You are ready to go!
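For reference, the bracketed output described in step 5 could be parsed into tracker columns with a small Code node like the sketch below. The field name `text`, the carried-over `candidateName`, and the exact tag layout are assumptions; align them with your actual Groq prompt and node output.

```javascript
// n8n Code node (Run Once for Each Item), JavaScript - illustrative only.
// Expects text such as "[SCORE: 8] [SHORTLISTED] [REASON: Strong match on required skills]".
const output = $json.text || '';

const scoreMatch = output.match(/\[SCORE:\s*(\d+)\s*\]/i);
const verdictMatch = output.match(/\[(SHORTLISTED|REJECTED|KIV)\]/i);
const reasonMatch = output.match(/\[REASON:\s*([^\]]+)\]/i);

return {
  json: {
    candidateName: $json.candidateName, // assumed to be carried over from earlier nodes
    aiScore: scoreMatch ? Number(scoreMatch[1]) : null,
    aiVerdict: verdictMatch ? verdictMatch[1].toUpperCase() : 'KIV',
    aiReason: reasonMatch ? reasonMatch[1].trim() : output.trim(),
  },
};
```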
by Ranjan Dailata
Who this is for?
The Automate Etsy Data Mining with Bright Data Scrape & Google Gemini workflow is designed for eCommerce analysts, product researchers, and AI developers seeking to extract actionable insights from Etsy listings at scale. It is ideal for:
- **eCommerce Entrepreneurs** - researching product demand and competition.
- **Market Analysts** - tracking pricing, reviews, and trends across Etsy categories.
- **Product Managers** - identifying niche opportunities and design inspirations.
- **Data Scientists & AI Engineers** - automating product intelligence pipelines.
- **Growth Hackers** - leveraging Etsy insights to refine product-market fit.

What problem is this workflow solving?
Manually browsing Etsy to analyze product listings, pricing, reviews, and seller activity is slow, inconsistent, and unscalable. Scraping Etsy requires unlocking JavaScript-heavy content and structuring noisy data for analysis. This workflow solves that by providing:
- Automated and scalable scraping of Etsy product listings using Bright Data's infrastructure.
- Fully paginated, structured Etsy product data extraction via the Google Gemini LLM.
- Faster decision-making for product research and competitive analysis thanks to the fully automated, paginated data extraction.

What this workflow does
- Receives input: sets the Etsy URL for the data extraction and analysis.
- Uses Bright Data's Web Unlocker to extract content from the relevant pages.
- Cleans and preprocesses the scraped content for readability.
- Sends the content to Google Gemini to enrich the results.
- Persists the extracted data to disk.
- Sends the response to a target system via a Webhook notification.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set Esty Search Query node to set the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the Webhook endpoint of your choice.

How to customize this workflow to your needs
- **Input Sources**: Replace the static URL with dynamic input from Google Sheets, a Webhook, or Airtable to research multiple niches.
- **Prompt Customization**: Adjust the Gemini prompts to extract specific insights, for example: list key features of the product, or summarize the review themes.
- **Data Output Options**: Update the Webhook notification to save data to Google Sheets, Notion or Airtable, SQL/NoSQL databases, or Slack/Email.
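For orientation, the Web Unlocker call that the workflow's HTTP Request node performs can be sketched as a standalone Node.js snippet like the one below. The endpoint and body fields follow Bright Data's Web Unlocker API as documented at the time of writing, so verify them against the current docs; the zone name and Etsy URL are placeholders.

```javascript
// Standalone Node.js sketch (Node 18+, built-in fetch) - illustrative only.
const BRIGHT_DATA_TOKEN = process.env.BRIGHT_DATA_TOKEN; // your Web Unlocker token
const zone = 'your_web_unlocker_zone';                    // zone created in setup step 2
const targetUrl = 'https://www.etsy.com/search?q=handmade+candles'; // example Etsy URL

async function unlock() {
  const response = await fetch('https://api.brightdata.com/request', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${BRIGHT_DATA_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ zone, url: targetUrl, format: 'raw' }),
  });

  if (!response.ok) {
    throw new Error(`Web Unlocker request failed: ${response.status}`);
  }
  return response.text(); // raw HTML, ready for cleaning and the Gemini step
}

unlock().then(html => console.log(html.slice(0, 500)));
```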
by mariskarthick
QuantumDefender AI is a next-generation intelligent cybersecurity assistant designed to harness the symbolic strength of quantum computing's promise alongside cutting-edge AI capabilities. This sophisticated agent empowers SOC analysts, red teamers, and security researchers with rapid threat investigation, operational automation, and intelligent command execution - all driven by GPT-4 and integrated tools, accessible through Telegram or any other medium.

🔑 Key Features:
- Expert-Level Cybersecurity Research & Analysis: Leverages powerful AI models to deliver clean, detailed, domain-specific insights across detection, remediation, and offensive security.
- Command & Control: Executes Linux shell commands, autonomous scripts, and system operations securely in isolated environments.
- Real-Time Web Intelligence: Utilizes the integrated Langsearch API to provide timely internet research with contextual relevance.
- Calendar & Scheduling Automation: Manages Google Calendar events, or any similar application, dynamically from chat (create, update, delete, retrieve).
- Multi-Tool Orchestration: Combines calculator functions, internet searches, command execution, and messaging for comprehensive operational support.
- Telegram-native Chatbot: Delivers an adaptive, memory-informed, and interactive conversational experience with immediate typing indicators and high responsiveness.
- Conversation & Session Management: Maintains context-aware, session-based memory to enable smooth, multi-turn dialogues with individual users. Sends "typing..." indicators during processing to ensure an interactive, user-friendly chat experience. Operates exclusively within Telegram, delivering rich, timely responses and leveraging all Telegram bot capabilities.
- Execution Intelligence & Safety: Fully autonomous in deciding which tools to invoke, how frequently, and in what sequence to fulfill user requests comprehensively and responsibly. Operates within a secure temporary folder environment to contain all command executions safely and avoid persistent or harmful side effects. Enforces strict safety protocols to avoid running malicious or destructive commands, maintaining ethical standards and compliance.

Use Cases:
- Cybersecurity researchers and operators seeking an intelligent assistant to accelerate investigations and automate routine tasks.
- Red team professionals requiring on-the-fly command execution and information gathering integrated with tactical chat interactions.
- SOC teams aiming to augment their alert triage and incident handling workflows with AI-powered analysis and action.
- Anyone looking for a robust multi-tool AI chatbot integrated with real-world operational capabilities.

Setup Requirements:
- OpenAI API key for GPT-4.1-nano language processing.
- Telegram Bot API credentials with proper webhook setup to receive and respond to messages.
- Google OAuth credentials for Calendar integration, if calendar features are used.
- SSH access credentials for executing commands on remote hosts, if remote execution is enabled.
- Internet connectivity for the Langsearch web search API.

Customization & Extensibility:
The workflow is built modularly with n8n's flexible node system. Users can extend it by adding more tools, integrating other services (ticketing, threat intel, scanning tools), or modifying interaction logic to suit specialized operational needs and environments.

Created by Mariskarthick M
Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
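As a small illustration of the "typing..." indicator behaviour, the standard Telegram Bot API method `sendChatAction` can be called as sketched below (standalone Node.js, not the workflow's actual node configuration); `BOT_TOKEN` and `CHAT_ID` are placeholders you must supply.

```javascript
// Standalone Node.js sketch (Node 18+, built-in fetch) - illustrative only.
const BOT_TOKEN = process.env.BOT_TOKEN;
const CHAT_ID = process.env.CHAT_ID;

async function showTyping() {
  const url = `https://api.telegram.org/bot${BOT_TOKEN}/sendChatAction`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ chat_id: CHAT_ID, action: 'typing' }),
  });
  const body = await response.json();
  if (!body.ok) {
    throw new Error(`sendChatAction failed: ${JSON.stringify(body)}`);
  }
  // The indicator only lasts a few seconds, so long-running agents re-send it periodically.
}

showTyping().catch(console.error);
```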
by Krishna Kumar Eswaran
🧠 Problem This Solves:
For developers and creators, consistently posting quality content on LinkedIn can be time-consuming. This workflow automates the process by:
- Fetching the latest Dev.to articles
- Posting them to LinkedIn twice daily
- Preventing duplicates using Airtable
- Sending success alerts to Telegram
This ensures you're always active on LinkedIn, with zero manual effort.

👥 Who This Template Is For
- Developers who want to build their presence on LinkedIn
- Tech creators or solo founders looking to grow an audience
- Community/page managers who want regular, curated content
- Busy professionals aiming for consistent LinkedIn engagement without doing it manually

⚙️ Workflow Breakdown
This automation runs twice a day (9:00 AM and 7:00 PM) and performs the following steps:
1. Fetches Dev.to articles based on a tag
2. Checks Airtable to avoid reposting the same article
3. Posts to LinkedIn if it's new
4. Sends a Telegram message after posting successfully

🧩 Step-by-Step Setup Instructions

✅ 1. Airtable Configuration
Create a new base in Airtable with just one table and one column:
- Table Name: PostedArticles
- Column: ArticleID (Single line text - stores the unique ID of each Dev.to article posted)
This column is used to track posted articles and prevent duplicates.

🔗 2. Dev.to API Setup
Use the following endpoint in the HTTP Request node:
https://dev.to/api/articles?tag=YOUR_TAG_HERE&per_page=10
Replace YOUR_TAG_HERE with a tag like android, webdev, ai, etc.

💬 3. Telegram Bot Setup
- Open @BotFather in Telegram and create a new bot
- Save the bot token
- Get your chat ID using @userinfobot or via the Telegram API
- Add a Telegram node in n8n using this token and chat ID
This will notify you when a post is successfully published.

🧾 4. LinkedIn Setup
- Create a LinkedIn Developer App
- Use OAuth2 to connect it in n8n
- Choose to post on either a user profile or a company page

🧱 5. n8n Workflow Structure
Here's the basic structure of the workflow:
1. Cron Node - Triggers at 9:00 AM and 7:00 PM daily
2. HTTP Request - Fetches latest articles from Dev.to
3. Airtable Search - Checks if ArticleID already exists
4. IF Node - Filters new vs. already-posted articles
5. LinkedIn Post - Publishes new article
6. Airtable Create - Saves the new ArticleID
7. Telegram Message - Sends success confirmation

🛠️ Customization Tips
- Change the Dev.to tag in the API URL
- Modify the LinkedIn post format (add hashtags, emojis, personal notes)
- Adjust posting times in the Cron node
- Use additional filters (e.g., only post articles with a cover image or certain word count)
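The duplicate check performed by the Airtable Search and IF nodes could equally be sketched as a single Code node, as below. The node names ('Fetch Dev.to Articles', 'Airtable Search') are hypothetical and the field names assume the standard Dev.to API response; adjust both to your workflow.

```javascript
// n8n Code node (Run Once for All Items), JavaScript - illustrative sketch only.
const fetched = $('Fetch Dev.to Articles').all().map(item => item.json);
const posted = $('Airtable Search').all().map(item => String(item.json.ArticleID));

const postedIds = new Set(posted);

// Keep only articles whose Dev.to id has not been stored in Airtable yet.
const fresh = fetched.filter(article => !postedIds.has(String(article.id)));

return fresh.map(article => ({
  json: {
    ArticleID: String(article.id),          // saved back to Airtable after posting
    title: article.title,
    url: article.url,
    description: article.description || '', // used to compose the LinkedIn post
  },
}));
```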
by JPres
👥 Who Is This For?
Content creators, marketing teams, and channel managers who want a simple, hands-off solution to upload videos and automatically generate optimized metadata from video transcripts.

🛠 What Problem Does This Solve?
Manual video uploads with proper metadata creation are time-consuming and repetitive. This workflow fully automates:
- Monitoring a specific Google Drive folder for new video uploads
- Seamless YouTube upload processing
- Transcript extraction for context understanding
- AI-powered generation of titles, descriptions, and tags
- Metadata application to uploaded videos without manual intervention

🔄 Node-by-Node Breakdown

| Step | Node Purpose |
|------|---------------------------------------------------------------------|
| 1 | New Video? (Trigger) – Monitors specified Google Drive folder |
| 2 | Download New Video – Retrieves the video file from Google Drive |
| 3 | Upload to YouTube – Uploads the video to YouTube with initial settings |
| 4 | Get Transcript – Extracts transcript from the uploaded video |
| 5 | Adjust Transcript Format – Formats raw transcript for processing |
| 6 | Create Description – Generates SEO-optimized description |
| 7 | YT Tags (Message Model) – Creates relevant tags based on content |
| 8 | YT Title (Message Model) – Generates compelling title |
| 9 | Define File Path Upload Format (Optional) – Structures data paths |
| 10 | Update Video's Metadata – Applies generated title, description, tags |

⚙️ Pre-conditions / Requirements
- n8n with Google Drive and YouTube API credentials configured (stored as n8n credentials/variables; no hard-coded IDs)
- Dedicated Google Drive folder for video uploads
- YouTube channel with proper upload permissions
- AI service access for transcript processing and metadata generation
- Sufficient storage for temporary video handling

⚙️ Setup Instructions
1. Import this workflow into your n8n instance.
2. Configure Google Drive credentials; reference the folder ID via an n8n variable (do not hard-code it).
3. Set up YouTube API credentials with upload and edit permissions.
4. Specify the target Google Drive folder ID in the New Video? trigger node (via variable).
5. Configure AI service credentials for transcript and metadata generation.
6. Adjust message templates for title, description, and tag creation.
7. Test with a small video file before production use.

🎨 How to Customize
- Modify AI prompts to match your channel's tone and style.
- Add conditional logic based on video categories or naming conventions.
- Implement notification systems to alert when uploads complete.
- Create custom metadata templates for different content types.
- Include timestamps or chapter markers based on transcript analysis.
- Add social media sharing nodes to announce new uploads.

⚠️ Important Notes
- Video quality is preserved through the upload process.
- Consider YouTube API quotas when handling multiple uploads.
- Transcript quality affects metadata generation results.
- Videos are initially uploaded without visibility adjustments.
- Processing time depends on video length and transcript complexity.

🔐 Security and Privacy
- Store API credentials and folder IDs as n8n credentials/variables - remove any hard-coded tokens or IDs.
- Video files are processed temporarily and not stored permanently.
- Limit Google Drive folder access to authorized users only.
- Manage YouTube upload permissions carefully (use OAuth/service accounts).
- Ensure compliance with organizational data-handling policies.
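As a rough illustration of step 5 (Adjust Transcript Format), a Code node could consolidate caption segments into prompt-ready text as sketched below. It assumes the transcript arrives as one item per segment with a `text` field and optional `start` seconds; the real transcript node in your workflow may use different field names.

```javascript
// n8n Code node (Run Once for All Items), JavaScript - illustrative sketch only.
const segments = $input.all().map(item => item.json);

// Join segments into a single block of prose for the title/description/tag prompts.
const plainText = segments
  .map(segment => (segment.text || '').replace(/\s+/g, ' ').trim())
  .filter(Boolean)
  .join(' ');

// Optionally keep rough chapter markers (minute:second) for description generation.
const markers = segments
  .filter(segment => typeof segment.start === 'number')
  .map(segment => {
    const minutes = Math.floor(segment.start / 60);
    const seconds = String(Math.floor(segment.start % 60)).padStart(2, '0');
    return `${minutes}:${seconds} ${String(segment.text || '').slice(0, 60)}`;
  });

return [{ json: { transcript: plainText, markers } }];
```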
by Krishna Kumar Eswaran
🧠 Problem This Solves
Manually sharing Medium articles to LinkedIn daily can be repetitive and time-consuming. This automation:
- Fetches the latest Medium articles based on a tag (e.g., android)
- Posts them on LinkedIn twice daily
- Uses Airtable to prevent duplicates
- Sends a confirmation to Telegram once posted
Stay consistently active on LinkedIn without lifting a finger.

👥 Who This Template Is For
- Developers who write or follow Medium content
- Tech creators or founders looking to grow an audience
- Community or page managers needing regular curated posts
- Busy professionals who want hands-free LinkedIn engagement

⚙️ Workflow Breakdown
This automation runs at 9:00 AM and 7:00 PM daily and performs these steps:
1. Fetch articles from MediumAPI.com by tag
2. Check Airtable to prevent reposting the same article
3. Post on LinkedIn if it's new
4. Store the article ID in Airtable
5. Send a Telegram message after successful posting

🧾 Step-by-Step Setup Instructions

✅ 1. Airtable Configuration
Create a base with:
- Table Name: PostedArticles
- Column: ArticleID (Single line text - to track posted articles)

🔗 2. MediumAPI Setup
- Go to https://mediumapi.com
- Sign up and generate your API key from the dashboard
- Use this API endpoint in an HTTP node:
  GET https://mediumapi.com/api/tag/YOUR_TAG/latest
  Headers: Authorization: Bearer YOUR_API_KEY
- Replace YOUR_TAG with a topic like android, ai, webdev, etc.

💬 3. Telegram Bot Setup
- Go to @BotFather and create a new bot
- Save the bot token
- Use @userinfobot to get your Telegram chat ID
- Add a Telegram node in n8n with the token + chat ID

🔗 4. LinkedIn Setup
- Create a LinkedIn Developer App
- Connect it via OAuth2 in n8n
- Choose to post on your profile or company page

🧱 5. n8n Workflow Structure

| Node Type | Description |
|-----------|-------------|
| Cron | Triggers the flow twice a day |
| HTTP Request | Fetches articles from MediumAPI.com |
| Airtable Search | Checks if article ID already exists |
| IF Node | Skips duplicates |
| LinkedIn Post | Publishes to your LinkedIn profile/page |
| Airtable Create | Stores posted article ID |
| Telegram Node | Sends success notification |

🛠️ Customization Tips
- Change the tag in the API URL to match your niche
- Add hashtags or personal comments to the LinkedIn message
- Schedule different posting times in the Cron node
- Filter Medium posts based on length or title keywords (optional)
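If you want to add hashtags or personal comments before the LinkedIn Post node, a small Code node along these lines could compose the post text. The field names (`title`, `url`, `tags`) and the post wording are assumptions; map them to whatever the MediumAPI.com response actually contains.

```javascript
// n8n Code node (Run Once for Each Item), JavaScript - illustrative sketch only.
const title = $json.title || 'New article';
const url = $json.url || '';
const tags = Array.isArray($json.tags) ? $json.tags : [];

// Turn up to three article tags into hashtags, e.g. "machine learning" -> #MachineLearning.
const hashtags = tags.slice(0, 3).map(tag =>
  '#' + tag
    .split(/\s+/)
    .map(word => word.charAt(0).toUpperCase() + word.slice(1))
    .join('')
);

const postText = [
  `📖 Worth a read: ${title}`,
  url,
  hashtags.join(' '),
].filter(Boolean).join('\n\n');

return { json: { ...$json, postText } };
```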