by Eumentis
**What It Does**

This workflow runs automatically when a new email is received in the user's Gmail account. It sends the email content to OpenAI (GPT-4.1-mini), which determines whether the message requires action. If the email is identified as actionable, the workflow sends a structured alert message to the user in Microsoft Teams. This keeps the user informed of high-priority emails in real time without the need to check every message manually. The workflow does not log any execution data, so email content remains secure and unreadable by others.

**How It Works**

- **Trigger on New Email**: The workflow is triggered automatically when a new email is received in the user's Gmail account.
- **Email Evaluation with OpenAI**: The email content is sent to GPT-4.1-mini, which evaluates whether the message requires user action.
- **Filter Actionable Emails**: Only emails identified as actionable by the AI proceed through the rest of the workflow.
- **Send Notification to Teams**: For actionable emails, the workflow sends a structured alert message to the user in a Microsoft Teams chat via a Power Automate webhook.

**Prerequisites**

- Gmail IMAP credentials
- OpenAI API key
- Microsoft Teams webhook URL
- Power Automate flow that sends a message to a Teams chat

**How to Set It Up**

**1. Set Up the Power Automate Workflow**

1.1 Open the Workflow (Power Automate) app in Microsoft Teams. If it's not already added, go to Apps → search "Workflow" → click Add → open it.

1.2 Create a new flow: click New Flow → select Create from blank.

1.3 Add the trigger *When a Teams webhook request is received*. In the trigger setup, set *Who can trigger the flow?* to Anyone. After saving the flow, a webhook URL will be generated — this URL will be used in the n8n workflow.

1.4 Add the action *Parse JSON*. Set Content to: Body, and use the following schema:

```json
{
  "type": "object",
  "properties": {
    "from": { "type": "string" },
    "recipientEmail": { "type": "string" },
    "receivedAt": { "type": "string" },
    "subject": { "type": "string" },
    "message": { "type": "string" }
  }
}
```

1.5 Add the action *Get an @mention token for a user*. Set the User field to the Microsoft Teams email address of the person to notify (e.g., yourname@domain.com).

1.6 Add the action *Post message in a chat or channel* and configure the following:

- Post as: Flow bot
- Post in: Chat with Flow bot
- Recipient: your Microsoft Teams email address (e.g., yourname@domain.com)

Paste the following code into the Message field (in code view):

```
Hello @{outputs('Get_an_@mention_token_for_a_user')?['body/atMention']},

You have received a new email at your email address @{body('Parse_JSON')?['recipientEmail']} that requires your attention:

From: @{body('Parse_JSON')?['from']}
Received On: @{body('Parse_JSON')?['receivedAt']}
Subject: @{body('Parse_JSON')?['subject']}

Please review the message at your earliest convenience.
Click here to search this mail in your mailbox
```

1.7 Save and enable the flow: click Save and turn the flow On. The webhook URL is now active and available in the trigger step; copy it to use in n8n.

Need help with the setup? Feel free to contact us.

**2. Configure the IMAP Email Trigger**

First, enable 2-Step Verification in your Google Account and generate an App Password for n8n. Then, in the IMAP node → Create Credential, connect using the following details:

• User: your Gmail address
• Password: the App Password
• Host: imap.gmail.com
• Port: 993
• SSL/TLS: Enabled

Follow the n8n documentation to complete the setup.

**3. Configure the OpenAI Integration**

Add your OpenAI API key as a credential in n8n. Follow the n8n documentation to complete the setup.
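In step 4 below, the n8n HTTP Request node must send a JSON body matching the Parse JSON schema from step 1.4. A minimal sketch of such a payload (all values are illustrative):

```json
{
  "from": "sender@example.com",
  "recipientEmail": "you@gmail.com",
  "receivedAt": "2024-05-01T09:15:00Z",
  "subject": "Contract renewal - action needed",
  "message": "Hi, please review and sign the attached renewal by Friday."
}
```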
**4. Set Up the HTTP Request to Trigger the Power Automate Workflow**

Paste the webhook URL generated by the Power Automate workflow into the URL field of the HTTP Request node.

**5. Disable Execution Logging for Privacy**

To ensure that email content is not stored in logs, disable execution logging in n8n:

- In the n8n workflow editor, click the three dots (•••) in the top-right corner and select Settings.
- Set *Save manual executions* to: Do not save.
- Set *Save successful production executions* to: Do not save.
- Set *Save failed production executions* to: Do not save, if you also want to avoid logging errors.
- Save the changes.

Refer to the official n8n documentation for more details.

**6. Activate the Workflow**

Set the workflow status to Active in n8n so it runs automatically whenever a new email arrives in Gmail.

**Need Help?**

Contact us for support and custom workflow development.
by darrell_tw
**Water Reminder Workflow**

This workflow demonstrates how to use n8n and Slack to build an intelligent water-drinking reminder system, combined with Google Sheets for data recording and OpenAI for generating personalized reminder messages.

Google Sheet template:
The iOS shortcut template:
The result in iOS Health:
The template demo on YouTube:

**Key Features**

- Scheduled Reminders: Automatically sends water reminders at a random minute every hour.
- Intelligent Scheduling: Delays the next reminder if you've recently had water.
- AI-Generated Messages: Uses OpenAI to generate friendly, non-repetitive reminder messages.
- Data Tracking: Records daily water intake and calculates the percentage of your goal achieved.
- Quick Response: Easily record water intake through Slack buttons.
- iOS Integration: Provides iOS shortcut links to sync data with the Health app.

**Pre-Configuration Requirements**

To use this workflow, you need to set up the following:

- Google Sheets: Create a spreadsheet with log and setting sheets. The log sheet should include date, time, and value columns; the setting sheet stores the daily water intake goal.
- Slack: Create a Slack app, obtain an API token, and configure permissions for interactive buttons.
- OpenAI: Obtain an OpenAI API key.
- iOS Shortcut (optional): Create an iOS shortcut named darrell_water for recording health data.

**Node Configurations**

**1. Scheduled Triggers and Data Collection**

1.1. Schedule Trigger
- **Purpose**: Triggers water reminders on schedule
- **Configuration**: Cron Expression: `0 {{ Math.floor(Math.random() * 11) }} 8-23 * * *` (triggers at a random minute, 0–10, of every hour between 8 AM and 11 PM)

1.2. Google Sheets - Get Target
- **Purpose**: Retrieves the daily water intake goal
- **Configuration**: Document ID: your Google spreadsheet ID; Sheet Name: setting

1.3. Google Sheets - Get Log
- **Purpose**: Retrieves today's water intake records
- **Configuration**: Document ID: your Google spreadsheet ID; Sheet Name: log; Filter: date equals today's date `{{ $now.format('yyyy-MM-dd') }}`

1.4. Summarize
- **Purpose**: Calculates total water intake for today
- **Configuration**: Fields to Summarize: value (sum)

1.5. Limit
- **Purpose**: Gets the most recent water intake record
- **Configuration**: Keep: Last items

**2. Intelligent Reminder Logic**

2.1. Combine Data
- **Purpose**: Merges target and actual water intake data
- **Configuration**: Combine By: Combine by position; Number of Inputs: 3

2.2. If
- **Purpose**: Checks whether water was consumed recently
- **Configuration**: Condition: `{{ DateTime.fromISO($json.date+"T"+$json.time).format('yyyy-MM-dd HH:mm:ss') }}` is after `{{ $now.minus(30, "minutes") }}`

2.3. Wait
- **Purpose**: Delays the reminder if water was consumed recently
- **Configuration**: Wait Time: `{{ Math.floor(Math.random() * 1) + 1 }}` minutes (note: as written this always evaluates to 1 minute; raise the multiplier, e.g. `Math.random() * 10`, for a genuinely random delay)

**3. AI Message Generation and Sending**

3.1. OpenAI
- **Purpose**: Generates personalized water reminder messages
- **Configuration**: Model: gpt-4o-mini; Messages: a system prompt requesting responses in Traditional Chinese and in JSON format, and a user prompt including last water time, current time, goal, and progress; Temperature: 1

3.2. Slack Send Drink Notification
- **Purpose**: Sends water reminders to a Slack channel
- **Configuration**: Channel: your Slack channel ID; Message Type: Block; Block UI contains the AI-generated reminder message and water amount buttons (100ml, 150ml, 200ml, 250ml, 300ml)

**4. User Interaction and Data Recording**

4.1. Slack Drink Webhook
- **Purpose**: Receives user interactions when water buttons are clicked
- **Configuration**: HTTP Method: POST; Path: slack-water-webhook
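Steps 4.2–4.3 below parse the interactive payload Slack posts to this webhook. A minimal sketch of the equivalent logic in a single Code node (Slack delivers a form-encoded `payload` field containing JSON):

```javascript
// Minimal sketch of what steps 4.2–4.3 extract from a Slack
// block-actions payload (standard interactive-message shape).
const payload = JSON.parse($json.body.payload); // Slack posts a form field named "payload"

return [{
  json: {
    value: Number(payload.actions[0].value),   // ml chosen via the button, e.g. 200
    message_text: payload.message.text,        // original reminder text
    message_ts: payload.container.message_ts,  // thread timestamp for the confirmation reply
  },
}];
```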
4.2. Slack Action Payload
- **Purpose**: Parses Slack interaction data
- **Configuration**: Mode: Raw JSON; Output: `{{ $json.body.payload }}`

4.3. Slack Action Drink Data
- **Purpose**: Extracts water amount and message information
- **Configuration**: Assignments: value: `{{ $json.actions[0].value }}`; message_text: `{{ $json.message.text }}`; shortcut_url: `shortcuts://run-shortcut?name=darrell_water&input=`; shortcut_url_data: JSON containing water amount and time; message_ts: `{{ $json.container.message_ts }}`

4.4. Google Sheets
- **Purpose**: Records water intake data to the spreadsheet
- **Configuration**: Operation: Append; Document ID: your Google spreadsheet ID; Sheet Name: log; Column Mapping: date: `{{ $now.format('yyyy-MM-dd') }}`, time: `{{ $now.format('HH:mm:ss') }}`, value: `{{ $json.value }}`

4.5. Send to Slack with Confirm
- **Purpose**: Sends a confirmation message and provides the iOS shortcut link
- **Configuration**: Channel: your Slack channel ID; Message Type: Block; Block UI contains the confirmation message and an iOS Health app button; Reply Settings: reply in the thread of the original message

**Author Information**

This workflow was created by darrell_tw_, an engineer focused on AI and automation.

Contact: X · Threads · Instagram · Website
by Garri
**Description**

This workflow is an n8n-based automation that allows users to download TikTok/Reels videos without watermarks simply by sending the video link through a Telegram bot. It uses a Telegram Trigger to receive the link from the user, then makes an HTTP request to a third-party API (mediadl.app) to process the link and retrieve the download URL. The workflow filters the results to find the no-watermark download link, downloads the video in MP4 format, and sends it back to the user directly in their Telegram chat.

**Key features**

- Supports the best available video quality (bestvideo+bestaudio).
- Automatically removes watermarks.
- Instant response directly in the Telegram chat.
- Fully automated — no manual downloads required.

**How It Works**

1. Telegram Trigger: The user sends a TikTok or Reels link to the Telegram bot; the workflow captures and stores the link for processing.
2. HTTP Request (MediaDL API): The link is sent via POST to https://mediadl.app/api/download; the API processes it and returns video file data.
3. Wait: The workflow pauses a few seconds to ensure the API response is fully ready.
4. Edit Fields: Extracts the video file URL from the API response.
5. Additional Wait: Adds a short pause to avoid connection errors during the download.
6. HTTP Request (Proxy Download): Downloads the MP4 video file from the extracted URL.
7. Send Video via Telegram: The downloaded video is sent back to the user in their Telegram chat.

**How to Set Up**

1. Create & configure a Telegram bot: Open Telegram and search for BotFather. Send /newbot → choose a name & username for your bot. Copy the Bot Token provided — you'll need it in n8n.
2. Prepare your n8n environment: Log in to your n8n instance (self-hosted or n8n Cloud). Go to Credentials → create new Telegram API credentials using your Bot Token.
3. Import the workflow: In n8n, click Import and select the PROJECT_DOWNLOAD_TIKTOK_REELS.json file.
4. Configure the Telegram nodes: In the Telegram Trigger and Send Video nodes, connect your Telegram API credentials.
5. Configure the HTTP Request nodes: Ensure the Download2 and HTTP Request nodes have the correct URL and headers (pre-configured for mediadl.app). Make sure responseFormat is set to file in the final download node.
6. Activate the workflow: Toggle Activate in the top-right corner of n8n. Test by sending a TikTok or Reels link to your bot — you should receive the no-watermark video in return.
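The exact request shape depends on mediadl.app's API, which isn't documented here. As a hypothetical sketch, assuming the endpoint accepts a JSON body with a `url` field (verify against the pre-configured HTTP Request node before changing anything):

```javascript
// Hypothetical sketch — the field names are assumptions, not mediadl.app's documented API.
const res = await fetch('https://mediadl.app/api/download', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ url: 'https://www.tiktok.com/@user/video/1234567890' }),
});
const data = await res.json();
// The workflow's Edit Fields step then picks the no-watermark MP4 URL out of this response.
```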
by n8n Team
This workflow demonstrates how to connect an open-source model to a Basic LLM node. The workflow is triggered when a new manual chat message appears. The message is then run through a Language Model Chain that is set up to process text with a specific prompt to guide the model's responses. Note that open-source LLMs with a small number of parameters require slightly different prompting with more guidance to the model. You can change the default Mistral-7B-Instruct-v0.1 model to any other LLM supported by HuggingFace. You can also connect other nodes, such as Ollama. Note that to use this template, you need to be on n8n version 1.19.4 or later.
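For example, a small instruct-tuned model tends to follow along better when the prompt spells out the format explicitly. Mistral-7B-Instruct models expect the [INST] wrapper, so a guided prompt might look like this (the instruction text is just an illustration):

```text
<s>[INST] You are a concise assistant. Answer in exactly two sentences,
in plain English, with no preamble and no markdown.

Question: Why is the sky blue? [/INST]
```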
by Paolo Ronco
**Automated Invoice Archiving**

Automatically fetch, store, and extract key information from invoices received via email from your ISP or utility provider (electricity, gas, telecom, water, etc.). The workflow saves the invoices to Google Drive (or optionally to your personal FTP/SFTP server) and logs all invoice details into Google Sheets via AI-powered extraction.

Read: Full setup guide

**How it works**

- Scheduled Trigger: Runs the workflow at a selected interval (e.g., every hour). You can freely adjust the timing.
- Gmail – Fetch Messages: Reads your Gmail inbox and retrieves only messages coming from your ISP/utility provider's email address, filtering for messages with PDF attachments.
- Gmail – Download Invoice: Fetches the full email content and downloads the attached invoice (PDF).
- Google Drive – Upload File: Uploads the invoice into a specific Google Drive folder of your choice.
- (Optional) Upload to FTP/SFTP: Sends a copy of the invoice to your personal server via secure FTP/SFTP.
- AI extraction pipeline:
  - Extract PDF Text: converts the PDF into text (OCR is not required for text-based PDFs).
  - AI Agent (OpenRouter): understands the invoice content and extracts structured fields (invoice number, date, provider name, total amount, tax info, line items, etc.).
  - Code node: sanitizes and parses the JSON from the AI model.
- Google Sheets – Append Invoice Data: Adds a new row to your Google Sheet with all parsed invoice fields.
- (Optional) Cleanup: Automatically deletes the Gmail message and the temporary file in Google Drive (useful when you only want your FTP or Sheets copy).

**Parameters to configure**

| Parameter | Description | Recommended configuration |
| --- | --- | --- |
| Gmail Credentials | OAuth2 credentials needed to read and delete emails. | Create OAuth credentials on Google Cloud → enable Gmail API → paste Client ID & Secret into n8n → "Connect OAuth2". |
| Sender Email Filter | Email address your provider uses to send invoices. | Example: billing@your-isp.com, invoices@utility.it, ciao@octopusenergy.it |
| Google Drive Folder | Destination folder for saving invoices. | Copy the folder ID from the Drive URL and paste it into folderId. |
| Google Drive Credentials | OAuth2 connection for file uploads/deletions. | Same Google Cloud project → enable Drive API → OAuth connect in n8n. |
| FTP/SFTP Server (optional) | Upload invoices to your private server. | Host/IP · Port · Username · Password or SSH key · Destination path (e.g., /home/user/invoices/). |
| AI Model (OpenRouter) | Large-language model used to parse invoice text. | Example: gpt-4.1, llama-3.1, or any preferred OpenRouter model. |
| Google Sheets Document | Destination spreadsheet for structured data. | Create a Sheet → add columns (Vendor, Invoice Number, Date, Amount, Service Type, etc.) → insert documentId & sheet name. |
| Sheets Credentials (Service Account) | Used for writing into Google Sheets. | Create a Service Account → download JSON → add to n8n → share the Sheet with the Service Account email. |
| Trigger Interval | How often the workflow checks for new invoices. | Every hour · every 30 minutes · daily at a set time. |

**Node-by-node breakdown**

1. Schedule Trigger: Runs at the interval you choose (default: hourly) and triggers the entire workflow.
2. Gmail – Get Many Messages: Filters inbox items by sender email (your ISP/utility address), attachment presence, and unread/recent status; downloads metadata plus attachment references.
3. Filter – Contains Attachment: Ensures only messages with binary attachments continue.
4. Gmail – Get Invoice: Downloads the full email JSON and the invoice PDF (binary data).
5. Google Drive – Upload File: Uploads the invoice PDF with a dynamic filename: {{ $json.from.value[0].name }}-{{ $json.date }}.pdf. Requires Google Drive OAuth2 credentials and the destination folder ID.
6. HTTP Request – Download File: Retrieves the raw PDF file from Google Drive for further processing.
7. (Optional) FTP/SFTP Upload: Uploads the PDF to your server using host/IP, port (default 22), username, password or private key, and a destination path. The filename is sanitized to ensure Unix compatibility.
8. (Optional) Delete Temporary File: Deletes the Google Drive file if you don't want duplicates.
9. (Optional) Delete Gmail Message: Removes the original email once processed (optional inbox cleanup).
10. Extract from File (PDF → Text): Reads the PDF and extracts raw text for AI processing.
11. OpenRouter Chat Model: The LLM backend for the AI agent; provides invoice parsing, field extraction, and structured reasoning.
12. AI Agent – Extract Invoice Fields: The agent is instructed to return strict JSON only, containing keys such as vendor_name, invoice_number, invoice_date, total_amount, tax_details, line_items[], po_number, and po_date. Works for most standard PDF invoices.
13. Code – Clean & Parse JSON: Sanitizes the AI output by removing markdown fences, extracting valid JSON, and parsing it into a clean JS object. If the AI output is malformed, debugging info is returned.
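A minimal sketch of this kind of clean-and-parse logic in an n8n Code node (assuming the agent's text lands in `$json.output`, which may differ in your setup):

```javascript
// Strip markdown fences, isolate the outermost JSON object, and parse it.
// Returns debugging info instead of failing when the output is malformed.
const raw = $json.output ?? '';
const unfenced = raw.replace(/`{3}\w*/g, '').trim(); // drop triple-backtick fences, with or without a language tag
const start = unfenced.indexOf('{');
const end = unfenced.lastIndexOf('}');

try {
  const parsed = JSON.parse(unfenced.slice(start, end + 1));
  return [{ json: parsed }];
} catch (err) {
  return [{ json: { error: 'Malformed AI output', detail: err.message, raw } }];
}
```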
14. Google Sheets – Append Data: Appends the extracted fields into a structured row. Example mappings:
    - Vendor → {{ $json.vendor_name }}
    - Invoice Number → {{ $json.invoice_number }}
    - Date → {{ $json.invoice_date }}
    - Amount → {{ $json.total_amount }}
    - Service Type → {{ $json.line_items[0].description }}

💡 **Tips & best practices**

- Add multiple sender filters if you have more than one utility provider.
- Ensure invoices are text-based PDFs for the best extraction results.
- Use Google Drive as a reliable long-term archive, or keep only the FTP copy if you prefer local storage.
- Create charts in Google Sheets to track monthly utility cost trends, year-over-year comparisons, and consumption spikes (if included in invoices).

⚠️ **Important notes**

- Utility invoices contain personal and financial data. Keep your FTP/SFTP server secure.
- Google APIs require proper OAuth2 or Service Account setup; misconfiguration may cause permission errors.
- This workflow is for personal automation, not a replacement for official fiscal archiving.
- AI extraction quality depends on invoice formatting and the model you choose.
by Puspak
🚀 **Remote Job Automation Workflow**

Automatically fetch, clean, and broadcast the latest remote job listings — powered by RemoteOK, Airtable, and Telegram.

🔧 **Key Features**

- Seamless Data Fetching: Pulls the latest job listings from the RemoteOK API using an HTTP Request node.
- Smart Data Processing (via Code node, sketched below): filters out irrelevant metadata, cleans and sanitizes job descriptions (e.g., HTML tags, special characters), handles malformed or encoded text gracefully, and extracts and formats salary ranges for clarity.
- Airtable Integration (Upsert): Stores each job as a unique record keyed by job ID, avoiding duplication through conditional upserts using Airtable's Personal Access Token.
- Telegram Bot Broadcasting: Automatically formats and sends job posts to a Telegram group or channel, keeping your community or team updated in real time.

📦 **Tech Stack**

- RemoteOK API – source of curated remote job listings
- Airtable – lightweight, accessible job database
- Telegram Bot API – real-time job notifications
- n8n – workflow automation engine to tie everything together

🔁 **Workflow Overview**

1. Fetch jobs from the RemoteOK API
2. Clean & normalize job descriptions and metadata
3. Extract salary ranges and standardize them
4. Upsert to Airtable (avoiding duplicates)
5. Format the post for visual clarity
6. Send to Telegram via the bot integration

🧠 **Perfect For**

- Remote job boards or aggregators
- Recruitment agencies and startups
- Developers building personal job feeds
- Communities or channels sharing curated remote opportunities
- Automating newsletters or job digests

✅ **Benefits**

- Near real-time updates
- Minimal maintenance
- Full control and extensibility with n8n
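As a rough illustration of that Code-node cleanup, here is a sketch; the RemoteOK field names (`position`, `salary_min`, `salary_max`, etc.) are assumptions, so verify them against a live API response:

```javascript
// Sketch: strip HTML from descriptions and format a salary range per job.
const stripHtml = (s) => (s || '')
  .replace(/<[^>]*>/g, ' ')              // drop tags
  .replace(/&nbsp;|&amp;|&#\d+;/g, ' ')  // common HTML entities
  .replace(/\s+/g, ' ')
  .trim();

return $input.all().map(({ json: job }) => ({
  json: {
    id: job.id,
    position: job.position,
    company: job.company,
    description: stripHtml(job.description),
    salary: job.salary_min && job.salary_max
      ? `$${job.salary_min.toLocaleString()} – $${job.salary_max.toLocaleString()}`
      : 'Not listed',
    url: job.url,
  },
}));
```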
by Amit Mehta
N8N Workflow: Printify Automation - Update Title and Description - AlexK1919

This workflow automates the process of getting products from Printify, generating new titles and descriptions using OpenAI, and updating those products.

**How it Works**

This workflow automatically retrieves a list of products from a Printify store, processes them to generate new titles and descriptions based on brand guidelines and custom instructions, and then updates the products on Printify with the new information. It also interacts with Google Sheets to track the status of the products being processed. The workflow can be triggered manually or by an update in a Google Sheet.

**Use Cases**

- E-commerce automation: Automating content updates for a Printify store.
- Marketing & SEO: Generating SEO-friendly or seasonal content for products using AI.
- Product management: Batch-updating product titles and descriptions without manual effort.

**Setup Instructions**

1. Printify API credentials: Set up httpHeaderAuth credentials for Printify to allow the workflow to get and update products.
2. OpenAI API credentials: Provide an API key for OpenAI in the openAiApi credentials.
3. Google Sheets credentials: The workflow requires two separate Google Sheets credentials: one for the trigger (googleSheetsTriggerOAuth2Api) and another for appending/updating data (googleSheetsOAuth2Api).
4. Google Sheets setup: You need a Google Sheet to store product information and track the status of the updates. The workflow is linked to a specific spreadsheet.
5. Brand guidelines: The Brand Guidelines + Custom Instructions node must be updated with your specific brand details and any custom instructions for the AI.

**Workflow Logic**

1. Trigger: The workflow can be triggered manually or by an update in a Google Sheet when the upload column is changed to "yes".
2. Get product info: It fetches the shop ID and then a list of all products from Printify.
3. Process products: The product data is split, and the workflow loops through each product.
4. AI content generation: For each product, the Generate Title and Desc node uses OpenAI to create a new title and description based on the original content, brand guidelines, and custom instructions.
5. Google Sheets update: The workflow appends the product information and a "Product Processing" status to a Google Sheet. It then updates the row with the newly generated title and description and changes the status to "Option added".
6. Printify update: The Printify - Update Product node sends a PUT request to the Printify API to apply the new title and description to the product.

**Node Descriptions**

| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | A manual trigger for testing the workflow. |
| Google Sheets Trigger | An automated trigger that starts the workflow when the upload column in the Google Sheet is updated. |
| Printify - Get Shops | Fetches the list of shops associated with the Printify account. |
| Printify - Get Products | Retrieves all products from the specified Printify shop. |
| Brand Guidelines + Custom Instructions | A Set node to store brand guidelines and custom instructions for the AI. |
| Generate Title and Desc | An OpenAI node that generates a new title and description based on the provided inputs. |
| GS - Add Product Option | Appends a new row to a Google Sheet to track the processing status of a product. |
| Update Product Option | Updates an existing row in the Google Sheet with the new product information and status. |
| Printify - Update Product | A PUT request to the Printify API to update a product with new information. |

**Customization Tips**

- You can swap out the Printify API calls for similar services like Printful or Vistaprint.
- Modify the Brand Guidelines + Custom Instructions node to change the brand name, tone, or specific instructions for the AI.
- Change how many options the workflow generates by modifying the Number of Options node.
- You can change the OpenAI model used in the Generate Title and Desc node, for example from gpt-4o-mini to another model.

**Suggested Sticky Notes for Workflow**

- "Update your Brand Guidelines before running this workflow. You can also add custom instructions for the AI node."
- "You can swap out the API calls to similar services like Printful, Vistaprint, etc."
- "Set the Number of Options you'd like for the Title and Description."

**Required Files**

- 1V1gcK6vyczRqdZC_Printify_Automation_-Update_Title_and_Description-_AlexK1919.json: the main n8n workflow export for this automation.
- The Google Sheets template for this workflow.

**Testing Tips**

- Run the workflow with the manual trigger to see the flow from start to finish.
- Change the upload column in the Google Sheet to "yes" to test the automated trigger.
- Verify that the new titles and descriptions are correctly updated on Printify.

**Suggested Tags & Categories**

Printify, OpenAI
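For reference, the update performed by the Printify - Update Product node corresponds to a call like the following against Printify's REST API v1 (IDs and token are placeholders):

```javascript
// Sketch: update a product's title and description via Printify's v1 API.
const res = await fetch(
  'https://api.printify.com/v1/shops/{shop_id}/products/{product_id}.json',
  {
    method: 'PUT',
    headers: {
      'Authorization': 'Bearer <PRINTIFY_API_TOKEN>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      title: 'New AI-generated title',
      description: 'New AI-generated description',
    }),
  },
);
if (!res.ok) throw new Error(`Printify update failed: ${res.status}`);
```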
by Abdul Mir
**Overview**

Automate your entire LinkedIn content machine — from research and image generation to scheduling and posting — with this AI-powered workflow. It pulls in past content ideas, researches new ones using Perplexity, generates a new post (with image) using your brand's voice and style, saves the output to Google Sheets, and auto-posts twice a week to LinkedIn. It's perfect for founders, creators, and marketers who want to stay consistent on LinkedIn without manually writing or designing every post.

**Who's it for**

- Solo founders or marketers building a LinkedIn presence
- Content creators growing their audience
- Agencies managing client content calendars
- Anyone who wants to post consistently without spending hours on content

**How it works**

1. Pulls old ideas from a Google Sheet
2. Schedules content creation using n8n's cron node
3. Uses Perplexity to research current topics and trends
4. Feeds the data into an AI agent (like Claude or GPT) to generate post copy
5. Creates a branded image using a reference style and OpenAI's image model
6. Saves post content + image URL into Google Sheets
7. Twice a week, selects one ready post, downloads the image, and publishes it to LinkedIn

**How to set up**

1. Add your Google Sheet ID and column names for posts
2. Connect your OpenAI (or Claude) and Perplexity API keys
3. Upload a brand-style reference image to Google Drive
4. Configure your LinkedIn account and connect the node
5. Adjust the cron schedule for both post creation and auto-posting (examples below)
6. (Optional) Edit the AI prompt to match your personal voice or niche

**Requirements**

- Google Drive & Sheets access
- OpenAI or Claude API key
- Perplexity API key
- LinkedIn credentials (via n8n's LinkedIn integration)

**How to customize**

- Change the AI prompt to fit your voice or audience
- Swap out Perplexity for another research method
- Adjust how often you want posts scheduled or published
- Swap LinkedIn for Twitter, Slack, or another platform
- Add Notion or Airtable as your CMS backend
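For step 5 of the setup, the two Schedule Trigger cron expressions might look like this; the days and times are only examples, so adapt them to your cadence and timezone:

```text
# Draft a new post every weekday at 07:00
0 7 * * 1-5

# Publish twice a week, Tuesday and Thursday at 09:00
0 9 * * 2,4
```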
by Arthur Dimeglio
**What this workflow does**

Automatically:

• fetches fresh news,
• filters out aggregators/PR wires and duplicates,
• writes a human-sounding LinkedIn post with GPT,
• downloads the article image to verify it's usable,
• publishes to LinkedIn (with or without media), and
• logs the posted titles in Firestore to avoid re-posting.

Runs on a daily schedule (cron) and supports two post variants:
• Case 1: the article has a description → richer post
• Case 2: no description → short post, still human and casual

⸻

**How it works (high-level flow)**

• Schedule Trigger (0 10,12,19,21 * * *): runs at 10:00, 12:00, 19:00, and 21:00 (server timezone).
• Firestore (Get Previous News Titles): loads previously posted titles (document asma/x20) to de-dupe.
• HTTP Request (API NEWS): calls newsapi.org with a query such as "AI Startup", a 24–48h window, searchIn=title, sorted by publishedAt.
• Code: Select Articles:
  • excludes Biztoc and common aggregators/PR wires (Techmeme, TheFly, PRNewswire, GlobeNewswire, MarketWatch press releases, Medium, Substack, Yahoo consent pages, etc.),
  • requires a valid URL + image,
  • groups by topic (normalized title + domain) and picks the best representative,
  • sorts by recency and returns up to 10 unique articles.
• IF (URL & de-dupe checks): ensures a link is present and not already posted (compares against the Firestore titles).
• IF (Description Checker): branches on the presence of a description.
• LLM Agents (2 prompts): generate a casual, human LinkedIn post in English (no emojis/links/markdown, 2–3 hashtags).
• Post setup: cleans the text and passes the image URL forward.
• HTTP Request (Image Downloader): retrieves the image as a file to confirm the link works.
• LinkedIn Publisher: if the image is OK → posts with media; otherwise → posts text-only.
• Time Checkers + Firestore Upserts: after a successful publish, writes the article's title to Firestore (asma/x20 fields title10, title12, title19, title21) so it won't be posted again at other times.

⸻

**Credentials & prerequisites**

• NewsAPI.org: API key (the free tier works to start; mind the rate limits).
• LinkedIn OAuth2: a connected account with permission to create posts on your profile (uses the "Person" target in the node).
• Google Firebase (Firestore): a Service Account with read/write access to the asma collection. The workflow uses document ID x20.

⸻

**Setup (5 minutes)**

1. Import the workflow and open it in n8n.
2. In API NEWS, set your NewsAPI key in the query param apiKey.
3. In Get Previous News Titles and Firebase Article Saver [1–8], attach your Google Service Account and confirm projectId, collection=asma.
4. In LinkedIn Publisher [1–4], attach your LinkedIn OAuth credential and ensure the Person is your profile URN.
5. (Optional) Adjust the cron in Hourly trigger (server timezone).
6. (Optional) Change the search query (q=AI startup), language, or time window in API NEWS.
7. Enable the workflow.

⸻

**Customization tips**

• Search scope: edit q, language, from/to in API NEWS to cover your niche.
• Aggregator policy: tweak the aggregatorDomains set in the Select Articles code node (a sketch of that node's logic follows below).
• Post voice: modify the LLM prompt (it keeps the "human, slightly messy" tone).
• Hashtags: the prompt ends with 2–3 simple tags (#AI #Startups #Innovation) — change as you like.
• Posting times: change the cron or the downstream time-checker logic to map specific titles → time slots.
• No-image fallback: the text-only path is already handled; replace it with a placeholder image if you prefer always posting with media.
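A minimal sketch of the selection logic in the Select Articles code node (the domain list is abbreviated here; the actual node ships with a fuller set):

```javascript
// Sketch: drop aggregator/PR-wire domains, require URL + image,
// de-dupe by normalized title + domain, keep the 10 most recent.
const aggregatorDomains = new Set([
  'biztoc.com', 'techmeme.com', 'prnewswire.com', 'globenewswire.com',
]);
const norm = (t) => (t || '').toLowerCase().replace(/[^a-z0-9 ]/g, '').trim();

const seen = new Map();
for (const { json: a } of $input.all()) {
  if (!a.url || !a.urlToImage) continue;                 // need a link and an image
  const domain = new URL(a.url).hostname.replace(/^www\./, '');
  if (aggregatorDomains.has(domain)) continue;           // skip aggregators/PR wires
  const key = `${norm(a.title)}|${domain}`;
  if (!seen.has(key)) seen.set(key, a);                  // one representative per topic
}

return [...seen.values()]
  .sort((x, y) => new Date(y.publishedAt) - new Date(x.publishedAt))
  .slice(0, 10)
  .map((a) => ({ json: a }));
```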
⸻

**Notes & constraints**

• Timezone: the Schedule Trigger uses the n8n server timezone; adjust it if your LinkedIn audience is elsewhere.
• De-dupe: this template stores the last posted titles in one Firestore doc (asma/x20) under title10, title12, title19, title21. You can change the schema or keep a rolling history.
• Filtering: items missing a URL or image are skipped by design. Yahoo consent pages are also skipped.
• LLM costs: posts are short, so usage is modest, but keep an eye on your OpenAI billing.
• NewsAPI limits: free plans throttle requests; consider caching or widening the time window if you hit limits.

⸻

**Troubleshooting**

• Nothing posts: check your NewsAPI quota/response, then inspect the URL checker and Description Checker branches.
• Image errors: some sites block hotlinking; the image download step will fall back to text-only posting.
• Duplicates appeared: verify the Firestore upserts executed after posting and that your comparison uses the right fields.
• Wrong hours: confirm your n8n instance timezone and the cron expression.

⸻

**Why this template**

You get a robust "news → LinkedIn" autoposter that feels authentically human (no corporate vibes), avoids low-quality aggregators, prevents duplicates, and gracefully handles media — all with clean, modular nodes that are easy to tweak.
by Yusei Miyakoshi
**Who's it for**

Teams that start their day in Slack and want a concise, automated summary of yesterday's emails — ops leads, PMs, founders, and anyone handling a busy inbox without writing code.

**What it does / How it works**

Runs every morning at 08:00 (cron 0 0 8 * * *), fetches all emails received yesterday, and routes the result: if none were found, it posts a polite "no emails" notice; if emails exist, it aggregates them and asks an AI agent to produce a structured digest, then formats and posts it to your chosen Slack channel. The flow is Gmail → If → Aggregate (Item Lists) → AI Agent (OpenRouter model with structured output) → Code (Slack formatter) → Slack. A set of sticky notes on the canvas explains each step and the required inputs.

**How to set up**

1. Connect Gmail (OAuth2) and keep the default date window (yesterday → today at 00:00).
2. Connect Slack (OAuth2) and select your target channel.
3. Add OpenRouter credentials and pick a compact model (e.g., gpt-4o-mini).
4. Keep the provided structured-output schema and formatter code.
5. Adjust the schedule/timezone if needed (the fallback message includes an Asia/Tokyo example).
6. Paste this description into the yellow sticky note at the top of the canvas.

**Requirements**

- Gmail & Slack accounts with appropriate scopes
- OpenRouter API key stored in Credentials (no hard-coded keys)
- n8n Cloud or self-hosted with LangChain agent nodes enabled

**How to customize the workflow**

- Narrow Gmail results with label/search filters (e.g., from:, subject:).
- Change the digest sections or tone in the AI Agent system prompt.
- Swap the model for cost/quality needs and tweak temperature/max tokens.
- Localize dates/timezones in the formatter code and Slack messages.
- Branch the output to email, Google Docs, or Sheets for archival.

**Security & publishing tips**

Rename all nodes clearly, do not hardcode API keys, remove real channel IDs/emails before sharing, and group end-user variables in a Set (Fields) node. Keep the sticky notes — they're mandatory for reviewers.
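As an illustration of the Code (Slack formatter) step, a minimal sketch assuming the agent's structured output exposes `summary` and `items` fields (adapt it to the schema you keep):

```javascript
// Sketch: turn the agent's structured digest into Slack mrkdwn blocks.
const digest = $json.output; // structured output from the AI Agent (assumed shape)

const lines = digest.items.map(
  (m, i) => `*${i + 1}. ${m.subject}* (${m.from})\n${m.summary}`
);

return [{
  json: {
    blocks: [
      { type: 'header', text: { type: 'plain_text', text: "Yesterday's email digest" } },
      { type: 'section', text: { type: 'mrkdwn', text: digest.summary } },
      { type: 'divider' },
      { type: 'section', text: { type: 'mrkdwn', text: lines.join('\n\n') } },
    ],
  },
}];
```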
by Arunava
This workflow finds fresh Reddit posts that match your keywords, decides whether they're actually relevant to your brand, writes a short human-style reply using AI, posts it, and logs everything to Baserow.

💡 **Perfect for**

- Lead gen without spam: drop helpful replies where your audience hangs out.
- Getting discovered by AI surfaces (AI Overviews / SGE, AISEO/GSEO) via high-quality brand mentions.
- Customer support in the wild: answer troubleshooting threads fast.
- Community presence: steady, non-salesy contributions in niche subreddits.

🧠 **What it does**

- Searches Reddit for your keyword query on a schedule (e.g., every 30 min)
- Checks Baserow first so you don't reply twice to the same post
- Uses an AI prompt tuned for short, no-fluff, subreddit-friendly comments
- Softly mentions your brand only when it's clearly relevant
- Posts the comment via Reddit's API
- Saves post_id, comment_id, reply, permalink, and status to Baserow
- Processes posts one by one, with an optional short wait to be nice to Reddit

⚡ **Requirements**

- Reddit developer API credentials
- Baserow account, table, and API token
- AI provider API key (OpenAI / Anthropic / Gemini)

⚙️ **Setup Instructions**

1. Create the Baserow table. Fields (user-field names exactly): post_id (unique), permalink, subreddit, title, created_utc, reply (long text), replied (boolean), created_on (datetime).
2. Add credentials in n8n: Reddit OAuth2 (scopes: read, submit, identity) with a proper User-Agent string (Reddit requires it); an LLM credential — Google Gemini and/or Anthropic (both can be added; one can be the fallback in the AI Agent); and your Baserow API token.
3. Set the Schedule Trigger (cron). Start hourly (or every 2–3h); pacing is mainly enforced by the Wait nodes.
4. Update "Check duplicate row" (HTTP Request). URL: https://api.baserow.io/api/database/rows/table/{TABLE_ID}/?user_field_names=true&filter__post_id__equal={{$json.post_id}} with header Authorization: Token YOUR_BASEROW_TOKEN. (Use your own Baserow domain if self-hosted.)
5. Configure "Filter Replied Posts" so it skips items where your Baserow record shows replied === true (so you don't comment twice).
6. Configure "Fetch Posts from Reddit": set your keyword/search query (and time/sort), and keep the User-Agent header present.
7. Configure "Write Reddit Comment (AI)": update your brand name (and optional link), and edit the prompt/tone to your voice; ensure it outputs a short reply field (≤80 words, helpful, non-salesy).
8. Configure "Post Reddit Comment" (HTTP Request): endpoint POST https://oauth.reddit.com/api/comment with body thing_id: "t3_{{$json.post_id}}", text: "{{$json.reply}}", using your Reddit OAuth credential and the User-Agent header. Update the user_agent header value with your username: n8n:reddit-autoreply:1.0 (by /u/{reddit-username}). A sketch of this call appears at the end of this guide.
9. Store comment data in Baserow (HTTP Request): POST https://api.baserow.io/api/database/rows/table/{TABLE_ID}/?user_field_names=true with header Authorization: Token YOUR_BASEROW_TOKEN. Map: post_id, permalink, subreddit, title, created_utc, reply, replied, created_on={{$now}}.
10. Keep the default pacing: leave Wait 5m (cool-off) and Wait 6h (global pace) → ~4 comments/day. Reduce the waits gradually as account health allows.
11. Test & enable: run once manually, verify a Baserow row and one test comment, then enable the schedule.

🤝 **Need a hand?**

I'm happy to help you get this running smoothly, or tailor it to your brand. Reach out to me via email: imarunavadas@gmail.com
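For reference, the comment call in step 8 boils down to the following (token acquisition is omitted, and the thing_id value is illustrative):

```javascript
// Sketch: post a comment on a Reddit submission via the OAuth API.
const res = await fetch('https://oauth.reddit.com/api/comment', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <REDDIT_ACCESS_TOKEN>',
    'User-Agent': 'n8n:reddit-autoreply:1.0 (by /u/your-username)',
    'Content-Type': 'application/x-www-form-urlencoded',
  },
  body: new URLSearchParams({
    thing_id: 't3_abc123', // the "t3_" prefix marks a post (link) fullname
    text: 'Helpful, non-salesy reply generated upstream.',
  }),
});
if (!res.ok) throw new Error(`Reddit comment failed: ${res.status}`);
```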
by Muhammad Farooq Iqbal
Google NanoBanana Model Image Editor for Consistent AI Influencer Creation with Kie AI

**Image Generation & Enhancement Workflow**

This n8n template demonstrates how to use Kie.ai's powerful image generation models to create and enhance images using AI, with automated story creation, image upscaling, and organized file management through Google Drive and Sheets.

Use cases include: AI-powered content creation for social media, automated story visualization with consistent characters, marketing material generation, and high-quality image enhancement workflows.

**Good to know**

- The workflow uses Kie.ai's google/nano-banana-edit model for image generation and nano-banana-upscale for 4x image enhancement.
- Images are automatically organized in Google Drive with timestamped folders.
- Progress is tracked in Google Sheets with status updates throughout the process.
- The workflow includes face enhancement during upscaling for better portrait results.
- All generated content is automatically saved and organized for easy access.

**How it works**

1. Project Setup: Creates a timestamped folder structure in Google Drive and initializes a Google Sheet for tracking.
2. Story Generation: Uses OpenAI GPT-4 to create detailed prompts for image generation based on predefined templates.
3. Image Creation: Sends the AI-generated prompt along with 5 reference images to Kie.ai's nano-banana-edit model.
4. Status Monitoring: Polls the Kie.ai API to monitor task completion, with automatic retry logic (see the sketch after this section).
5. Image Enhancement: Upscales the generated image 4x using nano-banana-upscale with face enhancement.
6. File Management: Downloads, uploads, and organizes all generated content in the appropriate Google Drive folders.
7. Progress Tracking: Updates Google Sheets with status information and image URLs throughout the entire process.

**Key Features**

- **Automated Story Creation**: AI-powered prompt generation for consistent, cinematic image creation
- **Multi-Stage Processing**: Image generation followed by intelligent upscaling
- **Smart Organization**: Automatic folder creation with timestamps and file management
- **Progress Tracking**: Real-time status updates in Google Sheets
- **Error Handling**: Built-in retry logic and failure-state management
- **Face Enhancement**: Specialized enhancement for portrait images during upscaling

**How to use**

1. Manual Trigger: The workflow starts with a manual trigger (easily replaceable with webhooks, forms, or scheduled triggers).
2. Automatic Processing: Once triggered, the entire pipeline runs automatically.
3. Monitor Progress: Check the Google Sheet for real-time status updates.
4. Access Results: Find your generated and enhanced images in the organized Google Drive folders.

**Requirements**

- **Kie.ai account**: For AI image generation and upscaling services
- **OpenAI API**: For intelligent prompt generation (GPT-4 mini)
- **Google Drive**: For file storage and organization
- **Google Sheets**: For progress tracking and status monitoring

**Customizing this workflow**

This workflow is highly adaptable for various use cases:

- Content creation: Modify prompts for different styles (fashion, product photography, architectural visualization).
- Batch processing: Add loops to process multiple prompts or reference images.
- Social media: Integrate with social platforms for automatic posting.
- E-commerce: Adapt for product visualization and marketing materials.
- Storytelling: Create sequential images for visual narratives or storyboards.

The modular design makes it easy to add additional processing steps, change AI models, or integrate with other services as needed.
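The status-monitoring step follows a standard poll-until-done pattern. Here is a sketch of the idea; the endpoint path and response field names are placeholders rather than Kie.ai's documented API, so check their docs for the real contract:

```javascript
// Sketch: poll a task endpoint until it succeeds or fails, with a retry limit.
async function waitForTask(taskId, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`https://api.kie.ai/tasks/${taskId}`, { // placeholder path
      headers: { Authorization: 'Bearer <KIE_API_KEY>' },
    });
    const task = await res.json();
    if (task.status === 'completed') return task.resultUrl; // assumed field names
    if (task.status === 'failed') throw new Error(task.error ?? 'Task failed');
    await new Promise((r) => setTimeout(r, intervalMs));    // wait before the next poll
  }
  throw new Error('Timed out waiting for task');
}
```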
Workflow Components Folder Management**: Dynamic folder creation with timestamp naming AI Integration**: OpenAI for prompts, Kie.ai for image processing File Processing**: Binary handling, URL management, and format conversion Status Tracking**: Multi-stage progress monitoring with Google Sheets Error Handling**: Comprehensive retry and failure management systems