by Davi Saranszky Mesquita
## Log errors and avoid sending too many emails

### Use case
Most of the time, it's necessary to log all errors that occur. However, in some cases, a scheduled task or service consuming excessive resources might trigger a surge of errors. To address this, we can log all errors but limit alerts to a maximum of one notification every 5 minutes.

### What this workflow does
This workflow can be configured to receive error events, or you can integrate it before your own error-handling logic. If used as the primary error handler, note that this flow will only add a database log entry and take no further action. You'll need to add your own alerts (e.g., email or push notifications). Below is an example of a notification setup I prefer to use.

At the end, there's an error cleanup option. This feature is particularly useful in development environments.

If you already have an error-handling workflow, you can call this one as a sub-workflow. Its final steps include cleanup logic to reset the execution state and terminate the workflow.

### Setup
Verify all Postgres nodes and credentials when using the 'Error Handling Sample'.

### How to adjust it to your needs
1) You can set this workflow as a sub-workflow within your existing error-handling setup.
2) Alternatively, you can add the "Error Handling Sample" at the end of this workflow, which sends email and push notifications.

### Configuration Requirements
⚠️ You must create a database table for this to work! DDL of this sample:

```sql
create table p1gq6ljdsam3x1m."N8Err"
(
    id         serial primary key,
    created_at timestamp,
    updated_at timestamp,
    created_by varchar,
    updated_by varchar,
    nc_order   numeric,
    title      text,
    "URL"      text,
    "Stack"    text,
    json       json,
    "Message"  text,
    "LastNode" text
);

alter table p1gq6ljdsam3x1m."N8Err" owner to postgres;

create index "N8Err_order_idx" on p1gq6ljdsam3x1m."N8Err" (nc_order);
```

by Davi Saranszky Mesquita
https://www.linkedin.com/in/mesquitadavi/
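If you wire your own alert step onto this workflow, the "one notification every 5 minutes" limit can be implemented by checking the log table before sending. Below is a minimal sketch of that gate as an n8n Code node; the `last_alert_at` field name and the surrounding node layout are illustrative assumptions, not part of the template.

```typescript
// Runs in an n8n Code node placed between a Postgres node (which returns the timestamp
// of the most recent notification, however you choose to record it) and the email/push node.
// $input is a global provided by the n8n Code node environment.
const FIVE_MINUTES_MS = 5 * 60 * 1000;

const lastAlertAt = $input.first().json.last_alert_at; // illustrative field name
const lastAlertMs = lastAlertAt ? new Date(String(lastAlertAt)).getTime() : 0;

// Only notify if the previous alert is older than the 5-minute window
const shouldNotify = Date.now() - lastAlertMs > FIVE_MINUTES_MS;

return [{ json: { shouldNotify } }];
```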
by Krishna Kumar Eswaran
## 🧠 Problem This Solves
For developers and creators, consistently posting quality content on LinkedIn can be time-consuming. This workflow automates the process by:
- Fetching the latest Dev.to articles
- Posting them to LinkedIn twice daily
- Preventing duplicates using Airtable
- Sending success alerts to Telegram

This ensures you're always active on LinkedIn, with zero manual effort.

## 👥 Who This Template Is For
- Developers who want to build their presence on LinkedIn
- Tech creators or solo founders looking to grow an audience
- Community/page managers who want regular, curated content
- Busy professionals aiming for consistent LinkedIn engagement without doing it manually

## ⚙️ Workflow Breakdown
This automation runs twice a day (9:00 AM and 7:00 PM) and performs the following steps:
1. Fetches Dev.to articles based on a tag
2. Checks Airtable to avoid reposting the same article
3. Posts to LinkedIn if it's new
4. Sends a Telegram message after posting successfully

## 🧩 Step-by-Step Setup Instructions

### ✅ 1. Airtable Configuration
Create a new base in Airtable with just one table and one column:
- Table Name: PostedArticles
- Column: ArticleID (Single line text – stores the unique ID of each Dev.to article posted)

This column is used to track posted articles and prevent duplicates.

### 🔗 2. Dev.to API Setup
Use the following endpoint in the HTTP Request node:

```
https://dev.to/api/articles?tag=YOUR_TAG_HERE&per_page=10
```

Replace YOUR_TAG_HERE with a tag like android, webdev, ai, etc.

### 💬 3. Telegram Bot Setup
- Open @BotFather in Telegram and create a new bot
- Save the bot token
- Get your chat ID using @userinfobot or via the Telegram API
- Add a Telegram node in n8n using this token and chat ID

This will notify you when a post is successfully published.

### 🧾 4. LinkedIn Setup
- Create a LinkedIn Developer App
- Use OAuth2 to connect it in n8n
- Choose to post on either a user profile or a company page

### 🧱 5. n8n Workflow Structure
Here's the basic structure of the workflow:
1. Cron Node – Triggers at 9:00 AM and 7:00 PM daily
2. HTTP Request – Fetches latest articles from Dev.to
3. Airtable Search – Checks if ArticleID already exists
4. IF Node – Filters new vs. already-posted articles
5. LinkedIn Post – Publishes new article
6. Airtable Create – Saves the new ArticleID
7. Telegram Message – Sends success confirmation

## 🛠️ Customization Tips
- Change the Dev.to tag in the API URL
- Modify the LinkedIn post format (add hashtags, emojis, personal notes)
- Adjust posting times in the Cron node
- Use additional filters (e.g., only post articles with a cover image or a certain word count)
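As a rough illustration of what steps 2–4 of the structure above do (fetch, then drop anything already posted), here is a standalone TypeScript sketch. The Dev.to endpoint is the one from the setup section; the set of already-posted IDs stands in for the Airtable search, which is a separate node in the actual workflow.

```typescript
// Fetch the latest Dev.to articles for a tag and keep only the ones not yet posted.
interface DevToArticle {
  id: number;
  title: string;
  url: string;
}

async function findNewArticles(tag: string, alreadyPosted: Set<number>): Promise<DevToArticle[]> {
  const res = await fetch(`https://dev.to/api/articles?tag=${tag}&per_page=10`);
  if (!res.ok) throw new Error(`Dev.to API error: ${res.status}`);
  const articles = (await res.json()) as DevToArticle[];
  // Keep only articles whose ID is not yet stored in Airtable
  return articles.filter((a) => !alreadyPosted.has(a.id));
}

// Example usage: findNewArticles("android", new Set([123, 456])).then(console.log);
```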
by JPres
## 👥 Who Is This For?
Content creators, marketing teams, and channel managers who want a simple, hands‑off solution to upload videos and automatically generate optimized metadata from video transcripts.

## 🛠 What Problem Does This Solve?
Manual video uploads with proper metadata creation are time‑consuming and repetitive. This workflow fully automates:
- Monitoring a specific Google Drive folder for new video uploads
- Seamless YouTube upload processing
- Transcript extraction for context understanding
- AI‑powered generation of titles, descriptions, and tags
- Metadata application to uploaded videos without manual intervention

## 🔄 Node‑by‑Node Breakdown

| Step | Node Purpose |
|------|---------------------------------------------------------------------|
| 1 | New Video? (Trigger) – Monitors specified Google Drive folder |
| 2 | Download New Video – Retrieves the video file from Google Drive |
| 3 | Upload to YouTube – Uploads the video to YouTube with initial settings |
| 4 | Get Transcript – Extracts transcript from the uploaded video |
| 5 | Adjust Transcript Format – Formats raw transcript for processing |
| 6 | Create Description – Generates SEO‑optimized description |
| 7 | YT Tags (Message Model) – Creates relevant tags based on content |
| 8 | YT Title (Message Model) – Generates compelling title |
| 9 | Define File Path Upload Format (Optional) – Structures data paths |
| 10 | Update Video's Metadata – Applies generated title, description, tags |

## ⚙️ Pre‑conditions / Requirements
- n8n with Google Drive and YouTube API credentials configured (stored as n8n credentials/variables; no hard‑coded IDs)
- Dedicated Google Drive folder for video uploads
- YouTube channel with proper upload permissions
- AI service access for transcript processing and metadata generation
- Sufficient storage for temporary video handling

## ⚙️ Setup Instructions
1. Import this workflow into your n8n instance.
2. Configure Google Drive credentials; reference the folder ID via an n8n variable (do not hard‑code it).
3. Set up YouTube API credentials with upload and edit permissions.
4. Specify the target Google Drive folder ID in the New Video? trigger node (via variable).
5. Configure AI service credentials for transcript and metadata generation.
6. Adjust message templates for title, description, and tag creation.
7. Test with a small video file before production use.

## 🎨 How to Customize
- Modify AI prompts to match your channel's tone and style.
- Add conditional logic based on video categories or naming conventions.
- Implement notification systems to alert when uploads complete.
- Create custom metadata templates for different content types.
- Include timestamps or chapter markers based on transcript analysis.
- Add social media sharing nodes to announce new uploads.

## ⚠️ Important Notes
- Video quality is preserved through the upload process.
- Consider YouTube API quotas when handling multiple uploads.
- Transcript quality affects metadata generation results.
- Videos are initially uploaded without visibility adjustments.
- Processing time depends on video length and transcript complexity.

## 🔐 Security and Privacy
- Store API credentials and folder IDs as n8n Credentials/Variables—remove any hard‑coded tokens or IDs.
- Video files are processed temporarily and not stored permanently.
- Limit Google Drive folder access to authorized users only.
- Manage YouTube upload permissions carefully (use OAuth/service accounts).
- Ensure compliance with organizational data‑handling policies.
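For context on what the final "Update Video's Metadata" step amounts to, here is a hedged sketch of the equivalent direct call to the YouTube Data API v3. In the workflow this is handled by the YouTube node and its OAuth credential; the categoryId value below is only an example.

```typescript
// Apply AI-generated title, description and tags to an already-uploaded video
// via the YouTube Data API v3 videos.update endpoint (part=snippet).
async function updateVideoMetadata(
  accessToken: string,
  videoId: string,
  title: string,
  description: string,
  tags: string[],
) {
  const res = await fetch("https://www.googleapis.com/youtube/v3/videos?part=snippet", {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      id: videoId,
      // categoryId is required when updating the snippet; "22" (People & Blogs) is just an example
      snippet: { title, description, tags, categoryId: "22" },
    }),
  });
  if (!res.ok) throw new Error(`YouTube API error: ${res.status}`);
  return res.json();
}
```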
by Krishna Kumar Eswaran
## 🧠 Problem This Solves
Manually sharing Medium articles to LinkedIn daily can be repetitive and time-consuming. This automation:
- Fetches the latest Medium articles based on a tag (e.g., android)
- Posts them on LinkedIn twice daily
- Uses Airtable to prevent duplicates
- Sends a confirmation to Telegram once posted

Stay consistently active on LinkedIn without lifting a finger.

## 👥 Who This Template Is For
- Developers who write or follow Medium content
- Tech creators or founders looking to grow an audience
- Community or page managers needing regular curated posts
- Busy professionals who want hands-free LinkedIn engagement

## ⚙️ Workflow Breakdown
This automation runs at 9:00 AM and 7:00 PM daily and performs these steps:
1. Fetch articles from MediumAPI.com by tag
2. Check Airtable to prevent reposting the same article
3. Post on LinkedIn if it's new
4. Store the article ID in Airtable
5. Send a Telegram message after successful posting

## 🧾 Step-by-Step Setup Instructions

### ✅ 1. Airtable Configuration
Create a base with:
- Table Name: PostedArticles
- Column: ArticleID (Single line text – to track posted articles)

### 🔗 2. MediumAPI Setup
- Go to https://mediumapi.com
- Sign up and generate your API key from the dashboard
- Use this API endpoint in an HTTP node:

```
GET https://mediumapi.com/api/tag/YOUR_TAG/latest
Headers: Authorization: Bearer YOUR_API_KEY
```

Replace YOUR_TAG with a topic like android, ai, webdev, etc.

### 💬 3. Telegram Bot Setup
- Go to @BotFather and create a new bot
- Save the bot token
- Use @userinfobot to get your Telegram chat ID
- Add a Telegram node in n8n with the token + chat ID

### 🔗 4. LinkedIn Setup
- Create a LinkedIn Developer App
- Connect it via OAuth2 in n8n
- Choose to post on your profile or company page

### 🧱 5. n8n Workflow Structure

| Node Type | Description |
|-----------|-------------|
| Cron | Triggers the flow twice a day |
| HTTP Request | Fetches articles from MediumAPI.com |
| Airtable Search | Checks if article ID already exists |
| IF Node | Skips duplicates |
| LinkedIn Post | Publishes to your LinkedIn profile/page |
| Airtable Create | Stores posted article ID |
| Telegram Node | Sends success notification |

## 🛠️ Customization Tips
- Change the tag in the API URL to match your niche
- Add hashtags or personal comments to the LinkedIn message
- Schedule different posting times in the Cron node
- Filter Medium posts based on length or title keywords (optional)
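A quick sketch of the MediumAPI request from step 2, using the endpoint and Authorization header listed above. The response shape is an assumption, so check the MediumAPI docs for the exact article fields before mapping them to Airtable; the workflow only needs a stable article ID per item.

```typescript
// Fetch the latest Medium articles for a tag via MediumAPI.com with a Bearer key.
async function fetchLatestByTag(tag: string, apiKey: string): Promise<unknown> {
  const res = await fetch(`https://mediumapi.com/api/tag/${tag}/latest`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`MediumAPI error: ${res.status}`);
  // Expected to contain a list of articles; store each article's ID in Airtable after posting
  return res.json();
}
```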
by Milorad Filipović
## How it works
It's very important to come prepared to sales calls. This often means a lot of manual research about the person you're calling with. This workflow delivers a summary of the latest social media activity (LinkedIn + X) for businesses you are about to interact with each day.

- **Scans Your Calendar**: Each morning, it reviews your Google Calendar for any scheduled meetings or calls with companies, based on each attendee's email address.
- **Fetches Latest Posts**: For each identified company, it fetches recent LinkedIn and X posts and summarizes them using AI to deliver a quick overview for a busy sales rep.
- **Delivers Insights**: You receive personalized emails via Gmail, each dedicated to a company you're meeting with that day, containing a reminder of the meeting and a summary of the company's recent social media activity.

## Setup steps
The workflow requires you to have the following accounts set up in their respective nodes:
- Google Calendar
- Gmail
- Clearbit
- OpenAI

Besides those, you will need an account on the RapidAPI platform and a subscription to the following APIs:
- Fresh LinkedIn Profile Data
- Twitter

Email example
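To make the "based on each attendee email address" step concrete, here is a small sketch of how company domains can be derived from meeting attendees before the Clearbit and RapidAPI lookups. The free-provider filter is an assumption for illustration, not necessarily the template's exact rule.

```typescript
// Turn a meeting's attendee emails into a deduplicated list of company domains.
const FREE_PROVIDERS = new Set(["gmail.com", "outlook.com", "yahoo.com", "hotmail.com"]);

function companyDomainsFromAttendees(attendeeEmails: string[]): string[] {
  const domains = attendeeEmails
    .map((email) => email.split("@")[1]?.toLowerCase())
    .filter((d): d is string => Boolean(d) && !FREE_PROVIDERS.has(d));
  return [...new Set(domains)]; // one social-media lookup per company domain
}

// companyDomainsFromAttendees(["jane@acme.com", "bob@gmail.com"]) -> ["acme.com"]
```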
by Don Jayamaha Jr
📉 Detect key candlestick reversal patterns and volume divergence on Tesla (TSLA) using GPT-4.1 and real-time OHLCV data. This AI agent evaluates 1-hour and 1-day candles and is an essential part of the Tesla Financial Market Data Analyst Tool. It identifies signals like Doji, Engulfing, Hammer, and volume anomalies to support trade entry and exit logic.

⚠️ Not a standalone template — must be triggered by the Tesla Financial Market Data Analyst Tool

🔐 Requires:
- Alpha Vantage Premium API Key
- OpenAI GPT-4.1 access

## 🔍 What This Agent Does
Calls Alpha Vantage to fetch:
- 🕐 1-hour OHLCV data
- 📅 1-day OHLCV data

GPT-4.1 evaluates:
- 📊 Candlestick patterns like Doji, Engulfing, Shooting Star
- 🔄 Volume divergence (price/volume inconsistency)

Returns a structured JSON output like:

```json
{
  "summary": "Bearish signs detected on 1-day chart. A shooting star formed on high volume while RSI is elevated. Volume divergence seen on 1h chart as price rises but volume weakens.",
  "candlestickPatterns": { "1h": "None", "1d": "Shooting Star" },
  "volumeDivergence": { "1h": "Bearish", "1d": "None" },
  "ohlcv": {
    "1h": { "close": 174.1, "volume": 1430000, "high": 175.0, "low": 173.8 },
    "1d": { "close": 188.3, "volume": 21234000, "high": 189.9, "low": 183.7 }
  }
}
```

## 🛠️ Setup Instructions
1. Import the Workflow — name it: Tesla_1hour_and_1day_Klines_Tool
2. Install Dependencies — ✅ Tesla Financial Market Data Analyst Tool (this is the trigger parent)
3. Add Required Credentials:
   - Alpha Vantage Premium → via HTTP Query Auth
   - OpenAI GPT-4.1 → via OpenAI credentials
4. Verify Web Access — this tool fetches data live from Alpha Vantage:
   - /query?function=TIME_SERIES_INTRADAY&interval=60min
   - /query?function=TIME_SERIES_DAILY
5. Run via Execute Workflow Trigger — this tool will activate only when called by the Financial Analyst Agent. Inputs: message (optional), sessionId (used for memory continuity)

## 🧠 Agent Architecture

| Component | Description |
| ----------------------- | --------------------------------------------------- |
| Candlestick Data Hour | Fetches 60min TSLA candles via Alpha Vantage |
| Candlestick Data Day | Fetches daily TSLA candles via Alpha Vantage |
| OpenAI Chat Model | GPT-4.1 reasoning engine for pattern detection |
| Simple Memory | Maintains short-term logic context |
| Tesla Klines Agent | LangChain AI agent analyzing both candle and volume |

## 📌 Sticky Notes Overview
- 📘 Workflow Purpose
- 🧠 Short-Term Memory Notes
- 🔍 1h/1d Data Fetch Logic
- 📉 Candlestick Pattern Types Detected
- 📊 Volume Divergence Definitions
- 🤖 GPT-4.1 Prompt Configuration

## 🔐 Licensing & Support
© 2025 Treasurium Capital Limited Company. Logic, pattern reasoning, and prompt structure are proprietary IP.
- 🔗 Don Jayamaha – LinkedIn
- 🔗 n8n Creator Profile

🚀 Automate technical edge: detect TSLA candle reversals and volume anomalies with precision using GPT-4.1 and Alpha Vantage. Required by the Tesla Financial Market Data Analyst Tool.
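For reference, the two Alpha Vantage requests behind the Candlestick Data nodes look roughly like this when written as plain HTTP calls; in the workflow they are HTTP Request nodes using HTTP Query Auth, and the parsing of the returned time series is left to the agent.

```typescript
// Fetch 1-hour and 1-day TSLA OHLCV series from the public Alpha Vantage API.
const BASE = "https://www.alphavantage.co/query";

async function fetchTslaCandles(apiKey: string) {
  const hourlyUrl = `${BASE}?function=TIME_SERIES_INTRADAY&symbol=TSLA&interval=60min&apikey=${apiKey}`;
  const dailyUrl = `${BASE}?function=TIME_SERIES_DAILY&symbol=TSLA&apikey=${apiKey}`;
  const [hourly, daily] = await Promise.all([fetch(hourlyUrl), fetch(dailyUrl)]);
  return { hourly: await hourly.json(), daily: await daily.json() };
}
```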
by Jimleuk
This n8n template demonstrates one approach to customer authentication via chat agents. Unlike approaches where you have to authenticate users prior to interacting with the agent, this approach allows guest users to authenticate at any time during the session, or not at all.

Note about security: this template is for illustration purposes only and requires much more work to be ready for production!

## How it works
- A conversational agent is used for this demonstration. The key component is the Redis node just after the chat trigger, which acts as the session context. For guests, the session item is blank; for customers, the session item is populated with their customer profile.
- The agent is instructed to generate a unique login URL only for guests, when appropriate or upon request.
- This login URL redirects the guest user to a simple n8n form also hosted in this template. The login URL has the current sessionId as a query parameter as the way to pass this data to the form.
- Once login is successful, the matching session item (by sessionId) is populated with the customer profile. The user can now return to the chat window.
- Back at the agent, when the user sends their next message, the Redis node picks up the session item and the customer profile associated with it. The system prompt is updated with this data, which lets the agent know the user is now a customer.

## How to use
- You'll need to update the "auth URL" tool to match the URL of your n8n instance. Better yet, copy the production URL of your form from the trigger.
- Activate the workflow to turn on production mode, which is required for this workflow.
- Implement the authentication logic in step 3. This could be sending the username and password to a PostgreSQL database for validation.

## Requirements
- OpenAI for LLM (feel free to swap to any provider)
- Redis for cache/sessions (again, feel free to swap this out for PostgreSQL or another database)

## Customising this workflow
- Consider not populating the session item with the user data, as it can become stale. Instead, just add the userId and instruct the agent to query using tools.
- Extend the login URL idea by experimenting with signup URLs or single-use URLs.
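A minimal sketch of the two session touchpoints described above, assuming Redis as the session store; the key prefix and form path are placeholders, not the template's exact values.

```typescript
import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });

// 1) Login URL the agent hands to a guest - the sessionId travels as a query parameter.
function buildLoginUrl(n8nBaseUrl: string, sessionId: string): string {
  return `${n8nBaseUrl}/form/customer-login?sessionId=${encodeURIComponent(sessionId)}`;
}

// 2) After the form validates the credentials, attach the customer profile to the session.
// Once this key holds a profile, the agent's system prompt treats the user as a customer.
async function attachProfileToSession(sessionId: string, profile: object): Promise<void> {
  if (!redis.isOpen) await redis.connect();
  await redis.set(`chat-session:${sessionId}`, JSON.stringify(profile));
}
```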
by Oliver Bardenheier
## 🛠️ Setup Guide: 'Get OVH Invoices to Google Sheets'
Author: Oliver Bardenheier

### Who is this for?
This workflow is for all users who have services (domains, bare metal, VPS, cloud, etc.) with the provider OVH.com (European API). It automatically retrieves invoice data and files and puts the data in a Google Spreadsheet for further processing.

### What problem is this workflow solving? / Use case
Currently the invoices from OVH do not come as an attachment via mail; they arrive as a link only. So the receiver has to be logged in to the OVH account to download the file, which is even more effort if one is using 2FA. This workflow retrieves all information through the OAuth2 token.

### What this workflow does
This workflow automatically retrieves invoice data and files from your OVH.com account and puts the data in a Google Spreadsheet for further processing. It also saves the invoice PDF to a certain (yearly) folder in your Google Drive.

### Setup
1. Make a copy of this Google Sheet Template.
2. Set the timeframe for the query to your liking in "Query Latest OVH Invoices". You could set an email trigger before and make the frame only one day.
3. Log into your OVH account and get your credentials here:
   - Authentication using OAuth2 Authorization Code: "Login with OVHcloud SSO"
   - You need to authorize the OVHcloud API console. If this worked fine you'll see a green text: "Access Token Received"
4. Head over to the OVH API Console to get your token.
5. Set up Header Auth in the HTTP nodes:
   - Authentication = Generic Credential Type
   - Generic Auth Type = Header Auth
   - Header Auth = your OVH header credentials:
     - a) In every API call in the console you'll find a curl example; just take the data from the line including: `-H "authorization: Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......"`
     - b) Create a new credential in n8n for the header auth. Put `authorization` in the 'name' field and copy your token, including `Bearer`, into the value field: `Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......`

### How to customize this workflow to your needs
- You can put in a mail trigger that activates on every incoming invoice mail from OVH.
- Adjust the timeframe to get invoices from a certain time period, or remove the time variables completely to get ALL invoices.
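For orientation, the invoice queries behind "Query Latest OVH Invoices" map onto the OVH /me/bill endpoints roughly as below, using the header auth described above. Treat the field and query-parameter names as assumptions and confirm them in the OVH API console before relying on them.

```typescript
// List invoice IDs in a date range, then fetch each invoice's details (incl. the PDF link).
const OVH_BASE = "https://eu.api.ovh.com/1.0";

async function listInvoices(bearerToken: string, from: string, to: string) {
  const headers = { authorization: bearerToken }; // e.g. "Bearer eyJh..."

  const ids: string[] = await fetch(
    `${OVH_BASE}/me/bill?date.from=${from}&date.to=${to}`,
    { headers },
  ).then((r) => r.json());

  // One request per invoice ID to retrieve its metadata and PDF URL
  return Promise.all(
    ids.map((id) => fetch(`${OVH_BASE}/me/bill/${id}`, { headers }).then((r) => r.json())),
  );
}
```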
by David Roberts
## AI evaluation in n8n
This is a template for n8n's evaluation feature. Evaluation is a technique for getting confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

## How it works
This template shows how to calculate a workflow evaluation metric: whether a specific tool was called by an agent.
- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one. More info
- We make sure that the agent outputs the list of tools that it used
- We then check whether the expected tool (from the dataset) is in that list
- Finally we pass this information back to n8n as a metric
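The tool check itself boils down to a membership test. Here is a sketch of how it might look in an n8n Code node; the field and node names are illustrative, not necessarily the template's exact ones.

```typescript
// Compare the tool the dataset expected with the list of tools the agent reported using.
// $ and $json are globals provided by the n8n Code node environment.
const expectedTool = $('Evaluation Trigger').first().json.expected_tool; // illustrative field name
const usedTools: string[] = ($json.toolsUsed as string[]) ?? []; // list output by the agent

const toolWasCalled = usedTools.includes(String(expectedTool)) ? 1 : 0;

// This value is then passed back to n8n as the evaluation metric
return [{ json: { toolWasCalled } }];
```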
by Daniel Shashko
## How it Works
Disclaimer: This template is for self-hosted n8n instances only.

This workflow is designed for developers, data analysts, and automation enthusiasts seeking to automate personalized news collection and delivery. It seamlessly combines n8n, OpenAI (e.g., GPT-4.1), and Bright Data's Model Context Protocol (MCP) to collect, extract, and email the latest global news headlines.

On a schedule or via a manual trigger, the workflow prompts an AI agent to gather fresh news. The agent leverages context-aware memory and integrated MCP tools to conduct both search engine queries and direct web page scraping in real time, delivering more than just meta search results—it extracts actual on-page headlines and trusted links. Results are formatted and delivered automatically by email via your SMTP provider, requiring zero manual effort once configured.

## Who is this for?
- Developers, data engineers, or automation pros wanting an AI-powered, fully automated newsfeed
- Teams needing up-to-date news digests from trusted global sources
- Anyone self-hosting n8n who wishes to combine advanced LLMs with real-time web data

## Setup Steps
Setup time: approx. 15–30 minutes (n8n install, API configuration, node setup)

Requirements:
- Self-hosted n8n instance
- OpenAI API key
- Bright Data MCP account credentials
- SMTP/email provider details

Install the community MCP node (n8n-nodes-mcp) for n8n and set up Bright Data MCP access.

Configure these nodes:
- Schedule Trigger: for automated delivery at your chosen interval.
- Edit Fields: to inject your AI news collection prompt.
- AI Agent: connects to OpenAI and MCP, enabled with memory for context.
- OpenAI Chat Model: connects via your OpenAI credentials.
- MCP Clients: configure at least two—one for search (e.g. search_engine) and one for scraping (e.g. scrape_as_markdown).
- Send Email: set up with recipient and SMTP information.

Credentials must be entered into their respective nodes for successful execution.

## Customization Guidance
- **Prompt Tweaks:** Refine your AI news prompt to target specific genres, regions, or sources, or broaden/narrow the coverage as needed.
- **Tool Configuration:** Carefully define tool descriptions and parameters in MCP client nodes so the agent can pick the best tool for each step (e.g., only scrape real news sites).
- **Delivery Settings:** Adjust email recipient(s) and SMTP details as needed.
- **Workflow Enhancements:** Use sticky notes in n8n for extended documentation, alternate prompts, or troubleshooting tips.
- **Run Frequency:** Set the schedule as needed—from hourly to daily updates.

Once configured, this workflow will automatically gather, extract, and email curated news headlines and links—no manual curation required!
by David Roberts
## AI evaluation in n8n
This is a template for n8n's evaluation feature. Evaluation is a technique for getting confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

## How it works
This template shows how to calculate a workflow evaluation metric: whether an output matches an expected output (i.e. has the same meaning). The workflow takes questions about the causes of historical events and compares the generated answers with the reference answers in the dataset.
- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular chat trigger so that the workflow can be started from either one. More info
- If we're evaluating (i.e. the execution started from the evaluation trigger), we calculate the correctness metric using AI
- We pass this information back to n8n as a metric
- If we're not evaluating, we skip calculating the metric to reduce cost
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Summarization", which in this scenario measures the LLM's accuracy and faithfulness in producing summaries based on an incoming YouTube transcript.

The scoring approach is adapted from https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_summarization_quality

## How it works
- This evaluation works best for AI summarization workflows.
- For our scoring, we simply compare the generated response to the original transcript. A key factor is to look out for information in the response which is not mentioned in the documents.
- A high score indicates LLM adherence and alignment, whereas a low score could signal an inadequate prompt or model hallucination.

## Requirements
- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing