by Harshal Patil
Automated error monitoring and reporting system using data tables

This template helps you monitor workflow failures by automatically logging every error to a data table, then sending periodic summaries via email, Slack, Microsoft Teams, or Discord, so you catch issues before they impact your operations.

What This Workflow Does

The template uses two synchronized workflows to create a complete error monitoring system:

- **Error Capture Workflow** - Uses n8n's native error handling to intercept every workflow failure, extract key details (workflow name, error message, timestamp, node information, execution ID), and store them in your data table or database
- **Report Scheduler Workflow** - Runs on your configured schedule (daily, weekly, or custom) to query stored errors, aggregate insights, and send formatted summaries through your notification channel

How to Use It

Capture errors from all workflows → store them in one centralized table → get daily/weekly summaries in Slack, email, or Teams.

Key Features

- **Zero-touch error logging** - No modifications needed to existing workflows; errors are captured automatically
- **Flexible storage** - Configure any data table, PostgreSQL, MySQL, MongoDB, or cloud database as your error repository
- **Multiple notification channels** - Send reports via email, Slack, Microsoft Teams, Discord, or custom HTTP endpoints
- **Customizable schedules** - Daily, weekly, or custom-interval reporting to match your team's needs
- **Rich error context** - Every logged error includes workflow name, error message, affected node, timestamp, and execution ID for quick troubleshooting
- **Historical database** - Build a searchable error archive for pattern analysis and long-term debugging

Use Cases

- **Monitor production workflows** - DevOps and platform teams tracking system health across multiple automated processes
- **Debug ETL failures** - Data engineers identifying where pipelines break and why
- **Oversee complex automation** - Teams managing dozens of interconnected workflows without manual checks
- **Stay informed as a solo developer** - Get notified of issues without constantly logging into n8n

Prerequisites

- n8n instance (self-hosted or n8n Cloud)
- Data storage (PostgreSQL, MySQL, MongoDB, n8n's built-in tables, or similar)
- Notification service configured (Gmail, Slack, Teams, Discord, or custom webhook)

Configuration Steps

1. Connect your data storage - Point the error capture workflow to your chosen database or data table
2. Enable error monitoring - Activate the error handling trigger for workflows you want to monitor
3. Set reporting schedule - Choose daily, weekly, or custom intervals for your summary reports
4. Configure notifications - Add your Slack webhook, email address, Teams channel, or Discord endpoint
5. Customize report format - Optionally adjust which error metrics and insights appear in summaries

Customization Ideas

- Add error severity levels (critical, warning, info) to prioritize failures
- Set up real-time critical error alerts in addition to scheduled reports
- Create workflow-specific error thresholds and escalation rules
- Integrate with PagerDuty or Opsgenie for incident management
- Add visualizations or charts to your error summaries
- Implement automatic retry logic for specific error types

Sample Error Summary Output

Your reports will include:

- Total errors in the reporting period
- Error count breakdown by workflow
- Most frequently occurring error types
- Error timeline and trends
- Direct links to failed executions for quick debugging

Maintenance Tips

- Review error patterns monthly to identify workflows that need optimization
- Archive or delete old error logs periodically to keep your database performant
- Adjust reporting frequency as your workflow volume grows
- Update notification recipients when team members join or leave
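The error-capture step described above boils down to flattening the error payload into one row per failure. As a rough sketch, a Code node might do something like the following; the exact payload field names depend on your n8n version's Error Trigger output, so treat this shape as an assumption to verify against a real failed execution:

```javascript
// Flatten an Error Trigger payload into a row for a data table.
// The payload shape below is an assumption for illustration —
// inspect a real Error Trigger output before relying on it.
function toErrorRow(trigger) {
  const exec = trigger.execution || {};
  const err = exec.error || {};
  return {
    workflow_name: (trigger.workflow || {}).name || 'unknown',
    execution_id: exec.id || '',
    error_message: err.message || 'no message',
    failed_node: (err.node && err.node.name) || '',
    logged_at: new Date().toISOString(), // timestamp for the report scheduler
  };
}

// Hypothetical example payload:
const row = toErrorRow({
  workflow: { id: '12', name: 'Daily ETL' },
  execution: { id: '991', error: { message: 'timeout', node: { name: 'HTTP Request' } } },
});
```

The report scheduler can then aggregate these rows by `workflow_name` and `error_message` when building its summaries.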
by Teng Wei Herr
How it works

You provide a list of prompts and a system instruction, and the workflow batches them into a single OpenAI Batch API request. The batch job is tracked in a Supabase openai_batches table. A cron job polls OpenAI every 5 minutes, and once the batch completes, the results are decoded and stored back in Supabase.

Set up steps

1. Create the openai_batches table in Supabase. The schema is in the yellow sticky note.
2. Add your OpenAI and Supabase/Postgres credentials to the workflow.
3. Replace the mock data with your actual prompts and you're ready to go!
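The batching step works because the OpenAI Batch API accepts a JSONL file where each line is one self-contained request. A minimal sketch of how a Code node could assemble that body from your prompt list (model name is an illustrative choice, not the template's setting):

```javascript
// Build the JSONL body for an OpenAI Batch API upload: one JSON object
// per line, each carrying a custom_id so results can be matched back.
function buildBatchJsonl(systemInstruction, prompts, model = 'gpt-4o-mini') {
  return prompts
    .map((prompt, i) =>
      JSON.stringify({
        custom_id: `req-${i}`,           // used to join results to inputs later
        method: 'POST',
        url: '/v1/chat/completions',
        body: {
          model,
          messages: [
            { role: 'system', content: systemInstruction },
            { role: 'user', content: prompt },
          ],
        },
      })
    )
    .join('\n');
}

const jsonl = buildBatchJsonl('Answer briefly.', ['What is n8n?', 'What is Supabase?']);
```

The resulting string is uploaded as a file, and the returned batch ID is what gets stored in the openai_batches table for the polling cron to check.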
by Hirokazu Kawamoto
How it works

This workflow fetches RSS feeds daily and sends a notification to Slack if new topics are found. Since standard RSS snippets are often insufficient, the AI visits the source links to summarize the full articles and sends the summaries to Slack. You can then share interesting topics directly to X from Slack using the button.

How to use

1. Open the Gemini Chat Model node (attached to the AI Agent) and set up the Credential. You can obtain an API key from Google AI Studio.
2. Open the Slack node and set up the Credential to allow sending messages. You can create a new Slack App here.
3. Finally, open the Config node and update the rssUrls parameter with the RSS feed URLs you want to follow.

Customizing this workflow

You can adjust the number of topics fetched per RSS feed by modifying the takeCount parameter in the Config node.
by Supira Inc.
How it works

This workflow automatically collects the latest news articles from both English and Japanese sources using NewsAPI, summarizes them with OpenAI, and appends the results to a Google Sheet. The summaries are concise (about 50 characters) in Japanese, making it easy to review news highlights at a glance.

Set up steps

1. Create a Google Sheet with two tabs:
   - 01_Input (columns: Keyword, SearchRequired)
   - 02_Output (columns: Date, Keyword, Summary, URL)
2. Enter your own Google Sheet ID and tab names in the workflow.
3. Add your NewsAPI key in the HTTP Request nodes.
4. Connect your OpenAI account (or deactivate the summarization node if not needed).
5. Run the workflow manually or use the daily schedule trigger at 13:00.

This template is ready to use with minimal changes. Sticky notes inside the workflow provide extra guidance.
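For reference, the HTTP Request nodes hit NewsAPI's everything endpoint with the keyword from the 01_Input tab. A sketch of the URL being built (the language, sort, and page-size values here are illustrative defaults, not the template's exact settings):

```javascript
// Build a NewsAPI "everything" request URL for one keyword row.
// Parameter choices other than q and apiKey are illustrative.
function newsApiUrl(keyword, apiKey, language = 'en') {
  const params = new URLSearchParams({
    q: keyword,
    language,                 // 'en' or 'ja' depending on the source branch
    sortBy: 'publishedAt',
    pageSize: '10',
    apiKey,
  });
  return `https://newsapi.org/v2/everything?${params.toString()}`;
}

const url = newsApiUrl('automation', 'YOUR_API_KEY', 'ja');
```

Each article in the response is then passed to the OpenAI summarization node before being appended to the 02_Output tab.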
by RealSimple Solutions
POML → Prompt/Messages (No-Deps)

What this does

Turns POML markup into either a single Markdown prompt or chat-style messages[] using a zero-dependency n8n Code node. It supports variable substitution (via context), basic components (headings, lists, code, images, tables, line breaks), and optional schema-driven validation using componentSpec + attributeSpec.

Credits

Created by Real Simple Solutions as an n8n template-friendly POML compiler (no dependencies) for full POML feature parity. View more of our templates here.

Who's it for

Teams who author prompts in POML and want a template-safe way to turn them into either a single Markdown prompt or chat-style messages, without installing external modules. Works on n8n Cloud and self-hosted.

What it does

This workflow converts POML into:

- **prompt** (Markdown) for single-shot models, or
- **messages[]** (system|user|assistant) for chat APIs when speakerMode is true.

It supports variable substitution via a context object ({{dot.path}}), lists, headings, code blocks, images (incl. base64 → data: URL), tables from JSON (records/columns), and basic message components.

How it works

- **Set (Specs & Context):** Provide componentSpec (allowed attrs per tag), attributeSpec (typing/coercion), and optional context.
- **Code (POML → Prompt/Messages):** A zero-dependency compiler parses the POML and emits prompt or messages[].

How to set up

1. Import the template.
2. Open the first Set node and paste your componentSpec, attributeSpec, and context (examples included).
3. In the Code node, choose:
   - speakerMode: true to get messages[], or false for a single prompt.
   - listStyle: dash | star | plus | decimal | latin.
4. Run, then inspect prompt/messages in the output.

Requirements

No credentials or community nodes. Works without external libraries (template-compliant).

How to customize

- Add message tags (<system-msg>, <user-msg>, <ai-msg>) in your POML when using speakerMode: true.
- Extend componentSpec/attributeSpec to validate or coerce additional tags/attributes.
- Preformat arrays in context (e.g., bulleted, csv) for display, or add a small Set node to build them on the fly.
- Rename nodes and keep all user-editable fields grouped in the first Set node.

Security & best practices

- **Never** hardcode API keys in nodes.
- Remove any personal IDs before publishing.
- Keep your Sticky Note(s) up to date and instructional.
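To make the {{dot.path}} substitution concrete, here is a toy version of that one step (the template's actual compiler does far more, including component parsing and spec validation). Unresolved paths are left intact rather than replaced with empty strings:

```javascript
// Toy sketch of {{dot.path}} substitution against a context object.
// This illustrates the idea only — not the template's full compiler.
function substitute(template, context) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path) => {
    const value = path
      .split('.')
      .reduce((obj, key) => (obj == null ? undefined : obj[key]), context);
    return value === undefined ? match : String(value); // keep unknown paths as-is
  });
}

const out = substitute('Hello {{user.name}}, task: {{task}}', {
  user: { name: 'Ada' },
  task: 'review',
});
```

Leaving unknown paths untouched makes missing context values visible in the output instead of silently disappearing, which is handy when debugging a prompt.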
by Yevhenii
How it works

1. Listens for commands via Telegram to trigger report generation.
2. Parses the received command to determine action parameters.
3. Fetches candidate data from Google Sheets based on parsed input.
4. Prepares the data for report generation through custom logic.
5. Generates a comparative report using OpenAI LLM.
6. Sends the generated report back through Telegram.

Setup steps

1. Configure Telegram API credentials for the Telegram Trigger node.
2. Set up Google Sheets credentials and provide access to the required sheet.
3. Configure OpenAI API credentials to enable the LLM Report Generator.

Customization

Consider customizing the report generation logic in the 'Prepare Report Data' node to adjust the report format or criteria.

Part of the CV Screening Agent system

- CV Screening Agent (main workflow) - link coming soon
- CV Screening - Error Handler - link coming soon
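The command-parsing step can be pictured like this. The `/report` command name and its parameters are hypothetical examples, not the template's actual command set:

```javascript
// Split a Telegram bot message like "/report candidate_A candidate_B"
// into an action and its parameters. Command names are illustrative.
function parseCommand(text) {
  const parts = (text || '').trim().split(/\s+/);
  if (!parts[0] || !parts[0].startsWith('/')) return null; // not a command
  return { action: parts[0].slice(1), params: parts.slice(1) };
}

const cmd = parseCommand('/report candidate_A candidate_B');
```

The resulting `params` would then drive which candidate rows are fetched from Google Sheets for the comparative report.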
by BytezTech
Run automated daily standups using Slack, Notion, and Redis

Overview

This workflow fully automates your team's daily standup process using Slack for communication, Notion for structured data storage, and Redis for real-time session management. It automatically sends standup questions to active team members, collects and stores their responses, manages conversation sessions, and generates structured summary reports for managers.

Morning and evening standups run on schedule without manual intervention. Redis ensures fast and reliable session tracking, prevents duplicate standups, and maintains conversation state. All responses are securely stored in Notion for long-term reporting and tracking.

This workflow eliminates manual follow-ups, improves reporting consistency, and gives managers full visibility into team progress, blockers, and attendance.

How it works

This workflow runs automatically based on configured schedules.

Morning standup

- Fetches active team members from the Notion database
- Creates standup sessions in Redis
- Sends standup questions to each team member via Slack direct message
- Stores responses in Notion
- Tracks session state using Redis

Automated reports

The workflow automatically generates:

- Morning summary report showing attendance, responses, and blockers
- Evening summary report showing accomplishments, completion status, and help requests

Both reports are automatically sent to the Slack admin channel. Redis ensures session tracking and prevents duplicate standups.

Setup steps

1. Import this workflow into n8n
2. Connect your Slack credentials
3. Connect your Notion credentials
4. Connect your Redis credentials
5. Configure your Notion database IDs
6. Configure your Slack admin channel ID
7. Activate the workflow

The workflow will run automatically based on the configured schedule.

Features

Automated standup management

- Automatically sends standup questions
- Tracks team responses
- Stores responses securely in Notion
- Prevents duplicate standup sessions

Automated reporting

- Attendance tracking
- Task completion tracking
- Blocker detection
- Missing response detection
- Automatic Slack summary reports

Requirements

You need the following accounts:

- n8n
- Slack
- Notion
- Redis

Benefits

- Fully automated standup system
- No manual follow-ups required
- Automatic attendance tracking
- Identifies blockers early
- Improves team visibility
- Saves management time

Author

BytezTech Pvt Ltd
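The duplicate-standup guard works by keying each session on member, date, and shift. The sketch below shows the idea with an in-memory Set standing in for Redis (in the real workflow, Redis would hold the keys, e.g. via `SET key value NX EX ttl`); the key format is an assumption for illustration:

```javascript
// Illustration of duplicate-standup prevention via per-member session keys.
// A Set stands in for Redis here; the key scheme is the point.
const sessions = new Set();

function startSession(userId, date, shift) {
  const key = `standup:${date}:${shift}:${userId}`;
  if (sessions.has(key)) return false; // duplicate — skip this member
  sessions.add(key);                    // Redis equivalent: SET key 1 NX EX 86400
  return true;
}

const first = startSession('U123', '2024-06-01', 'morning');
const second = startSession('U123', '2024-06-01', 'morning');
```

Using an atomic set-if-not-exists in Redis means two overlapping runs cannot both message the same member for the same shift.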
by Khairul Muhtadin
Why You Need This Right Now

Stop the panic attacks. We've all been there - accidentally deleted a workflow that took hours to build, or worse, corrupted your entire automation setup. This workflow is your safety net.

Save your weekends. Instead of spending hours recreating lost work, get back to what matters. One setup protects everything, automatically.

Sleep better at night. Your workflows are safely stored in two places with full version history. If something breaks, you're back online in minutes, not days.

Perfect For These Situations

- Business owners running critical automations
- Agencies managing client workflows
- Teams who need audit trails
- Anyone who values their time and sanity

How It Actually Works

Think of it like having a personal assistant who:

1. Checks your workflows twice daily (you can change this)
2. Creates organized backups with timestamps
3. Stores them safely in Google Drive AND GitHub
4. Tells you it's done via Telegram or Discord
5. Keeps everything tidy with smart folder organization

The result? A timestamped folder in your Google Drive and organized files in your GitHub repo. Everything is searchable, restorable, and audit-ready.

Quick 5-Minute Setup

1. Import this workflow to your n8n
2. Connect your accounts (Google Drive, GitHub, optional notifications)
3. Set your preferences (which folder, which repo, how often)
4. Test it once to make sure everything works
5. Relax knowing your workflows are protected

What You'll Need

- Your n8n instance (obviously!)
- Google Drive account (free works fine)
- GitHub account (free works too)
- 5 minutes of setup time
- Optional: Telegram or Discord for notifications

Pro Tips for Power Users

Want to level up? Here are some ideas:

- **Add encryption** for sensitive workflows
- **Create restore workflows** for one-click recovery
- **Set up pull requests** for team review of changes
- **Customize schedules** based on your workflow update frequency

Created by: khaisa Studio - Automation experts who actually use this stuff daily

Tags: backup, automation, n8n, google-drive, github, workflow-protection, business-continuity

Questions? Get in touch - I'm always happy to help fellow automation enthusiasts!

Remember: The best backup is the one you set up before you need it. Your future self will thank you!
by kazunori
Who this template is for

This template is for teams that use GitLab merge requests and want a practical AI-assisted review workflow in n8n. It is useful for engineering teams that want faster first-pass reviews, consistent review comments, and a simple way to separate likely bugs, security risks, and maintainability issues before a human reviewer takes over.

How it works

This workflow starts when a user posts a trigger comment in a GitLab merge request discussion. It loads the merge request changes, splits the diff into one item per changed file, and skips files that are not suitable for inline review. Each file is then reviewed in parallel by three AI reviewers focused on bugs, security, and maintainability. Their findings are merged and sent to a verifier step, which removes weak or duplicate findings and normalizes severity and confidence. Only findings that pass the configured confidence threshold are posted. If a valid GitLab diff position can be resolved, the workflow creates an inline review comment. Otherwise, it falls back to a reply comment in the trigger discussion. A summary reply is also posted to mark the review as completed.

Set up

Setup usually takes around 10 to 20 minutes. You will need:

- a GitLab access token with permission to read merge requests and post discussions
- one or more AI model credentials for the reviewer and verifier steps
- your GitLab base URL and preferred trigger comment
- a minimum confidence threshold for posting findings

Most detailed setup guidance is included directly in the sticky notes inside the workflow.

Requirements

- GitLab project with merge request discussions enabled
- n8n credentials for GitLab API access
- AI chat model credentials for the reviewer and verifier nodes

How to customize the workflow

You can change the trigger comment, GitLab base URL, and minimum confidence threshold in the configuration section.

You can also customize:

- which findings are posted by adjusting the confidence threshold
- reviewer prompts for bug, security, and maintainability analysis
- the final verifier behavior for severity, confidence, and duplicate handling
- the fallback behavior for findings that cannot be mapped to a valid inline diff position
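The verifier's final gate, dropping weak findings and collapsing duplicates, can be sketched roughly as below. The finding field names (`file`, `line`, `category`, `confidence`) are illustrative, not the template's actual schema:

```javascript
// Keep only findings at or above the confidence threshold, and collapse
// duplicates that target the same file, line, and category.
// Field names are assumptions for illustration.
function filterFindings(findings, minConfidence) {
  const seen = new Set();
  return findings.filter((f) => {
    if (f.confidence < minConfidence) return false;       // below threshold
    const key = `${f.file}:${f.line}:${f.category}`;
    if (seen.has(key)) return false;                      // duplicate finding
    seen.add(key);
    return true;
  });
}

const kept = filterFindings(
  [
    { file: 'a.js', line: 10, category: 'bug', confidence: 0.9 },
    { file: 'a.js', line: 10, category: 'bug', confidence: 0.8 },       // duplicate
    { file: 'b.js', line: 3, category: 'security', confidence: 0.4 },   // too weak
  ],
  0.6
);
```

Raising `minConfidence` trades recall for precision: fewer comments posted, but each one more likely to be actionable.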
by satoshi
Create FAQ articles from Slack threads to Notion and Zendesk

This workflow helps you capture "tribal knowledge" shared in Slack conversations and automatically converts it into structured documentation. By simply adding a specific reaction (default: 📖 :book:) to a message, the workflow aggregates the thread, uses AI to summarize it into a Q&A format, and publishes it to your knowledge base (Notion and Zendesk).

Who is this for?

- **Customer Support Teams** who want to turn internal troubleshooting discussions into public help articles.
- **Knowledge Managers** looking to reduce the friction of documentation.
- **Development Teams** wanting to archive technical decisions made in Slack threads.

What it does

1. Trigger: Watches for a specific emoji reaction (📖 :book:) on a Slack message.
2. Data Collection: Fetches the parent message and all replies in the thread to get the full context.
3. AI Processing: Uses OpenAI to analyze the conversation, summarize the solution, and format it into a clear Question & Answer structure.
4. Publishing: Creates a new page in a Notion database with tags and summaries. (Optional) Drafts a new article in Zendesk.
5. Notification: Replies to the original Slack thread with links to the newly created documentation.

Requirements

- **n8n** (Self-hosted or Cloud)
- **Slack** workspace (with an App installed that has permissions to read channels and reactions).
- **OpenAI** API Key.
- **Notion** account with an Integration Token.
- **Zendesk** account (optional, can be removed if not needed).

How to set up

1. Configure Credentials: Set up authentication for Slack, OpenAI, Notion, and Zendesk in n8n.
2. Setup Notion: Create a database in Notion with the following properties:
   - Name (Title)
   - Summary (Text/Rich Text)
   - Tags (Multi-select)
   - Source (URL)
   - Channel (Select or Text)
3. Update Configuration Node: Open the Workflow Configuration1 node (Set node) and replace the placeholder values:
   - slackWorkspaceId: Your Slack Workspace ID (e.g., T01234567).
   - notionDatabaseId: The ID of your Notion database.
   - zendeskSectionId: (Optional) The ID of the section where articles should be created.
4. Slack App Scopes: Ensure your Slack App has the following scopes: reactions:read, channels:history, groups:history, chat:write.

How to customize

- **Change the Trigger:** If you prefer a different emoji, update the "Right Value" in the **IF - :book: Reaction Check** node.
- **Modify the Prompt:** Edit the **OpenAI** node to change how the AI formats the answer (e.g., ask it to be more technical or more casual).
- **Remove Zendesk:** If you don't use Zendesk, simply delete the **Zendesk** node and remove the reference to it in the final **Slack - Notify Completion** node.
by Yassin Zehar
Description

AI-powered priority re-evaluation every 2 hours. Analyzes new signals, meeting decisions, emails, and blockers, then runs 3 AI passes (Impact, Urgency, Final Ranking) to suggest re-ranking. Only updates when the ranking actually changes.

Context

Runs every 2 hours during work days (8:00–18:00). Pulls 5 data sources in parallel: open priorities, recent high-impact signals, today's meeting decisions, urgent emails, and overdue/blocked actions. A Priority Context Matrix links each priority to relevant new information. Three sequential AI passes score Impact, Urgency, and produce a Final Ranking with rationale. CompareDatasets detects changes vs the current ranking. Only if the ranking changed, Notion is batch-updated and Slack shows before/after.

Who is this for?

- PMs managing dynamic priority stacks
- Product leaders who need real-time priority alignment
- Teams where priorities shift frequently due to customer signals

Requirements

- Notion account with PM Daily databases
- OpenAI API key
- Slack Bot token

How it works

1. Trigger: Runs every 2 hours, Monday–Friday, 8:00–18:00.
2. Context Collection: 5 parallel pulls: open priorities, signals, meetings, emails, blocked actions.
3. Priority Matrix: Links each priority to relevant signals, emails, and decisions. Deduplicates and normalizes.
4. 3-Pass AI Scoring: Pass 1: Impact. Pass 2: Urgency. Pass 3: Final Ranking with rationale.
5. Change Detection: Compares new vs current ranking. If changed: batch-updates Notion and posts before/after to Slack.

What you get

- Continuous priority re-evaluation every 2 hours
- 3-pass AI scoring (Impact, Urgency, Final)
- Change detection - only updates when needed
- Before/after ranking comparison on Slack
- 40+ node workflow with CompareDatasets intelligence

About me

I'm Yassin, a Product Manager scaling tech products with data-driven project management. Feel free to connect with me on LinkedIn.
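The change-detection idea, only update when the ranking actually differs, can be reduced to comparing the ordered lists of priority IDs. This is a simplified sketch (the workflow itself uses n8n's Compare Datasets node, and the `id` field name is an assumption):

```javascript
// Report a change only when the proposed ordering of priority IDs
// differs from the current one. Simplified stand-in for CompareDatasets.
function rankingChanged(current, proposed) {
  const ids = (list) => list.map((p) => p.id).join('|');
  return ids(current) !== ids(proposed);
}

const unchanged = rankingChanged(
  [{ id: 'P1' }, { id: 'P2' }],
  [{ id: 'P1' }, { id: 'P2' }]
);
const changed = rankingChanged(
  [{ id: 'P1' }, { id: 'P2' }],
  [{ id: 'P2' }, { id: 'P1' }]
);
```

Gating the Notion batch update and the Slack before/after post on this check is what keeps the workflow quiet when nothing has moved.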
by Yaron Been
Overview

Find companies similar to your best clients using PredictLeads, enrich each with news, hiring, and tech signals, then score them 0–100 for outreach priority.

This workflow reads your best client domains from Google Sheets, discovers lookalike companies via the PredictLeads Similar Companies API, enriches each with news events, job openings, and technology detections, then calculates a composite lead score from 0 to 100 based on multiple signals. Scored results are written to Google Sheets so your sales team can prioritize outreach.

How it works

1. A manual trigger starts the workflow.
2. The workflow reads your best client domains from Google Sheets.
3. It loops through each client and fetches similar companies from PredictLeads.
4. It extracts lookalike company details such as domain, company name, industry, and similarity score.
5. It loops through each lookalike and enriches it with three PredictLeads signals:
   - News events for recent company activity
   - Job openings for hiring signals
   - Technology detections for tech stack insights
6. It calculates a composite score from 0 to 100 based on:
   - +30 points for recent news events in the last 30 days
   - +30 points for active hiring with 5 or more open roles
   - +20 points for using target technologies such as HubSpot, Salesforce, or Marketo
   - +10 points for a high similarity score above 0.7
   - +10 points for being located in a target region such as the US, UK, or Canada
7. It writes scored lookalikes to the Scored Lookalikes tab in Google Sheets.

Setup

1. Create a Google Sheet with two tabs:
   - Sheet1 with a domain column for your best client domains, one per row
   - Scored Lookalikes with these columns: domain, company_name, source_domain, similarity_score, news_count, job_count, tech_names, tech_match, country, composite_score, scored_at
2. Connect your Google Sheets account using OAuth2.
3. Add your PredictLeads API credentials using the X-Api-Key and X-Api-Token headers.

Requirements

- Google Sheets OAuth2 credentials
- PredictLeads API account: https://docs.predictleads.com

Notes

- Target technologies such as HubSpot, Salesforce, and Marketo, as well as target regions such as the US, UK, and Canada, can be adjusted in the scoring code node.
- The scoring weights are configurable, so you can change the point values to match your sales priorities.
- Start with 3 to 5 best client domains for optimal results.
- PredictLeads Similar Companies, News Events, Job Openings, and Technology Detections API docs: https://docs.predictleads.com
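The composite scoring rules described above can be sketched as a small function like the one below. The input field names and the region codes are illustrative assumptions; the actual scoring code node may name things differently, and both lists and weights are meant to be tuned:

```javascript
// Sketch of the 0–100 composite scoring rules. Field names, region
// codes, and lists are illustrative — adjust them in the scoring node.
const TARGET_TECH = ['HubSpot', 'Salesforce', 'Marketo'];
const TARGET_REGIONS = ['US', 'UK', 'CA'];

function compositeScore(lead) {
  let score = 0;
  if (lead.newsLast30Days > 0) score += 30;                              // recent news events
  if (lead.openRoles >= 5) score += 30;                                  // active hiring
  if (lead.technologies.some((t) => TARGET_TECH.includes(t))) score += 20; // target tech stack
  if (lead.similarity > 0.7) score += 10;                                // strong lookalike
  if (TARGET_REGIONS.includes(lead.country)) score += 10;                // target region
  return score;
}

const score = compositeScore({
  newsLast30Days: 2,
  openRoles: 7,
  technologies: ['HubSpot'],
  similarity: 0.82,
  country: 'US',
});
```

A lead hitting every signal maxes out at 100, which is what makes the Scored Lookalikes tab directly sortable by outreach priority.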