by Jihene
AI-Agent Code Review for GitHub Pull Requests

Description: This n8n workflow automates the review of code changes in GitHub pull requests using an OpenAI-powered agent. It connects to your GitHub repo, extracts the modified files, analyzes the diffs, and uses an AI agent to generate a code review based on your internal code best practices (fed from a Google Sheet). It finishes by posting the review as a comment on the PR and tagging it with a visual label such as ✅ Reviewed by AI.

🔧 What It Does
- Triggered on PR creation
- Extracts code diffs from the PR
- Formats and feeds them into an OpenAI prompt
- Enriches the prompt with a Google Sheet of Swift best practices
- Posts an AI-generated review as a comment on the PR
- Applies a PR label to visually mark reviewed PRs

✅ Prerequisites
Before deploying this workflow, ensure you have the following:
- n8n instance (self-hosted or Cloud)
- GitHub repository with PR activity
- **OpenAI API Key** for GPT-4o, GPT-4-turbo, or GPT-3.5
- **GitHub OAuth App** (or PAT) connected to n8n to post comments and access PR diffs
- (Optional) Google Sheets API credentials if using the code best practices lookup node

⚙️ Setup Instructions

1. Import the Workflow
- In n8n, click Workflows → Import from file or JSON
- Paste or upload the JSON code of this template

2. Configure Triggers and Connections

🔁 GitHub Trigger
- **Node**: PR Trigger
- **Repository**: Select the GitHub repo(s) to monitor
- **Events**: Set to pull_request
- **Auth**: Use GitHub OAuth2 credentials

📥 HTTP Request
- **Node**: Get file's Diffs from PR
- No authentication needed; it uses a dynamic path from the trigger

🧠 OpenAI Model
- **Node**: OpenAI Chat Model
- **Model**: Select gpt-4o, gpt-4-turbo, or gpt-3.5-turbo
- **Credential**: Provide your OpenAI API key

🧑‍💻 Code Review Agent
- **Node**: Code Review Agent
- Connected to OpenAI and optionally to tools like Google Sheets (a hedged sketch of how diffs and sheet rows could be combined into one prompt follows at the end of this description)

💬 GitHub Comment Poster
- Uses the GitHub API to post review comments back on the PR
- **Node**: GitHub Robot
- **Credential**: Use the agent's GitHub account (OAuth or PAT)
- **Repo**: Pick your own GitHub repository

🏷️ PR Labeler (optional)
- Adds the label ReviewedByAI after a successful comment
- **Node**: Add Label to PR
- **Label**: You can customize the label text of your own tag

📊 Google Sheet Best Practices config (optional)
- Connects to a Google Sheet for coding guideline lookups; you can replace Google Sheets with another tool or database
- First prepare your best practices list with a clear description and good/bad code examples
- Add all the best practices to your Google Sheet
- Configure the **Code Best Practices** node in the template:
  - **Credential**: Use your Google Sheets account via OAuth2
  - **URL**: Add your Google Sheet document URL
  - **Sheet**: Add the name of the best practices sheet
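As an illustration only (not part of the template), here is a minimal sketch of a Code node that could assemble the review prompt from the PR diffs and the best-practices sheet. The node names and field names (`files`, `filename`, `patch`, `Rule`, `Bad`, `Good`) are assumptions; adapt them to the actual output of your trigger, HTTP Request, and Google Sheets nodes.

```javascript
// Hypothetical n8n Code node: combine PR diffs with best practices into one prompt.
// Field names below (files, filename, patch, Rule, Bad, Good) are assumptions;
// adapt them to the real output of your HTTP Request and Google Sheets nodes.
const files = $('Get file\'s Diffs from PR').first().json.files ?? [];
const practices = $('Code Best Practices').all().map(row => row.json);

const diffSection = files
  .map(f => `### ${f.filename}\n${f.patch ?? '(binary or too large)'}`)
  .join('\n\n');

const rulesSection = practices
  .map(p => `- ${p.Rule}\n  Bad: ${p.Bad}\n  Good: ${p.Good}`)
  .join('\n');

return [{
  json: {
    prompt: `Review the following diff against our Swift best practices.\n\nBest practices:\n${rulesSection}\n\nDiff:\n${diffSection}`,
  },
}];
```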
by Praveena
Idea
The idea for this app came about because I wanted to build a unique gift for my niece, who gets excited for her birthday (which I'm going to miss this year). The web app has a simple countdown (in HTML and JS), but more importantly there is an AI agent that answers some specific questions and knows her preferences.

How it works
The questions from the app are sent via webhook to n8n, which pulls a preferences file (about her likes, dislikes, and personality) from Postgres, and an AI Agent answers the questions. The current state is stored back in Postgres (especially the status of the cat and happenings in the universe) before responding. A minimal sketch of the webhook call the web app makes is included at the end of this description.

Features
- Integrated AI chatbot via n8n webhook
- Persistent conversation history
- Minimizable chat interface
- Fallback support for offline testing
- Where's Mittens: a query to track her lost cat across the multiverse
- Multiverse updates, with the most recent update stored

Prerequisites
- A PostgreSQL database is available. Alternatively, use any other database, but change the n8n nodes accordingly.
- An LLM API key.

Step by Step Instructions
1. Export this n8n workflow.
2. Modify the LLM API key. I used OpenAI GPT-4.1.
3. For the web app scaffolding, you will need Node, HTML and JavaScript. I've created a mini version using Node and JS with the web app and n8n connection settings here: <https://github.com/productiser/FiBirthdayAgent>
4. PostgreSQL database script (1 table for memory and context storage):

```sql
CREATE TABLE fifi_world_context (
  id TEXT PRIMARY KEY,                               -- e.g., 'agent_fifi'
  cat_location TEXT,                                 -- e.g., "Bubble Nebula"
  cat_activity TEXT,                                 -- e.g., "Playing laser tag with moon mice"
  fifi_preferences JSONB,                            -- e.g., likes/dislikes/foods/shows
  world_history TEXT,                                -- Summary of narrative events
  last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

5. Modify the system prompt as per your needs.

Built With
- n8n (self-hosted)
- Self-hosted web app, hosted on Vercel
- Total spend = <£1 (AI costs only)
- Total time = <1 day

Support
Watch this video for a web app overview and how it looks: <https://youtu.be/e7PlrTdvwoM>
Contact me on info@pankstr.com / superllmuser@gmail.com for any queries. Hope you enjoy!!
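A minimal sketch of the call the web app might make to the n8n webhook, assuming a Production webhook URL and a simple `{ question, sessionId }` payload (the URL path and field names are illustrative, not part of the template):

```javascript
// Hypothetical browser-side call from the birthday web app to the n8n webhook.
// The URL path and payload field names (question, sessionId) are assumptions;
// match them to your own Webhook node configuration.
async function askFifiAgent(question) {
  const response = await fetch("https://your-n8n-host/webhook/fifi-agent", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question, sessionId: "fifi-birthday" }),
  });
  if (!response.ok) {
    // Fallback path for offline testing, as mentioned in the features list.
    return "Mittens is out exploring the multiverse; try again in a moment!";
  }
  const data = await response.json();
  return data.answer; // field name is an assumption; adjust to your Respond to Webhook node
}
```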
by Sleak
Who is this template for?
This workflow template is designed for business owners and HR professionals who want to automatically detect and structure unstructured job applications received by email. Other email categories can be added as well, each with its own workflow.

How it works
Every time a new email is received, an OpenAI model classifies it into a predefined category by analyzing the plain text of the email and the content extracted from any attachment. If the email is classified as a job application, an OpenAI model uses the email's plain text and extracted attachment content to populate predefined fields such as age and study. A useful additional step would be to push the applicant and their structured job application directly into a CRM or ATS such as HubSpot or Recruitee.

Set up steps
1. Configure your IMAP credentials to connect your email account. Use this n8n documentation page for quickstart guides for common email providers.
2. Connect your OpenAI account in the 'Classify email' node, and add or remove categories for classification in this node. Make sure each description is clear and concise.
3. Connect your OpenAI account in the 'Extract variables - email & attachment' node, and add or remove any predefined fields that should be populated for job applications in this node. Make sure each description is clear and concise.
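For illustration only, here is one possible shape of the structured output produced by the 'Extract variables - email & attachment' node. The field names are assumptions; define whichever fields you actually configure in that node.

```javascript
// Illustrative only: a possible structured result for a classified job application.
// Field names are assumptions; define whichever fields you configure in the node.
const extractedApplication = {
  category: "job_application",
  full_name: "Jane Doe",
  age: 29,
  study: "MSc Computer Science",
  years_of_experience: 4,
  attachment_summary: "CV listing backend roles at two startups",
};

// A follow-up node could map these fields onto a CRM/ATS record,
// e.g. a HubSpot contact or a Recruitee candidate.
```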
by NovaNode
Who is this for?
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering.

What problem is this workflow solving?
Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI.

What these workflows do

Workflow 1: Document Ingestion & Indexing
- Manually triggered to import product documentation from Google Docs.
- Automatically splits large documents into chunks for efficient searching.
- Generates vector embeddings for each chunk using OpenAI embeddings.
- Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search.

Workflow 2: AI-Powered Query & Response
- Listens for incoming user questions (can be extended to a webhook).
- Converts questions to vector embeddings and performs a similarity search on the MongoDB vector store.
- Uses OpenAI's GPT-4o-mini model with retrieval-augmented generation to produce direct, context-aware answers.
- Maintains short-term conversation context using a memory buffer node.

Setup

Setting up vector embeddings
1. Authenticate Google Docs and connect your Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
3. Create a search index on this collection to support vector similarity queries (a hedged programmatic sketch follows after the index example below).
4. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference.

Setting up chat
1. Configure the AI system prompt in the "Knowledge Base Agent" node to reflect your company's tone, answering style, and any business rules.
2. Update the workflow description and instructions to help users understand the chat's purpose and capabilities.
3. Connect the MongoDB collection used for vector search in the chat workflow and update the vector search index if needed to match your setup.

Make sure
Both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- an embedding field storing vector data,
- relevant metadata fields (e.g., document ID, source), and
- the same vector index name configured (e.g., data_index).

Search Index Example:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```
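If you prefer to create the search index programmatically rather than through the Atlas UI, a minimal sketch using the official MongoDB Node.js driver (v6+) is shown below. The connection string, database, and collection names are placeholders; only the index name (data_index) and definition come from the template above.

```javascript
// Hedged sketch: create the Atlas Search index used by both workflows.
// Database and collection names are placeholders; adjust to your setup.
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI);

async function createVectorIndex() {
  await client.connect();
  const collection = client.db("docs").collection("product_manuals");

  await collection.createSearchIndex({
    name: "data_index",
    definition: {
      mappings: {
        dynamic: false,
        fields: {
          _id: { type: "string" },
          text: { type: "string" },
          embedding: { type: "knnVector", dimensions: 1536, similarity: "cosine" },
          source: { type: "string" },
          doc_id: { type: "string" },
        },
      },
    },
  });

  await client.close();
}

createVectorIndex().catch(console.error);
```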
by JaredCo
This n8n workflow demonstrates how to transform natural language date and time expressions into structured data with 96%+ accuracy. Parse complex expressions like "early next July", "2 weeks after project launch", or "end of Q3" into precise datetime objects with confidence scoring, timezone intelligence, and business rules validation for any automation workflow.

Good to know
- Achieves 96%+ accuracy on complex natural language date expressions
- At time of writing, this is the most advanced open-source date parser available
- Includes AI learning that improves over time with user corrections
- Supports 6 languages with auto-detection (English, Spanish, French, German, Italian, Portuguese)
- Sub-millisecond response times with intelligent caching
- Enterprise-grade with business intelligence and timezone handling

How it works
- **Natural Language Input**: Receives date expressions via webhook, form, email, or chat
- **AI-Powered Parsing**: Your world-class date parser processes the text through:
  - 50+ custom rule patterns for complex expressions
  - Multi-language auto-detection and smart translation
  - Confidence scoring (0.0-1.0) for AI decision-making
  - Ambiguity detection with helpful suggestions
- **Business Intelligence**: Applies enterprise rules automatically:
  - Holiday calendar awareness (US + International)
  - Working hours validation and warnings
  - Business day auto-adjustment
  - Timezone normalization (IANA format)
- **Smart Scheduling**: Creates calendar events with:
  - Structured datetime objects (start/end times)
  - Confidence metadata for workflow decisions
  - Alternative interpretations for ambiguous inputs
  - Rich context for follow-up actions
- **Integration Ready**: Outputs connect seamlessly to:
  - Google Calendar, Outlook, Apple Calendar
  - CRM systems (HubSpot, Salesforce)
  - Project management tools (Notion, Asana)
  - Communication platforms (Slack, Teams)

How to use
1. The webhook trigger receives natural language date requests from any source
2. Replace the MCP server URL with your deployed date parser endpoint (see the hedged sketch at the end of this description)
3. Configure timezone preferences for your organization
4. Customize business rules (working hours, holidays) in the parser settings
5. Connect calendar integration nodes for automatic event creation
6. Add notification workflows for scheduling confirmations

Use Cases
- **Meeting Scheduling**: "Schedule our quarterly review for early Q3"
- **Project Management**: "Set deadline 2 weeks after product launch"
- **Event Planning**: "Book venue for the weekend before Labor Day"
- **Personal Assistant**: "Remind me about dentist appointment next Tuesday morning"
- **International Teams**: "Team standup tomorrow morning" (auto-timezone conversion)
- **Seasonal Planning**: "Launch campaign in late spring 2025"

Requirements
- Natural Language Date Parser MCP server (provided code)
- Webhook endpoint or form trigger
- Calendar integration (Google Calendar, Outlook, etc.)
- Optional: Slack/Teams for notifications
- Optional: Database for learning pattern storage

Customizing this workflow
- **Multi-language Support**: Enable auto-detection for global teams
- **Business Rules**: Configure company holidays and working hours
- **Learning System**: Enable AI learning from user corrections
- **Integration Depth**: Connect to your existing calendar and CRM systems
- **Confidence Thresholds**: Set minimum confidence levels for auto-scheduling
- **Ambiguity Handling**: Route unclear dates to human review or clarification requests

Sample Input/Output

Input Examples:
- "early next July"
- "2 weeks after Thanksgiving"
- "next Wednesday evening"
- "Q3 2025"
- "mañana por la mañana" (Spanish)
- "first thing Monday"

Rich Output:

```json
{
  "parsed": [{
    "start": "2025-07-01T00:00:00Z",
    "end": "2025-07-10T23:59:59Z",
    "timezone": "America/New_York"
  }],
  "confidence": 0.95,
  "method": "custom_rules",
  "business_insights": [{
    "type": "business_warning",
    "message": "Selected date range includes July 4th holiday"
  }],
  "predictions": [{
    "type": "time_preference",
    "suggestion": "You usually schedule meetings at 10 AM"
  }],
  "ambiguities": [],
  "alternatives": [{
    "interpretation": "Early July 2026",
    "confidence": 0.15
  }],
  "performance": {
    "cache_hit": true,
    "response_time": "0.8ms"
  }
}
```

Why This Workflow is Unique
- **World-Class Accuracy**: 96%+ success rate on complex expressions
- **AI Learning**: Improves over time with user feedback
- **Global Ready**: Multi-language and timezone intelligence
- **Business Smart**: Enterprise rules and holiday awareness
- **Performance Optimized**: Sub-millisecond cached responses
- **Context Aware**: Provides confidence scores and alternatives for AI decision-making

Transform your scheduling workflows from rigid form inputs to natural, conversational date requests that your users will love!
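Hedged sketch (not from the template): how a downstream node could call the date parser endpoint and gate auto-scheduling on the returned confidence score. The endpoint URL and request field name ("text") are assumptions; only the response fields (parsed, confidence, ambiguities, alternatives) follow the sample output above.

```javascript
// Assumed endpoint and request shape; only the response fields mirror the sample output.
const PARSER_URL = "https://your-date-parser.example.com/parse";
const MIN_CONFIDENCE = 0.8; // threshold for auto-scheduling; tune to taste

async function parseDateRequest(text, timezone = "America/New_York") {
  const res = await fetch(PARSER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text, timezone }),
  });
  const result = await res.json();

  if (result.confidence >= MIN_CONFIDENCE && result.ambiguities.length === 0) {
    // Safe to create the calendar event automatically.
    return { action: "schedule", slot: result.parsed[0] };
  }
  // Otherwise route to a human-review / clarification branch.
  return { action: "clarify", alternatives: result.alternatives };
}
```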
by Alexander Bentlund
Search music and play to Spotify from Telegram

This workflow is a simple demonstration of accessing a message model from Telegram, and it makes searching for songs an easy task even if you can't remember the artist or song name. An OpenAI message model tries to figure out the song and sends it to an active Spotify device**.

Use case
Imagine an office where you play music in the background and the employees can control the music without having to log in to the playing account.

How it works
1. You describe the song in Telegram.
2. The Telegram bot sends the text to n8n.
3. An OpenAI message model tries to find the song.
4. Spotify gets the search query string.
5. The first match is added to the queue. If there is no match, a message is sent to Telegram and the process ends.
6. We change to the next track in the list.
7. We make sure the song starts playing by trying to resume.
8. We fetch the currently playing track.
9. We return "now playing" information to Telegram: Song Name - Artist Name - Album Name.

Error handling
Every Spotify step has its own error handler under settings, where we output the error. The message parser receives the error and sends it to Telegram.

Requirements
- Active workflow*
- OpenAI API key
- Telegram bot
- Spotify account and OAuth2 API
- Spotify active on a device**

* The Telegram trigger is activated only if this workflow is active. You can, however, TEST the workflow in the editor by clicking "Test step"; it then waits for the Telegram event. When the event is received, just step through all steps or click "Test step" on the "Fetch Now Playing" node.

** You must have a Spotify device active when trying to communicate with a device. Open Spotify and play something; now it is active.
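For reference, a hedged sketch of the Spotify Web API calls the workflow's nodes correspond to. The n8n Spotify node handles OAuth2 for you; this only illustrates the sequence (search, queue, next, resume, currently playing) described above.

```javascript
// Sketch of the underlying Spotify Web API sequence; in the workflow these are
// handled by Spotify nodes, not hand-written requests.
async function playRequestedSong(query, accessToken) {
  const api = (path, init = {}) =>
    fetch(`https://api.spotify.com/v1${path}`, {
      ...init,
      headers: { Authorization: `Bearer ${accessToken}` },
    });

  // 1. Search for the track the AI model described.
  const search = await (await api(`/search?type=track&limit=1&q=${encodeURIComponent(query)}`)).json();
  const track = search.tracks?.items?.[0];
  if (!track) return null; // no match: the workflow reports back to Telegram and stops

  // 2. Queue it, 3. skip to it, 4. make sure playback is resumed.
  await api(`/me/player/queue?uri=${encodeURIComponent(track.uri)}`, { method: "POST" });
  await api("/me/player/next", { method: "POST" });
  await api("/me/player/play", { method: "PUT" });

  // 5. Fetch "now playing" info for the Telegram reply.
  return (await api("/me/player/currently-playing")).json();
}
```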
by n8n Team
This workflow automatically syncs Shopify orders with your Zendesk tickets. Using this workflow, Shopify orders will be added, or have their information updated, straight to your Zendesk tickets.

Prerequisites
- Shopify account and Shopify credentials
- Zendesk account and Zendesk credentials

How it works
1. The Shopify Trigger starts the workflow whenever an order is updated.
2. The Zendesk node checks whether the order already exists and has a ticket assigned.
3. The Set node keeps and passes only the ticket ID.
4. The Merge by Key node combines the Shopify order data with the Zendesk ticket data.
5. The If node splits the workflow conditionally, checking whether the ticket already exists.
6. If the order is new, the Zendesk node creates a new ticket for the order.
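Illustrative only: one way the "find existing ticket" step could be expressed against the Zendesk REST API, matching tickets by the Shopify order ID stored as the ticket's external_id. Whether the template's Zendesk node matches on external_id or another field is an assumption; check the node's search configuration.

```javascript
// Hedged sketch of the ticket lookup; the matching field (external_id) is an assumption.
async function findTicketForOrder(subdomain, basicAuth, orderId) {
  const query = encodeURIComponent(`type:ticket external_id:${orderId}`);
  const res = await fetch(
    `https://${subdomain}.zendesk.com/api/v2/search.json?query=${query}`,
    { headers: { Authorization: `Basic ${basicAuth}` } }
  );
  const { results } = await res.json();
  return results[0] ?? null; // null means the If branch creates a new ticket
}
```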
by Hostinger
Quickly transform any LinkedIn profile URL into a concise, AI-generated professional summary, perfect for recruiters, sales teams, and hiring managers who need instant insights into prospects or candidates without manual research.

How it works
1. The workflow polls a Google Sheet for new or updated rows containing LinkedIn profile URLs.
2. For each URL, the Real-Time LinkedIn Scraper API (via RapidAPI) pulls the experience and education sections.
3. The extracted profile data is sent to OpenAI's GPT model, which generates a clean, structured summary highlighting key strengths, career trajectory, and differentiators.
4. The generated summary is written back into a new column in the same row of your Google Sheet for easy review and sharing.

Set up steps
1. Connect your Google account and select the spreadsheet and worksheet containing your list of LinkedIn URLs.
2. Sign up for the Real-Time LinkedIn Scraper API on RapidAPI, copy your API key, and add it to the workflow's HTTP Request node (a hedged sketch of that request follows below).
3. Insert your OpenAI API key credentials.
4. Ensure your Google Sheet has one column for "linkedin_url" and create two empty columns named "full_name" and "summary" (or customize them based on your needs).
5. Run a single row through the workflow to verify scraping accuracy and summary formatting, then turn on the workflow for continuous automation.

With this template, eliminate hours of manual profile review: instantly gain actionable insights and focus on what really matters, building relationships and closing deals.
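A hedged sketch of the HTTP Request node's call to a RapidAPI-hosted LinkedIn scraper. The host and endpoint path below are placeholders; copy the exact values from the API's RapidAPI page. Only the x-rapidapi-* header pattern is standard for RapidAPI endpoints.

```javascript
// Placeholder host and path; the x-rapidapi headers are the standard RapidAPI auth pattern.
async function fetchProfile(linkedinUrl, rapidApiKey) {
  const host = "your-linkedin-scraper.p.rapidapi.com"; // placeholder host
  const res = await fetch(
    `https://${host}/profile?url=${encodeURIComponent(linkedinUrl)}`,
    {
      headers: {
        "x-rapidapi-key": rapidApiKey,
        "x-rapidapi-host": host,
      },
    }
  );
  return res.json(); // typically includes the experience and education sections
}
```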
by AlexAutomates
Auto-Categorize Outlook Emails with AI in n8n

How It Works
1. Trigger: The workflow starts with the Microsoft Outlook Trigger node, polling your inbox every minute for new emails.
2. Extract & Clean Email Content: The email's key fields (from, subject, isRead, body) are extracted. The body is converted from HTML to Markdown, then sanitized to plain text for reliable AI processing.

Node Setup Details

Microsoft Outlook Trigger
- Resource: Message
- Operation: Trigger on new email
- Fields to Output: from, subject, isRead (optional), body
- Folders to Include: (set to your Inbox or specific folder IDs)

Markdown Node
- Input: {{ $json["body"] }} (HTML email body)
- Output Key: Email Body Markdown
- Purpose: Converts HTML to Markdown for easier downstream processing.

Sanitize Node (Code Node)
- Input: Email Body Markdown from the previous node
- Purpose: Cleans up the Markdown, strips images, links, HTML tags, and table formatting, and truncates to 4000 characters.
- Sample JS code (starting point only; a fuller hedged sketch follows below):

```javascript
// Get the markdown content from the previous node
const markdownContent = $input.item.json["Email Body Markdown"];
```

Setup AI tools
- The Move Message and Get Folders Outlook tools are required; Get Contacts is optional. Set each field in the tools to "defined automatically by the model" and describe each field so the model understands how to use it.
- OpenRouter or other LLM model tool: you can use any client for this, but make sure to use a model that does well with tool calls (Claude, GPT-4.1, Gemini 2.5 Pro, etc.).

Best Practices & Notes
- **AI Prompt Engineering:** The AI is instructed to be conservative: never move emails from real people or saved contacts, and always explain its reasoning if it doesn't move a message. This automation only works for NEW incoming messages.
- **Inbox Zero:** This system is designed to help you achieve and maintain Inbox Zero by keeping only actionable items in your main inbox.
- **Customization:** You can adjust the folder logic, add more categories, or tweak the AI prompt for your specific needs.
- **Privacy:** All processing happens within your n8n instance; no email data is stored outside your environment except for the AI call (which only receives sanitized, minimal content).
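A hedged sketch of a complete Sanitize Code node body, expanding the starting point above. The exact regexes are assumptions; the template only states that images, links, HTML tags, and table formatting are stripped and the text is truncated to 4000 characters.

```javascript
// Assumed sanitization rules; adjust the regexes to match your own cleanup needs.
const markdownContent = $input.item.json["Email Body Markdown"] || "";

const sanitized = markdownContent
  .replace(/!\[[^\]]*\]\([^)]*\)/g, "")        // strip Markdown images
  .replace(/\[([^\]]*)\]\([^)]*\)/g, "$1")     // keep link text, drop URLs
  .replace(/<[^>]+>/g, "")                     // strip leftover HTML tags
  .replace(/^\s*\|.*\|\s*$/gm, "")             // drop table rows
  .replace(/\n{3,}/g, "\n\n")                  // collapse blank lines
  .trim()
  .slice(0, 4000);                             // truncate to 4000 characters

return [{
  json: {
    from: $json.from,
    subject: $json.subject,
    isRead: $json.isRead,
    body: sanitized,
  },
}];
```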
by WeblineIndia
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automates summarizing YouTube videos by accepting a YouTube URL via a form, fetching the video transcript using Apify, and then generating a concise summary with OpenAI GPT.

Setup Instructions

Prerequisites
- Apify account with access to the YouTube Transcript actor.
- OpenAI API key (for the GPT-4o-mini model).
- n8n instance with the Apify and OpenAI credentials configured.

Configuration Steps
1. Apify setup: Configure Apify API credentials in the Apify node. Ensure the YouTube Transcript actor ID (1s7eXiaukVuOr4Ueg) is correct.
2. OpenAI setup: Add your OpenAI API key in the OpenAI Chat Model node. Confirm the model selection is set to gpt-4o-mini.

Customization
- Modify the form field to accept additional inputs if needed.
- Adjust the Apify actor input JSON in the Payload node for extra metadata extraction (a hedged example of this payload follows below).
- Customize the summarization options to tweak summary length or style.
- Change the OpenAI prompt or model parameters in the OpenAI Chat Model node for different output quality or tone.

Steps
1. On Form Submission
   - **Node:** Form Trigger
   - **Purpose:** Collect the YouTube video URL from the user via a web form.
2. Prepare Payload
   - **Node:** Set
   - **Purpose:** Format the YouTube URL and options into the JSON payload for the Apify input.
3. Fetch Transcript
   - **Node:** Apify
   - **Purpose:** Run the YouTube Transcript actor to retrieve video captions and metadata.
4. Extract Captions
   - **Purpose:** Isolate the captions field from the Apify response for processing.
5. Summarize Transcript
   - **Purpose:** Generate a concise summary of the video captions.
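Illustrative only: a possible shape for the Apify actor input assembled in the Payload node. The YouTube Transcript actor's exact input schema is an assumption; check the actor's documentation on Apify and adjust the field names.

```javascript
// Assumed input fields for the transcript actor; verify against the actor's docs.
const payload = {
  videoUrl: $json["YouTube URL"],  // field name from your form; adjust as needed
  language: "en",
  includeTimestamps: false,
};

return [{ json: { payload } }];
```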
by Jon Bungartz
How it works
- Creates a new page in Confluence based on a page template also defined in Confluence
- Replaces any number of placeholders with data from your workflow
- Generic implementation for maximum flexibility

Set up steps
All parameters you need to change are defined in the Set node:
- Set your Atlassian domain
- Set the template id you want to use as the basis for new pages
- Set the target space and parent page for new pages created from that template

🎥 The explainer video has all the details. =)

Feedback
Any feedback is welcome. If you have ideas for improvements, let me know.
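A hedged sketch of the generic placeholder replacement this template describes: take the template page's body and substitute {{placeholder}} tokens with workflow data. The token syntax and field names below are assumptions, not the template's actual implementation.

```javascript
// Assumed {{placeholder}} token syntax and field names; adapt to your Set node values.
const templateBody = $json.templateBody;          // page template body (storage format)
const values = {
  customer: $json.customerName,
  date: new Date().toISOString().slice(0, 10),
};

const pageBody = templateBody.replace(
  /\{\{(\w+)\}\}/g,
  (match, key) => values[key] ?? match            // leave unknown placeholders untouched
);

return [{ json: { pageBody } }];
```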
by Yang
📄 What this workflow does
This workflow helps you analyze Google reviews of any business to generate powerful marketing insights. By simply submitting a business name and its Google Place ID, it fetches the top 30 reviews and uses GPT-4 (via a LangChain Agent) to extract valuable customer insights such as marketing angles, customer motivations, product pain points, and voice-of-customer (VOC) quotes. The output is stored automatically in a connected Google Sheet.

👤 Who is this for
- Marketing teams looking for messaging inspiration
- Founders or product managers exploring customer feedback
- Brand strategists gathering real-world insights
- Agencies running VOC or sentiment analysis

🛠️ Requirements
- **Dumpling AI API key**
- **OpenAI GPT-4 or GPT-4o access**
- **Google Sheets connection**
- A form or manual input with:
  - Business Name
  - Google Place ID

⚙️ How to set up
1. Connect credentials
   - Dumpling AI (via HTTP Header Auth)
   - OpenAI (GPT-4)
   - Google Sheets (OAuth2)
2. Prepare your Google Sheet
   - Create columns: Business Name, Place ID, Marketing Angles, Customer Motivations, Frictions and Barriers, Product Opportunities, VOC Snippets
3. Update nodes
   - Replace the Google Sheets Document ID and Tab Name with yours
   - Check that the Dumpling API node is linked to your credential
   - Optional: tweak the prompt in the LangChain Agent node to fit your tone or goals

🤖 How it works (Workflow Steps)
1. User submits business name + Google Place ID
2. Dumpling AI fetches the top 30 reviews
3. The workflow aggregates the review text (a minimal sketch of this step follows below)
4. GPT-4 via LangChain analyzes the reviews
5. Insights are parsed and logged to Google Sheets

💡 Customization Ideas
- Push output to Notion, Airtable, or Slack
- Add sentiment scoring to prioritize themes
- Create summaries for each insight category
- Schedule insights to be emailed weekly

This is a plug-and-play VOC research workflow, great for founders, marketers, and product teams who want actionable data from real customers without doing manual review scraping or summarizing.
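Illustrative only: a Code-node style sketch of the "aggregate review text" step, joining the reviews returned by Dumpling AI into one block for the LangChain Agent prompt. The response field names (reviews, text, rating) are assumptions; inspect the actual Dumpling AI output and adjust accordingly.

```javascript
// Assumed Dumpling AI response fields (reviews, text, rating); adjust to the real payload.
const reviews = $json.reviews ?? [];

const reviewBlock = reviews
  .slice(0, 30)
  .map((r, i) => `${i + 1}. (${r.rating}★) ${r.text}`)
  .join("\n");

return [{
  json: {
    businessName: $json.businessName,
    placeId: $json.placeId,
    reviewBlock,
  },
}];
```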