by Jimleuk
This n8n workflow is a fun way to query and search over the credentials on your n8n instance.

**Good to know**

Your credentials remain safe: this workflow does not decrypt credentials or use any decrypted data.

**Example usage**

- "Which workflows are using Slack and Google Calendar?"
- "Which workflows have AI in their name but are not using OpenAI?"

**How it works**

- Using the n8n API, it fetches all workflow data on the instance. Workflow data contains references to the credentials used, so these are extracted.
- After some necessary reformatting, the workflows and their credential metadata are stored in a SQLite database.
- Next, an AI agent is used with a custom SQL tool that reads the SQLite database created in the previous step. The agent is instructed to run SQL queries against our workflow-credentials table whenever the user asks about credentials (see the sketch below).

**Requirements**

You'll need an n8n API key. Please note that only workflows are within the scope of your API key.

**Customising the workflow**

Add extra table fields to the SQLite database to answer even more complex queries, such as a workflow status field to differentiate between active and inactive workflows.
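To make this concrete, here is a hypothetical sketch of the kind of query the SQL tool might run. The table schema and credential type names are illustrative assumptions, not the template's actual definitions:

```python
# Hypothetical schema and query for the workflow-credentials table.
import sqlite3

conn = sqlite3.connect("workflows.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS workflow_credentials (
        workflow_id     TEXT,
        workflow_name   TEXT,
        credential_type TEXT  -- e.g. 'slackApi', 'googleCalendarOAuth2Api'
    );
""")

# "Which workflows are using Slack and Google Calendar?"
rows = conn.execute("""
    SELECT workflow_name
    FROM workflow_credentials
    WHERE credential_type IN ('slackApi', 'googleCalendarOAuth2Api')
    GROUP BY workflow_id, workflow_name
    HAVING COUNT(DISTINCT credential_type) = 2;
""").fetchall()
print(rows)
```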
by David Roberts
**AI evaluation in n8n**

This is a template for n8n's evaluation feature. Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

**How it works**

This template shows how to calculate a workflow evaluation metric: whether a category matches the expected one. The workflow takes support tickets and generates a category and priority, which are then compared with the correct answers in the dataset.

- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one. More info
- Once the category is generated by the agent, we check whether it matches the expected one in the dataset (a minimal sketch of this check follows below)
- Finally, we pass this information back to n8n as a metric
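A minimal sketch of the category-match metric, assuming a simple normalized string comparison; the lowercasing and trimming are illustrative choices, not necessarily what the template does:

```python
# Category-match metric: 1 if the generated category equals the expected one,
# 0 otherwise. Normalization is an assumption added for robustness.
def category_match(generated: str, expected: str) -> int:
    return int(generated.strip().lower() == expected.strip().lower())

print(category_match("Billing", "billing"))  # -> 1
print(category_match("Billing", "Refunds"))  # -> 0
```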
by Angel Menendez
**Who is this for?**

This workflow is for professionals and teams who want to automate LinkedIn message replies with intelligent, human-like responses, without losing control over tone or accuracy. Ideal for founders, sales teams, DevRel, or community managers handling high-volume inbound messages.

**What problem is this workflow solving?**

Responding to every LinkedIn message manually is slow and inconsistent, and basic AI bots generate replies without context or nuance. This subworkflow solves both problems by using structured message routing from Notion and profile insights from UniPile to craft smart, context-aware responses.

**What this workflow does**

This workflow takes the sender's message and profile (from LinkedIn Auto Message Router with Request Detection) and references your centralized Notion database of message types. It uses that to either match the message to a known response or generate a new one using OpenAI's GPT model, all while following professional tone guidelines.

This is the third workflow in a 3-part automation system:

- Receives data from LinkedIn Auto Message Router with Request Detection
- Uses the UniPile LinkedIn Profile Lookup Subworkflow to enrich responses based on follower count or org data

**Example use case**

If a message comes from someone with low reach (e.g., under 1,000 followers), the AI politely deflects a meeting request. If an influencer reaches out, the AI immediately offers a booking link. Your team controls this logic by updating the Notion database; no edits to the workflow are required. (A sketch of this routing logic follows below.)

**Setup**

1. Connect this workflow as a subworkflow in your router or Slack approval flow
2. Store your Notion API key and database ID in n8n
3. Provide the following parent inputs:
   - message – the LinkedIn message text
   - sender – name of the sender
   - chatid – session ID (optional, for memory)
   - linkedinprofile – enriched array with LinkedIn context (follower count, connection info, etc.)
4. Add your preferred AI model credentials (supports OpenAI, Gemini, or Ollama)
5. Optional: customize the system prompt to better match your brand voice

**How to customize this workflow to your needs**

- Update the Notion schema to include industry-specific categories or actions
- Change the AI tone (e.g., humorous, more corporate, etc.)
- Add conditional logic for auto-sending messages without Slack approval
- Extend to support multiple platforms (e.g., email, X/Twitter, Instagram DMs)
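As a rough illustration of the reach-based routing above, here is a hedged Python sketch. The field name, threshold handling, and strategy labels are illustrative assumptions, not the subworkflow's actual code:

```python
# Hypothetical sketch of the reach-based routing described above.
def choose_reply_strategy(profile: dict, message_type: str) -> str:
    """Pick a response strategy from the sender's LinkedIn context."""
    followers = profile.get("follower_count", 0)

    if message_type == "meeting_request":
        if followers < 1_000:
            return "polite_deflection"   # low reach: decline gracefully
        return "offer_booking_link"      # influencer: fast-track a meeting
    return "match_notion_response"       # otherwise, use the Notion playbook

print(choose_reply_strategy({"follower_count": 250}, "meeting_request"))
# -> polite_deflection
```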
by Yang
🧾 What this workflow does

This workflow automatically generates avatar-style videos from the latest AI-related news using Dumpling AI and HeyGen. It runs every hour, scrapes trending articles, turns them into 30–60 second spoken scripts with GPT-4o, and produces short avatar videos with HeyGen. Finally, it logs the final video URL in a Google Sheet.

👤 Who is this for

- Newsletters and creators who want to automate AI trend updates
- Content marketers generating short-form video content
- Product teams experimenting with AI-generated summaries
- Automation enthusiasts combining LLMs + video + trending data

⚙️ How to set up

🔐 Requirements

- **Dumpling AI API Key** stored securely as an HTTP Header credential
- **HeyGen API Key** added as an HTTP Header credential
- **OpenAI API Key** for GPT-4o (can use GPT-4o-mini if preferred)
- **Google Sheets account** with one column: Video link

🛠 Step-by-step setup

1. Google Sheet setup: create a Google Sheet with a single column named Video link.
2. Update credentials: use n8n's credential manager to add tokens for Dumpling AI, HeyGen, OpenAI, and Google Sheets.
3. Optional customizations:
   - In the "Dumpling AI: Search AI News" node, you can change "query": "AI Agent" to other trending keywords (e.g., "Generative AI", "Autonomous Agents", etc.)
   - Update the avatar_id and voice_id in the HeyGen request to match your preferred look/sound

🧠 How it works

1. The Schedule Trigger runs hourly.
2. Dumpling AI searches for fresh news related to "AI Agent."
3. The top 4 news links are scraped for full content.
4. Articles are merged and fed into GPT-4o via a LangChain Agent to produce a casual, conversational video script.
5. HeyGen creates a video using the script, avatar, and voice.
6. The workflow waits until the video rendering is complete (see the polling sketch below).
7. Once done, the final video link is logged into Google Sheets.

🧪 Customization Ideas

- Change the interval (e.g., every 6 hours, daily)
- Swap avatar/voice in HeyGen to fit your brand
- Expand to post the video directly to social media
- Add image background or B-roll overlays using Creatomate

This is a fast, automated pipeline to create explainer-style AI news updates using real-time data and generative video tools.
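A hedged sketch of the rendering wait in step 6. The HeyGen endpoint path, header name, and response fields are assumptions based on HeyGen's public API docs; check them against the current reference:

```python
# Poll HeyGen until the video is rendered, then return its URL.
import time
import requests

HEYGEN_API_KEY = "your-heygen-api-key"  # placeholder

def wait_for_video(video_id: str, interval_s: int = 30, timeout_s: int = 900) -> str:
    """Poll the (assumed) status endpoint until rendering completes."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = requests.get(
            "https://api.heygen.com/v1/video_status.get",
            headers={"X-Api-Key": HEYGEN_API_KEY},
            params={"video_id": video_id},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        if data["status"] == "completed":
            return data["video_url"]        # logged to Google Sheets in the workflow
        if data["status"] == "failed":
            raise RuntimeError(f"Rendering failed: {data}")
        time.sleep(interval_s)              # equivalent of n8n's Wait node loop
    raise TimeoutError("Video did not finish rendering in time")
```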
by Lucas Walter
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**AI dental appointment booking with Google Calendar and Sheets**

**Who's it for**

This workflow is perfect for dental practices, medical offices, and healthcare providers who want to automate their appointment scheduling process. It's ideal for practices that receive high volumes of appointment requests and want to reduce manual booking while maintaining accurate patient records.

**What it does**

This AI-powered voice agent handles complete appointment booking workflows for "Pearly Whites Dental." When patients call or submit requests, the system:

- Analyzes the request using Google Gemini AI to understand patient needs
- Checks calendar availability in real time via the Google Calendar integration
- Automatically finds and offers up to 2 available appointment slots when the preferred time isn't available
- Books confirmed appointments directly to the practice calendar
- Logs all patient information (name, insurance, concerns) to Google Sheets for record-keeping
- Maintains conversation context across interactions for natural dialogue flow

The workflow operates in the Central Time Zone and assumes standard business hours (8 AM - 5 PM, excluding lunch).

**How it works**

The system receives webhook requests containing patient interaction data. The AI agent processes this information and determines which tools to use based on the request type. For availability checks, it intelligently searches multiple time slots in 30-minute increments until it finds suitable options (see the sketch below). All appointments are automatically formatted as "Dental Appointment | [Patient Name]" and logged with complete patient details.

**Requirements**

- Google Calendar API access with OAuth2 credentials
- Google Sheets API access for patient data logging
- Google Gemini API key for AI processing
- Webhook endpoint for receiving requests
- Pre-configured Google Calendar and Sheets document

**How to set up**

1. Configure Google Calendar credentials in the calendar tool nodes
2. Set up the Google Sheets integration with your patient tracking spreadsheet
3. Add your Google Gemini API key to the language model node
4. Update the calendar ID in both calendar nodes to match your practice calendar
5. Modify the Google Sheets document ID to point to your patient records sheet
6. Test the webhook endpoint to ensure proper request processing

**How to customize the workflow**

- **Adjust business hours** by modifying the availability checking logic in the system prompt
- **Change appointment duration** by updating the end time calculation (currently set to 1 hour)
- **Modify patient data fields** by updating the Google Sheets column mapping
- **Update practice name** by changing "Pearly Whites Dental" references in the system prompt
- **Customize response format** by adjusting the AI agent's instructions for different appointment types
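For intuition, here is a hedged Python sketch of that slot search. The workflow itself implements this via the agent's system prompt and calendar tool; the noon-to-1 PM lunch window and the `is_free` calendar check are illustrative assumptions:

```python
# Scan 30-minute increments within business hours, collecting up to two
# free 1-hour slots that don't touch the (assumed) lunch break.
from datetime import datetime, timedelta

def find_open_slots(day: datetime, is_free, max_slots: int = 2) -> list:
    """Return up to max_slots free 1-hour start times on `day` (Central Time)."""
    base = day.replace(hour=0, minute=0, second=0, microsecond=0)
    open_, close = base + timedelta(hours=8), base + timedelta(hours=17)
    lunch_start, lunch_end = base + timedelta(hours=12), base + timedelta(hours=13)

    slots, t = [], open_
    while t + timedelta(hours=1) <= close and len(slots) < max_slots:
        end = t + timedelta(hours=1)
        outside_lunch = end <= lunch_start or t >= lunch_end
        if outside_lunch and is_free(t, end):
            slots.append(t)
        t += timedelta(minutes=30)          # search in 30-minute increments
    return slots

# Example with a stub calendar that is free from 2 PM onward:
day = datetime(2024, 6, 3)
print([t.strftime("%H:%M") for t in find_open_slots(day, lambda s, e: s.hour >= 14)])
# -> ['14:00', '14:30']
```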
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Relevance", which in this scenario measures the relevance of the agent's response to the user's question.

The scoring approach is adapted from the open-source evaluations project RAGAS; you can see the source here: https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_relevance.py

**How it works**

- This evaluation works best for Q&A agents.
- For our scoring, we analyse the agent's response and ask another AI to generate a question from it. This generated question is then compared to the original question using cosine similarity (see the sketch below).
- A high score indicates relevance and the agent's successful ability to answer the question, whereas a low score means the agent may have added too much irrelevant information, gone off script, or hallucinated.

**Requirements**

- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
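A minimal sketch of the scoring step, assuming an `embed()` function backed by your embedding model of choice; the function is a placeholder, not part of the template:

```python
# Relevance score: embed the original question and the question regenerated
# from the agent's answer, then take cosine similarity.
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical usage: embed() would call your embedding model.
# score = cosine_similarity(embed(original_question), embed(generated_question))
# A score near 1.0 means the answer stays on-topic; a low score flags drift.
```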
by Oliver Bardenheier
🛠️ Setup Guide: 'Get OVH Invoices to Google Sheets'

Author: Oliver Bardenheier

**Who is this for?**

This workflow is for all users who have services (domains, bare metal, VPS, cloud, etc.) with the provider OVH.com (European API). It automatically retrieves invoice data and files and puts the data in a Google Spreadsheet for further processing.

**What problem is this workflow solving? / use case**

Currently, invoices from OVH do not arrive as a mail attachment; the mail contains just a link. So the receiver has to be logged in to the OVH account to download the file, which is even more effort if one is using 2FA. This workflow retrieves all information through the OAuth2 token.

**What this workflow does**

This workflow automatically retrieves invoice data and files from your OVH.com account and puts the data in a Google Spreadsheet for further processing. It also saves each invoice PDF to a (yearly) folder in your Google Drive.

**Setup**

1. Make a copy of this Google Sheet Template.
2. Set the timeframe for the query to your liking in "Query Latest OVH Invoices". You could set an email trigger before it and make the frame only one day.
3. Log into your OVH account and get your credentials there, authenticating using OAuth2 Authorization Code ("Login with OVHcloud SSO"). You need to authorize the OVHcloud API console. If this worked fine, you'll see a green text: "Access Token Received".
4. Head over to the OVH API Console to get your token.
5. Set up Header Auth in the HTTP nodes:
   - Authentication = Generic Credential Type
   - Generic Auth Type = Header Auth
   - Header Auth = your OVH header credentials:
     a) In every API call in the console you'll find a curl example; take the data from the line including: -H "authorization: Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......"
     b) Create a new credential in n8n for the header auth. Put authorization in the 'name' field and copy your token, including Bearer, into the value field: 'Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......'
   (See the request sketch below for how these headers are used.)

**How to customize this workflow to your needs**

- You can add a mail trigger that activates on every incoming invoice mail from OVH.
- Adjust the timeframe to get invoices from a certain time period, or remove the time variables completely to get ALL invoices.
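To make the header-auth setup concrete, here is a hedged Python sketch of the requests the HTTP nodes perform. The /me/bill endpoint, date parameters, and response fields follow the OVH EU API docs as understood here; confirm them in the OVH API console:

```python
# List invoices within a timeframe, then fetch details for each.
import requests

OVH_TOKEN = "Bearer eyJhxxxx..."        # token from the API console, including "Bearer"
headers = {"authorization": OVH_TOKEN}  # same header the n8n credential stores

# List invoice IDs within a timeframe (the date filters are optional).
bill_ids = requests.get(
    "https://eu.api.ovh.com/1.0/me/bill",
    headers=headers,
    params={"date.from": "2024-01-01", "date.to": "2024-01-31"},
    timeout=30,
).json()

# Fetch details (including the PDF link) for each invoice.
for bill_id in bill_ids:
    bill = requests.get(
        f"https://eu.api.ovh.com/1.0/me/bill/{bill_id}",
        headers=headers,
        timeout=30,
    ).json()
    print(bill_id, bill.get("pdfUrl"))
```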
by Alex Kim
🎬 Google Veo 3 Prompt and Video Generator via Leonardo.ai + Claude 4

Transform text descriptions into cinematic videos using Google's Veo 3 model through Leonardo.ai's platform!

🚀 What This Workflow Does

This advanced automation pipeline takes your creative ideas and turns them into professional-quality videos using Google's powerful Veo 3 model (accessed via Leonardo.ai), enhanced by Claude 4's sophisticated prompt engineering.

✨ Key Features

- **🤖 AI-Powered Prompt Enhancement**: Uses Claude 4 Sonnet with Wikipedia integration to craft optimal Google Veo 3 prompts
- **🎥 Professional Video Generation**: Leverages Google's Veo 3 model through Leonardo.ai for high-quality text-to-video conversion
- **☁️ Automatic Cloud Storage**: Videos are automatically saved to your Google Drive
- **📋 Structured Prompting**: Follows Google Veo 3 best practices with 8 essential elements (Subject, Context, Action, Style, Camera Motion, Composition, Ambiance, Audio)
- **⚡ Hands-Off Processing**: Set it and forget it; the workflow handles the entire pipeline

🔧 How It Works

1. Input Your Concept - describe your video idea in the "Video Context" node
2. AI Enhancement - Claude 4 transforms your description into a cinematic Google Veo 3 prompt using advanced techniques
3. Video Generation - Google's Veo 3 model (via Leonardo.ai) creates your video (720p resolution, ~8 seconds)
4. Smart Waiting - a 4-minute processing buffer ensures completion
5. Auto-Download - retrieves the finished video from Leonardo's servers
6. Cloud Storage - uploads directly to your Google Drive folder

💡 Perfect For

- **Content Creators** looking to automate video production
- **Marketing Teams** needing quick promotional videos
- **Educators** creating engaging visual content
- **Social Media Managers** generating scroll-stopping content
- **Creative Professionals** exploring AI-assisted filmmaking

📋 Requirements

- Leonardo AI account with API access
- Anthropic API key (Claude 4 Sonnet)
- Google Drive integration
- n8n instance (cloud or self-hosted)

👨‍💻 About the Creator

Created by: AlexK1919 - AI-Native Workflow Automation Architect, n8n Ambassador and Verified Partner, Co-Founder @ WotAI

If you'd like to review more Google Veo 3 prompts organized by business category, check out over 9,000+ free, pre-made prompts at: Google Veo 3 Prompts

📄 License

This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license.
Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/

🎯 Example Output

Input: "Star Wars stormtrooper digging for uranium in desert, saying something funny"

The AI generates a structured prompt with:

- **Subject**: detailed character description
- **Context**: desert environment specifics
- **Action**: dynamic digging movements
- **Style**: cinematic vlog aesthetic
- **Camera**: appropriate angles and movement
- **Audio**: dialogue, sound effects, and music

⚙️ Setup Notes

- **Character Limit**: prompts are optimized for Leonardo's 1,500 character API limit
- **Processing Time**: allow 4+ minutes for Google Veo 3 video generation
- **Quality**: 720p resolution with native audio generation
- **Consistency**: uses advanced Google Veo 3 prompting for reliable results

🔄 Customization Options

- Modify the prompt engineering system message for different styles
- Adjust video resolution and model parameters
- Change the storage destination (Google Drive folder)
- Add post-processing steps or notifications

📈 Why This Workflow Rocks

Unlike simple text-to-video tools, this workflow:

- **Intelligently enhances** your prompts using AI for Google Veo 3
- **Follows industry best practices** for Google Veo 3 prompting
- **Automates the entire pipeline** from idea to stored video
- **Leverages multiple AI models** for superior results
- **Handles technical details** like API limits and timing

🚨 Pro Tips

- Be specific in your initial context; detail creates better videos
- The workflow includes comprehensive Google Veo 3 prompting guidelines
- Videos are typically 5-8 seconds; plan accordingly for longer content
- Experiment with different styles and camera movements optimized for Veo 3
- The AI can access Wikipedia for factual enhancement

Ready to revolutionize your video creation process? Import this workflow and start generating professional videos with just a text description! Perfect for anyone looking to harness the power of AI for content creation.

Tags: #veo3 #GoogleVeo3 #AI #VideoGeneration #Leonardo #Claude #Automation #ContentCreation #GoogleAI
by Pavel Zamorev
This n8n template automates the transformation of raw meeting notes into structured tasks and documents using GPT (or another model), syncing them to Notion and TickTick via a Telegram bot.

**Use Cases**

- Automate note-taking and formatting for daily standups, brainstorming sessions, or client calls.
- Reduce cognitive load by eliminating manual tracking of ideas and tedious formatting.
- Convert discussions into actionable tasks instantly, with tasks in TickTick and structured notes in Notion.

**How It Works**

1. Capture Notes: send raw meeting notes to a Telegram bot.
2. AI Processing: the workflow sends the text to the AI, which removes duplicates and extracts key points, formats the content into structured Markdown notes for Notion, and identifies tasks with deadlines (e.g., "- Prepare presentation (Responsible: John, Deadline: Friday)").
3. Task Parsing: extracts task titles, removing metadata like "Responsible" and "Deadline".
4. Review & Edit: the bot returns formatted notes and tasks for review in Telegram.
5. Sync & Publish: notes are published to a Notion database; tasks are exported to TickTick via the API.
6. Confirmation: a Telegram reaction (e.g., a 👌 emoji) confirms successful processing.

**Setup Instructions**

1. Set up the Telegram bot:
   - Create a Telegram bot via BotFather and obtain an API token.
   - Add the token to the "Telegram Trigger" and "Send-Edited-Notes" nodes under credentials (telegramApi).
2. Configure OpenAI:
   - Obtain an OpenAI API key and add it to the "Edit-Notes" node (openAiApi credentials).
   - Ensure the model is set to gpt-4.1-mini in the node parameters.
3. Set up Notion:
   - Create a Notion database for notes (e.g., "Meetings").
   - Add the database ID to the "Create a Database Page" node (databaseId).
   - Configure Notion API credentials (notionApi) in the node.
4. Set up TickTick:
   - Obtain a TickTick API key and add it to the "Create a Task" node (tickTickOAuth2Api credentials).
   - Specify your TickTick project ID in the node (projectId).
5. Deploy the workflow:
   - Ensure your n8n instance is self-hosted to support community nodes (TickTick, Notion).
   - Activate the workflow in n8n.
6. Test: send a test message to the Telegram bot (e.g., "Discussed project timeline. Tasks: - Prepare slides (Responsible: Alice, Deadline: Friday)"). Verify that notes appear in Notion, tasks appear in TickTick, and a 👌 reaction appears in Telegram.

**Configuration Examples**

Telegram Trigger:

```json
{
  "parameters": {
    "updates": ["message"],
    "additionalFields": {}
  },
  "credentials": {
    "telegramApi": {
      "id": "your-telegram-api-id",
      "name": "meeting notes"
    }
  }
}
```

OpenAI prompt (in the "Edit-Notes" node):

```text
Analyze the quick meeting notes from {{ $json.message.text }}
Generate meeting notes and a task list in the following format:
Meeting Notes:
- [Note 1]
- [Note 2]

Tasks:
- [Task 1]
- [Task 2]
```

Notion database page:

```json
{
  "parameters": {
    "resource": "databasePage",
    "databaseId": "your-notion-database-id",
    "title": "MN {{ $now }}",
    "blockUi": {
      "blockValues": [
        {
          "textContent": "{{ $json.message.text }}"
        }
      ]
    }
  }
}
```

**Requirements**

- An OpenAI API key (or another model).
- APIs: pre-configured Notion and TickTick API credentials are required. The template includes setup guides.
- Setup: uses community nodes, requiring a self-hosted n8n instance.

**Customizing This Workflow**

- Replace the Telegram bot with a webhook or form for alternative inputs (e.g., mobile apps).
- Modify the OpenAI prompt in the "Edit-Notes" node to customize note and task formats.
- Add filters in the "Split Notes and Tasks" node to prioritize tasks (e.g., ++#urgent++).
- Integrate Google Calendar via an additional HTTP Request node to auto-set deadlines based on text (e.g., "by Friday").
by Adam Janes
**How it works**

- The automation loads rows from a Google Sheet of leads that you want to contact.
- It makes a Google search via Apify for LinkedIn links based on the lead's first name, last name, and company.
- Another Apify actor fetches the right LinkedIn profile based on the first profile which is returned.
- The same process is done for the company that the lead works for, giving extra context. If the lead has a current company listed on their LinkedIn, we use that URL for the lookup rather than doing a separate Google search.
- A call is made to OpenRouter to get an LLM to generate an email based on a prompt designed for personalized outreach.
- An email is sent via a Gmail node.

**Set up steps**

1. Connect your Google Sheets and Gmail accounts to use these APIs.
2. Make an account with Apify and enter your credentials.
3. Set your details in the "Set My Data" node to customize the workflow around your company and value proposition.
4. We recommend changing the prompt in the "Generate Personalized Email" node to match the tone of voice you want your agent to have. You can change the guidelines, e.g. whether the agent introduces itself, and give more examples in the style you want to improve the output.
by Ranjan Dailata
**Who this is for**

Extract & Summarize Yelp Business Review is an automated workflow that extracts Yelp business reviews using Bright Data Web Unlocker, processes and formats the raw data, summarizes it using Google Gemini's LLM, and forwards the concise summary with the review response to a specified webhook endpoint.

This workflow is tailored for:

- Local SEO specialists who need structured insights from Yelp reviews to optimize listings.
- Business owners wanting quick summaries of what customers love or complain about.
- Reputation managers who monitor brand sentiment and identify customer pain points.
- Data analysts & researchers extracting Yelp review patterns at scale.
- AI product builders needing clean Yelp review data as input for their LLMs or recommender systems.

**What problem is this workflow solving?**

Yelp reviews are rich in customer sentiment but messy to work with manually. This workflow solves:

- The pain of scraping Yelp review content manually.
- The challenge of building structured data with a summary.
- The need for structured outputs suitable for analysis, reports, or AI input.

**What this workflow does**

This automated pipeline does the following:

- **Bright Data Integration**: queries Yelp and scrapes business listing data using Bright Data's Web Unlocker (see the request sketch below).
- **Structured Data Formatting**: formats the Yelp review data into a structured JSON response.
- **Google Gemini Summarization**: sends the cleaned reviews to Google Gemini to produce a concise summary.
- **Output Delivery**: returns the structured response with the concise summary over the webhook endpoint.

**Setup**

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Yelp business review URL with the Bright Data zone by navigating to the "Set Yelp URL with the Bright Data Zone" node.
6. Update the "Webhook Notifier for the merged response" node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**

This workflow is built to be flexible, whether you're a market researcher, entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Target specific business categories**: update the Yelp business review input to scrape different businesses, like gyms, salons, etc.
- **Limit reviews**: add filters by description, location, or page range to get the top reviews.
- **Tweak the data extraction node**: update the Structured Data Extractor node's output parser to build the JSON response with the appropriate fields or attributes.
- **Tweak the summarization prompt**: modify the Gemini prompt to generate a more comprehensive summary.
- **Send output to other destinations**: replace the webhook URL to forward output to Google Sheets, Airtable, Slack or Discord, or custom API endpoints.
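For reference, a hedged sketch of the Web Unlocker request behind the scraping step. The endpoint, body fields, and zone name are assumptions based on Bright Data's documented API; confirm them in your Bright Data dashboard:

```python
# Fetch a Yelp listing's raw HTML through Bright Data's Web Unlocker API.
import requests

BRIGHT_DATA_TOKEN = "XXXXXXXXXXXXXX"  # Web Unlocker token from your zone

resp = requests.post(
    "https://api.brightdata.com/request",
    headers={"Authorization": f"Bearer {BRIGHT_DATA_TOKEN}"},
    json={
        "zone": "web_unlocker1",                          # your Web Unlocker zone name
        "url": "https://www.yelp.com/biz/some-business",  # target Yelp listing
        "format": "raw",                                  # return the raw page HTML
    },
    timeout=120,
)
resp.raise_for_status()
html = resp.text  # raw review HTML, ready for structured extraction
```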
by Max Tkacz
**Who is this for**

This workflow is perfect for teams and individuals who manage extensive data in Notion and need a quick, AI-powered way to interact with their databases. If you're looking to streamline your knowledge management, automate searches, and get faster insights from your Notion databases, this workflow is for you. It's ideal for support teams, project managers, or anyone who needs to query specific data across multiple records or within individual pages of their Notion setup.

Check out the Notion template this assistant is set up to use: https://www.notion.so/templates/knowledge-base-ai-assistant-with-n8n

**How it works**

The Notion Database Assistant uses an AI Agent built with Retrieval-Augmented Generation (RAG) to query this knowledge-base-style Notion database. The assistant can search across multiple properties, like tags or question, and retrieves content from inside individual Notion pages for additional context.

Key features include:

- Querying the database with flexible filters.
- Searching within individual Notion pages and extracting relevant blocks.
- Providing a reference link to the exact Notion pages used to inform its responses, ensuring transparency and easy verification.

This assistant uses two HTTP request tools: one for querying the Notion database and another for pulling data from within specific pages (see the sketch below). It streamlines knowledge retrieval, offering a conversational, AI-driven way to interact with large datasets.

**Set up**

Find basic set up instructions inside the workflow itself or watch a quickstart video 👇
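As an illustration of those two tools, here is a hedged Python sketch against Notion's public API; the database ID, filter property, and schema are placeholders to adapt to your own knowledge base:

```python
# Tool 1 queries the database; tool 2 pulls block content from matched pages.
import requests

NOTION_TOKEN = "secret_..."  # your Notion integration token
HEADERS = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}

# Tool 1: query the database with a flexible filter (here, a "Tags" property).
db_id = "your-database-id"
results = requests.post(
    f"https://api.notion.com/v1/databases/{db_id}/query",
    headers=HEADERS,
    json={"filter": {"property": "Tags", "multi_select": {"contains": "billing"}}},
    timeout=30,
).json()["results"]

# Tool 2: pull the block content of each matched page for extra context.
for page in results:
    blocks = requests.get(
        f"https://api.notion.com/v1/blocks/{page['id']}/children",
        headers=HEADERS,
        timeout=30,
    ).json()["results"]
    print(page["url"], len(blocks), "blocks")  # the page URL doubles as the reference link
```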