by Oliver Bardenheier
# 🛠️ Setup Guide: Get OVH Invoices to Google Sheets

Author: Oliver Bardenheier

## Who is this for?

This workflow is for all users who have services (domains, bare metal, VPS, cloud, etc.) with the provider OVH.com (European API). It automatically retrieves invoice data and files and puts the data in a Google spreadsheet for further processing.

## What problem is this workflow solving? / Use case

Currently, invoices from OVH do not arrive as a mail attachment; the mail only contains a link, so the recipient has to be logged in to the OVH account to download the file. That is even more effort when 2FA is enabled. This workflow retrieves all information through the OAuth2 token instead.

## What this workflow does

This workflow automatically retrieves invoice data and files from your OVH.com account and puts the data in a Google spreadsheet for further processing. It also saves each invoice PDF to a (yearly) folder in your Google Drive.

## Setup

1. Make a copy of this Google Sheet template.
2. Set the timeframe for the query to your liking in "Query Latest OVH Invoices". You could set an email trigger before it and narrow the frame to a single day.
3. Log into your OVH account and get your credentials here:
   - Authentication uses OAuth2 Authorization Code ("Login with OVHcloud SSO").
   - You need to authorize the OVHcloud API console. If this worked, you'll see a green text: "Access Token Received".
4. Head over to the OVH API console to get your token.
5. Set up header auth in the HTTP nodes:
   - Authentication = Generic Credential Type
   - Generic Auth Type = Header Auth
   - Header Auth = your OVH header credentials:
     - a) In every API call in the console you'll find a curl example; take the data from the line containing `-H "authorization: Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......"`
     - b) Create a new credential in n8n for the header auth. Put `authorization` in the name field and copy your token, including `Bearer`, into the value field: `Bearer eyJhxxxxxxxxxxxxxxxxxxxxxxxxxxxxx......`

## How to customize this workflow to your needs

- Add a mail trigger that activates on every incoming invoice mail from OVH.
- Adjust the timeframe to get invoices from a certain period, or remove the time variables completely to get ALL invoices.
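For reference, the header-auth request the HTTP nodes perform is roughly equivalent to the sketch below. The `/me/bill` path and `date.from`/`date.to` filters are assumptions based on OVH's European API and should be checked against the API console for your account:

```javascript
// Hedged sketch of the invoice query, assuming the OVH European API's
// /me/bill endpoint; the token and date range are placeholders.
const BASE = 'https://eu.api.ovh.com/1.0';

async function listInvoiceIds(bearerToken) {
  const res = await fetch(`${BASE}/me/bill?date.from=2024-01-01&date.to=2024-12-31`, {
    headers: { authorization: `Bearer ${bearerToken}` }, // same header as the n8n credential
  });
  if (!res.ok) throw new Error(`OVH API error: ${res.status}`);
  return res.json(); // an array of invoice IDs; details come from /me/bill/{billId}
}
```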
by Daniel Shashko
## How it Works

Disclaimer: This template is for self-hosted n8n instances only.

This workflow is designed for developers, data analysts, and automation enthusiasts seeking to automate personalized news collection and delivery. It seamlessly combines n8n, OpenAI (e.g., GPT-4.1), and Bright Data's Model Context Protocol (MCP) to collect, extract, and email the latest global news headlines.

On a schedule or via a manual trigger, the workflow prompts an AI agent to gather fresh news. The agent leverages context-aware memory and integrated MCP tools to conduct both search engine queries and direct web page scraping in real time, delivering more than just meta search results: it extracts actual on-page headlines and trusted links. Results are formatted and delivered automatically by email via your SMTP provider, requiring zero manual effort once configured.

## Who is this for?

- Developers, data engineers, or automation pros wanting an AI-powered, fully automated newsfeed
- Teams needing up-to-date news digests from trusted global sources
- Anyone self-hosting n8n who wishes to combine advanced LLMs with real-time web data

## Setup Steps

Setup time: approx. 15–30 minutes (n8n install, API configuration, node setup)

Requirements:

- Self-hosted n8n instance
- OpenAI API key
- Bright Data MCP account credentials
- SMTP/email provider details

Install the community MCP node (n8n-nodes-mcp) for n8n and set up Bright Data MCP access. Then configure these nodes:

- **Schedule Trigger**: for automated delivery at your chosen interval.
- **Edit Fields**: to inject your AI news collection prompt (an illustrative prompt is sketched below).
- **AI Agent**: connects to OpenAI and MCP, enabled with memory for context.
- **OpenAI Chat Model**: connects via your OpenAI credentials.
- **MCP Clients**: configure at least two, one for search (e.g. search_engine) and one for scraping (e.g. scrape_as_markdown).
- **Send Email**: set up with recipient and SMTP information.

Credentials must be entered into their respective nodes for successful execution.

## Customization Guidance

- **Prompt Tweaks**: refine your AI news prompt to target specific genres, regions, or sources, or broaden/narrow the coverage as needed.
- **Tool Configuration**: carefully define tool descriptions and parameters in the MCP client nodes so the agent can pick the best tool for each step (e.g., only scrape real news sites).
- **Delivery Settings**: adjust email recipient(s) and SMTP details as needed.
- **Workflow Enhancements**: use sticky notes in n8n for extended documentation, alternate prompts, or troubleshooting tips.
- **Run Frequency**: set the schedule as needed, from hourly to daily updates.

Once configured, this workflow will automatically gather, extract, and email curated news headlines and links, with no manual curation required!
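The prompt injected by the "Edit Fields" node might look like the following. The wording and the returned field names are illustrative assumptions, not the template's own prompt:

```javascript
// Illustrative news-collection prompt for the "Edit Fields" node; adapt the
// wording, count, and output fields to your needs.
const newsPrompt = `Collect today's top 10 global news headlines.
Use the search_engine tool to find stories from reputable outlets, then use
scrape_as_markdown to extract the actual on-page headline and canonical link.
Return a list of items with: headline, source, url.`;
```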
by Yang
## Who is this for?

This workflow is perfect for eCommerce teams, market researchers, and product analysts who want to track or extract product information from websites that restrict scraping tools. It's also useful for virtual assistants handling product comparison tasks.

## What problem is this workflow solving?

Many eCommerce and retail sites use dynamic content or anti-bot protections that make traditional scraping methods unreliable. This workflow bypasses those issues by taking a screenshot of the full page, using OCR to extract visible text, and summarizing product information with GPT-4o, all fully automated.

## What this workflow does

This workflow monitors a Google Sheet for new URLs. Once a new link is added, it performs the following steps:

1. **Trigger on New URL in Sheet** - watches for new rows added to a Google Sheet.
2. **Screenshot URL via Dumpling AI** - sends the URL to Dumpling AI's screenshot endpoint to capture a full-page image of the product webpage.
3. **Save Screenshot to Drive Folder** - uploads the screenshot to a specific Google Drive folder for reference or logging.
4. **Extract Text from Screenshot with Dumpling AI** - uses Dumpling AI's image-to-text endpoint to pull all visible content from the screenshot.
5. **Extract Product Info from Screenshot Text with GPT-4o** - sends the extracted raw text to GPT-4o, prompting it to identify structured product information such as product name, price, ratings, deals, and purchase options.
6. **Split Each Product Entry** - splits the GPT response (an array of product objects) so each product becomes an individual item for saving.
7. **Save Products info to Google Sheet** - appends each product's structured details to a separate sheet in the same spreadsheet.

## Setup

**Google Sheet**
- Create a Google Sheet with at least two sheets:
  - Sheet1 should contain a header row with a column labeled URL.
  - Sheet2 should contain the headers: Product Name, price, purchased, ratings, deal, buyingOptions.
- Connect your Google account in both the trigger and the final write-back node.

**Dumpling AI**
- Sign up at Dumpling AI.
- Create an API key and use it for both HTTP modules:
  - Screenshot URL via Dumpling AI
  - Extract Text from Screenshot with Dumpling AI
- The screenshot endpoint used is https://app.dumplingai.com/api/v1/screenshot (a hedged call sketch follows below).

**Google Drive**
- Create a folder for storing screenshots.
- In the Save Screenshot to Drive Folder node, select the correct folder or provide the folder ID.
- Make sure permissions allow uploading from n8n.

**OpenAI**
- Provide an API key for GPT-4o in the Extract Product Info from Screenshot Text with GPT-4o node.
- The prompt is structured to return product listings in JSON format.

**Split & Save**
- Split Each Product Entry takes the array of product objects from GPT and makes each one a separate execution.
- Save Products info to Google Sheet writes structured fields into Sheet2 under: Product Name, price, purchased, ratings, deal, buyingOptions.

## How to customize this workflow

- Adjust the GPT prompt to return different product fields (e.g., shipping info, product categories).
- Use a filter node to limit which types of products get written to the final sheet.
- Add sentiment analysis to analyze review content if available.
- Replace Google Drive with Dropbox or another file storage app.

## Notes

- Make sure you monitor your API usage on both Dumpling AI and OpenAI to avoid rate limits.
- This setup is great for snapshot-based extraction where scraping is blocked or unreliable.
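A minimal sketch of the screenshot call: the endpoint comes from the template, but the request body field (`url`) and the Bearer auth header format are assumptions to verify against Dumpling AI's API documentation:

```javascript
// Hedged sketch of the "Screenshot URL via Dumpling AI" HTTP call.
async function screenshotPage(apiKey, pageUrl) {
  const res = await fetch('https://app.dumplingai.com/api/v1/screenshot', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ url: pageUrl }), // full-page capture of the product page
  });
  if (!res.ok) throw new Error(`Dumpling AI error: ${res.status}`);
  return res.json(); // expected to contain the screenshot image or a link to it
}
```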
by Sherlockes
## What does this template help with?

Save the data of activities recorded and stored in Strava to a Google Sheets document.

## How it works

We have a Google Sheets spreadsheet where each row represents a Strava activity with the date, reference, distance, time, and elevation. Periodically, the workflow checks the latest activities in our Strava account to see if any are missing from the spreadsheet and adds them to the list. All fields must be formatted to match how they are stored in the Google Sheets spreadsheet (a hedged formatting sketch follows below).

## Set up instructions

- Complete the "Set up credentials" step when you first open the workflow. You'll need a Google Sheets and a Strava account.
- In the 'activities' node, enter the name of the file and the sheet where you want to save the imported data.
- In the 'Strava' node, select the corresponding credential.
- You can adjust the format of dates, times, and distances to your needs in the 'strava_last' node.

The rest of the information is available at sherblog.es

Template was created in n8n v1.72.1
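For reference, the kind of formatting the 'strava_last' node applies could look like this sketch. The Strava field names (distance in meters, moving_time in seconds, start_date in ISO 8601, total_elevation_gain in meters) come from Strava's activity API; the output formats are assumptions to adapt to your sheet:

```javascript
// Hedged sketch: convert one Strava activity into a spreadsheet row.
function formatActivity(activity) {
  const date = new Date(activity.start_date).toLocaleDateString('en-GB'); // dd/mm/yyyy
  const distanceKm = (activity.distance / 1000).toFixed(2);               // meters -> km
  const h = Math.floor(activity.moving_time / 3600);
  const m = Math.floor((activity.moving_time % 3600) / 60);
  const s = activity.moving_time % 60;
  return {
    date,
    reference: activity.id,
    distance: `${distanceKm} km`,
    time: `${h}:${String(m).padStart(2, '0')}:${String(s).padStart(2, '0')}`,
    elevation: `${activity.total_elevation_gain} m`,
  };
}
```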
by Jimleuk
> Note: This template requires a self-hosted community edition of n8n. It does not work on cloud.

## Try It Out

This n8n template shows how to validate API requests with Auth0 authorization tokens. Auth0 doesn't work with n8n's standard JWT auth option because:

1. Auth0 tokens use the RS256 algorithm.
2. RS256 JWT credentials in n8n require the user to supply private and public keys rather than a secret phrase.
3. Auth0 does not give you access to your Auth0 instance's private keys.

The solution is to handle JWT validation after the webhook is received, using the Code node.

## How it works

There are two approaches to validating Auth0 tokens: using your application's JWKS file or using your signing cert. Both solutions use the Code node to access Node.js libraries that verify the token.

- **JWKS**: the jwks-rsa library is used to validate the token against the application's JWKS URI hosted on Auth0 (a minimal sketch follows below).
- **Signing cert**: the application's signing cert is imported into the workflow and used to verify the token.

In both cases, when the token is found to be invalid, an error is thrown. However, because we can use error outputs on the Code node, the error does not stop the workflow; instead it is redirected to a 401 Unauthorized webhook response. When the token is validated, the webhook response is forwarded on the success branch with the token's decoded payload attached.

## How to use

- Follow the instructions as stated in each scenario's sticky notes.
- Modify the Auth0 details with those of your application and Auth0 instance.

## Requirements

- Self-hosted community edition of n8n
- Ability to install npm packages
- An Auth0 application and some way to get either the JWKS URL or the signing cert
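A minimal sketch of the JWKS approach inside a Code node, using the jsonwebtoken and jwks-rsa npm packages (self-hosted n8n must allow these external modules; the tenant URL is a placeholder):

```javascript
const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');

const client = jwksClient({
  jwksUri: 'https://YOUR_TENANT.auth0.com/.well-known/jwks.json',
});

// jsonwebtoken calls this to resolve the public key matching the token's kid.
function getKey(header, callback) {
  client.getSigningKey(header.kid, (err, key) => {
    if (err) return callback(err);
    callback(null, key.getPublicKey());
  });
}

function verifyToken(token) {
  return new Promise((resolve, reject) => {
    jwt.verify(token, getKey, { algorithms: ['RS256'] }, (err, decoded) => {
      // Rejecting routes execution to the Code node's error output, which the
      // workflow maps to a 401 webhook response.
      if (err) reject(err);
      else resolve(decoded); // decoded payload is attached on the success branch
    });
  });
}
```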
by Yaron Been
# 🤖 AI Cart Recovery Agent: Smart Abandoned Checkout Assistant

Transform abandoned carts into recovered sales with intelligent automation. This sophisticated n8n workflow monitors checkout abandonment, implements smart waiting periods, and sends AI-generated personalized recovery emails only when needed, maximizing conversions while respecting customer experience.

## 🔄 How It Works

This intelligent 7-step recovery system recovers lost sales automatically:

**Step 1: Initial Abandonment Detection**
The workflow fetches current abandoned checkout data from your e-commerce platform (Shopify, WooCommerce, etc.), identifying customers who added items but didn't complete their purchase.

**Step 2: Strategic Grace Period**
Instead of immediately sending recovery emails, the system waits 1 hour (customizable), giving customers natural time to complete their purchase without pressure or interruption.

**Step 3: Smart Re-verification**
After the waiting period, the workflow rechecks the abandonment status by fetching updated checkout data, ensuring accuracy before taking action.

**Step 4: Intelligent Decision Logic**
Advanced conditional logic compares the initial and updated abandonment lists, determining whether customers are still abandoned or completed their purchase during the grace period.

**Step 5: AI-Powered Email Generation**
For customers still showing abandonment, GPT generates personalized recovery emails featuring:
- Customer's actual name for personal connection
- Specific products left in their cart
- Friendly, non-pushy messaging tone
- Optional discount incentives
- Compelling call-to-action to complete the purchase

**Step 6: Automated Email Delivery**
Personalized recovery emails are sent directly to abandoned customers via Gmail or your preferred email service, maintaining professional branding and deliverability.

**Step 7: Comprehensive Activity Logging**
All recovery attempts are logged in Google Sheets for tracking, including customer details, email content, and campaign performance analytics.

## ⚙️ Setup Steps

### Prerequisites

- E-commerce platform with API access (Shopify, WooCommerce, BigCommerce)
- OpenAI API key for personalized email generation
- Gmail or SMTP email service for delivery
- Google Sheets for activity tracking and analytics
- n8n instance (cloud or self-hosted)

### E-commerce Platform Configuration

**Shopify Setup:**
- API Endpoint: `https://your-store.myshopify.com/admin/api/2023-10/checkouts.json`
- Authentication: `X-Shopify-Access-Token` header
- Required Permissions: Read checkouts, Read customers
- Parameters: `status=abandoned`

**WooCommerce Setup:**
- API Endpoint: `https://your-site.com/wp-json/wc/v3/orders`
- Authentication: Consumer Key/Secret or JWT
- Parameters: `status=pending`, `status=failed`
- Required Plugins: WooCommerce REST API

### Configuration Steps

#### 1. Credential Setup

- **E-commerce API**: store admin API access tokens or keys
- **OpenAI API Key**: GPT-4 access for intelligent email generation
- **Gmail OAuth2**: professional email delivery service
- **Google Sheets OAuth2**: activity logging and performance tracking

#### 2. Abandonment Detection Configuration

- **Monitoring Frequency**: set the workflow trigger schedule (hourly, daily)
- **Grace Period Duration**: customize the wait time (default: 1 hour)
- **Platform Integration**: configure the API endpoints for your specific platform (a hedged example follows below)
- **Data Filtering**: set criteria for what constitutes abandonment
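As a reference for the platform integration step, a hedged sketch of the abandonment-detection call using the Shopify endpoint quoted above; the store domain and access token are placeholders:

```javascript
// Hedged sketch: fetch the abandoned checkouts the workflow compares before
// and after the grace period.
async function fetchAbandonedCheckouts(shop, accessToken) {
  const url = `https://${shop}.myshopify.com/admin/api/2023-10/checkouts.json?status=abandoned`;
  const res = await fetch(url, {
    headers: { 'X-Shopify-Access-Token': accessToken },
  });
  if (!res.ok) throw new Error(`Shopify API error: ${res.status}`);
  const { checkouts } = await res.json();
  // Each checkout carries the customer and line items used later for the
  // personalized recovery email.
  return checkouts;
}
```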
#### 3. AI Email Customization

Default email generation includes:

- **Personalization Level**: customer name, product specifics, cart value
- **Tone Customization**: friendly, urgent, helpful, or premium
- **Discount Integration**: optional percentage or fixed amount offers
- **Brand Voice**: maintain consistent company messaging and style

#### 4. Recovery Campaign Settings

- **Email Timing**: optimal sending times based on customer time zones
- **Frequency Limits**: prevent over-emailing with cooldown periods
- **Segmentation Rules**: different approaches for high-value vs standard carts
- **Follow-up Sequences**: multi-email recovery campaigns with escalating incentives

#### 5. Performance Tracking Setup

- **Analytics Dashboard**: Google Sheets with recovery metrics and ROI
- **Success Tracking**: monitor completion rates and revenue recovered
- **A/B Testing**: compare different email approaches and timing
- **Customer Journey**: track from abandonment through recovery completion

## 🚀 Use Cases

### E-commerce Retailers

- **Fashion & Apparel**: recover high-value clothing and accessory purchases
- **Electronics**: target abandoned tech purchases with technical support offers
- **Home & Garden**: remind customers about seasonal or home improvement items
- **Beauty & Cosmetics**: recover abandoned skincare and makeup purchases

### Subscription & SaaS Businesses

- **Software Trials**: convert abandoned trial signups into paid subscriptions
- **Membership Sites**: recover incomplete membership purchases
- **Online Courses**: re-engage learners who abandoned course purchases
- **Digital Services**: follow up on abandoned service bookings or consultations

### B2B E-commerce

- **Office Supplies**: recover bulk order abandonments with volume discounts
- **Industrial Equipment**: follow up on high-value equipment quote requests
- **Professional Services**: re-engage businesses that abandoned service bookings
- **Software Licenses**: recover enterprise software purchase abandonments

### Specialty Retailers

- **Luxury Goods**: provide white-glove service for high-value abandoned purchases
- **Custom Products**: follow up on personalized or custom order abandonments
- **Seasonal Items**: time-sensitive recovery for holiday or event-specific products
- **Limited Edition**: create urgency for exclusive or limited availability items

### Service-Based Businesses

- **Travel & Hospitality**: recover abandoned hotel, flight, or package bookings
- **Event Tickets**: re-engage customers who abandoned concert or event purchases
- **Professional Services**: follow up on abandoned consultation or service bookings
- **Fitness & Wellness**: recover abandoned membership or class package purchases

## 🔧 Advanced Customization Options

### Multi-Platform Integration

Extend beyond single-platform monitoring:

- **Shopify Plus**: advanced checkout analytics and customer segmentation
- **WooCommerce**: custom post-purchase and abandonment tracking
- **Magento**: enterprise-level cart recovery with customer journey mapping
- **BigCommerce**: API-driven recovery with advanced personalization
- **Custom Platforms**: webhook-based abandonment detection and recovery

### Intelligent Email Sequencing

Create sophisticated recovery campaigns:

- **Progressive Incentives**: escalating discounts over multiple touchpoints
- **Behavioral Triggers**: different emails based on cart value and customer history
- **Seasonal Campaigns**: holiday-specific recovery messaging and offers
- **Win-Back Sequences**: long-term customer re-engagement beyond immediate recovery

### Advanced Personalization

Enhance AI-generated content with:

- **Purchase History Analysis**: reference previous purchases and preferences
- **Browsing Behavior**: include recently viewed items and categories
- **Geographic Personalization**: local offers, shipping options, or store locations
- **Demographic Targeting**: age, gender, or interest-based messaging customization

### Performance Optimization

Implement advanced tracking and optimization:

- **Revenue Attribution**: track exact recovery amounts and ROI calculations
- **Customer Lifetime Value**: prioritize high-value customer recovery efforts
- **Conversion Funnel Analysis**: identify optimal timing and messaging strategies
- **Predictive Analytics**: use ML to predict recovery likelihood and optimize approaches

## 📊 Recovery Email Examples

### Fashion Retailer Example

> Subject: You left something stylish behind, Sarah!
>
> Hi Sarah,
>
> I noticed you were checking out those gorgeous items in your cart earlier - the Bohemian Summer Dress and Classic Leather Handbag have been waiting for you!
>
> I completely understand if you got busy or needed time to think it over. These pieces are still available and ready to ship to you today.
>
> Since you showed such great taste in selecting these items, I'd love to offer you 10% off your order to make the decision easier. Just use code WELCOME10 at checkout.
>
> Your cart includes:
> • Bohemian Summer Dress (Size M) - $89.99
> • Classic Leather Handbag (Brown) - $156.99
>
> Complete your purchase now and get free shipping to your door!
>
> [Complete My Purchase]
>
> Best regards,
> The StyleHub Team
>
> P.S. These items are popular and inventory is limited - don't wait too long!

### Software/SaaS Example

> Subject: Your ProductivityPro trial is waiting, Mike
>
> Hi Mike,
>
> You were just one step away from unlocking the full power of ProductivityPro for your team at TechStartup Inc.
>
> I noticed you explored our Premium Plan features - the advanced reporting and team collaboration tools that could streamline your workflow and boost productivity by up to 40%.
>
> Since you invested time exploring our platform, I'd like to offer you an exclusive 25% discount on your first year. This offer is valid for the next 48 hours.
>
> Your selected plan:
> • ProductivityPro Premium (5 users) - $99/month
> • With 25% discount: $74/month (Save $300/year!)
>
> Ready to transform your team's productivity?
>
> [Activate My Account]
>
> Questions? Reply to this email or schedule a quick 15-minute demo call.
>
> Best regards,
> David Chen
> Customer Success Manager, ProductivityPro

### High-Value B2B Example

> Subject: Your equipment quote is ready for approval, Jennifer
>
> Hi Jennifer,
>
> Thank you for your interest in our Industrial Packaging System for ManuCorp's new facility expansion. I understand that equipment investments of this scale require careful consideration and stakeholder alignment.
>
> Your configured system includes:
> • Model X5000 Packaging Line - $45,000
> • Installation & Training Package - $8,000
> • Extended 3-Year Warranty - $3,500
> Total Investment: $56,500
>
> Given the scope of your project, I'd like to extend our Q1 promotion pricing, which provides:
> - 15% discount on equipment ($6,750 savings)
> - Free installation supervision ($2,000 value)
> - Expedited 6-week delivery
>
> This brings your total to $48,750 - a savings of $7,750.
>
> I'm available for a brief call to address any technical questions or help facilitate internal approvals.
>
> [Accept Quote & Proceed]
>
> Best regards,
> Robert Martinez
> Senior Sales Engineer, Industrial Solutions Inc.
> Direct: (555) 123-4567

## 🛠️ Troubleshooting & Best Practices

### Common Issues & Solutions

**API Rate Limiting**
- Implement exponential backoff for API requests
- Stagger workflow execution times across different stores
- Monitor API usage and upgrade plans as needed
- Cache frequently accessed data to reduce API calls

**Email Deliverability Challenges**
- Use authenticated SMTP services with proper SPF/DKIM setup
- Monitor sender reputation and email engagement metrics
- Implement opt-out mechanisms and respect unsubscribe requests
- Segment email lists and avoid over-emailing customers

**False Positive Recoveries**
- Extend grace periods for complex checkout processes
- Implement more sophisticated abandonment detection logic
- Add customer behavior analysis before triggering recovery
- Create exception rules for technical checkout failures

### Optimization Strategies

**Recovery Timing Optimization**
- A/B test different grace period durations (30 min, 1 hour, 3 hours)
- Analyze customer behavior patterns to optimize sending times
- Consider time zone differences for global customer bases
- Implement seasonal timing adjustments for holidays and events

**Content Personalization Enhancement**
- Continuously refine AI prompts based on successful recoveries
- Implement dynamic discount strategies based on cart value
- Create customer segment-specific messaging approaches
- Add urgency elements for time-sensitive or limited inventory items

**Performance Measurement**
- Track recovery rates, revenue impact, and customer satisfaction
- Implement cohort analysis for long-term customer value impact
- Monitor email engagement metrics and optimize accordingly
- Calculate true ROI including customer acquisition costs and lifetime value

## 📈 Success Metrics

### Recovery Performance Indicators

- **Recovery Rate**: percentage of abandoned carts successfully recovered
- **Revenue Recovery**: total dollar amount recovered from abandoned purchases
- **Email Engagement**: open rates, click rates, and conversion rates
- **Time to Recovery**: average time from abandonment to completed purchase

### Business Impact Measurements

- **ROI Calculation**: revenue recovered vs workflow operational costs
- **Customer Retention**: impact on long-term customer relationships
- **Average Order Value**: effect on overall purchase values post-recovery
- **Operational Efficiency**: automation savings vs manual recovery efforts

## 📞 Questions & Support

Need help implementing your AI Cart Recovery Agent?
**📧 E-commerce Automation Expert Support**
- **Email**: Yaron@nofluff.online
- **Response Time**: within 24 hours on business days
- **Specialization**: e-commerce automation, cart recovery optimization, AI email personalization

**🎥 Comprehensive Implementation Resources**
- **YouTube Channel**: https://www.youtube.com/@YaronBeen/videos
  - Complete setup guides for major e-commerce platforms
  - Advanced AI email personalization techniques
  - Recovery campaign optimization strategies
  - Integration tutorials for Shopify, WooCommerce, and custom platforms
  - Performance tracking and analytics implementation

**🤝 E-commerce Automation Community**
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
  - Connect for ongoing e-commerce automation support and consulting
  - Share your cart recovery success stories and ROI achievements
  - Access exclusive templates for different industry verticals
  - Join discussions about e-commerce automation trends and innovations

**💬 Support Request Guidelines**

Include in your support message:
- Your e-commerce platform and current cart abandonment rates
- Average order values and customer segments you serve
- Current recovery processes and conversion challenges
- Integration requirements with existing marketing tools
- Specific technical errors or workflow execution issues
by Dvir Sharon
# Automated Content Idea Generation and Expansion with Google Gemini and Google Sheets

This n8n workflow automates the process of generating content ideas based on a user-defined topic, expands each idea into a more detailed content piece (like a blog post) using Google Gemini, and finally saves all the generated data (idea title, description, and full content) into a Google Sheet. It's a powerful tool for streamlining content creation workflows.

This workflow includes:

- Generation of multiple content ideas from a single topic.
- Expansion of each idea into detailed content using AI.
- Storage of ideas and generated content in a structured Google Sheet.
- Sticky Notes within the workflow for inline documentation and setup guidance.

## Prerequisites

- **n8n Instance**: a running n8n instance (self-hosted or cloud).
- **Google AI Account**: access to Google AI (Gemini). You will need an API key.
- **Google Account**: access to Google Sheets. You will need to create or use an existing spreadsheet with specific column headers.

## Installation and Setup

**Import the Workflow:**
1. Copy the entire JSON code provided.
2. In your n8n instance, go to "Workflows".
3. Click "New" -> "Import from JSON".
4. Paste the JSON code and click "Import".

**Configure Credentials:**

*Google AI (Gemini):*
1. Find the "Google Gemini Chat Model for Content Idea Generator" node and the "Google Gemini Chat Model for Content Generation" node.
2. Click the "Credentials" field in both nodes (it will likely show a placeholder name like "Google Gemini(PaLM) Api account").
3. Click "Create New" and select "Google AI API".
4. Enter your Google AI API key and save the credential. (You can reuse the same credential for both nodes.)

*Google Sheets:*
1. Find the "Google Sheets" node and click the "Credentials" field (it will likely show a placeholder name like "Google Sheets account").
2. Click "Create New" and select "Google Sheets OAuth2 API".
3. Follow the steps to connect your Google account and grant n8n access to Google Sheets, then save the credential.

**Configure the Google Sheets Node:**
1. Open the "Google Sheets" node settings.
2. **Spreadsheet ID**: replace the placeholder value with the actual ID of your Google Sheet. You can find the Spreadsheet ID in the URL of your Google Sheet; it's the long string of characters between /d/ and /edit (see the helper sketched below).
3. **Sheet Name**: select or enter the name or GID of the sheet within your spreadsheet where you want to save the data (e.g., Sheet1 or gid=0).
4. **Columns**: ensure your Google Sheet has columns named title, description, and content. The node is configured to map the generated data to these specific column headers.
5. Save the node settings.

**Review Sticky Notes:** Look at the Sticky Notes placed around the workflow canvas. They provide helpful context and reminders for setup, required Google Sheet columns, and the AI models used.
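A small helper illustrating where the Spreadsheet ID sits in a Google Sheets URL (the URL structure is standard; the function itself is just a convenience sketch):

```javascript
// Extract the Spreadsheet ID: the long string between /d/ and /edit,
// e.g. https://docs.google.com/spreadsheets/d/1AbC...xyz/edit#gid=0
function extractSpreadsheetId(sheetUrl) {
  const match = sheetUrl.match(/\/d\/([a-zA-Z0-9_-]+)/);
  if (!match) throw new Error('No spreadsheet ID found in URL');
  return match[1];
}
```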
## How to Use

1. **Activate the Workflow**: toggle the workflow switch to "Active".
2. **Trigger the Workflow**: since this workflow uses a "When clicking ‘Execute workflow’" node as the trigger, you can run it directly from the n8n editor. Click the "Execute Workflow" button and the workflow will start.
3. **Set the Topic**: open the "Set the input fields" node, modify the topic value to the subject you want to generate content ideas about, and save the node settings.
4. **Monitor Execution**: watch the workflow execute. The nodes will light up as they process, and the "Loop Over Items" node will show multiple executions as it processes each generated idea.
5. **Check Results**: the generated content ideas (title, description) and the expanded content will be written as new rows in the Google Sheet you configured. Each row corresponds to one generated idea and its content.

This workflow provides a robust starting point for AI-assisted content creation. You can customize the AI prompts in the "Content Idea Generator" and "LLM Content Generator" nodes to refine the output style and format, or integrate additional steps like sending notifications or further processing the generated content.
by Zacharia Kimotho
# Generate new keywords for SEO with the monthly search volumes

This workflow is an improvement on the workflows below. It can be used to generate new keywords for your SEO campaigns or Google Ads campaigns:

- Generate SEO Keyword Search Volume Data using Google API
- Generating Keywords using Google Autosuggest

## Usage

1. Send the keywords you need as an array to this workflow.
2. Pin the data and map it to the set Keywords node.
3. Map the keywords to the Google Ads API with the location and language of your choice.
4. Split the results and set the data.
5. Pass this to the next nodes as needed for storage.
6. Make a copy of this spreadsheet and update the data accordingly.

Having challenges with the Google Ads API? Read this blog.

## Setup

- Replace the trigger with your desired trigger, e.g. a webhook or manual trigger.
- Map the data correctly to the set Keywords node.
- In the Generate new keywords node, update the {customer-id} in the URL and the login-customer-id header with your actual values. Update the developer-token as well. The URL should be corrected as below:

```
https://googleads.googleapis.com/v18/customers/{customer-id}:generateKeywordIdeas
```

You should send the headers as below:

```json
[
  { "name": "content-type", "value": "application/json" },
  { "name": "developer-token", "value": "5j-tyzivCNmiCcoW-xkaxw" },
  { "name": "login-customer-id", "value": "513554" }
]
```

And the JSON body should take the following format:

```json
{
  "geoTargetConstants": ["geoTargetConstants/2840"],
  "includeAdultKeywords": false,
  "pageToken": "",
  "pageSize": 2,
  "keywordPlanNetwork": "GOOGLE_SEARCH",
  "language": "languageConstants/1000",
  "keywordSeed": {
    "keywords": {{ $json.Keyword }}
  }
}
```

## Troubleshooting

- If you get an error with the workflow, check the credentials you are using.
- Check the account you are using, e.g. the right customer ID and developer token.
- Follow the guide on the blog to set up your Google Ads account.

Made by @Imperol
by Agent Studio
## Overview

This n8n workflow retrieves AI agent chat memory logs stored in Postgres and pushes them to Google Sheets, creating one sheet per session. It's useful for teams building chat-based products or agents that need to review or analyze session logs in a collaborative format.

## Who is it for

- Anyone with an AI agent in production storing conversation logs in Postgres (or Supabase) who wants to see transcripts and have control
- Product teams building AI agents or assistants
- Teams that want to centralize conversation history for analysis or support
- Anyone managing AI chat memory and needing to explore it in a spreadsheet

## Prerequisites

- A Postgres database with an n8n_chat_histories table and an AI agent connected to it. If you need an example, you can follow this tutorial. Once done, you need to run the Postgres query that adds the created_at column (see Setup > Add a datetime column; a hedged version is sketched below).
- Google Sheets access and OAuth credentials connected to n8n.
- A Google Sheets document set up as a template (see below).

## Google Sheets Template

This workflow expects a Google Sheets file where each session will be stored in its own tab. A basic tab layout is duplicated and renamed with the session ID.

👉 Use this template as a starting point.

Note: you can hide the template after the first tabs have been created.

## How it works

1. **Trigger**: the workflow can be launched manually or on a schedule (e.g. daily at noon).
2. **Retrieve sessions**: runs a SQL query to get distinct session_id values from the n8n_chat_histories table.
3. **Loop over sessions**: for each session, clears the corresponding sheet (if it exists), duplicates the template tab, and renames it with the current session_id.
4. **Fetch messages**: selects all messages linked to the session from Postgres.
5. **Append to sheet**: adds each message to the Google Sheet with columns:
   - Who: speaker role (user, assistant, etc.)
   - Message: text content
   - Date: timestamp from created_at, formatted yyyy-MM-dd hh:mm:ss

## Notes

- The sheet is cleared and rebuilt each run to ensure logs are up to date.
- If a sheet for a session doesn't exist, it will be created by duplicating the first tab (the template).
- You can group sessions under a persistent ID (like user_id) by overriding session_id in your memory config.
- Works perfectly with Supabase by using PG credentials from the connection pooler.

👉 If you're looking for a solution to better visualize and analyse conversations, reach out to us!
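A minimal sketch of the two Postgres statements involved, assuming the default n8n_chat_histories schema (in n8n these live in Postgres nodes; the exact ALTER TABLE used by the template's setup notes may differ):

```javascript
// Hedged sketch using node-postgres: add the created_at column once during
// setup, then list distinct sessions the way the workflow's first query does.
const { Client } = require('pg');

async function prepareAndListSessions() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  // One-time setup: a timestamp so each message can be ordered and formatted.
  await client.query(
    `ALTER TABLE n8n_chat_histories
     ADD COLUMN IF NOT EXISTS created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()`
  );
  // The "Retrieve sessions" step: one tab will be created per session_id.
  const { rows } = await client.query(
    'SELECT DISTINCT session_id FROM n8n_chat_histories'
  );
  await client.end();
  return rows.map((r) => r.session_id);
}
```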
by William Lettieri
## Overview

Transform your LLM into a powerful GitHub automation specialist with this n8n workflow template. In a world where multiple MCP servers can overwhelm LLMs with context, this streamlined solution provides a dedicated GitHub Agent that handles all GitHub API operations through a single, specialized tool.

When you need GitHub operations like creating repositories, managing issues, or handling pull requests, your LLM can make one simple call to the GitHub Agent. This agent specializes exclusively in GitHub MCP server operations, offloading all contextual complexity and providing clean, efficient GitHub automation.

## ✨ Features

- **Single MCP Server Trigger**: one tool and one parameter to handle all GitHub API interactions
- **Specialized GitHub Agent**: dedicated AI agent with a direct GitHub MCP Server connection
- **Self-Executing Workflow**: the "When Executed by Another Workflow" trigger enables seamless workflow chaining
- **Scalable Architecture**: ready to integrate with unlimited GitHub tools and operations
- **Context Optimization**: reduces LLM token usage by delegating GitHub complexity to a specialized agent
- **Flexible Request Processing**: handles any GitHub operation through natural language requests

## 🎯 Use Cases

- **Repository Management**: create, clone, and manage repositories programmatically
- **Issue Tracking**: automate issue creation, updates, and management workflows
- **Pull Request Automation**: streamline code review and merge processes
- **GitHub Actions Integration**: trigger and monitor CI/CD workflows
- **Team Collaboration**: automate notifications and team management tasks
- **Documentation Updates**: automatically update README files and documentation

## 🏗️ Workflow Architecture

Node breakdown:

1. **MCP Server Trigger** - receives requests with GitHub operation parameters
2. **Set GitHub Username** - configures GitHub user context for API calls
3. **OpenAI Chat Model** - powers the intelligent GitHub agent with contextual understanding
4. **Simple Memory** - maintains conversation context and operation history
5. **GitHub AI Agent** - specialized Tools Agent with direct GitHub MCP Server access

```
[MCP Server Trigger] → [Set GitHub Username] → [GitHub AI Agent]
                                                      ↓
[OpenAI Chat Model] ← [Simple Memory] ← [GitHub API Operations]
```

## 📋 Requirements

Essential prerequisites:

- ✅ **OpenAI API Key** - for AI Agent and Chat Model functionality
- ✅ **GitHub Username Configuration** - edit the "Set GitHub Username" node with your GitHub username for API calls
- ✅ **n8n Version** - compatible with n8n 2024+ releases
- ✅ **MCP Server Setup** - existing GitHub MCP server configuration

Recommended setup:

- GitHub Personal Access Token with appropriate permissions
- Basic understanding of n8n workflow configuration
- Familiarity with GitHub API operations

## 🚀 Setup Instructions

### Step 1: Import and Configure

1. Import the workflow template into your n8n instance
2. Navigate to the Set GitHub Username node
3. Replace the placeholder with your actual GitHub username

### Step 2: API Keys Setup

1. Configure your OpenAI API key in the Chat Model node
2. Ensure your GitHub credentials are properly configured in n8n
3. Test the connection to verify API access

### Step 3: MCP Server Integration

1. Connect your existing GitHub MCP server to the workflow
2. Verify the MCP Server Trigger is properly configured
3. Test with a simple GitHub operation (e.g., "List my repositories")

### Step 4: Deploy and Test

1. Activate the workflow in your n8n instance
2. Test with various GitHub operations to ensure functionality (an illustrative request shape follows below)
3. Monitor execution logs for any configuration issues
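To make the "one tool, one parameter" idea concrete, here is a hypothetical shape of the single call an LLM makes to the GitHub Agent. The tool and parameter names are illustrative assumptions, not taken from the template:

```javascript
// Hypothetical tool call: the entire GitHub operation is expressed as one
// natural-language parameter; the agent interprets it, performs the GitHub
// MCP operations, and returns a single consolidated result.
const toolCall = {
  tool: 'github_agent',
  parameters: {
    request:
      'List my repositories, then create an issue titled "Fix login bug" in my-org/my-app.',
  },
};
```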
## 🔧 Customization Options

### Agent Behavior

- **Modify the Chat Model prompt** to adjust agent personality and response style
- **Configure memory settings** to control conversation context retention
- **Adjust timeout settings** for long-running GitHub operations

### GitHub Operations

- **Extend supported operations** by adding new GitHub API endpoints
- **Configure repository filters** to limit the scope of operations
- **Set up notification preferences** for important GitHub events

### Integration Points

- **Webhook triggers** for real-time GitHub event processing
- **Scheduled operations** for regular repository maintenance
- **Cross-workflow triggers** for complex automation chains

## 💡 Pro Tips

- **Start Simple**: begin with basic operations like repository listing before attempting complex workflows
- **Monitor Token Usage**: the specialized agent approach significantly reduces OpenAI API costs
- **Batch Operations**: group related GitHub operations in single requests for efficiency
- **Error Handling**: the agent provides detailed error messages for troubleshooting

## 🤝 Support and Community

- **Documentation**: Official n8n Documentation
- **Community Forum**: n8n Community
- **Issues & Contributions**: feel free to suggest improvements or report issues

## 📄 License

This workflow template is provided under the MIT License. You're free to use, modify, and redistribute with attribution.

Created by: William Lettieri
Version: 1.0
Last Updated: May 28, 2025
Compatibility: n8n 2024+
by InfraNodus
# Set Up an ElevenLabs Voice Chat Agent using GraphRAG Knowledge Graphs as Experts

This workflow creates an AI voice chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided by InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows. We use ElevenLabs to set up a voice agent that can be embedded into any website or used via their API.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- Easy and quick to set up (no complex data import workflows needed) and to update with new knowledge
- A knowledge graph has a holistic overview of your knowledge base
- Better retrieval of relations between the document chunks = higher quality responses
- Ability to reuse in other n8n workflows

## How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. The user's prompt is received from the ElevenLabs Conversational AI agent via an n8n Webhook, which also takes care of the voice interaction. The response from n8n is then sent back to the webhook, which is polled by the ElevenLabs voice agent. This agent processes the response and provides the final answer.

Step by step:

1. The user submits a question using the ElevenLabs voice interface.
2. The question is sent via the knowledge_base tool in ElevenLabs to the n8n Webhook, with a POST request containing the user's prompt and a sessionID for the Chat Memory node in n8n (a hedged example of this payload is sketched below).
3. The n8n AI Agent node checks the list of tools it has access to. Each tool has a description of its knowledge auto-generated by InfraNodus (we call each tool an "expert").
4. The n8n AI Agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
5. The query is then sent to the InfraNodus HTTP node endpoint, which queries the graph that corresponds to that expert.
6. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context into account, returning an answer from each expert (graph) along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
7. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
8. The final answer is sent back to the webhook endpoint.
9. The ElevenLabs conversational AI agent picks up the response arriving from the knowledge_base tool via the webhook, condenses it into conversational format, and transforms the text into voice.

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow:

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep the other settings intact, or learn more about them at the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.

You will also need to set up an ElevenLabs account and create a conversational AI agent there.
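A hedged example of the POST the knowledge_base tool sends to the n8n webhook; the field names and URL are illustrative and must match the tool configuration you define in ElevenLabs:

```javascript
// Sketch of the ElevenLabs -> n8n webhook request described in step 2.
async function askKnowledgeBase(webhookUrl) {
  const res = await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt: 'What does the knowledge base say about network analysis?',
      sessionID: 'elevenlabs-conversation-123', // keyed to the n8n Chat Memory node
    }),
  });
  return res.json(); // the final answer ElevenLabs condenses into voice
}
```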
See the sticky note in the n8n workflow for a complete step-by-step description, or our support article on setting up the ElevenLabs AI voice agent.

Once the voice AI agent is ready, you might want to combine it with a text AI chatbot workflow so your users have a choice between text and voice interaction. In that case, you may be interested in our free open-source website popup chat widget popupchat.dev, where you can create an embed code to add to your blog or website and allow the user to choose between text and voice interaction.

## Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- An ElevenLabs account

## FAQ

**1. How many "experts" should I aim for?**

We recommend aiming for the same number of experts as the optimal number of people in a team, which is usually 2-7. If you add more experts, your AI orchestrating agent will have trouble choosing the most suitable "expert" tool for the user's query. You can mitigate this by specifying in the AI agent description that it can choose a maximum of 3-7 experts to provide a response.

**2. Why use InfraNodus GraphRAG and not a standard vector store for knowledge?**

First, vector stores are complex to set up and to update. You'd need a separate workflow for that, decide on the vector dimensions, add metadata to your knowledge, etc. With InfraNodus, you have a complete RAG / GraphRAG solution under the hood that is easy to set up and provides high-quality responses that take the overall structure and the relations between your ideas into account.

**3. Why not use ElevenLabs' own knowledge base?**

One reason is that you want your knowledge base to be in one place so you can reuse it in other n8n workflows. Another reason is that you will not have such a good separation between the "experts" when you converse with the agent: the answers you get will be based on top matches from all the books / articles you upload, while with the InfraNodus GraphRAG setup you can better control which graphs are consulted as experts and have an explicit way to display this data.

## Customizing this workflow

You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available on our GitHub repo for n8n workflows.

Check out the complete setup guide for this workflow at https://support.noduslabs.com/hc/en-us/articles/20318967066396-How-to-Build-a-Text-Voice-AI-Agent-Chatbot-with-n8n-Elevenlabs-and-InfraNodus

Also check out the video tutorial with a demo.
by Airtop
## README: Automating Video File Download from Sample.cat with Airtop.ai

### Use Case

Automating file downloads from web pages is useful for scenarios like bulk media retrieval, dataset access, or recurring content backups. This workflow ensures a hands-free, consistent process for retrieving downloadable content.

### What This Automation Does

This automation performs a reliable download of a video file from a specified webpage using the following steps:

1. Initiates an Airtop browser session.
2. Opens a specified URL containing downloadable media.
3. Interacts with the page to click the download button.
4. Waits for the file to be processed and made available.
5. Retrieves metadata to confirm availability.
6. Downloads the file.
7. Terminates the browser session to clean up resources.

### How It Works

1. **Manual Trigger**: activated manually by the user for testing.
2. **Session**: starts an Airtop browser session.
3. **Window**: navigates to https://sample.cat/en/webm.
4. **Interaction**: simulates a click on the download button for the video titled "SD 640x360 (Seawater, drone view video, 30 FPS)".
5. **Wait**: pauses for 10 seconds to allow the file to be ready for download.
6. **Get File Data**: checks for downloadable files in the session.
7. **Download File**: retrieves the file using its ID.
8. **Terminate**: ends the browser session to free up resources.

### Setup Requirements

- Airtop API key - required to authenticate API calls.

### Next Steps

- **Enhance with Retry Logic**: loop the file availability check until status = available for more robust automation (see the sketch below).
- **Customize File Targets**: dynamically pass URLs and button descriptors for multi-source downloads.
- **Connect to Storage**: pipe downloaded files to cloud storage or databases for archiving.

Read more about automating file downloads from the web.
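A generic sketch of the suggested retry logic, replacing the fixed 10-second wait with polling. It is deliberately decoupled from Airtop's API: `checkFileStatus` stands in for whatever call backs the "Get File Data" step, and the `status === 'available'` condition mirrors the wording in Next Steps:

```javascript
// Poll a status-check function until the file reports "available",
// instead of waiting a fixed interval and hoping the file is ready.
async function waitUntilAvailable(checkFileStatus, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const { status } = await checkFileStatus(); // e.g. wraps the "Get File Data" call
    if (status === 'available') return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('File never became available');
}
```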