by William Lettieri
**Overview**

Transform your LLM into a powerful GitHub automation specialist with this n8n workflow template. In a world where multiple MCP servers can overwhelm LLMs with context, this streamlined solution provides a dedicated GitHub Agent that handles all GitHub API operations through a single, specialized tool.

When you need GitHub operations like creating repositories, managing issues, or handling pull requests, your LLM can make one simple call to the GitHub Agent. This agent specializes exclusively in GitHub MCP server operations, offloading all contextual complexity and providing clean, efficient GitHub automation.

✨ **Features**

- **Single MCP Server Trigger** - One tool and one parameter to handle all GitHub API interactions
- **Specialized GitHub Agent** - Dedicated AI agent with direct GitHub MCP Server connection
- **Self-Executing Workflow** - "When Executed by Another Workflow" trigger enables seamless workflow chaining
- **Scalable Architecture** - Ready to integrate with unlimited GitHub tools and operations
- **Context Optimization** - Reduces LLM token usage by delegating GitHub complexity to a specialized agent
- **Flexible Request Processing** - Handles any GitHub operation through natural language requests

🎯 **Use Cases**

- **Repository Management** - Create, clone, and manage repositories programmatically
- **Issue Tracking** - Automate issue creation, updates, and management workflows
- **Pull Request Automation** - Streamline code review and merge processes
- **GitHub Actions Integration** - Trigger and monitor CI/CD workflows
- **Team Collaboration** - Automate notifications and team management tasks
- **Documentation Updates** - Automatically update README files and documentation

🏗️ **Workflow Architecture**

Node Breakdown:

1. **MCP Server Trigger** - Receives requests with GitHub operation parameters
2. **Set GitHub Username** - Configures GitHub user context for API calls
3. **OpenAI Chat Model** - Powers the intelligent GitHub agent with contextual understanding
4. **Simple Memory** - Maintains conversation context and operation history
5. **GitHub AI Agent** - Specialized Tools Agent with direct GitHub MCP Server access

```
[MCP Server Trigger] → [Set GitHub Username] → [GitHub AI Agent]
                                                      ↓
[OpenAI Chat Model] ← [Simple Memory] ← [GitHub API Operations]
```

📋 **Requirements**

Essential Prerequisites:

- ✅ **OpenAI API Key** - For AI Agent and Chat Model functionality
- ✅ **GitHub Username Configuration** - Edit the "Set GitHub Username" node with your GitHub username for API calls
- ✅ **n8n Version** - Compatible with n8n 2024+ releases
- ✅ **MCP Server Setup** - Existing GitHub MCP server configuration

Recommended Setup:

- GitHub Personal Access Token with appropriate permissions
- Basic understanding of n8n workflow configuration
- Familiarity with GitHub API operations

🚀 **Setup Instructions**

**Step 1: Import and Configure**
1. Import the workflow template into your n8n instance
2. Navigate to the Set GitHub Username node
3. Replace the placeholder with your actual GitHub username

**Step 2: API Keys Setup**
1. Configure your OpenAI API key in the Chat Model node
2. Ensure your GitHub credentials are properly configured in n8n
3. Test the connection to verify API access

**Step 3: MCP Server Integration**
1. Connect your existing GitHub MCP server to the workflow
2. Verify the MCP Server Trigger is properly configured
3. Test with a simple GitHub operation (e.g., "List my repositories")

**Step 4: Deploy and Test**
1. Activate the workflow in your n8n instance
2. Test with various GitHub operations to ensure functionality
3. Monitor execution logs for any configuration issues
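Because the agent exposes one tool with one parameter, invoking it from an upstream workflow reduces to passing a single natural-language field. A minimal sketch, assuming a Code node feeding the call; the `request` field name is illustrative, not taken from the template:

```javascript
// Hypothetical input item for the GitHub Agent sub-workflow; the "request"
// field name is an assumption - match it to the trigger's configured input.
return [
  {
    json: {
      request: 'Create an issue titled "Fix login redirect" in my-org/my-repo and label it "bug"',
    },
  },
];
```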
🔧 **Customization Options**

**Agent Behavior**
- **Modify the Chat Model prompt** to adjust agent personality and response style
- **Configure memory settings** to control conversation context retention
- **Adjust timeout settings** for long-running GitHub operations

**GitHub Operations**
- **Extend supported operations** by adding new GitHub API endpoints
- **Configure repository filters** to limit the scope of operations
- **Set up notification preferences** for important GitHub events

**Integration Points**
- **Webhook triggers** for real-time GitHub event processing
- **Scheduled operations** for regular repository maintenance
- **Cross-workflow triggers** for complex automation chains

💡 **Pro Tips**

- **Start Simple**: Begin with basic operations like repository listing before attempting complex workflows
- **Monitor Token Usage**: The specialized agent approach significantly reduces OpenAI API costs
- **Batch Operations**: Group related GitHub operations in single requests for efficiency
- **Error Handling**: The agent provides detailed error messages for troubleshooting

🤝 **Support and Community**

- **Documentation**: Official n8n Documentation
- **Community Forum**: n8n Community
- **Issues & Contributions**: Feel free to suggest improvements or report issues

📄 **License**

This workflow template is provided under the MIT License. You're free to use, modify, and redistribute with attribution.

Created by: William Lettieri
Version: 1.0
Last Updated: May 28, 2025
Compatibility: n8n 2024+
by Jez
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

This workflow demonstrates how to build and expose a sophisticated n8n AI Agent as a single, callable tool using the Model Context Protocol (MCP). It allows external clients or other AI systems to easily query software library documentation via Context7, without needing to manage the underlying tool orchestration or complex conversational logic.

**Core Idea:** Instead of building complex agentic loops on the client side (e.g., in Python, a VS Code extension, or another AI development environment), this workflow offloads the entire agent's reasoning and tool-use process to n8n. The client simply sends a natural language query (like "How do I use Flexbox in Tailwind CSS?") to an SSE endpoint, and the n8n agent handles the rest.

**Key Features & How It Works:**

- **Public MCP Endpoint:** The main workflow uses the Context7 MCP Server Trigger node to create an SSE endpoint. This makes the agent accessible to any MCP-compatible client. The path for the endpoint is kept long and random for basic 'security by obscurity'.
- **Tool Workflow as an Interface:** A Tool Workflow node (named call_context7_ai_agent in this example) is connected to the MCP Server Trigger. This node defines the single "tool" that external clients will see and call.
- **Dedicated AI Agent Sub-Workflow:** The call_context7_ai_agent tool invokes a separate sub-workflow which contains the actual AI logic. This sub-workflow starts with a Context7 Workflow Start node to receive the user's query. A Context7 AI Agent node (using Google Gemini in this example) is the brain, equipped with:
  - A system prompt to guide its behavior.
  - Simple Memory to retain context for each execution (using {{ $execution.id }} as the session key).
  - Two specialized Context7 MCP client tools:
    - context7-resolve-library-id: converts library names (e.g., 'Next.js') into Context7-specific IDs.
    - context7-get-library-docs: fetches documentation using the resolved ID, with options for specific topics and token limits.
- **Seamless Tool Use:** The AI Agent autonomously decides when and how to use the resolve-library-id and get-library-docs tools based on the user's query, handling the multi-step process internally.

**Benefits of This Approach:**

- **Simplified Client Integration:** Clients interact with a single, powerful tool, sending a simple query.
- **Reduced Client-Side Token Consumption:** The detailed prompts, tool descriptions, and conversational turns are managed server-side by n8n, saving tokens on the client (especially useful if the client is another LLM).
- **Centralized Agent Management:** Update your agent's capabilities, tools, or LLM model within n8n without any changes needed on the client side.
- **Modularity for Agentic Systems:** Perfect for building complex, multi-agent systems where this n8n workflow can act as a specialized "expert" agent callable by others (e.g., from environments like Smithery).
- **Cost-Effective:** By using a potentially less expensive model (like Gemini Flash) for the agent's orchestration and leveraging the free tier or efficient pricing of services like Context7, you can build powerful solutions economically.

**Use Cases:**

- Providing an intelligent documentation lookup service for coding assistants or IDE extensions.
- Creating specialized AI "micro-agents" that can be consumed by larger AI applications.
- Building internal knowledge base query systems accessible via a simple API-like interface.
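As a rough illustration of the client side, here is a minimal sketch using the TypeScript MCP SDK (@modelcontextprotocol/sdk). The endpoint URL is a placeholder for your trigger's Production URL, and the `query` argument name is an assumption to be matched against your Tool Workflow's input schema:

```javascript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';

// Placeholder URL: use the Production URL shown on your MCP Server Trigger node.
const transport = new SSEClientTransport(
  new URL('https://your-n8n-host/mcp/<long-random-path>/sse')
);
const client = new Client({ name: 'docs-lookup-client', version: '1.0.0' });

await client.connect(transport);

// Call the single exposed tool; the argument name "query" is an assumption.
const result = await client.callTool({
  name: 'call_context7_ai_agent',
  arguments: { query: 'How do I use Flexbox in Tailwind CSS?' },
});
console.log(result.content);
await client.close();
```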
**Setup:**

1. Ensure you have the necessary n8n credentials for Google Gemini (or your chosen LLM) and the Context7 MCP client tools.
2. The Path in the Context7 MCP Server Trigger node should be unique and secure.
3. Clients connect to the "Production URL" (SSE endpoint) provided by the trigger node.

This workflow is a great example of how n8n can serve as a powerful backend for building and deploying modular AI agents. I've also made a video explaining it: https://www.youtube.com/watch?v=dudvmyp7Pyg
by Automate With Marc
🧠 **Google Drive Upload Trigger → Pinecone Vector Upsert for Document Indexing**

Category: AI & LLM / Document Indexing
Level: Intermediate
Tags: Google Drive, Pinecone, OpenAI, Embeddings, Vector Store, LangChain, RAG

📄 **What This Workflow Does**

This workflow watches a specific Google Drive folder and automatically uploads any newly added document to a Pinecone vector database — complete with OpenAI-generated embeddings. Perfect for setting up retrieval-augmented generation (RAG) pipelines, semantic search, or document Q&A systems. Once configured, your knowledge base stays up to date with zero manual effort.

Watch the full step-by-step tutorial video here: https://www.youtube.com/@Automatewithmarc

🔧 **How It Works**

1. 📁 **Google Drive Trigger** - Watches a specific folder and triggers when new documents are uploaded.
2. 🔍 **Google Drive File Search & Download** - Finds and fetches all files in the folder.
3. 🔄 **Loop Over Each File** - Handles batch processing for multiple files.
4. 📃 **Document Loader** - Parses each file as binary and applies custom metadata like document type.
5. ✂️ **Text Splitter** - Breaks content into manageable chunks for embedding (e.g., 600 characters with 60-character overlap) — see the sketch at the end of this section.
6. 🧠 **OpenAI Embeddings** - Generates vector embeddings using OpenAI.
7. 📦 **Pinecone Vector Store** - Inserts/upserts documents into a specific Pinecone namespace for search-ready indexing.

🧠 **Why This Is Useful**

This is a production-grade setup for:

- Building vector search tools over internal docs
- Feeding up-to-date data into RAG agents or chatbots
- Auto-tagging and chunking files for scalable AI workflows

Whether you're indexing course outlines, SOPs, or technical docs — this automation keeps your vector store fresh and organized.

🪜 **Setup Instructions**

1. Connect your Google Drive, OpenAI, and Pinecone accounts.
2. Specify the Google Drive folder to monitor.
3. Customize metadata, chunk size, or vector namespace as needed.
4. Activate the workflow and drop a file into the folder — magic happens behind the scenes.

📌 **Notes**

- Works best with PDFs or text-based documents.
- You can swap out OpenAI for other embedding models if needed.
- Consider adding notifications or logging (e.g., via Slack or email) for better observability.
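For intuition, here is a rough character-based sketch of what the splitter step does with a 600-character chunk size and 60-character overlap. The actual node typically prefers paragraph or sentence boundaries over hard character cuts, so treat this as an approximation:

```javascript
// Illustrative only: approximate the Text Splitter's behavior with
// chunkSize = 600 and chunkOverlap = 60 (raw character-based splitting).
function splitText(text, chunkSize = 600, overlap = 60) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

console.log(splitText('a'.repeat(1500)).map((c) => c.length)); // [600, 600, 420]
```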
by Gleb D
This n8n workflow automates the discovery, enrichment, and comparative analysis of startups from the Crunchbase dataset via Bright Data, enhanced with AI, and exports structured results to Google Sheets.

🚀 **What It Does**

1. Receives a keyword from the user that describes the area of interest — such as an industry, sector, technology, or trend (e.g., "AI in healthcare", "carbon capture", "edtech"). This keyword is used to filter relevant startups from the Crunchbase dataset via Bright Data.
2. Fetches data from Bright Data's Crunchbase snapshot API.
3. Extracts and cleans key fields from the JSON response.
4. Sorts startups by most recent founding date.
5. Selects the top 10 most recent companies.
6. Sends these 10 companies to Google Gemini AI for comparative analysis.
7. Embeds the AI-generated summary into the final export.
8. Appends results to a Google Sheet for tracking and reporting.

🛠️ **Step-by-Step Setup**

1. Get user keyword input from a form.
2. Use 3 Bright Data requests: start the snapshot, poll the snapshot status until ready, then fetch the snapshot data in JSON format.
3. Use a Python Code node to parse and sort companies by founded_date and to clean and standardize the data fields (see the sketch after this section).
4. Pass the top 10 companies into Gemini AI for comparative insight.
5. Merge the AI output back with the company data.
6. Send everything to Google Sheets.

🧠 **How It Works**

- **Snapshot Control:** Polls every few seconds until the Bright Data snapshot is complete.
- **Code Cleanup:** Ensures consistent structure and formatting across all records.
- **Comparative AI Analysis:** Gemini compares all 10 companies at once and returns a unified analysis.
- **Merging Output:** The AI analysis is merged into the first company's record (to avoid duplication), while all 10 companies are exported.

📤 **Google Sheet Output**

Each row includes: name, founded, about, num_employees, type, ipo_status, full_description, social_media_links, address, website, funding_total, num_investors, lead_investors, founders, products_and_services, monthly_visits, crunchbase_link, ai_analysis.

- The AI comparative analysis summary appears only once per batch, attached to the first company.
- All of the above fields are customizable through the Python code (you can add additional ones from the Bright Data output).

🔐 **Required Credentials**

- **Bright Data** – Replace YOUR_API_KEY in the 3 HTTP Request nodes.
- **Google Gemini API** – For AI analysis.
- **Google Sheets OAuth2** – For spreadsheet export.

⚠️ **Notes**

- AI output is shared once per batch of 10 companies, attached to the first company entry.
- You can configure the batch size limit in the first Code node.
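The workflow itself uses a Python Code node for the sort-and-trim step; as a language-neutral illustration, the same logic looks roughly like this in an n8n JavaScript Code node (the founded_date field name comes from the Bright Data output described above):

```javascript
// Sketch of the cleanup step: sort by founding date (newest first), keep top 10.
const companies = $input.all().map((item) => item.json);

const topTen = companies
  .filter((c) => c.founded_date) // drop records without a founding date
  .sort((a, b) => new Date(b.founded_date) - new Date(a.founded_date))
  .slice(0, 10);

return topTen.map((c) => ({ json: c }));
```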
by ist00dent
This n8n template allows you to perform real-time currency conversions by simply sending a webhook request. By integrating with the ExchangeRate.host API, you can get up-to-date exchange rates for over 170 world currencies, making it an incredibly useful tool for financial tracking, e-commerce, international business, and personal budgeting.

🔧 **How it works**

1. **Receive Conversion Request Webhook:** This node acts as the entry point for the workflow, listening for incoming POST requests. It's configured to expect a JSON body containing:
   - from: The 3-letter ISO 4217 currency code for the source currency (e.g., USD, PHP).
   - to: The 3-letter ISO 4217 currency code for the target currency (e.g., EUR, JPY).
   - amount: The numeric value you want to convert.
   Important: The ExchangeRate.host API access_key is handled securely by n8n's credential system and should not be included in the webhook body or headers.
2. **Convert Currency:** This node makes an HTTP GET request to the ExchangeRate.host API (api.exchangerate.host). It dynamically constructs the URL using the from, to, and amount from the webhook body. Your API access key is securely retrieved from n8n's pre-configured credentials (HTTP Query Auth type) and automatically added as a query parameter (access_key). The API then performs the conversion and returns a JSON object with the conversion details.
3. **Respond with Converted Amount:** This node sends the full currency conversion result received from ExchangeRate.host back to the service that initiated the webhook.

👤 **Who is it for?**

This workflow is ideal for:

- **E-commerce Platforms:** Display prices in local currencies on the fly for international customers. Convert incoming international payments to your local currency for accounting. Calculate shipping costs in different currencies.
- **Financial Tracking & Budgeting Apps:** Update personal or business budgets with converted values. Track expenses incurred in foreign currencies. Automate portfolio value conversion for multi-currency investments.
- **International Business & Freelancers:** Generate invoices in a client's local currency based on your preferred currency. Quickly estimate project costs or earnings in different currencies. Automate reconciliation of international transactions.
- **Travel Planning:** Convert travel expenses from one currency to another while abroad. Build simple tools to estimate costs for trips in different countries.
- **Data Analysis & Reporting:** Standardize financial data from various sources into a single currency for unified reporting. Build dashboards that display converted financial metrics.
- **Custom Integrations:** Connect to CRMs, accounting software, or internal tools to automate currency-related tasks. Build chatbots that can answer currency conversion queries.

📑 **Data Structure**

When you trigger the webhook, send a POST request with a JSON body structured as follows:

```json
{
  "from": "USD",
  "to": "PHP",
  "amount": 100
}
```

The workflow will return a JSON response similar to this (results will vary based on currencies and amount):

```json
{
  "date": "2025-06-03",
  "historical": false,
  "info": {
    "rate": 58.749501,
    "timestamp": 1717398188
  },
  "query": {
    "amount": 100,
    "from": "USD",
    "to": "PHP"
  },
  "result": 5874.9501,
  "success": true
}
```

⚙️ **Setup Instructions**

1. **Get an ExchangeRate.host Access Key:** Go to https://exchangerate.host/ and sign up for a free API key.
2. **Create an n8n Credential for ExchangeRate.host:** In your n8n instance, go to Credentials, click "New Credential", and search for "HTTP Query Auth". Set the Name (e.g., ExchangeRate.host API Key).
   Set the API Key to your ExchangeRate.host access key, set the Parameter Name to access_key, set the Parameter Position to Query, and save the credential.
3. **Import Workflow:** In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
4. **Configure the ExchangeRate.host API Node:** Double-click the Convert Currency node. Under "Authentication", select "Generic Credential Type". Choose "HTTP Query Auth" as the Generic Auth Type. Select the credential you created (e.g., "ExchangeRate.host API Key") from the dropdown.
5. **Configure the Webhook Path:** Double-click the Receive Conversion Request Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /convert-currency).
6. **Activate the Workflow:** Save and activate the workflow.

📝 **Tips**

This workflow is a powerful starting point. Here's how you can make it even more robust and integrated:

- **Robust Error Handling:** Add an IF node after Convert Currency to check {{ $json.success }}. If false, branch to an Error Trigger node or send an alert (e.g., Slack, email) with {{ $json.error.info }} to notify you of API issues or invalid inputs. Include a Try/Catch block to gracefully handle network issues or malformed responses.
- **Input Validation & Defaults:** Add a Function node after the webhook to validate that from, to, and amount are present and correctly formatted (see the sketch after these tips). If not, return a clear error message to the caller. Set default from or to currencies when they are not provided, making the API more flexible.
- **Logging & Auditing:** After a successful conversion, use a Google Sheets, Airtable, or database node (e.g., PostgreSQL, MongoDB) to log every conversion request, including the input currencies, amount, converted result, date, and possibly the calling IP (from the webhook headers). This is crucial for financial auditing and analysis.
- **Rate Limits & Caching:** If you anticipate many requests, be mindful of ExchangeRate.host's API rate limits. You can introduce a caching step to store recent conversion results for a short period, reducing redundant API calls for common conversions. Alternatively, add a Wait node to space out requests if you're hitting limits.
- **Format & Rounding:** Use a Function node or Set node to format the result to a specific number of decimal places (e.g., {{ $json.result.toFixed(2) }}). Add currency symbols or full currency names to the output for better readability.
- **Alerting on Significant Changes:** Chain this workflow with a Cron or Schedule node to periodically fetch exchange rates for a pair you care about (e.g., USD to EUR). Use an IF node to compare the current rate with a previously stored rate. If the change exceeds a certain percentage, send an alert via Slack, email, or Telegram to notify you of significant market shifts.
- **Integration with Payment Gateways:** For e-commerce, combine this with payment gateway nodes (e.g., Stripe, PayPal) to automatically convert customer payments received in foreign currencies to your base currency before recording.
- **Multi-currency Pricing for Products:** Use this workflow in conjunction with your product database. When a user selects a different country/currency, trigger this webhook to dynamically convert product prices and display them instantly.
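Following the Input Validation tip above, a minimal validation sketch for a Code node placed right after the webhook might look like this (the `body` property path assumes the default webhook output; adjust to your setup):

```javascript
// Minimal input validation sketch (assumed Code node after the webhook).
// Checks that from/to are 3-letter ISO codes and amount is a positive number.
const { from, to, amount } = $json.body ?? $json;

const isIsoCode = (code) => typeof code === 'string' && /^[A-Z]{3}$/.test(code);

if (!isIsoCode(from) || !isIsoCode(to) || typeof amount !== 'number' || amount <= 0) {
  throw new Error('Invalid input: expected { "from": "USD", "to": "EUR", "amount": 100 }');
}

return [{ json: { from, to, amount } }];
```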
by ist00dent
This n8n template provides a simple yet powerful utility for validating whether a given string input is valid JSON. You can use it to pre-validate data received from external sources, ensure data integrity before further processing, or provide immediate feedback to users submitting JSON strings.

🔧 **How it works**

1. **Webhook:** This node acts as the entry point for the workflow, listening for incoming POST requests. It expects a JSON body with a single property:
   - jsonString: The string that you want to validate as JSON.
2. **Code (JSON Validator):** This node contains custom JavaScript code that attempts to parse the jsonString provided in the webhook body. If the jsonString parses successfully, it is valid JSON and the node returns an item with valid: true. If parsing fails, the node catches the error and returns an item with valid: false and the specific error message. This logic is applied to each item passed through the node, ensuring all inputs are validated. (A sketch of this logic appears at the end of this section.)
3. **Respond to Webhook:** This node sends the validation result (either valid: true or valid: false with an error message) back to the service that initiated the webhook request.

👤 **Who is it for?**

This workflow is ideal for:

- **Developers & Integrators:** Pre-validate JSON payloads from external systems (APIs, webhooks) before processing them in your workflows, preventing errors.
- **Data Engineers:** Ensure the integrity of JSON data before storing it in databases or data lakes.
- **API Builders:** Offer a dedicated endpoint for clients to test their JSON strings for validity.
- **Customer Support Teams:** Quickly check user-provided JSON configurations for errors.
- **Anyone handling JSON data:** A quick and easy way to programmatically check JSON string correctness without writing custom code in every application.

📑 **Data Structure**

When you trigger the webhook, send a POST request with a JSON body structured as follows:

```json
{
  "jsonString": "{\"name\": \"n8n\", \"type\": \"workflow\"}"
}
```

Example of an invalid JSON string (the inner value is invalid because name is missing quotes):

```json
{
  "jsonString": "{name: \"n8n\"}"
}
```

The workflow will return a JSON response indicating validity. For a valid JSON string:

```json
{
  "valid": true
}
```

For an invalid JSON string:

```json
{
  "valid": false,
  "error": "Unexpected token 'n', \"{name: \"n8n\"}\" is not valid JSON"
}
```

⚙️ **Setup Instructions**

1. **Import Workflow:** In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. **Configure Webhook Path:** Double-click the Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /validate-json).
3. **Activate Workflow:** Save and activate the workflow.

📝 **Tips**

This JSON validator workflow is a solid starting point. Consider these enhancements:

- **Enhanced Error Feedback:** Add a Set node after the Code node to format the error message into a more user-friendly string before responding, making it easier for the caller to understand the issue.
- **Logging Invalid Inputs:** After the Code node, add an IF node to check whether valid is false. If so, branch to a node that logs the invalid jsonString and error to a Google Sheet, database, or logging service, so you can track common invalid inputs for debugging or improvement.
- **Transforming Valid JSON:** If the JSON is valid, add another Function node to parse the jsonString and operate on the parsed data directly within the workflow, using this validator as the first step in a larger JSON-processing pipeline.
- **Asynchronous Validation:** For very large JSON strings or high-volume requests, consider a separate queueing mechanism (e.g., RabbitMQ, SQS) with an asynchronous response pattern to prevent webhook timeouts and improve system responsiveness.
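For reference, the validator logic described under "How it works" boils down to something like the following (a sketch; the node's actual code may differ in detail):

```javascript
// Try to parse each incoming jsonString; report validity per item.
return $input.all().map((item) => {
  const { jsonString } = item.json.body ?? item.json;
  try {
    JSON.parse(jsonString);
    return { json: { valid: true } };
  } catch (err) {
    return { json: { valid: false, error: err.message } };
  }
});
```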
by Don Jayamaha Jr
A short-term technical analysis agent for 15-minute candles on Binance Spot Market pairs. It calculates and interprets key trading indicators (RSI, MACD, BBANDS, ADX, SMA/EMA) and returns structured summaries, optimized for Telegram or downstream AI trading agents. This tool is designed to be triggered by another workflow (such as the Binance SM Financial Analyst Tool or Binance Quant AI Agent) and is not intended for standalone use.

🔧 **Key Features**

- ⏱️ Uses 15-minute kline data (last 100 candles)
- 📈 Calculates: RSI, MACD, Bollinger Bands, SMA/EMA, ADX
- 🧠 Interprets numeric data using GPT-4.1-mini
- 📤 Outputs concise, formatted analysis like:
  - RSI: 72 → Overbought
  - MACD: Cross Up
  - BB: Expanding
  - ADX: 34 → Strong Trend

🧠 **AI Agent Purpose**

> You are a short-term analysis tool for spotting volatility, early breakouts, and scalping setups.

Used by higher-level agents to determine:

- Entry/exit precision
- Momentum shifts
- Scalping opportunities

⚙️ **How it Works**

1. Triggered externally by another workflow.
2. Accepts input:

```json
{ "message": "BTCUSDT", "sessionId": "123456789" }
```

3. Sends a POST request to the backend endpoint: https://treasurium.app.n8n.cloud/webhook/15m-indicators
4. Fetches the last 100 candles and calculates the indicators.
5. Passes the data to GPT for interpretation.
6. Returns a summary with indicator tags for human readability.

🔗 **Dependencies**

This tool is triggered by:

- ✅ Binance SM Financial Analyst Tool
- ✅ Binance Spot Market Quant AI Agent

🚀 **Setup Instructions**

1. Import into your n8n instance.
2. Make sure the /15m-indicators webhook is active and calculates indicators correctly.
3. Connect your OpenAI GPT-4.1-mini credentials.
4. Trigger from an upstream agent with a Binance symbol and session ID.
5. Ensure all external calls (to Binance and the webhook) are working.

🧪 **Example Use Cases**

| Use Case | Result |
| ------------------------------------- | --------------------------------------- |
| Short-term trade decision for ETHUSDT | Receives 15m signal indicators summary |
| Input from Financial Analyst Tool | Returns real-time volatility snapshot |
| Telegram bot asks for "DOGE update" | Returns momentum indicators in 15m view |

🎥 Watch Tutorial:

🧾 **Licensing & Attribution**

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.

🔗 For support: Don Jayamaha – LinkedIn
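For testing outside n8n, the backend call in step 3 of "How it Works" can be reproduced with a plain HTTP request, roughly like this (the response shape depends on the webhook implementation):

```javascript
// Reproduce the POST from "How it Works" step 3; illustrative test call only.
const res = await fetch('https://treasurium.app.n8n.cloud/webhook/15m-indicators', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'BTCUSDT', sessionId: '123456789' }),
});
console.log(await res.json()); // indicator summary; shape depends on the backend
```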
by Michael Muenzer
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Fetch SEO and traffic information from Ahrefs for a list of domains in a Google Sheet. This is great for marketing research and SEO workflow optimization, and it saves tons of time.

**How it works**

1. We import the domains from the Google Sheet.
2. We use an SEO MCP server to fetch data from Ahrefs' free tooling.
3. The fetched data is stored back in the Google Sheet.

**Set up steps**

1. Copy the Google Sheet template and add it in all Google Sheet nodes.
2. Make sure that n8n has read & write permissions for your Google Sheet.
3. Add your list of domains in the first column of the Google Sheet.
4. Add MCP credentials for seo-mcp.
by Ranjan Dailata
**Notice**

Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**

This workflow template enables intelligent data extraction from ProductHunt using Bright Data's Model Context Protocol (MCP) and processes search results with Google Gemini. It is designed for individuals and teams who need automated, intelligent discovery and analysis of new tech products. It's especially valuable for:

- Startup Analysts & VC Researchers
- Growth Hackers & Marketers
- Recruiters & Tech Scouts
- Product Managers & Innovation Teams
- AI & Automation Enthusiasts

**What problem is this workflow solving?**

Traditional product discovery on ProductHunt is constrained by limited descriptions and requires repeated manual validation through web searches. Manually extracting and enriching this data is slow, repetitive, and error-prone. This workflow solves the problem by:

- Extracting real-time ProductHunt data using Bright Data's MCP infrastructure to mimic real-user behavior and avoid blocks.
- Performing contextual Google searches for a specific ProductHunt product to gather use cases, reviews, and related information.
- Structuring results using the Google Gemini LLM to provide human-readable insights and reduce noise.
- Delivering results seamlessly by saving output to disk, updating Google Sheets, and sending webhook alerts.

**What this workflow does**

1. **Input Field Node:** Defines the ProductHunt category with the search term(s) you want to target. This drives the extraction and search operations.
2. **Agent Operation Node:** The agent performs two major tasks:
   - **Extract from ProductHunt:** Retrieves trending products from ProductHunt using Bright Data MCP.
   - **Contextual Google Search:** For each product, the agent searches Google for deeper context, including reviews, competitor mentions, and real-world usage examples.
3. **LLM Node (Google Gemini):** Analyzes and summarizes the extracted web content, removes noise (ads, menus, etc.), and structures the content into bullet points, insights, or JSON objects.

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp.
- You need to install n8n-nodes-mcp.

**Setup**

1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with your Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the MCP Client (STDIO) credentials to connect to the Bright Data MCP Server (see the sketch at the end of this section). Make sure to set the Bright Data API token in the Environments field as API_TOKEN=<your-token>.

**How to customize this workflow to your needs**

This workflow is flexible and modular, allowing you to adapt it for various research, product discovery, or trend analysis use cases. Below are the key customization points and how to modify them.
- **Define Your Target Products or Topics:** Change the input parameter to a specific ProductHunt category, tag, or keyword (e.g., "AI tools", "SaaS", "DevOps").
- **Change Output Destinations:**
  - **Save to Disk:** Change the file format (.json, .csv, .md) or the directory path.
  - **Google Sheet:** Modify the sheet name or structure (columns like Product, Summary, Link).
  - **Webhook Notification:** Point to a Slack/Discord/CRM/webhook URL with payload mapping.
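As referenced in the Setup section, the MCP Client (STDIO) credential typically comes down to three values. The field labels below follow the n8n-nodes-mcp credential form, and the token is of course a placeholder:

```text
Command:      npx
Arguments:    @brightdata/mcp
Environments: API_TOKEN=<your-bright-data-api-token>
```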
by Alex Emerich
**Convert PostgreSQL table to CSV**

CSV is a super useful and universal way to transfer data between different tools. This workflow gives an example of how to take data from PostgreSQL and convert it easily into a CSV file.

**What you need**

Before running the workflow, please make sure you have access to a remote PostgreSQL server and have table data such as:

```csv
book_title,book_author,read_date
Demons,Fyodor Dostoyevsky,2022-09-08
Ulysses,James Joyce,2022-05-06
Catch-22,Joseph Heller,2023-01-04
The Bell Jar,Sylvia Plath,2023-01-21
Frankenstein,Mary Shelley,2023-02-14
```

**How it works**

1. Trigger the workflow on click.
2. Declare the name of the Excel file and the sheet names.
3. Remotely connect to the PostgreSQL database and specify the query to execute.
4. Write the query data to CSV.

The detailed process is explained further in the tutorial: https://blog.n8n.io/postgres-export-to-csv/
by Sam Robertson
**Generate Summaries from Uploaded Files using OpenAI Assistants API**

📑 **Overview**

Upload a document (PDF, DOCX, PPTX, TXT, CSV, JSON, or Markdown) and receive an AI-generated summary containing:

- **title** – 5-10 words
- **summary** – 1-2 sentences
- **bullets** – 3-5 key points
- **tags** – 3-6 short keywords

The workflow:

1. Stores the file in OpenAI.
2. Runs an Assistant with File Search and Code Interpreter enabled.
3. Polls until the run finishes.
4. Retrieves the summary JSON.

✅ **Prerequisites**

1. **OpenAI Assistant**
   - Create one at <https://platform.openai.com/assistants>
   - Enable File Search and Code Interpreter
   - Note: the assistant ID starts with asst_
2. **OpenAI API credential setup in n8n**
   - Go to Credentials → New → HTTP Header Auth
   - Header name: Authorization
   - Value: Bearer YOUR-OPENAI-API-KEY (replace YOUR-OPENAI-API-KEY with your OpenAI API secret key, which starts with sk-)
   - Name it: openAIApiHeader

🔧 **Setup**

1. Import the workflow JSON.
2. When n8n prompts for a credential, choose openAIApiHeader for every HTTP Request node.
3. Open Run Assistant → Body and replace "assistant_id": "REPLACE_WITH_YOUR_ASSISTANT_ID" with your real ID (starts with asst_…).
4. Save.

🚀 **How it works**

| # | Node | Purpose |
|---|------|---------|
| 1 | On form submission | User uploads a file (File). |
| 2 | Upload File | POST /v1/files (multipart) → returns file_id. |
| 3 | Create Thread | Creates a thread and attaches the uploaded file. |
| 4 | Run Assistant | Starts the run using your assistant_id. |
| 5 | Poll Run Status → Wait 2 s → IF | Loops until status = completed (sketched below). |
| 6 | Fetch Summary | GET /v1/threads/{thread_id}/messages → summary JSON. |

🖌️ **Customisation ideas**

- Edit the user prompt in Create Thread to change summary length, tone, or language.
- Add an HTTP Response node after Fetch Summary to return plaintext to the uploader.
- Replace the polling loop with OpenAI's forthcoming wait-for-run endpoint when available.

No community nodes required. Works on any n8n Cloud plan (Starter, Pro, Enterprise) or self-hosted Community Edition.
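For readers who want the polling loop outside n8n, here is a minimal JavaScript equivalent of step 5 (it assumes an OPENAI_API_KEY environment variable; the Assistants API requires the OpenAI-Beta header shown):

```javascript
// Poll a run until it completes, mirroring Poll Run Status → Wait 2 s → IF.
async function waitForRun(threadId, runId) {
  const headers = {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'OpenAI-Beta': 'assistants=v2',
  };
  for (;;) {
    const res = await fetch(
      `https://api.openai.com/v1/threads/${threadId}/runs/${runId}`,
      { headers }
    );
    const run = await res.json();
    if (run.status === 'completed') return run;
    if (['failed', 'cancelled', 'expired'].includes(run.status)) {
      throw new Error(`Run ended with status: ${run.status}`);
    }
    await new Promise((resolve) => setTimeout(resolve, 2000)); // Wait 2 s
  }
}
```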
by Joachim Hummel
This n8n workflow automates posting Amazon affiliate products to Mastodon — complete with image upload, description, and a shortened tracking URL using Shlink.

🔧 **How it works**

1. **Input Source:** The workflow starts by reading from a connected Google Sheet that contains:
   - Shlink (short link)
   - Amazon link
   - Description (optional)
   - PicURL
   - Send (YES/NO — a flag used to check whether the row has already been posted)
2. **Image Upload:** It fetches the product image via HTTP and uploads it directly to a Mastodon instance via the /media API endpoint.
3. **URL Shortening (Shlink):** The original Amazon URL is shortened using your self-hosted or cloud-hosted Shlink instance to enable click tracking and better presentation.
4. **Text Generation:** A two-line promotional text is automatically generated by a language model (LLM), based on the product description.
5. **Posting to Mastodon:** The post is then published on Mastodon with the image, the generated text, and the shortened Shlink URL (see the sketch below).
6. **Row Update:** Once published, the Send column in the Google Sheet is updated to "YES" to prevent duplicates.

**Requirements**

- ✅ Shlink – Required for shortening and tracking Amazon URLs
- ✅ Google Sheet – Used as a product queue and post tracker
- ✅ Google Sheet example: https://link.unixweb.home64.de/w7VqY
- ✅ Mastodon account – OAuth2 credentials with write scope
- ✅ Product image URL – Must be valid and accessible
- ✅ n8n credentials – Set up for Google Sheets, Mastodon, and optionally OpenRouter or other LLM providers

This workflow is ideal for content creators, affiliate marketers, and automation fans who want to save time and optimize reach across the Fediverse.

#affiliate #amazon #mastodon #advertisement
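As a rough illustration of steps 2 and 5, the two Mastodon API calls look like this outside n8n. The instance URL and token are placeholders, and imageBlob, promoText, and shortUrl stand in for values produced by the earlier steps:

```javascript
// Upload the product image, then publish the status with it attached.
const base = 'https://your-mastodon-instance.example'; // placeholder
const auth = { Authorization: `Bearer ${process.env.MASTODON_TOKEN}` };

// Step 2: upload the image (the workflow's "/media API endpoint").
const form = new FormData();
form.append('file', imageBlob, 'product.jpg'); // imageBlob fetched via HTTP earlier
const media = await fetch(`${base}/api/v2/media`, {
  method: 'POST',
  headers: auth,
  body: form,
}).then((r) => r.json());

// Step 5: post the two-line promo text plus the shortened Shlink URL.
await fetch(`${base}/api/v1/statuses`, {
  method: 'POST',
  headers: { ...auth, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    status: `${promoText}\n${shortUrl}`,
    media_ids: [media.id],
  }),
});
```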