by Aryan Shinde
**How it works**

This workflow automates the process of creating, approving, and optionally posting LinkedIn content from a Google Sheet. Here's a high-level overview:

1. **Scheduled Trigger**: Runs automatically based on your defined time interval (daily, weekly, etc.).
2. **Fetch Data from Google Sheets**: Pulls the first row from your sheet where Status is marked as Pending.
3. **Generate LinkedIn Post Content**: Uses OpenAI to create a professional LinkedIn post from the Post Description and Instructions in the sheet.
4. **Format & Prepare Data**: Formats the generated content, together with the original instruction and post description, for email (see the sketch at the end of this entry).
5. **Send for Approval**: Sends an email to a predefined recipient (e.g., your marketing team) with a custom approval form, including an accept/reject dropdown and an optional field for edits.
6. **(Optional) Image Fetch**: Downloads an image from a URL (if provided in the sheet) for future use in post visuals.

**Set up steps**

You'll need the following before you start:

- A Google Sheet with the columns: Post Description, Instructions, Image (URL), Status, Output, Post Link
- An OpenAI API key
- A connected Gmail account for sending approval emails
- Your own Google Sheets and Gmail credentials added in n8n

Steps:

1. **Google Sheet preparation**: Create a new Google Sheet with the columns listed above. Add a row with test data and set Status to Pending.
2. **Credentials**: In n8n, create credentials for (a) Google Sheets (OAuth2), (b) Gmail (OAuth2), and (c) OpenAI (API key), then assign them to the respective nodes in the JSON.
3. **OpenAI model**: Choose a model such as gpt-4o-mini (used here) or any other available in your plan. Adjust the prompt in the "Generate Post Content" node if needed.
4. **Email configuration**: In the Gmail node, set the recipient to your own or your team's address. Customize the email message template if necessary.
5. **Schedule the workflow**: Set the trigger interval (e.g., every morning at 9 AM).
6. **Testing**: Run the workflow manually first to confirm everything works. Check Gmail for the approval form, respond, and verify the results.
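The "Format & Prepare Data" step can be implemented as a small n8n Code node. A minimal sketch, assuming the incoming item carries `postDescription`, `instructions`, and the generated `output` fields (the field names are illustrative, not fixed by the template):

```javascript
// n8n Code node: build the approval email body from the sheet row + AI output.
// Field names below are assumptions; match them to your own column mapping.
const row = $input.first().json;

const emailHtml = `
  <h3>New LinkedIn post awaiting approval</h3>
  <p><strong>Post Description:</strong> ${row.postDescription}</p>
  <p><strong>Instructions:</strong> ${row.instructions}</p>
  <hr>
  <p><strong>Generated post:</strong></p>
  <p>${(row.output || '').replace(/\n/g, '<br>')}</p>
`;

return [{ json: { ...row, emailHtml } }];
```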
by Femi Ad
**Generate & Schedule Social Media Posts with GPT-4 and Telegram Approval Workflow**

This comprehensive content automation system features 23 nodes that orchestrate AI-powered content creation, validation, and multi-platform publishing through Telegram interaction. It supports posting to major platforms like Twitter, LinkedIn, Facebook, Instagram, and more via the Upload-Post API.

**Core Components**

- **Telegram Integration**: Bidirectional messaging with approval workflows and real-time notifications.
- **AI Content Engine**: Configurable language models (GPT-4, Claude, etc.) via OpenRouter with structured output parsing.
- **Content Validation**: Character count enforcement (240-265), format checking, and quality threshold monitoring (see the sketch after the usage instructions below).
- **Multi-Platform Publishing**: Post to any social media platform with the Upload-Post API, which has a dedicated n8n community node and is easier to use than Blotato.
- **Approval System**: Preview and approve/reject functionality before content goes live.
- **Web Research**: Optional Tavily integration for real-time information gathering.

**Target Users**

- Content creators seeking a consistent social media presence.
- Digital marketers managing multiple brand accounts.
- Entrepreneurs wanting automated thought leadership.
- Agencies needing scalable content solutions.
- Small businesses without dedicated social media teams.

**Setup Requirements**

To get started, you'll need:

- **Telegram Bot**: Create via @BotFather and configure the webhook.
- **Required APIs**: OpenRouter (for AI model access), Upload-Post API (alternative to Blotato, with community node support), and optionally the Tavily API (for research).
- **n8n Prerequisites**: Version 1.7+ with Langchain nodes, webhook configuration enabled, and proper credential storage.

Disclaimer: This template uses community-supported nodes, such as the Upload-Post API node. These may require additional setup and could change with n8n updates. Always verify compatibility and test in a safe environment.

**Step-by-Step Setup Guide**

1. **Install n8n**: Ensure you're running n8n version 1.7 or higher. Enable webhook configurations in your settings.
2. **Set up credentials**: In n8n, add credentials for OpenRouter, Upload-Post API, and optionally Tavily. Store them securely.
3. **Create the Telegram bot**: In Telegram, search for @BotFather and create a new bot. Note the token and set up a webhook pointing to your n8n instance.
4. **Import the workflow**: Copy the workflow JSON (available in the template submission) and import it into your n8n dashboard.
5. **Configure nodes**: Set your AI model preferences in the OpenRouter node, link your social media accounts via the Upload-Post API node, and adjust validation settings (e.g., character limits, retry attempts) as needed.
6. **Test the workflow**: Trigger a test run via Telegram by sending a content request. Approve or reject the preview and monitor the output.
7. **Schedule or automate**: Use n8n's scheduling features for automated triggers, or run manually for on-demand posts.

**Usage Instructions**

1. **Initiate via Telegram**: Send a message to your bot with a topic or prompt (e.g., "Create a post about AI automation for entrepreneurs").
2. **AI generation**: The system generates content using your chosen model, with optional web research.
3. **Validation check**: Content is automatically validated for length, quality (70% pass threshold), and format.
4. **Approval workflow**: Receive a preview in Telegram. Reply with "approve" to post, or "reject" to retry (up to 3 attempts).
5. **Publishing**: Approved content posts to your selected platforms, with notifications on success or errors.
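The validation step can be expressed as a simple n8n Code node. A minimal sketch of the 240-265 character check with a pass/fail flag (the exact node wiring and field names are assumptions; the range mirrors the template description):

```javascript
// n8n Code node: validate generated post length and basic format.
// `post` is an assumed field name; align it with your AI output parser.
const MIN_LEN = 240;
const MAX_LEN = 265;

return $input.all().map((item) => {
  const text = (item.json.post || '').trim();
  const reasons = [];

  if (!text) reasons.push('empty post');
  if (text.length < MIN_LEN) reasons.push(`too short (${text.length} < ${MIN_LEN})`);
  if (text.length > MAX_LEN) reasons.push(`too long (${text.length} > ${MAX_LEN})`);

  return { json: { ...item.json, valid: reasons.length === 0, reasons } };
});
```

An IF node downstream can route invalid items back into the retry loop (up to the 3 attempts the template allows).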
**Customization**: Adapt for single posts, 3-6 post threads, or different tones (business, creative, educational, personal, technical). Use scheduling for consistent posting.

**Workflow Features**

- **Universal Platform Support**: Post to any social media platform via the Upload-Post API.
- **Scheduling Flexibility**: Automated triggers or manual execution.
- **Content Types**: Single posts or multi-post threads.
- **Quality Control**: 30% error tolerance with detailed validation reporting.
- **Character Optimization**: Enforced 240-265 character range for maximum engagement.
- **Topic Versatility**: Adapts tone and style based on content type.
- **Error Handling**: Comprehensive validation with helpful user feedback.

**Performance Specifications**

- AI retry attempts: 3, for reliability.
- Validation threshold: 70% pass rate.
- Format support: single posts and 3-6 post threads.
- Platform coverage: any social media platform through the Upload-Post API.
- Research capability: optional web search for trending topics.

**Why Upload-Post API?**

- Community-supported n8n node for easier integration.
- More reliable and feature-rich than Blotato.
- Supports all major social platforms.
- Active development and support.

Need help customizing this workflow for your specific use case? As a fellow entrepreneur passionate about automation and business development, I'd be happy to consult. Connect with me on LinkedIn: https://www.linkedin.com/in/femi-adedayo-h44/ or email for support. Let's make your AI automation agency even more efficient!
by A Z
Automatically scrape Meta Threads for posts hiring specific roles (e.g., automation engineers, video editors, graphic designers), filter for true hiring intent, deduplicate, and send alerts. We use automation roles as the example here.

**What it does**

This workflow continuously scans Threads for fresh posts mentioning the roles you care about. It uses AI to filter out self-promotion and service ads, keeping only posts where the author is hiring. Qualified posts are saved to Google Sheets for tracking and sent to Telegram for instant alerts. It's ideal for freelancers, agencies, and job seekers who want a steady radar of opportunities.

**How it works (step by step)**

1. **Schedule trigger**: Runs on a set interval (e.g., every 12 hours).
2. **Scrape Threads posts**: Fetches recent posts for multiple keywords (e.g., "n8n expert", "hire video editor", "graphic designer") via Apify.
3. **Merge results**: Combines posts into a single stream.
4. **Normalize fields**: Maps raw data into clean fields: text, author, URL, timestamp, profile link.
5. **AI filter**: Uses an AI Agent to accept only posts where someone is hiring (rejecting "hire me" style self-promotion), apply simple geography rules (e.g., allow US, UK, UAE, CA; pass unknowns), and exclude roles outside your scope.
6. **Deduplication**: Checks Google Sheets to skip posts already seen (see the sketch at the end of this entry).
7. **Save to Google Sheets**: Writes qualified posts with full details.
8. **Telegram alerts**: Sends you the matched post instantly so you can act.

**Who it's for**

- **Freelancers**: Get first dibs on gigs before others spot them.
- **Agencies**: Build a client pipeline by tracking hiring signals.
- **Job seekers**: Spot hidden opportunities in your target field.

**Customization ideas**

- Swap keywords to monitor roles you care about (e.g., "UI/UX designer", "motion graphics editor", "copywriter").
- Add Slack or Discord notifications instead of Telegram.
- Expand the geo rules to match your region.
- Use Sheets as a CRM: add columns for status, outreach date, etc.
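The deduplication step can be a Code node that compares incoming posts against URLs already in the sheet. A minimal sketch, assuming the sheet rows come from a node named "Get Sheet Rows" and expose a `url` column (both the node name and field name are assumptions):

```javascript
// n8n Code node: drop posts whose URL already exists in the tracking sheet.
const posts = $input.all(); // new posts from the AI filter

// URLs already saved in Google Sheets (hypothetical upstream node name).
const seen = new Set($('Get Sheet Rows').all().map((row) => row.json.url));

return posts.filter((item) => !seen.has(item.json.url));
```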
by David Ashby
Complete MCP server exposing 2 CarbonDoomsDay API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add CarbonDoomsDay credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the CarbonDoomsDay API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.carbondoomsday.com/api
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example at the end of this entry)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 Co2 (2 endpoints)
• GET /co2/: Get CO2 measurements by date
• GET /co2/{date}/: Get the CO2 measurement for a single date

Both endpoints serve CO2 measurements from the Mauna Loa observatory. This data is made available through the good work of the people at the Mauna Loa observatory. Their release notes say: "These data are made freely available to the public and the scientific community in the belief that their wide dissemination will lead to greater understanding and new scientific insights." The API currently scrapes the following sources:

• co2_mlo_weekly.csv: https://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_mlo_weekly.csv
• co2_mlo_surface-insitu_1_ccgg_DailyData.txt: ftp://aftp.cmdl.noaa.gov/data/trace_gases/co2/in-situ/surface/mlo/co2_mlo_surface-insitu_1_ccgg_DailyData.txt
• weekly_mlo.csv: http://scrippsco2.ucsd.edu/sites/default/files/data/in_situ_co2/weekly_mlo.csv

Daily CO2 measurements are available as far back as 1958. Learn about pagination in the 3rd-party documentation: http://www.django-rest-framework.org/api-guide/pagination/#pagenumberpagination

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native CarbonDoomsDay API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
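A quick illustration of the $fromAI() placeholders mentioned above, as they might appear in the HTTP Request node's URL field (the description string is illustrative):

```javascript
// n8n expression for the GET /co2/{date}/ tool; the connected AI agent
// supplies `date` when it calls the tool.
{{ 'https://api.carbondoomsday.com/api/co2/' + $fromAI('date', 'Measurement date in YYYY-MM-DD format', 'string') + '/' }}
```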
by Davide
This workflow automates the generation of AI-enhanced, contextualized images using FLUX Kontext, based on prompts stored in a Google Sheet. The generated images are saved to Google Drive, and their URLs are written back to the spreadsheet for easy access.

Example prompt: "The girl is lying on the bed and sleeping" (the resulting image is shown in the template preview).

**Perfect for E-commerce and Social Media**

This workflow is especially useful for e-commerce businesses:

- Generate product images with dynamic backgrounds based on the use case or season.
- Create contextual marketing visuals for ads, newsletters, or product pages.
- Scale visual content creation without manual design work.

**How It Works**

1. **Trigger**: The workflow can be started manually (via "Test workflow") or on a schedule (e.g., every 5 minutes) using the "Schedule Trigger" node.
2. **Data fetch**: The "Get new image" node retrieves a row from a Google Sheet where the RESULT column is empty, extracting the prompt, image URL, output format, and aspect ratio.
3. **Image generation**: The "Create Image" node sends a request to the FLUX Kontext API (fal.run) with those parameters to generate a new AI-contextualized image (see the sketch at the end of this entry).
4. **Status check**: The workflow waits 60 seconds ("Wait 60 sec." node) before checking the generation status via the "Get status" node. If the status is "COMPLETED", it proceeds; otherwise, it loops back to wait.
5. **Result handling**: Once completed, the "Get Image Url" node fetches the generated image URL, which is then downloaded ("Get Image File"), uploaded to Google Drive ("Upload Image"), and written back to the Google Sheet ("Update result").

**Set Up Steps**

1. **Google Sheet setup**: Create a Google Sheet with columns PROMPT, IMAGE URL, ASPECT RATIO, OUTPUT FORMAT, and RESULT (leave RESULT empty). Link the sheet in the "Get new image" and "Update result" nodes.
2. **API key configuration**: Sign up at fal.ai to obtain an API key. In the "Create Image" node, set Header Auth with Name: Authorization and Value: Key YOURAPIKEY.
3. **Google Drive setup**: Specify the target folder ID in the "Upload Image" node where generated images will be saved.
4. **Schedule trigger (optional)**: Adjust the "Schedule Trigger" node to run the workflow at desired intervals (e.g., every 5 minutes).
5. **Test execution**: Run the workflow manually via "Test workflow" to verify all steps work.

Once configured, the workflow will automatically process pending prompts, generate images, and update the Google Sheet with results.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
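A sketch of the "Create Image" request, shown as a plain JavaScript fetch call for clarity (the template itself uses an HTTP Request node). The model path and body field names are assumptions; check fal.ai's FLUX Kontext documentation for the exact endpoint available to your account:

```javascript
// Illustrative row data pulled from the Google Sheet columns.
const row = { PROMPT: '...', IMAGE_URL: '...', ASPECT_RATIO: '1:1', OUTPUT_FORMAT: 'jpeg' };

const res = await fetch('https://queue.fal.run/fal-ai/flux-pro/kontext', {
  method: 'POST',
  headers: {
    Authorization: 'Key YOURAPIKEY', // same Header Auth as the "Create Image" node
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    prompt: row.PROMPT,
    image_url: row.IMAGE_URL,
    aspect_ratio: row.ASPECT_RATIO,
    output_format: row.OUTPUT_FORMAT,
  }),
});

// The queue API returns a request_id; the workflow then polls
// .../requests/{request_id}/status until it reports COMPLETED.
const { request_id } = await res.json();
```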
by Dr. Firas
**Google Maps Data Extraction Workflow for Lead Generation**

This workflow is ideal for sales teams, marketers, entrepreneurs, and researchers looking to efficiently gather detailed business information from Google Maps for:

- Lead generation
- Market analysis
- Competitive research

**Who Is This Workflow For?**

- **Sales professionals** aiming to build targeted contact lists
- **Marketers** looking for localized business data
- **Researchers** needing organized, comprehensive business information

**Problem This Workflow Solves**

Manually gathering business contact details from Google Maps is tedious, error-prone, and time-consuming. This workflow automates data extraction to increase efficiency, accuracy, and productivity.

**What This Workflow Does**

- Automates extraction of business data (name, address, phone, email, website) from Google Maps
- Crawls and extracts additional website content
- Integrates OpenAI to enhance data processing
- Stores structured results in Google Sheets for easy access and analysis
- Uses the Google Search API to fill in missing information (see the sketch at the end of this entry)

**Setup**

1. Import the provided n8n workflow JSON into your n8n instance.
2. Set your OpenAI and Google Sheets API credentials.
3. Provide your Google Maps Scraper and Website Content Crawler API keys.
4. Ensure SerpAPI is configured to enhance data completeness.

**Customizing This Workflow to Your Needs**

- Adjust scraping parameters: location, business category, country code.
- Customize the Google Sheets output format to fit your existing data structure.
- Integrate additional AI processing steps or APIs for richer data enrichment.

**Final Notes**

This structured approach ensures accurate and compliant data extraction from Google Maps, streamlined lead generation, and actionable, well-organized data ready for business use.

📄 Documentation: Notion Guide
🎥 Watch the full tutorial here: YouTube Demo
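The "fill in missing information" step can be a simple gate before the SerpAPI branch. A minimal sketch of a Code node that flags records still lacking an email or website (the field names are illustrative, not the template's exact schema):

```javascript
// n8n Code node: route records with missing contact fields to enrichment.
return $input.all().map((item) => {
  const biz = item.json;
  const needsEnrichment = !biz.email || !biz.website; // assumed field names
  return { json: { ...biz, needsEnrichment } };
});
```

An IF node can then send `needsEnrichment` items to the Google Search / SerpAPI lookup while complete records go straight to Google Sheets.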
by German Velibekov
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Transform email overload into actionable insights with this automated daily digest workflow that intelligently summarizes categorized emails using AI.

**Who's it for**

This workflow is perfect for busy professionals, content creators, and newsletter subscribers who need to stay informed without spending hours reading through multiple emails. Whether you're tracking industry news, monitoring competitor updates, or managing content subscriptions, this automation helps you extract key insights efficiently.

**How it works**

The workflow runs automatically each morning at 9 AM, fetching emails from a specific Gmail label received in the last 24 hours. Each email is processed through OpenAI's language model using LangChain to create concise, readable summaries that preserve important links and formatting. All summaries are then combined into a single, well-formatted digest email and sent to your inbox, replacing dozens of individual emails with one comprehensive overview.

**How to set up**

1. Create a Gmail label for emails you want summarized (e.g., "Tech News", "Industry Updates").
2. Configure credentials for both Gmail OAuth2 and OpenAI API in their respective nodes.
3. Update the Gmail label ID in the "Get mails (last 24h)" node with your specific label.
4. Set your email address in the "Send Digested mail" node.
5. Adjust the schedule in the Schedule Trigger if you prefer a different time than 9 AM.
6. Test the workflow with a few labeled emails to ensure proper formatting.

**Requirements**

- Gmail account with OAuth2 authentication configured
- OpenAI API account and valid API key
- At least one Gmail label set up for email categorization
- Basic understanding of n8n workflow execution

**How to customize the workflow**

- **Change summarization style**: Modify the prompt in the "Summarization Mails" node to adjust the tone, length, or format of summaries. You can make summaries more technical, casual, or focused on specific aspects like action items.
- **Adjust time range**: Change the receivedAfter parameter in the Gmail node to fetch emails from different time periods (last 2 days, last week, etc.).
- **Multiple labels**: Duplicate the Gmail retrieval section to process multiple labels and combine them into categories within your digest.
- **Add filtering**: Insert additional conditions to filter emails by sender, subject keywords, or other criteria before summarization.
- **Custom formatting**: Modify the "Combine Subject and Body" code node to change the HTML structure, add styling, or include additional metadata like email timestamps or priority indicators (a sketch of this node appears at the end of this entry).
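A minimal sketch of what the "Combine Subject and Body" code node might look like. The `subject` and `summary` field names are assumptions; align them with the output of your summarization node:

```javascript
// n8n Code node: merge per-email summaries into one HTML digest.
const sections = $input.all().map(({ json }) => `
  <h3>${json.subject}</h3>
  <div>${json.summary}</div>
`);

const html = `
  <h2>Daily Email Digest (${new Date().toDateString()})</h2>
  ${sections.join('<hr>')}
`;

return [{ json: { html } }];
```

The `html` field then feeds the "Send Digested mail" Gmail node as the message body.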
by Jon Doran
**Summary**

Engage multiple, uniquely configured AI agents (using different models via OpenRouter) in a single conversation. Trigger specific agents with @mentions, or let them all respond. Easily scalable by editing simple JSON settings.

**Overview**

This workflow is for users who want to experiment with or utilize multiple AI agents with distinct personalities, instructions, and underlying models within a single chat interface, without complex setup. It solves the problem of managing and interacting with diverse AI assistants simultaneously for tasks like brainstorming, comparative analysis, or role-playing scenarios.

It enables dynamic conversations with multiple AI assistants within a single chat interface. You can:

- Define multiple unique AI agents.
- Configure each agent with its own name, system instructions, and LLM model (via OpenRouter).
- Interact with specific agents using @AgentName mentions.
- Have all agents respond (in random order) if no specific agents are mentioned.
- Maintain conversation history across multiple turns.

It's designed for flexibility and scalability, allowing you to easily add or modify agents without complex workflow restructuring.

**Key Features**

- **Multi-Agent Interaction:** Chat with several distinct AI personalities at once.
- **Individual Agent Configuration:** Customize name, system prompt, and LLM for each agent.
- **OpenRouter Integration:** Access a wide variety of LLMs compatible with OpenRouter.
- **Mention-Based Triggering:** Direct messages to specific agents using @AgentName.
- **All-Agent Fallback:** Engages all defined agents in random order if no mentions are used.
- **Scalable Setup:** Agent configuration is centralized in a single Code node (as JSON).
- **Conversation Memory:** Remembers previous interactions within the session.

**How to Set Up**

1. **Configure settings (Code nodes):**
   - Open the Define Global Settings Code node and edit the JSON to set user details (name, location, notes) and any system message instructions all agents should follow.
   - Open the Define Agent Settings Code node and edit the JSON to define your agents, adding or removing agent objects as needed. For each agent, specify:
     - "name": the unique name for the agent (used for @mentions).
     - "model": the OpenRouter model identifier (e.g., "openai/gpt-4o", "anthropic/claude-3.7-sonnet").
     - "systemMessage": specific instructions or a persona for this agent.
2. **Add OpenRouter credentials:** Locate the AI Agent node and click the OpenRouter Chat Model node connected below it via the Language Model input. In the "Credential for OpenRouter API" field, select or create your OpenRouter API credentials.

**How to Use**

- Start a conversation using the Chat Trigger input.
- To address specific agents, include @AgentName in your message. Agents respond sequentially in the order they are mentioned. Example: "@Gemma @Claude, please continue the count: 1" triggers Gemma first, then Claude.
- If your message contains no @mentions, all agents defined in Define Agent Settings respond in a randomized order. Example: "What are your thoughts on the future of AI?" triggers Chad, Claude, and Gemma (based on the default settings) in a random sequence.
- The workflow collects responses from all triggered agents and returns them as a single, formatted message.

**How It Works (Technical Details)**

1. **Settings nodes:** Define Global Settings and Define Agent Settings load your configurations.
2. **Mention extraction:** The Extract mentions Code node parses the user's input (chatInput) from the When chat message received trigger,
looking for @AgentName patterns matching the names defined in Define Agent Settings (see the sketch at the end of this entry).
3. **Agent selection:** If mentions are found, it creates a list of the corresponding agent configurations in the order they were mentioned. If no mentions are found, it creates a list of all defined agent configurations and shuffles them randomly.
4. **Looping:** The Loop Over Items node iterates through the selected agent list.
5. **Dynamic agent execution:** Inside the loop:
   - An If node (First loop?) checks if it's the first agent responding.
   - If yes (true path -> Set user message as input), it passes the original user message to the Agent.
   - If no (false path -> Set last Assistant message as input), it passes the previous agent's formatted output (lastAssistantMessage) to the next agent, creating a sequential chain.
   - The AI Agent node receives the input message. Its System Message and the Model in the connected OpenRouter Chat Model node are dynamically populated using expressions referencing the current agent's data from the loop ({{ $('Loop Over Items').item.json.* }}).
   - The Simple Memory node provides conversation history to the AI Agent.
   - The agent's response is formatted (e.g., AgentName:\n\nResponse) in the Set lastAssistantMessage node.
6. **Response aggregation:** After the loop finishes, the Combine and format responses Code node gathers all the lastAssistantMessage outputs and joins them into a single text block, separated by horizontal rules (---), ready to be sent back to the user.

**Benefits**

- **Scalability & flexibility:** Instead of complex branching logic, adding, removing, or modifying agents only requires editing simple JSON in the Define Agent Settings node, making setup and maintenance significantly easier, especially when managing multiple assistants.
- **Model choice:** Use the best model for each agent's specific task or persona via OpenRouter.
- **Centralized configuration:** Keeps agent setup tidy and manageable.

**Limitations**

- **Sequential responses:** Agents respond one after another based on mention order (or randomly), not in parallel.
- **No direct agent-to-agent interaction (within a turn):** Agents cannot directly call or reply to each other during the processing of a single user message. Agent B sees Agent A's response only because the workflow passes it as input in the next loop iteration.
- **Delayed output:** The user receives the combined response only after all triggered agents have completed their generation.
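A minimal sketch of the two Code nodes described above, combining the agent definitions and the mention extraction. The agent objects mirror the fields named in the setup section; the model ids and extraction logic are illustrative, not the template's exact code:

```javascript
// "Define Agent Settings"-style config: one object per agent.
const agents = [
  { name: 'Chad',   model: 'openai/gpt-4o',               systemMessage: 'You are blunt and practical.' },
  { name: 'Claude', model: 'anthropic/claude-3.7-sonnet', systemMessage: 'You are thoughtful and thorough.' },
  { name: 'Gemma',  model: 'google/gemma-2-27b-it',       systemMessage: 'You are concise and curious.' },
];

// "Extract mentions"-style logic: pick agents in mention order,
// or shuffle all agents when no @mentions are present.
const input = $('When chat message received').first().json.chatInput;
const mentioned = agents.filter((a) => input.includes(`@${a.name}`));
const selected = mentioned.length
  ? mentioned.sort((a, b) => input.indexOf(`@${a.name}`) - input.indexOf(`@${b.name}`))
  : agents.sort(() => Math.random() - 0.5); // simple random shuffle

// One item per agent, consumed by the Loop Over Items node.
return selected.map((agent) => ({ json: agent }));
```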
by A Z
Automatically scrape X (Twitter) for posts hiring specific roles (e.g., automation engineers, video editors, graphic designers), filter for true hiring intent with AI, deduplicate in Google Sheets, and alert via Telegram.

**What it does**

- Pulls recent X/Twitter posts for multiple role keywords via Apify.
- Normalizes each post (text, author, links, location).
- Uses an AI Agent to keep only posts where the author is hiring (not self-promotion).
- Checks Google Sheets for duplicates by URL before saving.
- Writes qualified posts to a sheet and sends a Telegram notification.

We use n8n automation roles as the example here.

**How it works (step by step)**

1. **Schedule Trigger**: Runs on an interval (currently every 12 hours).
2. **Scrape X/Twitter**: The Apify tweet-scraper fetches up to 50 of the latest posts for keywords like: n8n developer, looking for n8n, n8n expert, hire AI automation, looking for AI automation.
3. **Normalize fields**: A Set node maps to: url, text, author.userName, author.url, author.location.
4. **AI filter & dedupe check**: Accepts only clear hiring posts for n8n/AI automation roles (rejecting self-promotion), and queries Google Sheets to see whether the url already exists; duplicates are dropped.
5. **Gate**: An IF node passes only non-empty AI outputs.
6. **Parse JSON safely**: A Code node extracts and validates JSON from the AI output (see the sketch at the end of this entry).
7. **Save to Google Sheets**: Appends or updates a row (matching on url).
8. **Telegram alert**: Sends a message with the tweet URL, author, location, and text.

**Who it's for**

Freelancers, agencies, and job seekers who want a steady radar of real hiring posts for their target roles.

**Customization ideas**

- Swap keywords to track other roles (video editors, designers, copywriters, etc.).
- Add Slack/Discord notifications.
- Extend the AI rules (e.g., different geographies or role scopes).
- Treat the sheet as a mini-CRM (status, outreach date, notes).
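The "parse JSON safely" step guards against models wrapping JSON in prose or code fences. A minimal sketch of such a Code node (the expected keys are assumptions based on the fields saved to the sheet):

```javascript
// n8n Code node: extract and validate the JSON object from the AI output.
const raw = $input.first().json.output || '';

// Grab the first {...} block, tolerating markdown fences or commentary.
const match = raw.match(/\{[\s\S]*\}/);
if (!match) return []; // nothing parseable: drop the item

let parsed;
try {
  parsed = JSON.parse(match[0]);
} catch (err) {
  return []; // malformed JSON: drop the item
}

// Require the fields downstream nodes depend on (assumed names).
if (!parsed.url || !parsed.text) return [];

return [{ json: parsed }];
```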
by Zacharia Kimotho
**How it works**

This workflow gets Search Console results data and exports it to Google Sheets. This makes it easier to visualize and do other SEO-related tasks and activities without having to log into Search Console.

**Setup and use**

- Set your desired schedule.
- Enter your desired domain.
- Connect your Google Sheets account, or make a copy of this sheet.

**Detailed Setup**

**Inputs and outputs:**

- Input: API response from Google Search Console containing keyword, page, and date data.
- Output: Entries written to Google Sheets containing keyword data, clicks, impressions, CTR, and positions.

**Prerequisites:**

- An n8n instance set up and running.
- An active Google account with access to Google Search Console and Google Sheets.
- Google OAuth 2.0 credentials for API access.

**Step-by-step setup:**

1. Open n8n and create a new workflow.
2. Add the nodes as described in the JSON.
3. Configure the Google OAuth2 credentials in n8n to enable API access.
4. Set your domain in the Set your domain node.
5. Customize the Google Sheets document URLs to your personal sheets.
6. Adjust the schedule in the Schedule Trigger node as per your requirements.
7. Save the workflow.

**Configuration options:**

- You can customize the date ranges in the body of the HttpRequest nodes (see the sketch at the end of this entry).
- Adjust fields in the Edit Fields nodes based on different data requirements.

**Use case examples:**

- Tracking website performance over time using Search Console metrics.
- Ideal for digital marketers, SEO specialists, and web analytics professionals.
- Compiling performance reports for stakeholders or team reviews.

**Running and troubleshooting:**

- **Running the workflow:** Trigger the workflow manually, or wait for the schedule to run it automatically.
- **Monitoring execution:** Check the execution logs in n8n's dashboard to ensure all nodes complete successfully.
- **Common issues:** Invalid OAuth credentials (ensure credentials are set up correctly); incorrect Google Sheets URLs (double-check document links and permissions); scheduling conflicts (make sure the schedule does not overlap with other workflows).
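For reference, the Search Console Search Analytics query endpoint takes a JSON body with a date range and dimensions. A minimal sketch of a Code node computing such a body (the 28-day window and row limit are examples; the template's actual date logic may differ):

```javascript
// n8n Code node: build the request body for the HttpRequest node.
const end = new Date();
const start = new Date(end.getTime() - 28 * 24 * 60 * 60 * 1000);
const fmt = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD

const body = {
  startDate: fmt(start),
  endDate: fmt(end),
  dimensions: ['query', 'page', 'date'], // keyword, page, and date data
  rowLimit: 1000,
};

// POST to:
// https://www.googleapis.com/webmasters/v3/sites/{siteUrl}/searchAnalytics/query
return [{ json: body }];
```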
by David Ashby
Complete MCP server exposing 2 Analytics API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Analytics API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Analytics API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 Rate_Limit (1 endpoint)
• GET /rate_limit/: Retrieve Application Rate Limits

🔧 User_Rate_Limit (1 endpoint)
• GET /user_rate_limit/: Retrieve User Rate Limits

(A sketch for working with these responses appears at the end of this entry.)

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Analytics API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
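If you want the agent to see a compact quota summary rather than the raw payload, a small Code node can flatten the response. The nested shape below is an assumption about the rate-limit responses; verify it against an actual GET /rate_limit/ call before relying on it:

```javascript
// n8n Code node: flatten an assumed rate-limit response into per-resource rows.
const { rateLimits = [] } = $input.first().json;

const rows = rateLimits.flatMap((api) =>
  (api.resources || []).flatMap((res) =>
    (res.rates || []).map((rate) => ({
      json: {
        apiName: api.apiName,
        resource: res.name,
        limit: rate.limit,
        remaining: rate.remaining,
        reset: rate.reset,
      },
    }))
  )
);

return rows;
```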
by Murtaja Ziad
An n8n workflow designed to shorten URLs using the Dub.co API.

**How it works:**

- It shortens a URL using the Dub.co API, with the ability to use custom domains and projects.
- It updates the existing shortened URL if the slug has already been used.

Estimated time: around 15 minutes.

**Requirements:**

- A Dub.co account.

**Configuration:**

Configure the "API Auth" node to add your Dub.co API key, project slug, and the long URL. There are some extras you can configure too: click the "API Auth" node and fill in the fields (see the sketch at the end of this entry).

Detailed instructions: sticky notes within the workflow provide extensive setup information and guidance.

Keywords: n8n workflow, dub.co, dub.sh, url shortener, short urls, short links
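For orientation, here is a sketch of the kind of link-creation call this workflow makes, shown as a plain fetch call. The endpoint and parameter names follow the workflow's fields (API key, project slug, long URL) but are assumptions; check Dub's current API documentation for the exact shape:

```javascript
// Hypothetical Dub.co link-creation request (verify against Dub's API docs).
const res = await fetch('https://api.dub.co/links?projectSlug=YOUR_PROJECT', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer YOUR_DUB_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://example.com/very/long/url', // the long URL to shorten
    domain: 'yourdomain.link',                // optional custom domain
    key: 'my-slug',                           // desired slug
  }),
});
const link = await res.json();
```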