by NanaB
**What it does**

This n8n workflow creates a multi-modal AI Memory Assistant designed to capture, understand, and intelligently recall your personal or business information from diverse sources. It automatically processes voice notes, images, documents (such as PDFs), and text messages sent via Telegram. Leveraging GPT-4o for advanced AI processing (including visual analysis, document parsing, transcription, and semantic understanding) and MongoDB Atlas Vector Search for persistent, fast recall, this assistant acts as an external brain. It also integrates with Gmail, allowing the AI to send and search emails as part of its memory and response capabilities. This end-to-end blueprint provides a powerful starting point for personal knowledge management and intelligent automation.

**How it works**

1. **Multi-Modal Input Ingestion** - Your memories begin when you send a voice note, an image, a document (e.g., a PDF), or a text message to your Telegram bot. The workflow immediately identifies the input type.
2. **Advanced AI Content Processing** - Each input type undergoes specialized AI processing: voice notes are transcribed into text using OpenAI Whisper; images are visually analyzed by GPT-4o Vision, generating detailed textual descriptions; documents (PDFs) are processed for text extraction, with GPT-4o handling robust parsing of content and structure; unsupported document types are gracefully handled with a user notification; text messages are forwarded directly for further processing. This phase transforms all disparate input formats into a unified, rich textual representation.
3. **Intelligent Memory Chunking & Vectorization** - The processed content (transcriptions, image descriptions, extracted document text, or direct text) is fed back into GPT-4o, which intelligently chunks the information into smaller, semantically coherent pieces, extracts relevant keywords and tags, and generates concise summaries. Each enhanced memory chunk is then converted into a high-dimensional vector embedding using OpenAI Embeddings.
4. **Persistent Storage & Recall (MongoDB Atlas Vector Search)** - The vector embeddings, along with their original content, metadata, and tags, are stored in your MongoDB Atlas cluster, which is configured with Atlas Vector Search. This allows highly efficient, semantically relevant retrieval of memories based on user queries, forming the core of your "smart recall" system.
5. **AI Agent & External Tools (Gmail Integration)** - When you ask a question, the AI Agent (powered by GPT-4o) acts as the central intelligence. It uses the MongoDB Chat Memory to maintain conversational context and, crucially, queries the MongoDB Atlas Vector Search store to retrieve relevant past memories. The agent also has access to Gmail tools, enabling it to send emails on your behalf or search your past emails for information or context that might not be in your personal memory store.
6. **Smart Response Generation & Delivery** - Finally, using the retrieved context from MongoDB and the conversational history, GPT-4o synthesizes a concise, accurate, and contextually aware answer, which is delivered back to you via your Telegram bot.
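To make the recall in steps 4-5 concrete, here is a minimal sketch of a `$vectorSearch` aggregation stage run against Atlas. The index name `memory_index`, the `embedding` field path, and the truncated query vector are illustrative assumptions (real OpenAI embeddings have, e.g., 1536 dimensions); if you use the legacy `knnVector` index mapping described in the setup section below, the equivalent query instead uses the `$search` stage with the `knnBeta` operator:

```json
{
  "$vectorSearch": {
    "index": "memory_index",
    "path": "embedding",
    "queryVector": [0.0123, -0.0456, 0.0789],
    "numCandidates": 100,
    "limit": 5
  }
}
```

The stage returns the five memory chunks whose embeddings are closest to the embedded user query, which the agent then passes to GPT-4o as context.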
**How to set it up (~20 minutes)**

Getting this workflow running requires a few key configurations and external service dependencies.

1. **Telegram Bot Setup:** Use BotFather in Telegram to create a new bot and obtain its API Token. In your n8n instance, add a new Telegram API credential, give it a clear name (e.g., "My AI Memory Bot"), and paste your API Token.
2. **OpenAI API Key Setup:** Log in to your OpenAI account and generate a new API key. In n8n, create a new OpenAI API credential, name it appropriately (e.g., "My OpenAI Key for GPT-4o"), and paste your API key. This credential is used by the OpenAI Chat Model (GPT-4o for processing, chunking, and RAG), Analyze Image, and Transcribe Audio nodes.
3. **MongoDB Atlas Setup:** If you don't have one, create a free-tier or paid cluster on MongoDB Atlas. Create a database and a collection within your cluster to store your memory chunks and their vector embeddings. Crucially, configure an Atlas Vector Search index on your chosen collection; the index is defined on the field containing your embeddings (e.g., an `embedding` field of type `knnVector`). Refer to the MongoDB Atlas documentation for detailed instructions on creating vector search indexes. In n8n, add a new MongoDB credential with your Atlas connection string (including username, password, and database name) and give it a clear name (e.g., "My Atlas DB"). This credential is used by the MongoDB Chat Memory node and by any custom HTTP requests you use for Atlas Vector Search insertion and querying.
4. **Gmail Account Setup:** Go to the Google Cloud Console, enable the Gmail API for your project, and configure your OAuth consent screen. Create an OAuth 2.0 Client ID for a Desktop app (or Web application, depending on your n8n setup and redirect URI) and download the JSON credentials. In n8n, add a new Gmail OAuth2 API credential, configure it with your Google Client ID and Client Secret, and authenticate with your Gmail account, ensuring it has sufficient permissions to send and search emails.
5. **External API Services:** If your Extract from File node relies on an external service for robust PDF/DOCX text extraction, ensure you have an API key and that the service is operational. The current flow uses ConvertAPI; add the corresponding credential (e.g., ConvertAPI) in n8n.
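For reference, an Atlas Vector Search index definition for the MongoDB step above might look like the following. This is a minimal sketch: the `embedding` field name matches the example in step 3, and 1536 dimensions assumes an OpenAI embedding model such as `text-embedding-ada-002`; adjust both to your setup. Newer Atlas deployments also offer a dedicated `vectorSearch` index type, which pairs with the `$vectorSearch` stage shown earlier.

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      }
    }
  }
}
```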
**How you could enhance it**

This workflow offers numerous avenues for advanced customization and expansion:

- **Expanded Document Type Support:** Extend the document-processing section to handle a wider range of document types beyond PDFs (e.g., .docx, .xlsx, .pptx, Markdown, CSV) by integrating additional conversion APIs or specialized parsing libraries (for example, a custom Code node or dedicated third-party services such as Apache Tika or Unstructured.io).
- **Fine-Tuned Memory Chunks & Metadata:** Implement more sophisticated chunking strategies for very long documents, perhaps based on semantic breaks or document structure (headings, sections), to improve recall accuracy. Add more metadata fields (e.g., original author, document date, custom categories) to your MongoDB entries for richer filtering and context.
- **Advanced AI Prompting:** Allow users to dynamically set parameters for their memory inputs (e.g., "This is a high-priority meeting note," "This image contains sensitive information") that influence how GPT-4o processes, tags, and stores the memory, or how it is retrieved later.
- **n8n Tool Expansion for Proactive Actions:** Significantly expand the AI Agent's capabilities by giving it access to a wider range of n8n tools, moving beyond information retrieval and email.
- **External Data Source Integration (APIs):** Expand the AI Agent's tools to query other external APIs (e.g., weather, stock prices, news, CRM systems) so it can provide real-time information relevant to your memories.

**Getting Assistance & More Resources**

Need assistance setting this up, adapting it to a unique use case, or exploring more advanced customizations? Don't hesitate to reach out! You can contact me directly at nanabrownsnr@gmail.com. Also, feel free to check out my YouTube channel, where I discuss other n8n templates as well as innovation and automation solutions.
by Pablo
**What this template does**

The Ultimate Scraper for n8n uses Selenium and AI to retrieve any information displayed on a webpage. You can also use session cookies to log in to the targeted webpage for more advanced scraping needs.

⚠️ Important: This project requires specific setup instructions. Please follow the guidelines provided in the GitHub repository: https://github.com/Touxan/n8n-ultimate-scraper/tree/main. The workflow version on n8n and the GitHub project may differ; however, the most up-to-date version will always be the one available in the GitHub repository.

**How to use**

Deploy the project with all the requirements and call your webhook. Example request:

```bash
curl -X POST http://localhost:5678/webhook-test/yourwebhookid \
  -H "Content-Type: application/json" \
  -d '{
    "subject": "Hugging Face",
    "Url": "github.com",
    "Target data": [
      { "DataName": "Followers", "description": "The number of followers of the GitHub page" },
      { "DataName": "Total Stars", "description": "The total number of stars on the different repos" }
    ],
    "cookie": []
  }'
```

Or, to simply scrape a URL:

```bash
curl -X POST http://localhost:5678/webhook-test/67d77918-2d5b-48c1-ae73-2004b32125f0 \
  -H "Content-Type: application/json" \
  -d '{
    "Target Url": "https://github.com",
    "Target data": [
      { "DataName": "Followers", "description": "The number of followers of the GitHub page" },
      { "DataName": "Total Stars", "description": "The total number of stars on the different repos" }
    ],
    "cookies": []
  }'
```
by PUQcloud
**Setting up the n8n workflow**

**Overview**

The Docker n8n WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

**Prerequisites**

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at the n8n official site.

**Installation Steps**

Install the required workflow on n8n. You have two options:

- Option 1: Use the latest version from the n8n marketplace. The latest workflow templates for our modules are available on the official n8n marketplace; visit our profile (PUQcloud on n8n) to access all available templates.
- Option 2: Manual installation. Each module version comes with a workflow template file, which you import manually into your n8n server.

**n8n Workflow API Backend Setup for WHMCS/WISECP**

Configure the API webhook and SSH access:

- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify the template parameters. In the Parameters block of the template, update the following settings:

- `server_domain`: must match the domain of the WHMCS/WISECP Docker server.
- `clients_dir`: directory where user data related to Docker and disks will be stored.
- `mount_dir`: default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters: `screen_left`, `screen_right`.

**Deploy-docker-compose**

In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios: when the service is created, when the service is unlocked, and when the service is updated.

**nginx**

In the nginx element, you can modify the configuration parameters of the web-interface proxy server. The `main` section allows you to add custom parameters to the `server` block of the proxy server configuration file. The `main_location` section contains settings added to the `location /` block, where you can define custom headers and other parameters specific to the root location.

**Bash Scripts**

Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
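As an illustration of the pattern these scripts follow, a script run over SSH can emit a JSON response that n8n parses downstream. The container-status check below is a hypothetical example in that style, not one of the module's actual scripts:

```bash
#!/bin/bash
# Hypothetical status-check script in the style the workflow generates.
# $1 is the container name passed in by n8n over SSH.
CONTAINER="$1"

if docker inspect "$CONTAINER" >/dev/null 2>&1; then
  STATE=$(docker inspect -f '{{.State.Status}}' "$CONTAINER")
  # Return machine-readable JSON for the workflow to parse
  echo "{\"status\":\"success\",\"container\":\"$CONTAINER\",\"state\":\"$STATE\"}"
else
  echo "{\"status\":\"error\",\"message\":\"container $CONTAINER not found\"}"
fi
```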
by Budi SJ
**Automated Financial Reporting Using Google Vision OCR, Telegram & Google Sheets**

This workflow automates the process of recording financial transactions from photos of receipts or shopping receipts. Users simply send an image of the receipt via Telegram. The image is processed using the Google Vision API to detect text, which is then extracted and structured by an LLM via OpenRouter. The final result is saved to Google Sheets and also displayed to the user via the Telegram bot.

**Google Sheets Template**

Create a Google Sheet using this template: Financial Reporting

**Key Features**

- The workflow starts when a user sends a photo of a receipt to the Telegram bot.
- The image is converted to text using the Google Vision API's OCR.
- Data processing with an LLM (OpenRouter) identifies and structures transaction elements such as: date, vendor name and address, receipt/invoice number, item list (product name, quantity, unit price, total), and transaction category.
- Cleaned and structured data is automatically recorded to Google Sheets, one row per item.
- The system also sends a summary of the recorded results in an easy-to-read text format.
- Users can also send text messages to the bot to query stored transaction data, which is answered by a Google Sheets-based AI Agent.

**Requirements**

- Active Telegram bot + API token
- Google Vision API key
- OpenRouter account + API key
- Google Sheets connected to n8n

**Setup Instructions**

- Replace all API keys and tokens with your own in the relevant nodes:
  - Google Vision API key: set in the 'Set Vision API' node.
  - Telegram Bot token: set in the 'Set Telegram Token' node and all Telegram nodes.
  - OpenRouter API key: set in all OpenRouter nodes.
- Google Sheets: connect your own Google Sheets credential. Use the provided Google Sheets template or your own.
- Activate the workflow after configuration.
- (Optional) Review the sticky notes for step-by-step explanations.
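For context, the OCR step boils down to a single Vision API call of roughly this shape. This is a sketch assuming API-key authentication and a base64-encoded image; the placeholders are illustrative, not taken from the workflow itself:

```bash
curl -X POST "https://vision.googleapis.com/v1/images:annotate?key=YOUR_VISION_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "requests": [{
      "image": { "content": "BASE64_ENCODED_RECEIPT_IMAGE" },
      "features": [{ "type": "TEXT_DETECTION" }]
    }]
  }'
```

The response's `textAnnotations` field contains the detected receipt text, which the OpenRouter LLM then structures into rows.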
by Incrementors
**LinkedIn & Indeed Job Scraper with Bright Data & Google Sheets Export**

**Overview**

This n8n workflow automates the process of scraping job listings from both LinkedIn and Indeed simultaneously, combining the results, and exporting the data to Google Sheets for comprehensive job-market analysis. It integrates with Bright Data for professional web scraping and Google Sheets for data storage, and provides intelligent status monitoring with retry mechanisms.

**Workflow Components**

1. **Trigger Input Form**
   - Type: Form Trigger
   - Purpose: Initiates the workflow with user-defined job search criteria
   - Input fields: City (required), Job Title (required), Country (required), Job Type (optional dropdown: Full-Time, Part-Time, Remote, WFH, Contract, Internship, Freelance)
   - Function: Captures user requirements to start the dual-platform job scraping process
2. **Format Input for APIs**
   - Type: Code Node (JavaScript)
   - Purpose: Prepares and formats user input for both LinkedIn and Indeed APIs
   - Processing: Standardizes location and job-title formats, creates API-specific input structures, generates custom output field configurations
   - Function: Ensures compatibility with both Bright Data datasets
3. **Start Indeed Scraping**
   - Type: HTTP Request (POST)
   - Purpose: Initiates Indeed job scraping via Bright Data
   - Endpoint: https://api.brightdata.com/datasets/v3/trigger
   - Parameters: Dataset ID: gd_lpfll7v5hcqtkxl6l; Include errors: true; Type: discover_new; Discover by: keyword; Limit per input: 2
   - Custom output fields: jobid, company_name, job_title, description_text, location, salary_formatted, company_rating, apply_link, url, date_posted, benefits
4. **Start LinkedIn Scraping**
   - Type: HTTP Request (POST)
   - Purpose: Initiates LinkedIn job scraping via Bright Data (parallel execution)
   - Endpoint: https://api.brightdata.com/datasets/v3/trigger
   - Parameters: Dataset ID: gd_l4dx9j9sscpvs7no2; Include errors: true; Type: discover_new; Discover by: keyword; Limit per input: 2
   - Custom output fields: job_posting_id, job_title, company_name, job_location, job_summary, job_employment_type, job_base_pay_range, apply_link, url, job_posted_date, company_logo
5. **Check Indeed Status**
   - Type: HTTP Request (GET)
   - Endpoint: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
   - Function: Checks whether Indeed dataset scraping is complete
6. **Check LinkedIn Status**
   - Type: HTTP Request (GET)
   - Endpoint: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
   - Function: Checks whether LinkedIn dataset scraping is complete
7. **Wait Nodes (60 seconds each)**
   - Type: Wait Node
   - Purpose: Implements the polling mechanism; pauses the workflow for 1 minute before rechecking scraping status to prevent API overload
8. **Verify Indeed Completion**
   - Type: IF Condition
   - Condition: status === "ready"
   - Logic: true proceeds to data validation; false loops back to the status check via the wait node
9. **Verify LinkedIn Completion**
   - Type: IF Condition
   - Condition: status === "ready"
   - Logic: true proceeds to data validation; false loops back to the status check via the wait node
10. **Validate Indeed Data**
    - Type: IF Condition
    - Condition: records !== 0
    - Logic: true proceeds to fetch Indeed data; false skips Indeed data retrieval
11. **Validate LinkedIn Data**
    - Type: IF Condition
    - Condition: records !== 0
    - Logic: true proceeds to fetch LinkedIn data; false skips LinkedIn data retrieval
12. **Fetch Indeed Data**
    - Type: HTTP Request (GET)
    - Endpoint: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
    - Format: JSON; downloads the completed Indeed job data
13. **Fetch LinkedIn Data**
    - Type: HTTP Request (GET)
    - Endpoint: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
    - Format: JSON; downloads the completed LinkedIn job data
14. **Merge Results**
    - Type: Merge Node
    - Mode: Merge all inputs; creates a unified dataset from both platforms
15. **Save to Google Sheet**
    - Type: Google Sheets Node
    - Operation: Append rows to the "Compare" sheet in the specified Google Sheet document
    - Data mapping: Job Title, Company Name, Location, Job Detail (description), Apply Link, Salary, Job Type, Discovery Input

**Workflow Flow**

```
Input Form → Format APIs → [Indeed Trigger] + [LinkedIn Trigger]
                                ↓                    ↓
                          Check Status          Check Status
                                ↓                    ↓
                            Wait 60s             Wait 60s
                                ↓                    ↓
                          Verify Ready          Verify Ready
                                ↓                    ↓
                         Validate Data         Validate Data
                                ↓                    ↓
                          Fetch Indeed        Fetch LinkedIn
                                ↓                    ↓
                                └── Merge Results ──┘
                                         ↓
                               Save to Google Sheet
```
**Configuration Requirements**

API keys & credentials:

- **Bright Data API key:** required for both LinkedIn and Indeed scraping
- **Google Sheets OAuth2:** for data storage and export access
- **n8n Form webhook:** for user input collection

Setup parameters:

- **Google Sheet ID:** target spreadsheet identifier
- **Sheet name:** "Compare" tab for job data export
- **Form webhook ID:** user input form identifier
- **Dataset IDs:** Indeed: gd_lpfll7v5hcqtkxl6l; LinkedIn: gd_l4dx9j9sscpvs7no2
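For reference, triggering one of these Bright Data datasets outside n8n looks roughly like this. This is a sketch: the Bearer-token header and query parameters follow Bright Data's dataset-trigger API with the parameter values listed above, but the input field names in the body are assumptions based on the form fields:

```bash
curl -X POST "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lpfll7v5hcqtkxl6l&include_errors=true&type=discover_new&discover_by=keyword&limit_per_input=2" \
  -H "Authorization: Bearer YOUR_BRIGHT_DATA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '[{ "keyword": "software engineer", "location": "San Francisco", "country": "US" }]'
```

The response includes a `snapshot_id`, which the status-check and fetch nodes then poll until the dataset is ready.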
**Key Features**

- **Dual-platform scraping:** simultaneous LinkedIn and Indeed job searches, parallel processing for faster results, comprehensive job-market coverage, platform-specific field extraction
- **Intelligent status monitoring:** real-time scraping progress tracking, automatic retry mechanisms with 60-second intervals, data validation before processing, error handling and timeout management
- **Smart data processing:** unified data format from both platforms, intelligent field mapping and standardization, duplicate detection and removal, rich metadata extraction
- **Google Sheets integration:** automatic data export and storage, organized comparison format, historical job-search tracking, easy sharing and collaboration
- **Form-based interface:** user-friendly job-search form, flexible job-type filtering, multi-country support, real-time workflow triggering

**Use Cases**

- **Personal job search:** comprehensive multi-platform job hunting, automated daily job searches, organized opportunity comparison, application tracking and management
- **Recruitment services:** client job-search automation, market availability assessment, competitive salary analysis, bulk candidate sourcing
- **Market research:** job-market trend analysis, salary benchmarking studies, skills demand assessment, geographic opportunity mapping
- **HR analytics:** competitor hiring intelligence, role requirement analysis, compensation benchmarking, talent market insights

**Technical Notes**

- **Polling interval:** 60-second status checks for both platforms
- **Result limiting:** maximum 2 jobs per input per platform
- **Data format:** JSON with structured field mapping
- **Error handling:** comprehensive error tracking in all API requests
- **Retry logic:** automatic status rechecking until completion
- **Country support:** adaptable domain selection (indeed.com, fr.indeed.com)
- **Form validation:** required fields with optional job-type filtering
- **Merge strategy:** combines all results from both platforms
- **Export format:** standardized Google Sheets columns for easy analysis

**Sample Data Output**

| Field | Description | Example |
|-------|-------------|---------|
| Job Title | Position title | "Senior Software Engineer" |
| Company Name | Hiring organization | "Tech Solutions Inc." |
| Location | Job location | "San Francisco, CA" |
| Job Detail | Full description | "We are seeking a senior developer..." |
| Apply Link | Direct application URL | "https://company.com/careers/123" |
| Salary | Compensation info | "$120,000 - $150,000" |
| Job Type | Employment details | "Full-time, Remote" |

**Setup Instructions**

1. Import workflow: copy the JSON configuration into n8n
2. Configure Bright Data: add API credentials for both datasets
3. Set up Google Sheets: create the target spreadsheet and configure OAuth
4. Update references: replace placeholder IDs with your actual values
5. Test workflow: submit a test form and verify the data export
6. Activate: enable the workflow and share the form URL with users

For any questions or support, please contact info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Lucas Peyrin
**How it works**

This template is a hands-on, practical exam designed to test your understanding of the fundamental JSON data types. It's the perfect way to solidify your knowledge after learning the basics; think of it as the "driver's test" that comes after the "theory lesson". You'll be given a series of tasks, and the workflow will automatically check your answers, providing instant feedback.

The test is broken down into six sequential challenges, each focusing on a core data type:

1. String: writing text values correctly.
2. Number: using integers and decimals.
3. Boolean: working with true and false.
4. Null: representing a non-existent value.
5. Array: creating ordered lists of data.
6. Object: building nested key-value structures.

For each challenge, you'll modify a Set node with the correct JSON syntax. When you execute the workflow, a corresponding IF node validates your input: a green path means you passed and can move to the next challenge; a red path means you need to try again!

**Set up steps**

Setup time: < 1 minute. This workflow is a self-contained test and requires no setup or credentials.

1. Read the instructions on the main sticky note to understand the goal.
2. Start with the first challenge, "Test - String". Activate and modify the node according to the instructions on the purple sticky note next to it.
3. Click "Execute Workflow".
4. If the execution path is green, you've passed! Move on to the next "Test" node in the sequence.
5. If the path is red, read the hint in the error message and try again.
6. Repeat the process until you reach the final success message.

Good luck!
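P.S. If you'd like a cheat sheet while working through the challenges, a single JSON object covering all six types looks like this (the key names are purely illustrative):

```json
{
  "string": "hello world",
  "number": 42.5,
  "boolean": true,
  "nothing": null,
  "array": [1, 2, 3],
  "object": { "nested_key": "nested value" }
}
```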
by Ranjan Dailata
**Notice**

Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**

This n8n-powered automation uses Bright Data's MCP Client to extract real-time data from a price-drop site listing Amazon products, including price changes and related product details. The extracted data is enriched with structured data transformation, content summarization, and sentiment analysis using the Google Gemini LLM. The Amazon Price Drop Intelligence Engine is designed for:

- **Ecommerce analysts** who need timely updates on competitor pricing trends
- **Brand managers** seeking to understand consumer sentiment around pricing
- **Data scientists** building pricing models or enrichment pipelines
- **Affiliate marketers** looking to optimize campaigns based on dynamic pricing
- **AI developers** automating product intelligence pipelines

**What problem is this workflow solving?**

This workflow solves several key pain points:

- **Reliable scraping:** uses Bright Data MCP, a managed crawling platform that handles proxies, captchas, and site-structure changes automatically.
- **Insight generation:** transforms unstructured HTML into structured data and then into human-readable summaries using the Google Gemini LLM.
- **Sentiment context:** goes beyond raw pricing data to reveal how customers feel about a price change, helping businesses and researchers measure consumer reaction.
- **Automated reporting:** aggregates and stores data for easy access and downstream automation (e.g., dashboards, notifications, pricing models).

**What this workflow does**

- **Scrape the price-drop site with Bright Data MCP:** the workflow begins by scraping the targeted price-drop site for Amazon listings using Bright Data's Model Context Protocol (MCP); the scraping target can be configured to suit your needs.
- **Structured data extraction:** once the HTML content is retrieved, Google Gemini parses and structures the product information (title, price, discount, brand, ratings).
- **Summarization & sentiment analysis:** the extracted data is passed through an LLM chain to generate a concise summary of the product and its recent price movement, and to perform sentiment analysis on user reviews and public perception.
- **Store the results:** saved to disk for archiving or bulk processing, and updated in a Google Sheet, making it instantly shareable with your team or integrable into a BI dashboard.

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is highly essential; please read this blog post: model-context-protocol
- A Bright Data account with the setup described in the Setup section below
- A Google Gemini API key (visit Google AI Studio)
- The Bright Data MCP server @brightdata/mcp installed
- The n8n-nodes-mcp community node installed

**Setup**

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP server @brightdata/mcp on your local machine.
3. Sign up at Bright Data. Create a Web Unlocker proxy zone called `mcp_unlocker` in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. In n8n, configure the MCP Client (STDIO) credential to connect to the Bright Data MCP server, as shown below.
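As a sketch of step 5, the STDIO credential fields typically end up looking like this. The field labels are assumptions based on the n8n-nodes-mcp credential UI; verify them against your installed version:

```
Command:      npx
Arguments:    @brightdata/mcp
Environments: API_TOKEN=<your-bright-data-api-token>
```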
Make sure to enter your Bright Data API token in the Environments textbox above as `API_TOKEN=<your-token>`.

**How to customize this workflow to your needs**

- **Target different platforms:** swap Amazon for Walmart, eBay, or any ecommerce source using Bright Data's flexible scraping infrastructure.
- **Enrich with more LLM tasks:** add brand-tone analysis, category classification, or competitive benchmarking using Gemini prompts.
- **Visualize output:** pipe the Google Sheet into Looker Studio, Tableau, or Power BI.
- **Notification integrations:** add Slack, Discord, or email notifications for price-drop alerts.
by Calistus Christian
**What this template does**

Sends you an email (via Gmail) whenever any workflow that references this one fails. The message includes the workflow name/ID, the execution URL, the last node executed, and the error message.

**Why it's useful**

Centralizes error notifications so you notice failures immediately and can jump straight to the failed execution.

**Prerequisites**

- A Gmail account connected through n8n's Gmail node credentials.
- This workflow set as the Error Workflow inside the workflows you want to monitor.

**How it works**

1. The Error Trigger starts this workflow whenever a linked workflow fails.
2. Gmail (Send → Message) composes and sends an email using details from the Error Trigger.

**Notes**

- Error workflows don't need to be activated to work.
- You can't test them by running them manually; errors must occur in an automatically run workflow (cron, webhook, etc.).
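For the Gmail step, a minimal sketch of the expressions the subject and body can use, based on the fields n8n's Error Trigger emits (double-check the paths against a real failed execution on your n8n version):

```
Subject: Workflow failed: {{ $json.workflow.name }}

Workflow:  {{ $json.workflow.name }} (ID: {{ $json.workflow.id }})
Execution: {{ $json.execution.url }}
Last node: {{ $json.execution.lastNodeExecuted }}
Error:     {{ $json.execution.error.message }}
```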
by Gregory
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Overview**

This is a Telegram bot capable of receiving information from the user in the form of text messages, voice messages, images, or documents (e.g., presentations, PDFs, HTML pages), and publishing posts to the user's social platforms. The bot always sends the user a draft of the post for verification before publishing it. The bot saves relevant information to its long-term memory (a vector store), so you don't need to repeat it in every interaction (e.g., who you are, your company, your product). This template supports creating posts on LinkedIn and X.

**Setup Requirements**

To use this template you will need:

- A Google AI Studio API key. Get one here: https://aistudio.google.com/app/apikey
- A Telegram Bot API key. You receive one when you register a new Telegram bot via the @BotFather bot in Telegram.
- A LinkedIn API key. Follow the instructions here to create one: https://docs.n8n.io/integrations/builtin/credentials/linkedin/
- An X API key. Follow the instructions here to create one: https://docs.n8n.io/integrations/builtin/credentials/twitter/

**Step-by-step instructions**

1. Import this template.
2. Create a new Telegram bot or get an API key for an existing one.
3. Configure the Telegram nodes with your Telegram API key.
4. Obtain a Google AI Studio API key and set it in the "Describe document", "Describe audio", and "Google Gemini Chat Model" nodes.
5. Create an API key for LinkedIn.
6. Create an API key for X.
7. Set your LinkedIn key in the "Create post in LinkedIn" node.
8. Set your X key in the "Create X (Twitter) post" node.

Bright-colored notes in the template highlight other information that needs to be set before launching the template.
by Jez
**Summary**

This n8n workflow implements an AI-powered agent that intelligently uses the Brave Search API (via an external MCP service like Smithery) to perform both web and local searches. It understands natural-language queries, selects the appropriate search tool, and exposes this enhanced capability as a single, callable MCP tool.

**Key Features**

- **Intelligent tool selection:** the AI agent decides between Brave's web-search and local-search tools based on user query context.
- **MCP microservice:** exposes the complex search logic as a single, easy-to-integrate MCP tool (`call_brave_search_agent`).
- **Powered by Google Gemini:** utilizes the gemini-2.5-flash-preview-05-20 LLM for advanced reasoning.
- **Conversational memory:** remembers context within a single execution flow.
- **Customizable system prompt:** tailor the AI's behavior and responses.
- **Modular design:** connects to external Brave Search MCP tools (e.g., from Smithery).

**Benefits**

- **Simplified integration:** easily add advanced, AI-driven search capabilities to other applications or agent systems.
- **Reduced client-side LLM costs:** offloads complex prompting and tool orchestration to n8n, minimizing token usage for client-side LLMs.
- **Centralized logic:** manage and update search strategies and AI behavior in one place.
- **Extensible:** can be adapted to use other search tools or incorporate more complex decision-making.

**Nodes Used**

- @n8n/n8n-nodes-langchain.mcpTrigger (MCP Server Trigger)
- @n8n/n8n-nodes-langchain.toolWorkflow
- @n8n/n8n-nodes-langchain.agent (AI Agent)
- @n8n/n8n-nodes-langchain.lmChatGoogleGemini (Google Gemini Chat Model)
- n8n-nodes-mcp.mcpClientTool (MCP Client Tool, for Brave Search)
- @n8n/n8n-nodes-langchain.memoryBufferWindow (Simple Memory)
- n8n-nodes-base.executeWorkflowTrigger (Workflow Start, for direct execution/testing)

**Prerequisites**

- An active n8n instance (v1.22.5+ recommended).
- A Google AI API key for using the Gemini LLM.
- Access to an external MCP service that provides Brave Search tools (e.g., a Smithery account configured with their Brave Search MCP), including the MCP endpoint URL and any necessary authentication (such as an API key for Smithery).

**Setup Instructions**

1. **Import workflow:** download the Brave_Search_Smithery_AI_Agent_MCP_Server.json file and import it into your n8n instance.
2. **Configure LLM credential:** in the 'Google Gemini Chat Model' node, select or create an n8n credential for "Google Palm API" (used for Gemini), providing your Google AI API key.
3. **Configure Brave Search MCP credential:** locate the 'brave_web_search' and 'brave_local_search' (MCP Client) nodes and create a new n8n credential of type "MCP Client HTTP API". Name: e.g., Smithery Brave Search Access. Base URL: the URL of your Brave Search MCP endpoint from your provider (e.g., https://server.smithery.ai/@YOUR_PROFILE/brave-search/mcp). Authentication: if your MCP provider requires an API key, select "Header Auth" and add a header with the name (e.g., X-API-Key) and value provided by your MCP service. Assign this newly created credential to both the 'brave_web_search' and 'brave_local_search' nodes.
4. **Note the MCP trigger path:** open the 'Brave Search MCP Server Trigger' node and copy its unique 'Path' (e.g., /cc8cc827-3e72-4029-8a9d-76519d1c136d). Combine this with your n8n instance's base URL to get the full endpoint URL for clients.

**How to Use**

This workflow exposes an MCP tool named `call_brave_search_agent`. External clients can call this tool via the URL derived from the 'Brave Search MCP Server Trigger'.
Example client MCP configuration (e.g., for Roo Code):

```json
"n8n-brave-search-agent": {
  "url": "https://YOUR_N8N_INSTANCE/mcp/cc8cc827-3e72-4029-8a9d-76519d1c136d/sse",
  "alwaysAllow": ["call_brave_search_agent"]
}
```

Replace YOUR_N8N_INSTANCE with your n8n instance's public URL and ensure the path matches your trigger node.

Example request — send a POST request to the trigger URL with a JSON body:

```json
{
  "input": {
    "query": "best coffee shops in London"
  }
}
```

The agent will stream its response, including the summarized search results.

**Customization**

- **AI behavior:** modify the system prompt within the 'Brave Search AI Agent' node to fine-tune its decision-making, response style, or how it uses the search tools.
- **LLM choice:** replace the 'Google Gemini Chat Model' node with any other compatible LLM node supported by n8n.
- **Search tools:** adapt the workflow to use different or additional search tools by modifying the MCP Client nodes and updating the AI agent's system prompt and tool definitions.

**Further Information**

GitHub repository: https://github.com/jezweb/n8n. The workflow includes extensive sticky notes for in-canvas documentation.

**Author**

Jeremy Dawes (Jezweb)
by pavith
**Description**

This automation workflow enables users to upload files via an n8n form, automatically analyzes the content using Google Gemini agents, and delivers the analyzed results via email along with a chatbot link. The system leverages the Llama Cloud API, the Google Gemini LLM, the Pinecone vector database, and Gmail to provide a seamless, multilingual content-analysis experience.

**Prerequisites**

Before setting up this workflow, ensure the following are in place:

- An active n8n instance.
- Access to the Llama Cloud API.
- Google Gemini LLM API keys (for the Translator and Analyzer agents).
- A Pinecone account with an active index.
- A Gmail account with API access configured.
- Basic knowledge of n8n workflow setup.

**Setup Instructions**

1. **Deploy the n8n form:** create a public-facing form using n8n, configured to accept file uploads and the user's email input.
2. **File preprocessing:** store the uploaded files temporarily, organizing and preprocessing them as needed.
3. **Content extraction using the Llama Cloud API:** feed the files into the Llama Cloud API; extract and parse the content for further processing.
4. **Translation (if required):** use a Translator agent (Google Gemini) to check whether the content is in English and translate it if not.
5. **Content analysis:** forward the (translated) content to the Analyzer agent (Google Gemini) and perform deep analysis to extract insights.
6. **Vector storage in Pinecone:** store both the parsed/translated content and the analyzed content as embeddings in Pinecone for chatbot use.
7. **User notification via Gmail:** send the analyzed content and chatbot link to the user's provided email using the Gmail API.

**Customization Guidance**

- To add more languages: update the translation logic to include additional language support.
- To modify analysis depth: adjust the prompts sent to the Gemini Analyzer agent.
- To change the chatbot behavior: retrain or reconfigure the chatbot to utilize the new Pinecone index contextually.

**Workflow Summary**

1. The user uploads files and an email address via the n8n form.
2. Files are parsed using the Llama Cloud API.
3. Content is translated (if needed) by the Gemini Translator agent.
4. Translated content is analyzed by the Gemini Analyzer agent.
5. Parsed and analyzed data is stored in Pinecone.
6. The user receives an email with the analyzed results and a chatbot link.
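To make the Pinecone step concrete, the underlying upsert is a call of roughly this shape against Pinecone's REST API. This is a sketch: the index host, namespace, vector values, and metadata keys are illustrative assumptions, not taken from the workflow:

```bash
curl -X POST "https://YOUR_INDEX_HOST.pinecone.io/vectors/upsert" \
  -H "Api-Key: YOUR_PINECONE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "namespace": "uploaded-docs",
    "vectors": [{
      "id": "doc-001-chunk-0",
      "values": [0.0123, -0.0456, 0.0789],
      "metadata": { "source": "report.pdf", "language": "en" }
    }]
  }'
```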
by Nick Saraev
**AI Ad Scraper & Image Generator with Facebook Ad Library**

Categories: PPC Automation, Creative Generation, Competitive Intelligence

This workflow creates an end-to-end ad-library scraper and AI image-spinner system that automatically discovers competitor ads, analyzes their design elements, and generates multiple unique variations ready for your own campaigns. Built to eliminate 60-70% of manual creative work for PPC agencies, this system transforms competitor research into actionable ad variants in minutes.

**Benefits**

- **Automated competitor research:** scrapes the Facebook Ad Library for active competitor campaigns automatically
- **AI-powered creative analysis:** uses OpenAI vision to comprehensively analyze ad design elements and copy
- **Intelligent image generation:** creates 3+ unique variations per source ad while maintaining effective layouts
- **Complete asset organization:** automatically organizes source ads and generated variations in structured Google Drive folders
- **Campaign-ready output:** generates a Google Sheets database with direct links to all assets for immediate campaign deployment
- **Massive time savings:** replaces hours of manual creative work with automated competitive intelligence and generation

**How It Works**

- **Facebook Ad Library scraping:** connects to Facebook's Ad Library through an Apify scraper integration, searches active ads based on keywords, industries, or competitor targeting, and filters for image-based ads, removing video-only content from processing.
- **Intelligent asset organization:** creates a unique Google Drive folder structure for each scraped ad campaign, separates source competitor ads from AI-generated variations, and maintains an organized asset library for easy campaign management and iteration.
- **AI-powered creative analysis:** uses OpenAI's vision model to comprehensively describe each competitor ad, identifying design elements, color schemes, layout patterns, and messaging approaches, and generating detailed creative briefs for intelligent variation generation.
- **Smart image variation system:** creates 3 unique style variations per source ad using advanced AI prompting, maintaining effective layout structures while changing colors, fonts, and styling, and customizing messaging and branding to match your business requirements.
- **Campaign database integration:** logs all source ads and generated variations in organized Google Sheets, provides direct links to all assets for immediate campaign deployment, and tracks performance data and creative iterations for ongoing optimization.

**Required Setup Configuration**

Google Drive structure — the workflow automatically creates this folder organization:

```
PPC Thievery (Parent Folder)
└── [Ad Archive ID] (Per Campaign)
    ├── 1. Source Assets (Original competitor ads)
    └── 2. Spun Assets (AI-generated variations)
```
Google Sheets database columns:

- timestamp - unique record identifier
- ad_archive_id - Facebook's internal ad identifier
- page_id - advertiser's Facebook page ID
- original_image_url - direct link to the source competitor ad
- page_name - advertiser's business name
- ad_body - original ad copy text
- date_scraped - when the ad was discovered
- spun_prompts - AI-generated variation instructions
- asset_folder - link to the campaign's Google Drive folder
- source_folder - link to the original-ads folder
- spun_folder - link to the generated-variations folder
- direct_spun_image_link - direct link to the generated ad image

Set Variables configuration — update these values in the "Set Variables" node:

- googleDriveFolderId - your parent Google Drive folder ID
- changeRequest - your brand-specific variation instructions
- spreadsheetId - your Google Sheets database ID

Apify API setup:

- Create an Apify account and obtain an API key
- Replace `<your-apify-api-key-here>` with your actual credentials
- Customize the search terms in the JSON body for your target competitors
- Adjust the scraping count (default: 20 ads per run)
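As a quick sanity check of your Apify credentials outside n8n, starting a scraper run looks roughly like this. This sketch uses Apify's generic run-actor endpoint; the actor ID placeholder and input fields such as `searchTerms` and `count` are assumptions that depend on the Ad Library actor you use:

```bash
curl -X POST "https://api.apify.com/v2/acts/ACTOR_ID/runs?token=YOUR_APIFY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "searchTerms": ["competitor brand"], "count": 20 }'
```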
**Business Use Cases**

- **PPC agencies:** automate competitive research and creative generation for client campaigns
- **E-commerce brands:** monitor competitor advertising strategies and generate response campaigns
- **Marketing teams:** scale creative production with AI-powered competitive intelligence
- **Freelance marketers:** offer advanced competitive analysis and creative services to clients
- **SaaS companies:** track competitor messaging and generate differentiated ad variations
- **Agency teams:** replace manual creative research with automated competitive intelligence systems

**Revenue Potential**

This system revolutionizes PPC agency economics:

- **60-70% reduction** in manual creative work and competitive research time
- **3-5x faster** campaign launch times with ready-to-use creative assets
- **$2,000-$5,000 service value** for comprehensive competitive intelligence and creative generation
- **Scalable competitive advantage** through automated monitoring of competitor campaigns
- **Premium positioning** offering AI-powered creative intelligence that competitors can't match manually

Difficulty level: Advanced. Estimated build time: 2-3 hours. Monthly operating cost: ~$100 (Apify + OpenAI + Google APIs).

**Watch My Complete Live Build**

Want to see me build this entire system from scratch? I walk through every component live, including the ad-library integration, AI analysis setup, image-generation pipeline, and all the debugging that goes into creating a production-ready competitive intelligence system. See my live build process: "Ad Library Scraper & AI Image Spinner System (N8N Build)". This comprehensive tutorial shows the real development process, including advanced AI prompting for image generation, competitive analysis strategies, and the organizational systems that make this scalable for agency use.

**Set Up Steps**

1. **Initial database setup:** run the initialization flow once to create your Google Drive folder and Sheets database, copy the generated folder ID and spreadsheet ID into the "Set Variables" node, and configure your brand-specific change-request template for consistent output.
2. **Apify integration:** set up an Apify account with Facebook Ad Library scraper access, configure API credentials and test with small ad batches, and customize search parameters for your target competitors and industries.
3. **AI service configuration:** connect the OpenAI API for vision analysis and image generation, set up appropriate rate limiting to control processing costs, and test the complete AI pipeline with sample competitor ads.
4. **Google services setup:** configure Google Drive API credentials for automated folder creation, set up the Google Sheets integration for campaign database management, and test the complete asset-organization and tracking workflow.
5. **Campaign customization:** define your brand guidelines and messaging requirements in the change request, set up variation templates for different campaign types and industries, and configure batch-processing limits based on your API usage requirements.
6. **Production optimization:** remove the limit node for full-scale competitive monitoring, set up automated scheduling for regular competitive intelligence gathering, and monitor and optimize AI prompts based on generated creative quality.

**Advanced Optimizations**

Scale the system with:

- **Multi-platform scraping:** extend to LinkedIn, Twitter, and Google Ads for comprehensive competitive intelligence
- **Performance tracking:** integrate with ad platforms to track the performance of generated variations
- **Style guide automation:** create industry-specific variation templates for consistent brand application
- **A/B testing integration:** automatically test generated variations against source ads for performance optimization
- **CRM integration:** connect competitive intelligence data with sales and marketing systems

**Important Considerations**

- **API rate limits:** built-in delays prevent service overload and ensure reliable operation
- **Creative quality:** the system generates multiple variations to account for AI generation variability
- **Legal compliance:** use generated variations as inspiration while respecting intellectual property rights
- **Cost management:** monitor OpenAI image-generation costs and adjust batch sizes accordingly
- **Competitive ethics:** focus on learning from successful patterns rather than direct copying

**Why This System Works**

The competitive advantage lies in speed and scale:

- **Minutes vs. hours:** generate campaign-ready creative variations in minutes instead of hours of manual work
- **Systematic analysis:** AI vision provides consistent, comprehensive analysis that humans might miss
- **Organized intelligence:** structured asset management enables rapid campaign deployment and iteration
- **Scalable monitoring:** automated competitive research that scales beyond manual capacity
- **Quality variations:** multiple AI-generated options ensure high-quality creative output

**Check Out My Channel**

For more advanced automation systems and proven agency-building strategies that generate real revenue, explore my YouTube channel, where I share the exact methodologies used to scale automation agencies to $72K+ monthly revenue.