by Nick Saraev
## AI Facebook Ad Spy Tool with Apify, OpenAI, Gemini & Google Sheets

**Categories:** Competitive Intelligence, Marketing Automation, AI Analysis

This workflow creates a comprehensive Facebook ad spy tool that scrapes competitor ads from Facebook's ad library and generates detailed analysis with rewritten versions. The system processes text, image, and video ads using different AI models, providing strategic intelligence for PPC agencies and marketers. Built to be sold as a premium service for $2,000+, this tool combines web scraping, multi-modal AI analysis, and competitor intelligence into one powerful automation.

### Benefits

- **Complete Competitive Intelligence** - Analyze competitor strategies across all ad formats (text, image, video)
- **Multi-Modal AI Analysis** - Uses GPT-4 Vision for images and Gemini for video content understanding
- **Automated Ad Rewriting** - Generates inspired variations of successful competitor ads
- **Quality Filtering** - Targets high-performing advertisers with significant page likes
- **Scalable Processing** - Handles hundreds of competitor ads with detailed strategic analysis
- **Premium Service Potential** - Easily sold to agencies and marketers for $2,000+ implementations

### How It Works

**Facebook Ad Library Scraping:**
- Connects to Facebook's public ad library through Apify's specialized scraper
- Searches for active ads using customizable keywords and targeting parameters
- Extracts comprehensive ad data including creative assets, targeting info, and engagement metrics
- Filters results to focus on high-quality advertisers with substantial page followings

**Intelligent Content Routing:**
- Automatically categorizes ads into text-only, image-based, or video content types
- Routes each ad type to a specialized processing pipeline optimized for that content format
- Ensures the appropriate AI model is used for each type of creative analysis
- Maintains data integrity while processing different content formats simultaneously

**Advanced Video Analysis Pipeline:**
- Downloads video ads directly from Facebook's content delivery network
- Uploads videos to Google Drive for temporary storage and processing
- Initiates Gemini AI video upload sessions for multi-modal analysis
- Uses Gemini's advanced video understanding to generate detailed content descriptions
- Processes video narrative, visual elements, messaging strategy, and target audience insights

**Image and Text Processing:**
- Analyzes image ads using GPT-4 Vision for comprehensive visual content understanding
- Processes text-only ads using GPT-4 for messaging strategy and copywriting analysis
- Identifies key persuasion techniques, target demographics, and messaging frameworks
- Generates detailed competitive intelligence reports for each ad format

**Strategic Intelligence Generation:**
- Creates comprehensive summaries analyzing competitor messaging strategies and target audiences
- Generates rewritten ad copy that captures successful elements while avoiding direct copying
- Produces recreation prompts for images and videos that can be used with AI generation tools
- Organizes all insights in a structured Google Sheets database for easy analysis and reporting

### Required Setup Configuration

**Apify Integration:**
- Sign up for an Apify account and obtain an API key
- Replace `<your-apify-api-key-here>` in the "Run Ad Library Scraper" node (a call sketch appears before the set-up steps below)
- Customize Facebook Ad Library search URLs with your target keywords and regions

**AI Service Configuration:**
- **OpenAI API** - Set up for text analysis and image understanding with GPT-4 Vision
- **Gemini API** - Configure for advanced video content analysis and description; replace `<your-gemini-api-key-here>` in all Gemini-related nodes

**Google Services Setup:**
- **Google Drive** - Configure OAuth for temporary video storage during Gemini processing
- **Google Sheets** - Create a results database with the proper column structure for ad intelligence storage

**Facebook Ad Library Search Configuration:**
- Customize the search parameters in the Apify scraper

**Google Sheets Database Structure** - create a sheet with these columns:
- `ad_archive_id` - Unique Facebook ad identifier
- `page_id` - Advertiser's Facebook page ID
- `page_name` - Advertiser's business name
- `page_url` - Link to advertiser's Facebook page
- `type` - Ad format (text, image, or video)
- `date_added` - When the ad was analyzed
- `summary` - Detailed competitive intelligence analysis
- `rewritten_ad_copy` - AI-generated inspired version
- `image_prompt` - Description for recreating image ads
- `video_prompt` - Description for recreating video ads

### Business Use Cases

- **PPC Agencies** - Offer comprehensive competitor analysis services to clients for strategic advantage
- **Marketing Teams** - Research competitor strategies and messaging before launching new campaigns
- **E-commerce Businesses** - Analyze successful ads in your industry for creative inspiration
- **SaaS Companies** - Study how competitors position their products and target audiences
- **Course Creators** - Research educational content marketing approaches and messaging strategies
- **Affiliate Marketers** - Identify successful promotional strategies and high-converting ad formats

**Difficulty Level:** Advanced
**Estimated Build Time:** 3-4 hours
**Monthly Operating Cost:** ~$200 (Apify + OpenAI + Gemini + Google Workspace APIs)

### Watch My Complete Build Process

Want to see exactly how I built this entire Facebook ad spy system from scratch? I walk through the complete development process live, including API integrations, multi-modal AI setup, error handling, and the exact business strategy for selling this as a premium service.

🎥 **Watch My Live Build:** "Build A Facebook Ads Spy Tool With N8N (Sell for $2k+)"

This comprehensive tutorial shows the real development process, including complex API orchestration, multi-modal AI integration, and proven strategies for monetizing competitive intelligence systems.
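Before the set-up steps, here is a minimal sketch of the scraper call configured above, reproduced outside n8n. It assumes Apify's generic run-sync endpoint; the actor ID and input fields (`startUrls`, `maxResults`) are placeholders, and the exact input schema depends on the scraper actor you choose.

```typescript
// Minimal sketch: trigger an Apify actor run and collect its dataset items.
// The token and actor ID are placeholders - substitute your own values.
const APIFY_TOKEN = "<your-apify-api-key-here>";
const ACTOR_ID = "<facebook-ad-library-scraper-actor-id>"; // hypothetical placeholder

async function runAdLibraryScraper(searchUrl: string, maxResults = 100) {
  const endpoint =
    `https://api.apify.com/v2/acts/${ACTOR_ID}/run-sync-get-dataset-items` +
    `?token=${APIFY_TOKEN}`;

  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Input fields vary by actor; "startUrls"/"maxResults" are assumptions here.
    body: JSON.stringify({ startUrls: [{ url: searchUrl }], maxResults }),
  });

  if (!response.ok) throw new Error(`Apify run failed: ${response.status}`);
  return (await response.json()) as Array<Record<string, unknown>>;
}

// Example: scrape active ads for a keyword search in the Ad Library.
runAdLibraryScraper(
  "https://www.facebook.com/ads/library/?active_status=active&q=fitness",
).then((ads) => console.log(`Scraped ${ads.length} ads`));
```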
### Set Up Steps

**Apify Scraper Configuration:**
- Set up an Apify account and configure the Facebook Ad Library scraper
- Customize search parameters for your target industries and regions
- Configure result limits and filtering parameters for quality control
- Test the scraper with sample searches to verify data quality

**Multi-Modal AI Setup:**
- Configure OpenAI API credentials for text and image analysis
- Set up Gemini API access for advanced video content understanding
- Configure appropriate rate limits and error handling for API stability
- Test AI analysis with sample ads to optimize prompt quality

**Google Services Integration:**
- Set up Google Drive OAuth for temporary video storage during processing
- Create the Google Sheets database with the proper column structure for intelligence storage
- Configure sharing permissions and access controls for team collaboration
- Test the complete data flow from scraping to final intelligence reports

**Quality Control and Filtering:**
- Configure the page likes threshold in the "Filter For Likes" node (recommend 1,000+ for quality); a filter-and-routing sketch appears at the end of this description
- Adjust the content routing logic in the Switch node based on your analysis needs
- Set up error handling and retry logic for reliable large-scale processing
- Test the complete workflow with various ad types to ensure proper routing

**Advanced Customization:**
- Customize AI prompts for your specific industry analysis needs
- Configure additional filtering criteria beyond page likes
- Set up automated scheduling for regular competitor monitoring
- Add custom fields to the database for tracking specific competitive metrics

### Advanced Features

Scale the system with additional capabilities:
- **Industry-Specific Analysis** - Customize prompts and filters for different verticals
- **Trend Tracking** - Monitor messaging changes over time for strategic insights
- **Performance Correlation** - Cross-reference ad engagement with business outcomes
- **Alert Systems** - Notify when competitors launch new campaign types
- **Custom Reporting** - Generate client-ready intelligence reports automatically
- **Integration Extensions** - Connect to CRM and marketing platforms for strategic workflow

### Important Considerations

- **API Rate Limits** - Built-in delays and error handling prevent service interruptions
- **Content Rights** - The system generates inspired variations, not direct copies, for legal compliance
- **Data Storage** - Organize the intelligence database for easy client reporting and analysis
- **Scalability** - Batch processing handles hundreds of ads efficiently without blocking
- **Quality Assurance** - Filtering logic ensures analysis focuses on successful, high-quality advertisers

### Why This System Works

The competitive advantage lies in comprehensive multi-modal analysis:
- **Complete format coverage** - analyzes text, image, and video ads with appropriate AI models
- **Strategic depth** - goes beyond basic scraping to provide actionable intelligence
- **Automation scale** - processes competitor research that would take weeks manually
- **Premium positioning** - advanced AI analysis justifies higher service pricing
- **Immediate value** - clients receive actionable insights within hours of setup

### Check Out My Channel

For more advanced automation systems that generate real business results and premium service opportunities, explore my YouTube channel where I share proven strategies for building profitable automation businesses.
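As referenced in the quality-control step above, the page-likes filter and the Switch-node routing can be expressed in a few lines. This is a minimal sketch, not the workflow's actual node code; the field names (`page_like_count`, `videos`, `images`, `body_text`) are assumptions about the scraper's output shape.

```typescript
// Hypothetical shape of one scraped ad record - adjust to your scraper's output.
interface ScrapedAd {
  ad_archive_id: string;
  page_like_count: number;
  body_text?: string;
  images?: string[]; // image URLs
  videos?: string[]; // video URLs
}

const LIKES_THRESHOLD = 1000; // matches the "Filter For Likes" recommendation

// Equivalent of the "Filter For Likes" node: keep high-quality advertisers.
function filterForLikes(ads: ScrapedAd[]): ScrapedAd[] {
  return ads.filter((ad) => ad.page_like_count >= LIKES_THRESHOLD);
}

// Equivalent of the Switch node: route each ad to its processing pipeline.
function routeAd(ad: ScrapedAd): "video" | "image" | "text" {
  if (ad.videos?.length) return "video"; // -> Gemini video pipeline
  if (ad.images?.length) return "image"; // -> GPT-4 Vision pipeline
  return "text"; // -> GPT-4 text analysis pipeline
}
```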
by inderjeet Bhambra
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

### Who is this for?

IT teams and support organizations looking to automate Level 1 support with AI-powered assistance while maintaining proper ticket management workflows.

### What problem does this solve?

Eliminates repetitive manual support tasks by providing instant, context-aware assistance that references organizational knowledge and creates structured tickets when needed.

### What this workflow does

- **RAG Pipeline** - Processes PDF/CSV documents into a searchable vector database
- **Intelligent Slack Bot** - This AI helpdesk assistant handles support requests with thread-aware conversations
- **Vector Knowledge Search** - Searches embedded knowledge base articles and historical case data (see the sketch after the setup steps)
- **JIRA Integration** - Creates, searches, and manages support tickets automatically
- **Emoji Reactions** - Users can trigger actions (create tickets, escalate) via emoji reactions

### Requirements

**Required Accounts:**
- n8n Cloud or self-hosted instance
- Slack workspace with admin access
- Supabase account (vector database)
- JIRA Cloud instance
- OpenAI API key

**Technical Prerequisites:**
- Basic n8n workflow knowledge
- Slack app creation experience
- Understanding of vector databases

### Setup Steps

**1. Slack App Configuration**
- Create a new Slack app with Bot Token Scopes: `app_mentions:read`, `channels:history`, `channels:read`, `groups:history`, `groups:read`, `im:history`, `im:read`, `mpim:history`, `mpim:read`, `users:read`
- Configure Event Subscriptions: `app_mention`, `message.channels`, `message.groups`, `reaction_added`
- Set the Request URL to your n8n Slack Trigger webhook

**2. Supabase Vector Database Setup**
- Create a new Supabase project
- Enable the pgvector extension
- Create a documents table with a vector column (1536 dimensions for OpenAI embeddings)
- Configure RLS policies for secure access

**3. JIRA Configuration**
- Generate an API token from JIRA Cloud
- Create a helpdesk project with appropriate issue types
- Note the project ID and issue type IDs for workflow configuration

**4. n8n Workflow Configuration**
- Import the workflow and configure credentials
- Update Slack channel IDs in the trigger nodes
- Set the OpenAI API key in all OpenAI nodes
- Configure the Supabase connection in the vector store nodes
- Update JIRA project settings in the MCP server nodes

**5. Knowledge Base Data Format**
- Supported file formats: PDF, CSV
- CSV structure: include columns such as Ticket#, Issue Description, Issue Summary, Resolution Provided, Case Status, Contact User (but not limited to these)
- PDF content: technical documentation, troubleshooting guides, policy documents
- Upload documents via the form trigger to automatically embed them in the vector database
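The vector knowledge search mentioned above typically pairs an OpenAI embedding call with a pgvector similarity query. Here is a minimal sketch, assuming a `match_documents` SQL function has been created in Supabase (a common pgvector pattern, not something Supabase ships by default); the function name and its parameters are assumptions.

```typescript
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
);

// Embed the user's question, then find the closest knowledge-base chunks.
async function searchKnowledgeBase(query: string, matchCount = 5) {
  const embedding = await openai.embeddings.create({
    model: "text-embedding-ada-002", // 1536 dimensions, matching the table
    input: query,
  });

  // "match_documents" is a user-defined pgvector function - an assumption here.
  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: embedding.data[0].embedding,
    match_count: matchCount,
  });
  if (error) throw error;
  return data; // matched document chunks with similarity scores
}
```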
### Customization Options

**AI Agent Behavior**
- Modify the system prompt in the AIHelpdesk Agent node
- Adjust the conversation memory window (default: 20 messages)
- Change the AI model (GPT-4o, GPT-3.5-turbo, etc.)

**Reaction Mappings**
- Customize emoji-to-action mappings in the Reaction Handler code (a sketch appears at the end of this description)
- Add new reaction types for department-specific workflows
- Configure escalation rules and priority levels

**JIRA Integration**
- Customize ticket templates and fields
- Add auto-assignment rules based on issue type
- Configure SLA and priority mappings

**Vector Search**
- Adjust similarity thresholds for knowledge retrieval
- Modify search result limits and relevance scoring
- Add metadata filtering for departmental knowledge bases

### Advanced Features

- Thread-aware conversation memory
- Automatic bot loop prevention
- Context-preserving ticket creation
- Multi-modal file processing (PDF + CSV)
- Scalable MCP architecture for tool integration

### Use Cases

- **Level 1 IT Support** - Automate common troubleshooting workflows
- **Employee Onboarding** - Answer policy and procedure questions
- **Internal Help Desk** - Route and track internal service requests
- **Knowledge Management** - Make organizational knowledge searchable and actionable

### Template includes

- Complete Slack integration with thread support
- RAG pipeline for document processing
- Vector similarity search implementation
- JIRA ticket lifecycle management
- Emoji reaction-based user interactions
- Comprehensive error handling and validation
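To make the Reaction Mappings customization above concrete, here is a minimal sketch of an emoji-to-action table for the Reaction Handler code node. The emoji names and action labels are illustrative assumptions; the actual workflow's mappings may differ.

```typescript
// Hypothetical emoji-to-action table for the Reaction Handler code node.
const REACTION_ACTIONS: Record<string, string> = {
  ticket: "create_jira_ticket",   // :ticket: -> open a new JIRA issue
  rotating_light: "escalate",     // :rotating_light: -> escalate the thread
  white_check_mark: "resolve",    // :white_check_mark: -> close the request
};

// Slack's reaction_added event carries the emoji name without colons.
function handleReaction(event: { reaction: string; item: { ts: string } }) {
  const action = REACTION_ACTIONS[event.reaction];
  if (!action) return null; // unmapped emoji: ignore
  return { action, threadTs: event.item.ts };
}
```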
by max e
Turn plain-language chat like "Tomorrow 9 AM: write blog post" into neatly organised Todoist tasks with GPT-4o and n8n—zero code.

## 🪄 Ultimate Personal Todoist Agent

Turn natural-language requests into perfectly-organized Todoist tasks—all on autopilot inside n8n.

> "Add *Finish quarterly report by Friday afternoon*" → the agent creates the task, sets the due date & priority, and even drops it into the right project. ✨

### 🌟 Why this workflow rocks

- **All-in-one Todoist super-powers** – create, update, complete, move, archive… every major Todoist endpoint is wired up (tasks, projects, sections, labels, comments).
- **LLM-powered intent detection** – an OpenAI model interprets plain-English (or emoji-filled!) messages so you don't have to remember slash-commands.
- **Minimal setup** – just two credentials and you're live.
- **Battle-tested building block** – use it as-is, or plug the Todoist Agent node into your own agents & chatbots.

### 🛠️ What you'll need

| Credential | Where it's used | How to set it up |
| --- | --- | --- |
| OpenAI API | Orchestrator & LLM nodes | Paste your OpenAI secret key into an OpenAI credential in n8n. |
| Todoist OAuth2 | Todoist node and HTTP Request node | Log in to Todoist from your browser to set up the credential in n8n. |

> That's it—no webhooks, no extra secrets.
> Tested with *gpt-4o-latest* – the fastest & most accurate model in our trials.

### ⚡ Quick-start (5 minutes)

1. Import the JSON template (hit ▶️ *Try it out* on the n8n template page or drag-drop the file into your canvas).
2. Select your credentials in the two credential dropdowns.
3. Click **Test workflow**.
4. In the sample Function node, tweak the `message` field (e.g. "Tomorrow at 9 am: write blog post"); a sketch of this node appears at the end of this description.
5. Run → watch your new Todoist task appear.
6. (Optional) Swap the Function node for your favourite chat trigger (Telegram, Slack, WhatsApp, Discord, you name it).

Boom—your personal Todoist genie is alive! 🧞♂️

### 🧩 How it works (under the hood)

```
[Trigger / Chat message]
         │
         ▼
[🗂️ Orchestrator Agent] ← OpenAI Chat Model + Short-term Memory
         │ ↳ Parses intent & entities
         ▼
[🤖 Todoist Agent] ← 15+ Todoist endpoints
         │ ↳ Executes the right call (create, update, complete, etc.)
         ▼
      [Done ✅]
```

The Orchestrator is an example. In production you can drop it and simply expose the Todoist Agent as a tool for any other agent workflow.

### 🎛️ Customising & extending

| Idea | How to do it |
| --- | --- |
| Notion / Sheets sync | After the Todoist Agent node, add a Notion or Google Sheets node to log completed items. |
| Voice commands | Swap the chat trigger for a Speech-to-Text node (e.g. Whisper). |

### 🤝 Need custom automations?

Want me to build or tweak something for you? → Email maxemelyanenko@gmail.com and let's make it happen!

### ⚠️ What's not included (yet)

- Shared projects & other Todoist Pro/Business endpoints.
- File attachments in the comments.
- Editing comments.

Pull requests welcome! 🙌
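As referenced in the quick-start, the sample Function node simply emits the chat message the agent should act on. A minimal sketch of its body (n8n Function/Code nodes run JavaScript; the `message` field name follows the template's description):

```typescript
// n8n Code node body: hand a natural-language request to the Orchestrator.
// Swap this for a real chat trigger (Telegram, Slack, ...) in production.
return [
  {
    json: {
      message: "Tomorrow at 9 am: write blog post",
    },
  },
];
```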
by Jimleuk
This n8n workflow is a proof-of-concept template exploring how we might work with multimodal LLMs and their multi-image analysis capabilities.

In this demo, we compare 2 screenshots of a webpage taken at different timestamps and pass both to our multimodal LLM for a visual comparison of differences. Handling multiple binary inputs (ie. images) in an AI request is supported by n8n's basic LLM node.

### How it works

This template is intended to run as 2 parts: first to generate the base screenshots, and next to run the visual regression test, which captures fresh screenshots.

- Starting with a list of webpages captured in a Google Sheet, base screenshots are captured for each using an external web scraping service called Apify.com (I prefer Apify, but feel free to use whichever web scraping service is available to you).
- These base screenshots are uploaded to Google Drive and will be referenced later when we run our testing.
- In phase 2 of the workflow, a scheduled trigger fires sometime in the future and reuses our web scraping service to generate fresh screenshots of our desired webpages.
- Next, we re-download our base screenshots in parallel and, with both old and new captures, pass them to our LLM node.
- In the LLM node's options, we define 2 "user message" inputs with the type of binary (data) for our images (the sketch at the end of this description shows the equivalent API payload).
- Finally, we prompt our LLM with our testing criteria and capture the regressions detected. Note: results will vary depending on which LLM you use.
- A final report can be generated using the LLM's output and is uploaded to Linear.

### Requirements

- Apify.com API key for the web screenshotting service
- Google Drive and Sheets access to store the list of webpages and captures

### Customising this workflow

- Have your own preferred web screenshotting service? Feel free to swap out Apify for your service of choice.
- If the web screenshot is too large, it may prove difficult for the LLM to spot differences with precision. Try splitting captures into smaller images instead.
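For reference, passing two binary images in one request maps onto a chat-completions payload with two image parts. A minimal sketch assuming an OpenAI-style vision model; the base64 strings stand in for the downloaded screenshots:

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// baseShot / freshShot: base64-encoded PNGs of the old and new captures.
async function compareScreenshots(baseShot: string, freshShot: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Compare these two screenshots of the same page. List any visual regressions: layout shifts, missing elements, broken styles.",
          },
          { type: "image_url", image_url: { url: `data:image/png;base64,${baseShot}` } },
          { type: "image_url", image_url: { url: `data:image/png;base64,${freshShot}` } },
        ],
      },
    ],
  });
  return completion.choices[0].message.content;
}
```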
by Yaron Been
## 🔍 Scrape Glassdoor with Bright Data

Designed for sales teams, recruiters, and marketers aiming to automate job discovery and prospecting. This workflow scrapes Glassdoor job listings using Bright Data and automatically generates targeted pitches using AI, streamlining lead identification and outreach.

### 🧩 How It Works

This automation leverages n8n, Bright Data, Google Sheets, and OpenAI:

1. **Trigger** - Starts with a custom form input (Location, Keyword, Country).
2. **Bright Data Job Scrape** - Triggers a Bright Data dataset snapshot via HTTP Request, polls snapshot progress using a Wait node to ensure data readiness, and retrieves the full job listings dataset once ready (see the trigger-and-poll sketch at the end of this description).
3. **Google Sheets Integration** - Writes detailed job data (company, role, location, overview, metrics) into a Google Sheet, using a pre-built template for organized data storage.
4. **Automated Pitch Generation (AI)** - Splits listings into actionable parts (company name, title, and description), sends data to OpenAI (via LangChain) to generate relevant pitches or icebreakers, and saves generated content back into the same sheet for easy access.

### ✅ Requirements

Ensure you have the following:

- **Google Sheets** - Google account; template sheet with columns for job details and AI-generated pitches
- **Bright Data** - Active account with Dataset API access; API key and dataset ID
- **OpenAI** - Valid OpenAI API key for GPT models
- **n8n Environment** - Nodes: HTTP Request, Wait, If, Google Sheets, Split Out, LangChain (OpenAI); Credentials: Google Sheets OAuth2, Bright Data API credentials, OpenAI API key

### ⚙️ Setup Instructions

**Step 1: Prepare Google Sheets**
- Copy the provided Google Sheets template
- Do not change the headers

**Step 2: Import & Configure Workflow in n8n**
- Import the workflow JSON file
- Set the Google Sheets node: link it to your copied sheet and confirm the correct tab name

**Step 3: Configure Bright Data**
- Replace `<YOUR_BRIGHT_DATA_API_KEY>` with your real key
- Set your dataset ID in all HTTP Request nodes

**Step 4: Configure OpenAI (LangChain)**
- Connect your OpenAI API key to the LangChain node
- Customize the prompt to match your tone and outreach style

**Step 5: Testing & Scheduling**
- Test via the manual form trigger
- Schedule runs or leave the form enabled for on-demand use

### 🧠 Tips & Best Practices

- Use specific keywords and locations for better results
- Adjust polling intervals based on dataset size
- Refine AI prompts regularly to improve pitch quality
- Clean unused columns from your sheet to boost performance

### 💬 Support & Feedback

For help or customization:
- 📧 Email: Yaron@nofluff.online
- 📺 YouTube: @YaronBeen
- 🔗 LinkedIn: linkedin.com/in/yaronbeen
- 📚 Bright Data Docs: docs.brightdata.com/introduction
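As referenced in the Bright Data scrape step, the HTTP Request and Wait nodes implement a trigger-and-poll pattern. A minimal sketch, assuming Bright Data's Datasets v3 endpoints; the trigger payload shape is an assumption and should be checked against your dataset's documentation:

```typescript
const BD_TOKEN = "<YOUR_BRIGHT_DATA_API_KEY>";
const DATASET_ID = "<your-dataset-id>";
const headers = {
  Authorization: `Bearer ${BD_TOKEN}`,
  "Content-Type": "application/json",
};

async function scrapeGlassdoor(location: string, keyword: string, country: string) {
  // 1. Trigger a snapshot for the dataset (payload shape is an assumption).
  const trigger = await fetch(
    `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}`,
    { method: "POST", headers, body: JSON.stringify([{ location, keyword, country }]) },
  );
  const { snapshot_id } = await trigger.json();

  // 2. Poll progress until the snapshot is ready (the Wait node's job in n8n).
  for (;;) {
    const progress = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${snapshot_id}`,
      { headers },
    ).then((r) => r.json());
    if (progress.status === "ready") break;
    await new Promise((r) => setTimeout(r, 30_000)); // 30s between polls
  }

  // 3. Download the finished dataset as JSON.
  return fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${snapshot_id}?format=json`,
    { headers },
  ).then((r) => r.json());
}
```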
by lin@davoy.tech
This workflow template, "Chinese Translator via Line x OpenRouter (Text & Image)" is designed to provide seamless Chinese translation services directly within the LINE messaging platform. By integrating with OpenRouter.ai and advanced language models like Qwen, this workflow translates text or images containing Chinese characters into pinyin and English translations, making it an invaluable tool for language learners, travelers, and businesses operating in multilingual environments. This template is ideal for: Language Learners: Who want to practice Chinese by receiving instant translations of text or images. Travelers: Looking for quick translations of Chinese signs, menus, or documents while abroad. Educators: Teaching Chinese language courses and needing tools to assist students with translations. Businesses: Operating in multilingual markets and requiring efficient communication tools. Automation Enthusiasts: Seeking to build intelligent chatbots that can handle language translation tasks. What Problem Does This Workflow Solve? Translating Chinese text or images into English and pinyin can be challenging, especially for beginners or those without access to reliable translation tools. This workflow solves that problem by: Automatically detecting and translating text or images containing Chinese characters. Providing accurate translations in both pinyin and English for better comprehension. Supporting multiple input formats (text, images) to cater to diverse user needs. Sending replies directly to users via the LINE messaging platform , ensuring accessibility and ease of use. What This Workflow Does 1) Receive Messages via LINE Webhook The workflow is triggered when a user sends a message (text, image, or other types) to the LINE bot. 2) Display Loading Animation A loading animation is displayed to reassure the user that their request is being processed. 3) Route Input Types The workflow uses a Switch node to determine the type of input (text, image, or unsupported formats). If the input is text , it is sent to the OpenRouter.ai API for translation. If the input is an image , the workflow extracts the image content, converts it to base64, and sends it to the API for translation. Unsupported formats trigger a polite response indicating the limitation. 4) Translate Content Using OpenRouter.ai The workflow leverages Qwen models from OpenRouter.ai to generate translations: For text inputs, it provides Chinese characters , pinyin , and English translations . For images, it extracts and translates using the qwen-VL model which can take images 5) Reply with Translations The translated content is formatted and sent back to the user via the LINE Reply API. Setup Guide Pre-Requisites Access to the LINE Developers Console to configure your webhook and channel access token. An OpenRouter.ai account with credentials to access Qwen models. Basic knowledge of APIs, webhooks, and JSON formatting. Step-by-Step Setup 1) Configure the LINE Webhook: Go to the LINE Developers Console and set up a webhook to receive incoming messages. Copy the Webhook URL from the Line Webhook node and paste it into the LINE Console. Remove any "test" configurations when moving to production. 2) Set Up OpenRouter.ai: Create an account on OpenRouter.ai and obtain your API credentials. Connect your credentials to the OpenRouter nodes in the workflow. 3) Test the Workflow: Simulate sending text or images to the LINE bot to verify that translations are processed and replied correctly. 
### How to Customize This Workflow to Your Needs

- **Add More Languages** - Extend the workflow to support additional languages by modifying the API calls.
- **Enhance Image Processing** - Integrate more advanced OCR tools to improve text extraction from complex images.
- **Customize Responses** - Modify the reply format to include additional details, such as grammar explanations or cultural context.
- **Expand Use Cases** - Adapt the workflow for specific industries, such as tourism or e-commerce, by tailoring the translations to relevant vocabulary.

### Why Use This Template?

- **Real-Time Translation** - Provides instant translations of text and images, improving user experience and accessibility.
- **Multimodal Support** - Handles both text and image inputs, catering to diverse user needs.
- **Scalable** - Easily integrate into existing systems or scale to support multiple users and workflows.
- **Customizable** - Tailor the workflow to suit your specific audience or industry requirements.
by Mihai Farcas
This n8n workflow creates a financial analysis tool that generates reports on a company's quarterly earnings using the capabilities of OpenAI GPT-4o-mini, Google's Gemini AI, and Pinecone's vector search. By analyzing PDFs of any company's earnings reports from their Investor Relations page, this workflow can answer complex financial questions and automatically compile findings into a structured Google Doc.

### How it works

**Data loading and indexing**
- Fetches links to PDF earnings documents from a Google Sheet containing a list of file links.
- Downloads the PDFs from Google Drive.
- Parses the PDFs, splits the text into chunks, and generates embeddings using the Embeddings Google AI node (text-embedding-004 model).
- Stores the embeddings and corresponding text chunks in a Pinecone vector database for semantic search (see the sketch at the end of this description).

**Report generation with AI agent**
- Utilizes an AI Agent node with a specifically crafted system prompt; the agent orchestrates the entire process.
- The agent uses a Vector Store Tool to access and retrieve information from the Pinecone database.

**Report delivery**
- Saves the generated report as a Google Doc in a specified Google Drive location.

### Set up steps

1. **Google Cloud Project & Vertex AI API** - Create a Google Cloud project and enable the Vertex AI API for it.
2. **Google AI API key** - Obtain a Google AI API key from Google AI Studio.
3. **Pinecone account and API key** - Create a free account on the Pinecone website, obtain your API key from your Pinecone dashboard, and create an index named `company-earnings` in your Pinecone project.
4. **Google Drive - download and save financial documents** - Go to a company you want to analyze and download their quarterly earnings PDFs. Save the PDFs in Google Drive. Create a Google Sheet that stores a list of file URLs pointing to the PDFs you downloaded and saved to Google Drive.
5. **Configure credentials** in your n8n environment for: Google Sheets OAuth2, Google Drive OAuth2, Google Docs OAuth2, Google Gemini (PaLM) API (using your Google AI API key), and Pinecone API (using your Pinecone API key).
6. **Import and configure the workflow** - Import this workflow into your n8n instance. Update the List Of Files To Load (Google Sheets) node to point to your Google Sheet. Update the Download File From Google Drive node to point to the column containing the file URLs. Update the Save Report to Google Docs node to point to the Google Doc where you want the report saved.
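The indexing stage reduces to chunk → embed → upsert. A minimal sketch, assuming the official Google and Pinecone JS clients and a naive fixed-size chunker (the actual workflow uses n8n's text splitter node, which also supports overlap):

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";
import { Pinecone } from "@pinecone-database/pinecone";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_AI_API_KEY!);
const embedder = genAI.getGenerativeModel({ model: "text-embedding-004" });
const index = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! })
  .index("company-earnings");

// Naive fixed-size chunking; real splitters add overlap between chunks.
function chunkText(text: string, size = 1000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

// Embed each chunk of a parsed earnings PDF and upsert it into Pinecone.
async function indexEarningsReport(docId: string, fullText: string) {
  const chunks = chunkText(fullText);
  for (const [i, chunk] of chunks.entries()) {
    const { embedding } = await embedder.embedContent(chunk);
    await index.upsert([
      { id: `${docId}-${i}`, values: embedding.values, metadata: { text: chunk } },
    ]);
  }
}
```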
by Sobek
### 📝 DESCRIPTION OF THE WORKFLOW

This workflow connects Salesforce and Geotab to streamline fleet tracking for field service jobs (Work Orders). When a new Work Order is created in Salesforce (with a 'New' status and valid coordinates), it creates a circular geofence zone in Geotab (see the sketch at the end of this description) and updates the Work Order with the zone ID. If geolocation is missing, an alert email is sent to a dedicated address.

The workflow uses a Salesforce Outbound Message to trigger an n8n webhook. It includes robust credential handling and optional logic to skip or notify on bad data.

**Use Cases:**
- Automating vehicle geofence setup for service visits
- Enhancing CRM-to-fleet system synchronisation
- Enforcing Work Order data quality via alerts

**Integrations Used:**
- Salesforce
- Geotab API
- Microsoft Outlook (or any SMTP-compatible service)

**Tags:** geotab, salesforce, fleet management, gps tracking, field service, crm, automation, webhook, integration

### ADDITIONAL RESOURCES

**🔗 Salesforce**
- Salesforce Login
- Salesforce Setup (Admin Console): https://login.salesforce.com/ → click the "Setup" gear icon
- Outbound Messages Documentation
- Salesforce Developer Documentation
- Salesforce Workbench (API Testing Tool)

**🔗 Geotab**
- Geotab Login (MyGeotab)
- Geotab Developer Portal
- Geotab API Explorer
- Geotab SDK (JavaScript Samples)
- Geotab Support Centre
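A minimal sketch of the Geotab call that creates the geofence. MyGeotab exposes a JSON-RPC API, and a "circular" zone is stored as a polygon, so the sketch approximates the circle with points around the Work Order's coordinates. The zone name format, radius, and extra entity fields are assumptions to adapt to your Geotab setup.

```typescript
// Approximate a circle as a polygon of lon/lat points (Geotab: x = lon, y = lat).
function circlePoints(lat: number, lon: number, radiusM: number, n = 12) {
  const dLat = radiusM / 111_320; // metres per degree of latitude
  const dLon = radiusM / (111_320 * Math.cos((lat * Math.PI) / 180));
  return Array.from({ length: n }, (_, i) => {
    const a = (2 * Math.PI * i) / n;
    return { x: lon + dLon * Math.cos(a), y: lat + dLat * Math.sin(a) };
  });
}

// Create the zone via MyGeotab's JSON-RPC "Add" method.
async function addWorkOrderZone(
  server: string, // e.g. "my.geotab.com"
  credentials: object, // from a prior "Authenticate" call
  workOrderId: string,
  lat: number,
  lon: number,
) {
  const res = await fetch(`https://${server}/apiv1`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      method: "Add",
      params: {
        typeName: "Zone",
        entity: {
          name: `WO-${workOrderId}`, // hypothetical naming convention
          points: circlePoints(lat, lon, 150), // ~150 m radius (assumption)
          mustIdentifyStops: true,
        },
        credentials,
      },
    }),
  });
  const { result } = await res.json(); // new zone ID, written back to Salesforce
  return result;
}
```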
by Amjid Ali
## Proxmox AI Agent with n8n and Generative AI Integration

This template automates IT operations on a Proxmox Virtual Environment (VE) using an AI-powered conversational agent built with n8n. By integrating Proxmox APIs and generative AI models (e.g., Google Gemini), the workflow converts natural language commands into API calls, enabling seamless management of your Proxmox nodes, VMs, and clusters.

- Buy My Book: Mastering n8n on Amazon
- Full Courses & Tutorials: http://lms.syncbricks.com
- Watch Video on YouTube

### How It Works

**Trigger Mechanism**
- The workflow can be triggered through multiple channels like chat (Telegram, email, or n8n's built-in chat). Interact with the AI agent conversationally.

**AI-Powered Parsing**
- A connected AI model (Google Gemini or other compatible models like OpenAI or Claude) processes your natural language input to determine the required Proxmox API operation.

**API Call Generation**
- The AI parses the input and generates structured JSON output, which includes:
  - `response_type`: The HTTP method (GET, POST, PUT, DELETE).
  - `url`: The Proxmox API endpoint to execute.
  - `details`: Any required payload parameters for the API call.

**Proxmox API Execution**
- The structured output is used to make HTTP requests to the Proxmox VE API (see the request sketch below). The workflow supports various operations, such as:
  - Retrieving cluster or node information.
  - Creating, deleting, starting, or stopping VMs.
  - Migrating VMs between nodes.
  - Updating or resizing VM configurations.

**Response Formatting**
- The workflow formats API responses into a user-friendly summary, for example: success messages for operations (e.g., "VM started successfully") and error messages with missing parameter details.

**Extensibility**
- You can enhance the workflow by connecting additional triggers, external services, or AI models. It supports:
  - Telegram/Slack integration for real-time notifications.
  - Backup and restore workflows.
  - Cloud monitoring extensions.

### Key Features

- **Multi-Channel Input** - Use chat, email, or custom triggers to communicate with the AI agent.
- **Low-Code Automation** - Easily customize the workflow to suit your Proxmox environment.
- **Generative AI Integration** - Supports advanced AI models for precise command interpretation.
- **Proxmox API Compatibility** - Fully adheres to Proxmox API specifications for secure and reliable operations.
- **Error Handling** - Detects and informs you of missing or invalid parameters in your requests.

### Example Use Cases

- **Create a Virtual Machine** - Input: "Create a VM with 4 cores, 8GB RAM, and 50GB disk on psb1." Action: Sends a POST request to Proxmox to create the VM with the specified configuration.
- **Start a VM** - Input: "Start VM 105 on node psb2." Action: Executes a POST request to start the specified VM.
- **Retrieve Node Details** - Input: "Show the memory usage of psb3." Action: Sends a GET request and returns the node's resource utilization.
- **Migrate a VM** - Input: "Migrate VM 202 from psb1 to psb3." Action: Executes a POST request to move the VM, with optional online migration.

### Pre-Requisites

**Proxmox API Configuration**
- Enable the Proxmox API and generate API keys in the Proxmox Data Center.
- Use the Authorization header with the format: `PVEAPIToken=<user>@<realm>!<token-id>=<token-value>`

**n8n Setup**
- Add Proxmox API credentials in n8n using Header Auth.
- Connect a generative AI model (e.g., Google Gemini) via the relevant credential type.

**Access the Workflow**
- Import this template into your n8n instance.
- Replace placeholder credentials with your Proxmox and AI service details.
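A minimal sketch of the HTTP execution step, assuming the AI's structured JSON output and an API token with sufficient privileges. The host is a placeholder, the example mirrors the "Start VM 105" use case above, and TLS handling for self-signed certificates is left out for brevity.

```typescript
// Shape of the AI agent's structured output, per the workflow description.
interface ProxmoxCall {
  response_type: "GET" | "POST" | "PUT" | "DELETE";
  url: string; // e.g. "/api2/json/nodes/psb2/qemu/105/status/start"
  details?: Record<string, unknown>; // payload parameters, if any
}

const PROXMOX_HOST = "https://proxmox.example.com:8006"; // placeholder host
const TOKEN = "PVEAPIToken=<user>@<realm>!<token-id>=<token-value>";

async function executeProxmoxCall(call: ProxmoxCall) {
  const res = await fetch(`${PROXMOX_HOST}${call.url}`, {
    method: call.response_type,
    headers: {
      Authorization: TOKEN,
      "Content-Type": "application/json",
    },
    body: call.details ? JSON.stringify(call.details) : undefined,
  });
  if (!res.ok) throw new Error(`Proxmox API error: ${res.status}`);
  return (await res.json()).data; // Proxmox wraps results in a "data" field
}

// Example: "Start VM 105 on node psb2."
executeProxmoxCall({
  response_type: "POST",
  url: "/api2/json/nodes/psb2/qemu/105/status/start",
});
```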
### Additional Notes

This template is designed for Proxmox 7.x and above. For advanced features like backups, VM snapshots, and detailed node monitoring, you can extend this workflow. Always test with a non-production Proxmox environment before deploying to live systems.

- Start with n8n
- Learn n8n with Amjid
- Get n8n Book
- What is Proxmox
by Ranjan Dailata
### Who this is for

The LinkedIn Profile Extract and JSON Resume Builder is a powerful workflow that scrapes professional profile data from LinkedIn using Bright Data's infrastructure, then transforms that data into a clean, structured JSON resume using Google Gemini. The workflow is ideal for automating resume parsing, candidate profiling, or integrating into recruiting platforms.

This workflow is tailored for:
- HR professionals & recruiters automating resume screening
- Talent acquisition platforms enriching candidate profiles
- Developers & AI builders creating resume-parsing AI pipelines
- Data scientists working on labor market analytics
- Growth hackers profiling prospects via public data

### What problem is this workflow solving?

Parsing resumes or LinkedIn profiles into machine-readable formats is often a manual, error-prone process. Most scraping tools either fail due to anti-bot protections or return unstructured HTML that's hard to work with. This workflow solves that by:
- Using Bright Data's Web Unlocker for reliable, CAPTCHA-free LinkedIn scraping
- Extracting clean text and structured profile data via the Google Gemini LLM
- Automatically generating a standards-compliant JSON Resume and skills list
- Sending the resume to webhooks or storing it for downstream usage

### What this workflow does

1. Accepts a LinkedIn profile URL and required metadata (Bright Data zone, webhook)
2. Scrapes the LinkedIn profile using Bright Data Web Unlocker (see the sketch at the end of this description)
3. Extracts clean content and skills using the Google Gemini LLM
4. Builds a JSON-formatted resume following the JSON Resume schema
5. Sends the JSON resume via webhook notification
6. Persists the output by saving the file to disk

### Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to `Bearer XXXXXXXXXXXXXX`, where `XXXXXXXXXXXXXX` is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node with the LinkedIn profile, Bright Data zone, and webhook notification URL. For testing purposes, you can obtain a webhook URL using https://webhook.site/

### How to customize this workflow to your needs

- **Add Language Translation** - Insert a translation LLM node to support multilingual profiles.
- **Generate PDF Resumes** - Convert JSON to formatted PDF resumes using an HTML-to-PDF module.
- **Push to ATS or CRM** - Add integration nodes to pipe data into applicant tracking systems (ATS), CRMs, or databases.
- **Use Alternative LLMs** - Swap Gemini with OpenAI or Anthropic Claude if preferred.
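A minimal sketch of the Web Unlocker request the scraping step performs, assuming Bright Data's `/request` API endpoint; the zone name is a placeholder:

```typescript
// Fetch a LinkedIn profile page through Bright Data Web Unlocker.
async function fetchProfileHtml(profileUrl: string): Promise<string> {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`, // Web Unlocker token
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone: "web_unlocker1", // your Web Unlocker zone name (placeholder)
      url: profileUrl,
      format: "raw", // return the raw page body
    }),
  });
  if (!res.ok) throw new Error(`Unlocker request failed: ${res.status}`);
  return res.text(); // HTML handed to Gemini for extraction
}
```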
by Custom Workflows AI
### Introduction

The "High-Level Service Page SEO Blueprint Report" workflow is a powerful, AI-driven solution designed to generate comprehensive SEO content strategies for service-based businesses. By analyzing competitor websites and user intent, this workflow creates a detailed blueprint that outlines the optimal structure, content, and conversion elements for a service page.

The workflow leverages the JINA Reader API to extract content from competitor websites and uses Google Gemini AI to perform deep analysis across multiple dimensions: competitor content structure, user intent, strategic opportunities, and conversion optimization. The final output is a professionally formatted Markdown document that provides actionable guidance for creating a high-performing service page that satisfies both user needs and search engine requirements.

This workflow eliminates the time-consuming process of manually analyzing competitors and developing content strategies, providing a data-driven foundation for service page creation that would typically require hours of expert analysis.

### Who is this for?

This workflow is designed for digital marketers, SEO specialists, content strategists, and web developers who need to create or optimize service pages for businesses. It's particularly valuable for marketing agencies and freelancers who regularly develop content strategies for clients across various industries.

Users should have a basic understanding of SEO concepts, content marketing, and website structure. While technical SEO knowledge is beneficial, the workflow is designed to provide comprehensive guidance even for those with intermediate-level expertise. The ideal user is someone who wants to streamline their content planning process and ensure their service pages are built on data-driven insights rather than guesswork.

### What problem is this workflow solving?

Creating effective service pages that rank well in search engines while converting visitors is a complex challenge that typically requires extensive competitive research, content planning, and conversion optimization expertise. This workflow addresses several key pain points:

1. **Time-consuming competitor analysis** - Manually analyzing multiple competitor websites to identify content patterns, heading structures, and meta tag strategies can take hours.
2. **Difficulty identifying content gaps** - Determining what topics competitors are missing that could provide a competitive advantage requires deep analysis and industry knowledge.
3. **Balancing SEO and conversion elements** - Creating content that satisfies both search engines and user needs while driving conversions is a delicate balance that many struggle to achieve.
4. **Lack of structured approach** - Many content creators work without a comprehensive blueprint, leading to inconsistent results and missed opportunities.
5. **Difficulty translating analysis into actionable recommendations** - Even when analysis is performed, turning those insights into a concrete content plan can be challenging.

This workflow automates these processes, providing a structured, data-driven approach to service page creation that saves hours of research and planning time.

### What this workflow does

**Overview**

The workflow takes a list of competitor URLs and a target keyword as input, then performs a multi-stage analysis to generate a comprehensive service page blueprint.
It extracts and analyzes competitor content, evaluates user intent, identifies strategic opportunities, and creates detailed recommendations for page structure, content, and conversion elements. The final output is a professionally formatted Markdown document that serves as a complete roadmap for creating an effective service page.

**Process**

1. **Data Collection** - The workflow begins with a form that collects essential information: competitor URLs, target keyword, services offered, brand name, and whether the page is a homepage.
2. **Competitor Content Extraction** - The workflow processes each competitor URL, using the JINA Reader API to extract the HTML content from each site (see the fetch sketch after the setup list).
3. **Content Structure Analysis** - For each competitor site, the workflow extracts and analyzes heading structures, meta tags, schema markup, and recurring phrases (n-grams).
4. **Competitor Analysis Report** - The AI synthesizes the competitive data to identify patterns in meta titles/descriptions, common outline sections, key heading concepts, and structural elements.
5. **User Intent Analysis** - The workflow analyzes the target keyword to determine primary and secondary user intents, user personas, and their position in the buyer's journey.
6. **Gap Analysis** - The AI identifies content overlaps ("table stakes"), content gaps (opportunities), SEO keyword priorities, and potential UX/conversion advantages.
7. **Page Outline Generation** - Based on the previous analyses, the workflow creates an optimal page structure with H1, H2s, H3s, and potentially H4s, with justifications for each section.
8. **UX & Conversion Recommendations** - The workflow adds detailed recommendations for calls-to-action, trust signals, copywriting tone, visual elements, and risk reversal strategies.
9. **Final Blueprint Creation** - All analyses and recommendations are compiled into a comprehensive, well-structured Markdown document that serves as a complete service page blueprint.

**Setup**

1. Download or import the "High-Level Service Page SEO Blueprint Report" workflow JSON file into your n8n instance.
2. Create a JINA Reader API key by visiting https://jina.ai/api-dashboard/key-manager. You can claim a free API key that allows up to 1 million tokens.
3. Set up Google Gemini (PaLM) credentials by following the guide at https://docs.n8n.io/integrations/builtin/credentials/googleai/#using-geminipalm-api-key.
4. Update the "Edit Fields" node with your JINA Reader API key; adjust the "Waiting Time" to 20 seconds if using the free Google Gemini API tier (which limits you to 5 requests per minute); and optionally change the Gemini model if needed.
5. Activate the workflow and start the form trigger.
6. Complete the form with: Competitors (up to 5 direct competitor URLs), Target Keyword (the query related to your service), Services Offered (details of your complete service offerings), Brand Name (your company name), and whether the page is a homepage.
7. After processing, download the generated .txt file, which contains the blueprint in Markdown format.
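A minimal sketch of the JINA Reader call used for competitor content extraction. Prefixing a URL with `r.jina.ai` returns the page in an LLM-friendly format; the `X-Return-Format` header requesting raw HTML is used here because the workflow parses HTML elements, and should be checked against JINA's current documentation:

```typescript
// Fetch a competitor page via JINA Reader.
async function fetchCompetitorPage(url: string): Promise<string> {
  const res = await fetch(`https://r.jina.ai/${url}`, {
    headers: {
      Authorization: `Bearer ${process.env.JINA_API_KEY}`,
      "X-Return-Format": "html", // return raw HTML for element extraction
    },
  });
  if (!res.ok) throw new Error(`JINA Reader failed: ${res.status}`);
  return res.text();
}
```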
### How to customize this workflow to your needs

- **Adjust AI parameters** - Modify the temperature settings in the Google Gemini Chat Model nodes to control creativity vs. precision in the AI outputs.
- **Customize extraction logic** - Edit the "Extract HTML Elements" code node to focus on the HTML elements most relevant to your industry or content type (a sketch of this kind of extraction follows below).
- **Modify analysis prompts** - Customize the prompts in the various analysis nodes to focus on the aspects of SEO or content strategy most important for your use case.
- **Add industry-specific guidance** - Enhance the prompts with industry-specific instructions or examples to make the output more relevant to particular sectors.
- **Integrate with content management systems** - Extend the workflow to automatically send the blueprint to content management systems, project management tools, or document storage platforms.
- **Add competitor scoring** - Implement a scoring system to evaluate and rank competitors based on criteria relevant to your strategy.
- **Expand the analysis** - Add analysis nodes to evaluate other aspects of competitor websites, such as page speed, mobile-friendliness, or backlink profiles.
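A minimal sketch of the kind of logic inside an "Extract HTML Elements" code node: pulling headings and meta tags out of the fetched HTML with regular expressions. The real node's selectors may differ, and a proper HTML parser would be more robust.

```typescript
// Extract heading structure and meta description from raw HTML.
function extractElements(html: string) {
  const headings: { level: number; text: string }[] = [];
  const headingRe = /<h([1-4])[^>]*>([\s\S]*?)<\/h\1>/gi;
  for (let m = headingRe.exec(html); m; m = headingRe.exec(html)) {
    headings.push({
      level: Number(m[1]),
      text: m[2].replace(/<[^>]+>/g, "").trim(), // strip nested tags
    });
  }

  const title = /<title[^>]*>([\s\S]*?)<\/title>/i.exec(html)?.[1]?.trim() ?? "";
  const metaDescription =
    /<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i
      .exec(html)?.[1] ?? "";

  return { title, metaDescription, headings };
}
```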
by Gavin
This template gives you the ability to monitor all uplinks in your Meraki Dashboard and then alert your team via your preferred method. This example posts a Teams notification to our Dispatch channel.

Setup will probably take around 30 minutes to 1 hour with the template. The most time-intensive steps are getting a Meraki API key (which I go over) and setting up the Teams node, for which n8n has good documentation. A sketch of the underlying Meraki API call follows below.

### Tutorial & explanation

https://www.youtube.com/watch?v=JvaN0dNwRNU
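A minimal sketch of the Meraki Dashboard API call the monitoring relies on, assuming the v1 appliance uplink statuses endpoint; the alerting branch (the Teams notification) is reduced to a console message here:

```typescript
const MERAKI_KEY = process.env.MERAKI_API_KEY!;
const ORG_ID = "<your-organization-id>"; // placeholder

// Fetch uplink statuses for all MX appliances in the organization.
async function checkUplinks() {
  const res = await fetch(
    `https://api.meraki.com/api/v1/organizations/${ORG_ID}/appliance/uplink/statuses`,
    { headers: { "X-Cisco-Meraki-API-Key": MERAKI_KEY } },
  );
  const devices: Array<{
    serial: string;
    networkId: string;
    uplinks: Array<{ interface: string; status: string }>;
  }> = await res.json();

  // Flag anything that is not "active", e.g. "failed" or "not connected".
  for (const device of devices) {
    for (const uplink of device.uplinks) {
      if (uplink.status !== "active") {
        console.log(`ALERT: ${device.serial} ${uplink.interface} is ${uplink.status}`);
        // In the workflow, this branch feeds the Teams notification node.
      }
    }
  }
}
```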