by Growth AI
Intelligent chatbot with custom knowledge base

Who's it for
Businesses, developers, and organizations that need a customizable AI chatbot for internal documentation access, customer support, e-commerce assistance, or any use case requiring intelligent conversation over a specific knowledge base.

What it does
This workflow creates a fully customizable AI chatbot that can be deployed on any platform supporting webhook triggers (websites, Slack, Teams, etc.). The chatbot accesses a personalized knowledge base stored in Supabase and can perform advanced actions beyond simple conversation, such as sending emails, scheduling appointments, or updating databases.

How it works
The workflow combines several components:
- Webhook Trigger: accepts messages from any platform that supports webhooks
- AI Agent: processes user queries with a customizable personality and instructions
- Vector Database: searches relevant information in your Supabase knowledge base
- Memory System: maintains conversation history for context and traceability
- Action Tools: performs additional tasks like email sending or calendar booking

Technical architecture
- The chat trigger connects directly to the AI Agent
- The language model, memory, and vector store all connect as tools/components to the AI Agent
- Embeddings connect specifically to the Supabase Vector Store for similarity search

Requirements
- Supabase account and project
- AI model API key (any LLM provider of your choice)
- OpenAI API key (for embeddings; covered in Cole Medin's tutorial)
- n8n built-in PostgreSQL access (for conversation memory)
- Platform-specific webhook configuration (optional)

How to set up
Step 1: Configure your trigger
- The template uses n8n's default chat trigger
- For external platforms: replace it with a webhook trigger and configure your platform's webhook URL
- Supported platforms: any service with webhook capabilities (websites, Slack, Teams, Discord, etc.)
Step 2: Set up your knowledge base
- For creating and managing your vector database, follow Cole Medin's tutorial on document vectorization
- The video shows how to build a complete knowledge base on Supabase
- The tutorial covers document processing, embedding creation, and database optimization
- Important: the video explains the OpenAI embeddings configuration required for vector search

Step 3: Configure the AI agent
- Define your prompt: customize the agent's personality and role. Example: "You are the virtual assistant for example.com. Help users by answering their questions about our products and services."
- Select your language model: choose any AI provider you prefer (OpenAI, Anthropic, Google, etc.)
- Set behavior parameters: define response style, tone, and limitations

Step 4: Connect the Supabase Vector Store
- Add the "Supabase Vector Store" tool to your agent and configure your Supabase project credentials
- Mode: set to "retrieve-as-tool" for automatic agent integration
- Tool Description: customize the description (default: "Database") to describe your knowledge base
- Table configuration: specify the table containing your knowledge base (the example uses "growth_ai_documents") and make sure the table name matches your actual knowledge base structure
- Multiple tables: you can connect several tables for an organized data structure
- The agent automatically decides when to search the knowledge base based on user queries

Step 5: Set up conversation memory (recommended)
- Use "Postgres Chat Memory" with n8n's built-in PostgreSQL credentials
- Configure the table name: choose a name for your chat history table (it will be auto-created)
- Context Window Length: 20 messages by default (adjustable to your needs)
- Benefits: conversation traceability and analytics, context retention across messages, unique conversation IDs for user sessions
- Note: history is stored in n8n's database, not Supabase

How to customize the workflow
Basic conversation features
- Response style: modify prompts to change personality and tone
- Knowledge scope: update Supabase tables to expand or focus the knowledge base
- Language support: configure for multiple languages
- Response length: set limits for concise or detailed answers
- Memory retention: adjust the context window length for longer or shorter conversation memory

Advanced action capabilities
The chatbot can be extended with additional tools for:
- Email automation: send support emails when users request assistance
- Calendar integration: book appointments directly in Google Calendar
- Database updates: modify Airtable or other databases based on user interactions
- API integrations: connect to external services and systems
- File handling: process and analyze uploaded documents

Platform-specific deployments
Website integration
- Replace the chat trigger with a webhook trigger
- Configure your website's chat widget to send messages to the n8n webhook URL
- Handle response formatting for your specific chat interface

Slack/Teams deployment
- Set up a webhook trigger with the Slack/Teams webhook URL
- Configure response formatting for platform-specific message structures
- Add platform-specific features (mentions, channels, etc.)
E-commerce integration
- Connect to product databases
- Add order tracking capabilities
- Integrate with payment systems
- Configure support ticket creation

Results interpretation
Conversation management
- Chat history: all conversations are stored in n8n's PostgreSQL database with unique IDs
- Context tracking: the agent maintains conversation flow and references previous messages
- Analytics potential: historical data is available for analysis and improvement

Knowledge retrieval
- Semantic search: the vector database returns the most relevant information based on meaning, not just keywords
- Automatic decision: the agent automatically determines when to search the knowledge base
- Source tracking: answers can be traced back to source documents
- Accuracy improvement: continuously refine the knowledge base based on user queries

Use cases
Internal applications
- Developer documentation: quick access to technical guides and APIs
- HR support: employee handbook and policy questions
- IT helpdesk: troubleshooting guides and system information
- Training assistant: learning materials and procedure guidance

External customer service
- E-commerce support: product information and order assistance
- Technical support: user manuals and troubleshooting
- Sales assistance: product recommendations and pricing
- FAQ automation: common questions and instant responses

Specialized implementations
- Lead qualification: gather customer information and schedule sales calls
- Appointment booking: healthcare, consulting, or service appointments
- Order processing: take orders and update inventory systems
- Multi-language support: global customer service with language detection

Workflow limitations
- Knowledge base dependency: quality depends on source documentation and embedding setup
- Memory storage: requires an active n8n PostgreSQL connection for conversation history
- Platform restrictions: some platforms may have webhook limitations
- Response time: vector search may add a slight delay to responses
- Token limits: large context windows may increase API costs
- Embedding costs: OpenAI embeddings are required for vector search functionality
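Under the hood, the "retrieve-as-tool" search is a similarity lookup over embedding vectors. Here is a minimal sketch of that idea in plain Python — illustrative only, since Supabase's pgvector extension performs this ranking in SQL, and the table and column layout is up to you:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=3):
    # docs: (text, embedding) rows, like those stored in the Supabase
    # documents table; returns the k texts most similar to the query
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The agent embeds the user's question with the same model used to build the knowledge base, then passes the top matches to the LLM as context.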
by Br1
Load Jira open issues with comments into Pinecone + RAG Agent (Direct Tool or MCP)

Who's it for
This workflow is designed for support teams, data engineers, and AI developers who want to centralize Jira issue data in a vector database. It collects open issues and their associated comments, converts them into embeddings, and loads them into Pinecone for semantic search, retrieval-augmented generation (RAG), or AI-powered support bots. It's also published as an MCP tool, so external applications can query the indexed issues directly.

How it works
The workflow automates Jira issue extraction, comment processing, and vector storage in Pinecone. Importantly, the Pinecone index is recreated on every run so that it always reflects the current set of unresolved tickets.
- Trigger – a schedule trigger runs the workflow at defined times (e.g., 8:00, 11:00, 14:00, and 17:00 on weekdays).
- Issue extraction with pagination – calls the Jira REST API to fetch open issues matching a JQL query (unresolved cases created in the last year). Pagination is fully handled: issues are retrieved in batches of 25, and the workflow keeps iterating until all open issues are loaded.
- Data transformation – extracts key fields (issue ID, key, summary, description, product, customer, classification, status, registration date).
- Comments integration – fetches all comments for each issue, filters out empty or irrelevant ones (images, dots, empty markdown), and merges them with the issue data.
- Text cleaning – converts HTML descriptions into clean plain text for processing.
- Embedding generation – uses the OpenAI Embeddings node to vectorize text.
- Vector storage with index recreation – loads embeddings and metadata into Pinecone under the jira namespace and the openissues index. The namespace is cleared on every run to ensure the index contains only unresolved tickets.
- Document chunking – splits long issue texts into smaller chunks (512 tokens, 50 overlap) for better embedding quality.
- MCP publishing – exposes the Pinecone index as an MCP tool (openissues), enabling external systems to query Jira issues semantically.

How to set up
- Jira – configure a Jira account and generate a token. Update the Jira node with credentials and adjust the JQL query if needed.
- OpenAI – set up an OpenAI API key for embeddings and configure the embedding dimensions (default: 512).
- Pinecone – create an index (e.g., openissues) with matching dimensions (512). Configure Pinecone API credentials and the namespace (jira). The index is cleared automatically on every run before reloading unresolved issues.
- Schedule – adjust the cron expression in the Schedule Trigger to fit your update frequency.
- Optional MCP – if you want to query Jira issues via MCP, configure the MCP trigger and tool nodes.

Requirements
- Jira account with API access and permissions to read issues and comments.
- OpenAI API key with access to the embedding model.
- Pinecone account with an index created (dimensions = 512).
- n8n instance with credentials set up for Jira, OpenAI, and Pinecone.

How to customize the workflow
- **JQL query**: modify it to control which issues are extracted (e.g., by project, type, or time window).
- **Pagination size**: adjust the maxResults parameter (default 25) for larger or smaller batches per iteration.
- **Metadata fields**: add or remove fields in the "Extract Relevant Info" code node.
- **Chunk size**: adjust the chunk size/overlap in the Document Chunker for different embedding strategies.
- **Embedding model**: switch to a different embedding provider if preferred.
- **Vector store**: replace Pinecone with another supported vector database if needed.
- **Downstream use**: extend with notifications, dashboards, or AI assistants that consume the vector data.

AI Chatbot for Jira open tickets with SLA insights

Who's it for
This workflow is designed for commercial teams, customer support, and service managers who need quick, conversational access to unresolved Jira tickets.
It enables them to check whether a client has open issues, see related details, and understand SLA implications without manually browsing Jira.

How it works
- **Chat interface** – provides a web-based chat that team members can use to ask natural-language questions such as: "Are there any issues from client ACME?" "Do we have tickets that have been open for a long time?"
- **AI Agent** – powered by OpenAI, it interprets questions and queries the Pinecone vector store (openissues index, jira namespace).
- **Memory** – maintains short-term chat history for more natural conversations.
- **Ticket retrieval** – uses Pinecone embeddings (dimension = 512) to fetch unresolved tickets enriched with metadata: issue key, description, customer, product, severity color, status, AM contract type, and SLA.
- **SLA integration** – service levels (Basic, Advanced, Full Service, with optional Fast Support) are provided via the SLA node. The agent explains which SLA applies based on ticket severity, registration date, and contract type.
- **AI response** – returns a friendly, collaborative summary of all tickets found, including: ticket identifier, description, customer and product, severity level (Red, Yellow, Green, White), ticket status, contract level, and SLA explanation.

Setup
- Configure the Jira → Pinecone index (openissues, 512 dimensions), already populated with unresolved tickets.
- Provide OpenAI API credentials.
- Ensure the SLA node includes the correct service-level definitions.
- Adjust the chat branding (title, subtitle, CSS) if desired.

Requirements
- Jira account with API access.
- Pinecone account with an index (openissues, dimensions = 512).
- OpenAI API key.
- n8n instance with LangChain and chatTrigger nodes enabled.

How to customize
- Change the SLA node text if your service levels differ.
- Adjust the chat interface design (colors, title, subtitle).
- Expand the metadata in Pinecone (e.g., add project type, priority, or assigned team).
- Add more examples to the system message to refine AI behavior.
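The pagination loop used for issue extraction can be sketched as follows. This is illustrative Python rather than the actual n8n nodes; `search` stands in for a call to Jira's search endpoint, and the response field names mirror the shape of Jira's search API (`issues`, `total`):

```python
def fetch_all_issues(search, max_results=25):
    # search(start_at, max_results) -> {"issues": [...], "total": N},
    # mirroring the shape of a Jira search response
    issues, start_at = [], 0
    while True:
        page = search(start_at, max_results)
        issues.extend(page["issues"])
        start_at += len(page["issues"])
        # Stop once every open issue has been collected
        if not page["issues"] or start_at >= page["total"]:
            break
    return issues
```

Each batch of 25 is processed and upserted to Pinecone before the next page is requested, so memory stays flat even with large backlogs.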
by Growth AI
AI-powered alt text generation from Google Sheets to WordPress media

Who's it for
WordPress site owners, content managers, and accessibility advocates who need to efficiently add alt text descriptions to multiple images for better SEO and web accessibility compliance.

What it does
This workflow automates generating and updating alt text for WordPress media files using AI analysis. It reads image URLs from a Google Sheet, analyzes each image with Claude AI to generate accessibility-compliant descriptions, updates the sheet with the generated alt text, and automatically applies the descriptions to the corresponding WordPress media files. The workflow includes error handling to skip unsupported media formats and continue processing.

How it works
- Input: provide a Google Sheets URL containing image URLs and WordPress media IDs
- Authentication: retrieves WordPress credentials from a separate sheet and generates Base64 authentication
- Processing: loops through each image URL in the sheet
- AI Analysis: Claude AI analyzes each image and generates concise, accessible alt text (max 125 characters)
- Error Handling: automatically skips unsupported media formats and continues with the next item
- Update Sheet: writes the generated alt text back to the Google Sheet
- WordPress Update: updates the WordPress media library with the new alt text via the REST API

Requirements
- Google Sheet with image URLs and WordPress media IDs
- WordPress site with Application Passwords enabled
- Claude AI (Anthropic) API credentials
- WordPress admin credentials stored in Google Sheets
- "Export Media URLs" WordPress plugin for generating the media list

How to set up
Step 1: Export your WordPress media URLs
- Install the "Export Media URLs" plugin on your WordPress site
- In the plugin settings, check both the ID and URL columns for export (these are mandatory for the workflow)
- Export your media list to get the required data

Step 2: Configure WordPress Application Passwords
- Go to WordPress Admin → Users → Your Profile
- Scroll down to the "Application Passwords" section
- Enter an application name (e.g., "n8n API")
- Click "Add New Application Password"
- Copy the generated password immediately (it won't be shown again)

Step 3: Set up Google Sheets
Duplicate this Google Sheets template to get the correct structure. The template includes two sheets:
- Sheet 1: "Export media" – paste your exported media data with columns: ID (WordPress media ID), URL (image URL), Alt text (populated by the workflow)
- Sheet 2: "Infos client" – add your WordPress credentials: Admin Name (your WordPress username), KEY (the application password you generated), Domaine (your site URL without https://, e.g., "example.com")

Step 4: Configure API credentials
- Add your Anthropic API credentials to the Claude node
- Connect your Google Sheets account to the Google Sheets nodes

How to customize
- Language: the Claude prompt is in French – modify it in the "Analyze image" node for other languages
- Alt text length: adjust the 125-character limit in the Claude prompt
- Batch processing: change the batch size in the Split in Batches node
- Error handling: the workflow automatically handles unsupported formats, but you can modify the error-handling logic
- Authentication: customize for different WordPress authentication methods

This workflow is perfect for managing accessibility compliance across large WordPress media libraries while maintaining consistent, AI-generated descriptions. It's built to be resilient and will continue processing even when encountering unsupported media formats.
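The Base64 authentication step builds a standard HTTP Basic header from the username and application password stored in the sheet. A quick sketch of what the workflow computes (function name is illustrative):

```python
import base64

def wp_auth_header(username, app_password):
    # WordPress Application Passwords use HTTP Basic auth:
    # the header value is base64("username:application_password")
    token = base64.b64encode(f"{username}:{app_password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

This header accompanies the REST API request that writes the alt text to each media item.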
by Meelioo
How it works
This beginner-friendly workflow demonstrates the core building blocks of n8n. It guides you through:
- **Triggers** – start workflows manually, on a schedule, via webhooks, or through chat.
- **Data processing** – use Set and Code nodes to create, transform, and enrich data.
- **Logic and branching** – apply conditions with IF nodes and merge different branches back together.
- **API integrations** – fetch external data (e.g., users from an API), split arrays into individual items, and extract useful fields.
- **AI-powered steps** – connect to OpenAI for generating fun facts or build interactive assistants with chat triggers, memory, and tools.
- **Responses** – return structured results via webhooks or summary nodes.

By the end, it demonstrates a full flow: creating data → transforming it → making decisions → calling APIs → using AI → responding with outputs.

Set up steps
Time required: 5–10 minutes.

What you need:
- An n8n instance (cloud or self-hosted).
- Optional: API credentials (e.g., OpenAI) if you want to test AI features.

Setup flow:
1. Import this workflow.
2. Add your API keys where needed (OpenAI, etc.).
3. Trigger the workflow manually or test with webhooks.

> 👉 Detailed node explanations and examples are already included as sticky notes inside the workflow itself, so you can learn step by step as you explore.
by Yusuke
🧠 Overview
Generate empathetic, professional reply drafts for customer or user messages. The workflow detects sentiment, tone, and risk level, drafts a concise response, sanitizes PII/links/emojis, and auto-escalates risky or low-confidence cases to human review.

⚙️ How It Works
1. Input — Manual Test or Webhook Trigger
2. AI Agent (Empathy) — returns { sentiment, tone, reply, confidence, needs_handover }
3. Post-Process & Sanitize — removes URLs/hashtags, masks PII, caps length
4. Risk & Handover Rules — checks the confidence threshold, risk words, and negativity
5. Routing — auto-sends safe replies or flags them to Needs Review

🧩 Setup Instructions (3–5 min)
Open Set Config1 and adjust:
- MAX_LEN (default 600)
- ADD_FOLLOWUP_QUESTION (true/false)
- FORMALITY (auto | casual | polite)
- EMOJI_ALLOWED (true/false), BLOCK_LINKS (true/false)
- RISK_WORDS (e.g., refund, lawsuit, self-harm)

Connect an Anthropic credential to the Anthropic Chat Model. (Optional) Replace the Manual Trigger with a Webhook Trigger for real-time use.

> Tip: If you need to show literal angle brackets in messages, use backticks like `<example>` (no HTML entities needed).

📚 Use Cases
1) SaaS Billing Complaints
- **Input:** "I was billed after canceling. This is unacceptable."
- **Output:** Calm, apologetic reply with refund steps; escalates if refund is in RISK_WORDS or confidence < 0.45.
2) Product Bug Reports
- **Input:** "Upload fails on large files since yesterday."
- **Output:** Acknowledges impact, requests logs, offers a workaround; routes to auto-send if low risk and high confidence.
3) Delivery/Logistics Delays
- **Input:** "My order is late again. Should I file a complaint?"
- **Output:** Empathetic apology, ETA guidance, partial-credit policy note; escalates if language indicates legal action.
4) Community Moderation / Abuse
- **Input:** "Support is useless—you're all scammers."
- **Output:** De-escalating, policy-aligned response; auto-flags due to negative sentiment + risk keyword match.
5) Safety / Self-harm Mentions
- **Input:** "I feel like hurting myself if this isn't fixed."
- **Output:** **Immediate escalation**; inserts approved resources; never auto-sends.

🚨 Auto-Escalation Rules (defaults)
- **Negative** sentiment
- Message matches any RISK_WORDS
- confidence < 0.45
- Mentions of legal, harassment, or self-harm context

🧪 Notes & Best Practices
- 🔐 No hardcoded API keys — use n8n Credentials
- 🧭 Tune thresholds and RISK_WORDS to your org policy
- 🧩 Works on self-hosted or cloud n8n
- ✅ Treat outputs as drafts; ship after human/policy review

🔗 Resources
**GitHub (template JSON):** https://github.com/yskmtb0714/n8n-workflows/blob/main/empathy-reply-assistant.json
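The sanitize and escalation steps can be approximated in a few lines. This is an illustrative sketch, not the template's exact node code; the threshold and RISK_WORDS values match the defaults listed above:

```python
import re

RISK_WORDS = {"refund", "lawsuit", "self-harm"}  # example defaults from Set Config1
MAX_LEN = 600

def sanitize(reply):
    # Strip URLs and hashtags, collapse whitespace, cap length
    reply = re.sub(r"https?://\S+", "", reply)
    reply = re.sub(r"#\w+", "", reply)
    reply = re.sub(r"\s+", " ", reply).strip()
    return reply[:MAX_LEN]

def needs_handover(message, sentiment, confidence):
    # Escalate on negative sentiment, low confidence, or a risk keyword
    if sentiment == "negative" or confidence < 0.45:
        return True
    return any(word in message.lower() for word in RISK_WORDS)
```

In the workflow these checks sit between the AI Agent and the routing node, so a reply is only auto-sent when every rule passes.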
by Ruthwik
📧 AI-Powered Email Categorization & Labeling in Zoho Mail
This n8n template demonstrates how to use AI text classification to automatically categorize incoming emails in Zoho Mail and apply the correct label (e.g., Support, Billing, HR). It saves time by keeping your inbox structured and ensures emails are routed to the right category.

Use cases include:
- Routing customer support requests to the correct team.
- Organizing billing and finance communications separately.
- Streamlining HR and recruitment email handling.
- Reducing inbox clutter and ensuring no important message is missed.

ℹ️ Good to know
- You'll need to configure Zoho OAuth credentials — see Self Client Overview, Authorization Code Flow, and the Zoho Mail OAuth Guide.
- The labels must already exist in Zoho Mail (e.g., Support, Billing, HR). The workflow fetches these labels and applies them automatically.
- The Zoho Mail API domain changes depending on your account region: .com → global accounts (https://mail.zoho.com/api/...), .eu → EU accounts (https://mail.zoho.eu/api/...), .in → India accounts (https://mail.zoho.in/api/...). Example: for an EU account, the endpoint would be https://mail.zoho.eu/api/accounts/<accountID>/updatemessage
- The AI model used for text classification may incur costs depending on your provider (e.g., OpenRouter).
- Start by testing with a small set of emails before enabling the workflow for your full inbox.

🔄 How it works
1. A new email in Zoho Mail triggers the workflow.
2. OAuth authentication retrieves access to Zoho Mail's API.
3. All available labels are fetched, and a label map (display name → ID) is created.
4. The AI model analyzes the subject and body to predict the correct category.
5. The workflow routes the email to the right category branch.
6. The matching Zoho Mail label is applied (the final node is deactivated by default).

🛠️ How to use
1. Create the required labels (e.g., Support, Billing, HR) in your Zoho Mail account before running the workflow.
2. Replace the Zoho Mail Account ID in the Set Account ID node.
3. Configure your Zoho OAuth credentials in the Get Access Token node.
4. Update the API base URL to match your Zoho account's region (.com, .eu, .in, etc.).
5. Activate the Apply Label to Email node once ready for production.
6. Optionally, adjust the categories in the AI classifier prompt to fit your organization's needs.

📋 Requirements
- Zoho Mail account with API access enabled.
- Labels created in Zoho Mail for each category you want to classify.
- OAuth credentials set up in n8n.
- The correct Zoho Mail API domain (.com, .eu, .in) for your account region.
- An AI model (via OpenRouter or another provider) for text classification.

🎨 Customising this workflow
This workflow can be adapted to many inbox-management scenarios. Examples include:
- Auto-routing customer inquiries to specific departments.
- Prioritizing VIP client emails with special labels.
- Filtering job applications directly into an HR-managed folder.
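The "label map" step — turning Zoho's label list into a display-name-to-ID lookup — can be sketched like this. Note the field names (`displayName`, `labelId`) are illustrative assumptions; check the actual response from your region's API domain:

```python
def build_label_map(labels):
    # labels: entries from Zoho Mail's label-list response.
    # Field names here are assumptions for illustration.
    return {lbl["displayName"]: lbl["labelId"] for lbl in labels}

def resolve_label(category, label_map):
    # Map the classifier's predicted category to a Zoho label ID,
    # failing loudly if the label was never created in Zoho Mail
    if category not in label_map:
        raise KeyError(f"Create the '{category}' label in Zoho Mail first")
    return label_map[category]
```

Failing on a missing label (rather than silently skipping) matches the requirement above that all category labels exist in Zoho Mail before the workflow runs.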
by John Alejandro Silva
🤖 Human-like Evolution API Agent with Redis & PostgreSQL
This production-ready template builds a sophisticated AI agent on the Evolution API that mimics human interaction patterns. Unlike standard chatbots that reply instantly to every incoming message, this workflow uses a smart Redis buffering system: it waits for the user to finish typing their full thought (text, audio, or image albums) before processing, creating a natural conversational flow.

It features a hybrid memory architecture: active conversations are cached in Redis for ultra-low latency, while the complete chat history is securely stored in PostgreSQL. To optimize token usage and maintain long-term coherence, a Context Refiner Agent summarizes the conversation history before the main AI generates a response.

✨ Key Features
- **Human-like Buffering:** the agent waits a configurable time to group consecutive messages, voice notes, and media albums into a single context. This prevents fragmented replies and feels like talking to a real person.
- **Hybrid Memory:** combines **Redis** (hot cache) for speed and **PostgreSQL** (cold storage) for permanent history.
- **Context Refinement:** a specialized AI step summarizes past interactions, allowing the main agent to understand long conversations without exceeding token limits or increasing costs.
- **Multi-Modal Support:** natively handles text, audio transcription, and image analysis via the Evolution API.
- **Parallel Processing:** manages the "typing..." status and session checks in parallel to reduce response latency.

📋 Requirements
To use this workflow, you must configure the Evolution API correctly:
1. Evolution API instance: a running instance of the Evolution API (see the Configuration Guide).
2. n8n community node: install the Evolution API node (n8n-nodes-evolution-api) in your n8n instance.
3. Database: a PostgreSQL database for chat history and a Redis instance for the buffer/cache.
4. AI models: API keys for your LLM (OpenAI, Anthropic, or Google Gemini).

⚙️ Setup Instructions
1. Install the node: go to Settings > Community Nodes in n8n and install n8n-nodes-evolution-api.
2. Credentials: configure credentials for Redis, PostgreSQL, and your AI provider (e.g., OpenAI/Gemini).
3. Database setup: create a chat_history table in PostgreSQL (columns must match the Insert node).
4. Redis connection: configure your Redis credentials in the workflow nodes.
5. Global variables: set the following in the "Global Variables" node:
   - wait_buffer: seconds to wait for the user to stop typing (e.g., 5)
   - wait_conversation: seconds to keep the cache alive (e.g., 300)
   - max_chat_history: number of past messages to retrieve
6. Webhook: point your Evolution API instance to this workflow's webhook URL.

🚀 How it Works
1. Ingestion: receives data via the Evolution API and detects whether it's text, audio, or an album.
2. Smart buffering: holds the execution to collect all parts of the user's message (simulating a human reading or listening).
3. Context retrieval: checks Redis for the active session; if empty, fetches from PostgreSQL.
4. Refinement: the Refiner Agent summarizes the history to extract key details.
5. Response: the main agent generates a reply based on the refined context and current buffer, then saves it to both Redis and Postgres.

💡 Need Assistance?
If you'd like help customizing or extending this workflow, feel free to reach out:
📧 Email: johnsilva11031@gmail.com
🔗 LinkedIn: John Alejandro Silva Rodríguez
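The buffering idea — wait until the user stops sending messages, then treat everything as one turn — can be sketched as follows. This is illustrative Python; the template implements it with Redis keys and the wait_buffer variable:

```python
def group_messages(messages, wait_buffer=5):
    # messages: (timestamp_seconds, text) pairs sorted by time.
    # Messages arriving within wait_buffer seconds of the previous
    # one are merged into a single conversational turn.
    turns, current, last_ts = [], [], None
    for ts, text in messages:
        if last_ts is not None and ts - last_ts > wait_buffer:
            turns.append(" ".join(current))
            current = []
        current.append(text)
        last_ts = ts
    if current:
        turns.append(" ".join(current))
    return turns
```

Grouping this way is why the agent replies to "hey" + "can you" + "help me?" with one coherent answer instead of three fragments.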
by John Alejandro Silva
🤖💬 Smart Telegram AI Assistant with Memory Summarization & Dynamic Model Selection

> Optimize your AI workflows, cut costs, and get faster, more accurate answers.

📋 Description
Tired of expensive AI calls, slow responses, or bots that forget your context? This Telegram AI Assistant template is designed to optimize cost, speed, and precision in your AI-powered conversations.

By combining PostgreSQL chat memory, AI summarization, and dynamic model selection, this workflow ensures you only pay for what you really need. Simple queries are routed to lightweight models, while complex requests automatically trigger more advanced ones. The result: smarter context, lower costs, and better answers.

This template is perfect for anyone who wants to:
- ⚡ Save money by using cheaper models for easy tasks.
- 🧠 Keep context relevant with AI-powered summarization.
- ⏱️ Respond faster thanks to optimized chat memory storage.
- 💬 Deliver better answers directly inside Telegram.

✨ Key Benefits
- 💸 **Cost Optimization:** automatically routes simple requests to Gemini Flash Lite and reserves Gemini Pro for complex reasoning.
- 🧠 **Smarter Context:** summarization ensures only the most relevant chat history is used.
- ⏱️ **Faster Workflows:** storing the user and agent messages in a single row halves DB queries and saves ~0.3 s per response.
- 🎤 **Voice Message Support:** converts Telegram voice notes to text and replies intelligently.
- 🛡️ **Error-Proof Formatting:** safe MarkdownV2 ensures Telegram-ready answers.

💼 Use Case
This template is for anyone who needs an AI chatbot on Telegram that balances cost, performance, and intelligence. Customer support teams can reduce expenses by using lightweight models for FAQs. Freelancers and consultants can offer faster AI-powered chats without losing context. Power users can handle voice and text seamlessly while keeping conversations memory-aware. Whether you're scaling a business or just want a smarter assistant, this workflow adapts to your needs and budget.

💬 Example Interactions
- **Quick Q&A** → routed to Gemini Flash Lite for fast, low-cost answers.
- **Complex problem-solving** → sent to Gemini Pro for in-depth reasoning.
- **Voice messages** → automatically transcribed, summarized, and answered.
- **Long conversations** → context is summarized, ensuring precise and efficient replies.

🔑 Required Credentials
- **Telegram Bot API** (bot token)
- **PostgreSQL** (database connection)
- **Google Gemini API** (Flash Lite, Flash, Pro)

⚙️ Setup Instructions
1. 🗄️ Create the PostgreSQL table (chat_memory) from the SQL in the gray section.
2. 🔌 Configure the Telegram Trigger with your bot token.
3. 🤖 Connect your Gemini API credentials.
4. 🗂️ Set up the PostgreSQL nodes with your DB details.
5. ▶️ Activate the workflow and start chatting with your AI-powered Telegram bot.

🏷 Tags
telegram ai-assistant chatbot postgresql summarization memory gemini dynamic-routing workflow-optimization cost-saving voice-to-text

🙏 Acknowledgement
A special thank you to Davide for the inspiration behind this template. His work on the AI Orchestrator that dynamically selects models based on input type served as a foundational guide for this architecture.

💡 Need Assistance?
Want to customize this workflow for your business or project? Let's connect:
📧 Email: johnsilva11031@gmail.com
🔗 LinkedIn: John Alejandro Silva Rodríguez
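The routing decision can be pictured with a simple heuristic. The template actually makes this choice with an LLM classifier, so the sketch below is only a stand-in to show the shape of the decision, and the model names are examples:

```python
def pick_model(message):
    # Crude stand-in for the dynamic model-selection step: long or
    # reasoning-heavy messages go to the stronger (pricier) model.
    complex_markers = ("why", "explain", "compare", "analyze", "plan")
    is_long = len(message.split()) > 40
    if is_long or any(m in message.lower() for m in complex_markers):
        return "gemini-pro"        # illustrative model name
    return "gemini-flash-lite"     # illustrative model name
```

Because most chat traffic is short Q&A, routing the default case to the lightweight model is where the cost savings come from.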
by Daniel Agrici
This workflow automates business intelligence. Submit one URL, and it scrapes the website, uses AI to perform a comprehensive analysis, and generates a professional report in Google Docs and PDF format. It's perfect for agencies, freelancers, and consultants who need to streamline client research or competitive analysis.

How It Works
The workflow is triggered by a form input, where you provide a single website URL.
- Scrape: uses Firecrawl to scrape the sitemap and get the full content from the target website.
- Analyze: the main workflow calls a Tools Workflow (included below) which uses Google Gemini and Perplexity AI agents to analyze the scraped content and extract key business information.
- Generate & Deliver: all the extracted data is formatted and used to populate a template in Google Docs. The final report is saved to Google Drive and delivered via Gmail.

What It Generates
The final report is a comprehensive business analysis, including:
- Business Overview: a full company description.
- Target Audience Personas: the demographic and psychographic profiles of ideal customers.
- Brand & UVP: the brand's personality matrix and its Unique Value Proposition (UVP).
- Customer Journey: the typical customer journey from Awareness to Loyalty.

Required Tools
This workflow requires n8n and API keys/credentials for the following services:
- Firecrawl (for scraping)
- Perplexity (for AI analysis)
- Google Gemini (for AI analysis)
- Google Services (for Docs, Drive, and Gmail)

⚠️ Required: Tools Workflow
This workflow will not work without its "Tools" sub-workflow. Please create a new, separate workflow in n8n, name it (e.g., "Business Analysis Tools"), and paste the following code into it.
{ "name": "Business Analysis Workflow Tools", "nodes": [ { "parameters": { "workflowInputs": { "values": [ { "name": "function" }, { "name": "keyword" }, { "name": "url" }, { "name": "location_code" }, { "name": "language_code" } ] } }, "type": "n8n-nodes-base.executeWorkflowTrigger", "typeVersion": 1.1, "position": [ -448, 800 ], "id": "e79e0605-f9ac-4166-894c-e5aa9bd75bac", "name": "When Executed by Another Workflow" }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "8d7d3035-3a57-47ee-b1d1-dd7bfcab9114", "leftValue": "serp_search", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "serp_search" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "bb2c23eb-862d-4582-961e-5a8d8338842c", "leftValue": "ai_mode", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "ai_mode" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "4603eee1-3888-4e32-b3b9-4f299dfd6df3", "leftValue": "internal_links", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "internal_links" } ] }, "options": {} }, "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ -208, 784 ], "id": "72c37890-7054-48d8-a508-47ed981551d6", "name": "Switch" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/organic/live/advanced", 
"authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword.replace(/[:'\"\\\\/]/g, '') }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"depth\": 10,\n \"group_organic_results\": true,\n \"load_async_ai_overview\": true,\n \"people_also_ask_click_depth\": 1\n }\n]", "options": { "redirect": { "redirect": { "followRedirects": false } } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 512 ], "id": "6203f722-b590-4a25-8953-8753a44eb3cb", "name": "SERP Google", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## SERP Google", "height": 272, "width": 688, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 432 ], "id": "81593217-034f-466d-9055-03ab6b2d7d08", "name": "Sticky Note3" }, { "parameters": { "assignments": { "assignments": [ { "id": "97ef7ee0-bc97-4089-bc37-c0545e28ed9f", "name": "platform", "value": "={{ $json.tasks[0].data.se }}", "type": "string" }, { "id": "9299e6bb-bd36-4691-bc6c-655795a6226e", "name": "type", "value": "={{ $json.tasks[0].data.se_type }}", "type": "string" }, { "id": "2dc26c8e-713c-4a59-a353-9d9259109e74", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "84c9be31-8f1d-4a67-9d13-897910d7ec18", "name": "results", "value": "={{ $json.tasks[0].result }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 512 ], "id": "a916551a-009b-403f-b02e-3951d54d2407", "name": "Prepare SERP output" }, { "parameters": { "content": "# Google Organic Search API\n\nThis API lets you retrieve real-time Google search results with a wide range of parameters and custom settings. 
\nThe response includes structured data for all available SERP features, along with a direct URL to the search results page. \n\n👉 Documentation\n", "height": 272, "width": 496, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 976, 432 ], "id": "87672b01-7477-4b43-9ccc-523ef8d91c64", "name": "Sticky Note17" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/ai_mode/live/advanced", "authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"device\": \"mobile\",\n \"os\": \"android\"\n }\n]", "options": { "redirect": { "redirect": {} } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 800 ], "id": "fb0001c4-d590-45b3-a3d0-cac7174741d3", "name": "AI Mode", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## AI Mode", "height": 272, "width": 512, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 720 ], "id": "2cea3312-31f8-4ff0-b385-5b76b836274c", "name": "Sticky Note11" }, { "parameters": { "assignments": { "assignments": [ { "id": "b822f458-ebf2-4a37-9906-b6a2606e6106", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "10484675-b107-4157-bc7e-b942d8cdb5d2", "name": "result", "value": "={{ $json.tasks[0].result[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 800 ], "id": "6b1e7239-ee2b-4457-8acb-17ce87415729", "name": "Prepare AI Mode Output" }, { "parameters": { "content": "# Google AI Mode API\n\nThis API provides AI-generated search result summaries and insights from Google. 
\nIt returns detailed explanations, overviews, and related information based on search queries, with parameters to customize the AI overview. \n\n👉 Documentation\n", "height": 272, "width": 496, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 800, 720 ], "id": "d761dc57-e35d-4052-a360-71170a155f7b", "name": "Sticky Note18" }, { "parameters": { "content": "## Input", "height": 384, "width": 544, "color": 7 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -528, 672 ], "id": "db90385e-f921-4a9c-89f3-53fc5825b207", "name": "Sticky Note" }, { "parameters": { "assignments": { "assignments": [ { "id": "b865f4a0-b4c3-4dde-bf18-3da933ab21af", "name": "platform", "value": "={{ $json.platform }}", "type": "string" }, { "id": "476e07ca-ccf6-43d4-acb4-4cc905464314", "name": "type", "value": "={{ $json.type }}", "type": "string" }, { "id": "f1a14eb8-9f10-4198-bbc7-17091532b38e", "name": "keyword", "value": "={{ $json.keyword }}", "type": "string" }, { "id": "181791a0-1d88-481c-8d98-a86242bb2135", "name": "results", "value": "={{ $json.results[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 800, 512 ], "id": "83fef061-5e0b-417c-b1f6-d34eb712fac6", "name": "Sort Results" }, { "parameters": { "content": "## Internal Links", "height": 272, "width": 272, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 1024 ], "id": "9246601a-f133-4ca3-aac8-989cb45e6cd2", "name": "Sticky Note7" }, { "parameters": { "method": "POST", "url": "https://api.firecrawl.dev/v2/map", "sendHeaders": true, "headerParameters": { "parameters": [ { "name": "Authorization", "value": "Bearer your-firecrawl-apikey" } ] }, "sendBody": true, "specifyBody": "json", "jsonBody": "={\n \"url\": \"https://{{ $json.url }}\",\n \"limit\": 400,\n \"includeSubdomains\": false,\n \"sitemap\": \"include\"\n }", "options": {} }, "type": "n8n-nodes-base.httpRequest", 
"typeVersion": 4.2, "position": [ 368, 1104 ], "id": "fd6a33ae-6fb3-4331-ab6a-994048659116", "name": "Get Internal Links" }, { "parameters": { "content": "# Firecrawl Map API\n\nThis endpoint maps a website from a single URL and returns the list of discovered URLs (titles and descriptions when available) — extremely fast and useful for selecting which pages to scrape or for quickly enumerating site links. (Firecrawl)\n\nIt supports a search parameter to find relevant pages inside a site, location/languages options to emulate country/language (uses proxies when available), and SDK + cURL examples in the docs,\n\n👉 Documentation\n\n[1]: https://docs.firecrawl.dev/features/map \"Map | Firecrawl\"\n", "height": 272, "width": 624, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 560, 1024 ], "id": "08457204-93ff-4586-a76e-03907118be3c", "name": "Sticky Note24" } ], "pinData": { "When Executed by Another Workflow": [ { "json": { "function": "serp_search", "keyword": "villanyszerelő Largo Florida", "url": null, "location_code": 2840, "language_code": "hu" } } ] }, "connections": { "When Executed by Another Workflow": { "main": [ [ { "node": "Switch", "type": "main", "index": 0 } ] ] }, "Switch": { "main": [ [ { "node": "SERP Google", "type": "main", "index": 0 } ], [ { "node": "AI Mode", "type": "main", "index": 0 } ], [ { "node": "Get Internal Links", "type": "main", "index": 0 } ] ] }, "SERP Google": { "main": [ [ { "node": "Prepare SERP output", "type": "main", "index": 0 } ] ] }, "AI Mode": { "main": [ [ { "node": "Prepare AI Mode Output", "type": "main", "index": 0 } ] ] }, "Prepare SERP output": { "main": [ [ { "node": "Sort Results", "type": "main", "index": 0 } ] ] }, "Sort Results": { "main": [ [] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "6fce16d1-aa28-4939-9c2d-930d11c1e17f", "meta": { "instanceId": "1ee7b11b3a4bb285563e32fdddf3fbac26379ada529b942ee7cda230735046a1" }, "id": "VjpOW2V2aNV9HpQJ", 
"tags": [] } `
by Toshiya Minami
Overview

This workflow analyzes customer survey responses, groups them by sentiment (positive / neutral / negative), generates themes and insights with an AI agent, and delivers a consolidated report to your destinations (Google Sheets, Slack). It runs on a daily schedule and uses batch-based AI analysis for accuracy.

Flow: Schedule → Fetch from Sheets → Group & batch (Code) → AI analysis → Aggregate → Save/Notify (Sheets, Slack)

What You'll Need

- A survey data source (Google Sheets recommended)
- AI model credentials (e.g., OpenAI or OpenRouter)
- Optional destinations: Google Sheets (summary sheet), Slack channel

Setup

- Data source (Google Sheets): In Get Survey Responses, replace YOUR_SHEET_ID and YOUR_SHEET_NAME with your sheet details. Ensure the sheet includes columns like: 満足度 (Rating), 自由記述コメント (Comment), 回答日時 (Timestamp).
- AI model: Add credentials to your preferred LLM node (OpenAI/OpenRouter). Keep the prompt's JSON-only requirement so the structured parser can consume it reliably.
- Destinations: For Save to Sheet, set your output documentId / sheetName; for Slack, set the target channelId on the Slack node.

How It Works

1. Daily Schedule Trigger — starts the workflow at your chosen time.
2. Get Survey Responses (Sheets) — reads survey data.
3. Group & Prepare Data (Code) — classifies by rating (>=4: positive, =3: neutral, <3: negative) and creates batches (max 50 per batch).
4. Loop Over Batches — feeds each sentiment batch to the AI separately for cleaner signals.
5. Analyze Survey Batch (AI Agent) — returns structured JSON: themes, insights, recommendations.
6. Add Metadata (Code) — attaches the original sentiment and item counts to each AI result.
7. Aggregate Results (Code) — merges all batches; outputs Top Themes, Key Insights, Priority Recommendations, and an Executive Summary.
8. Save to Sheet / Slack — appends the summary to a sheet and posts highlights to Slack.
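The grouping and batching logic in Group & Prepare Data can be sketched as a plain function. The field name 満足度 comes from the column list above; the function name and output shape are illustrative, not the node's exact code.

```javascript
// Sketch of the Group & Prepare Data logic: bucket rows by rating, then
// split each sentiment bucket into batches of at most `batchSize` items.
function groupAndBatch(rows, batchSize = 50) {
  const buckets = { positive: [], neutral: [], negative: [] };
  for (const row of rows) {
    const rating = Number(row['満足度']); // Rating column, integer 1-5
    if (rating >= 4) buckets.positive.push(row);
    else if (rating === 3) buckets.neutral.push(row);
    else buckets.negative.push(row);
  }
  const batches = [];
  for (const [sentiment, items] of Object.entries(buckets)) {
    for (let i = 0; i < items.length; i += batchSize) {
      batches.push({ sentiment, items: items.slice(i, i + batchSize) });
    }
  }
  return batches; // each batch is analyzed by the AI Agent separately
}
```

Each element of the returned array corresponds to one iteration of Loop Over Batches; an empty sentiment bucket simply produces no batch.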
Data Assumptions (Columns)

Your source should include at least:

- 満足度 (Rating) — integer 1–5
- 自由記述コメント (Comment) — string
- 回答日時 (Timestamp) — ISO string or date

Outputs

- **Consolidated summary** containing: top themes (with example quotes), key insights, priority recommendations, and an executive summary
- **Destinations**: Google Sheets (one row per run) and Slack (high-level highlights)

Customize

- Adjust the sentiment thresholds (e.g., require >=5 for positive) or the batch size (default 50) in the Code node.
- Tailor the AI prompt or the output JSON schema to your domain.
- Add more outputs (CSV export, database insert, additional channels) in parallel after the Aggregate step.

Before You Run (Checklist)

- [ ] Add credentials for Sheets / AI / Slack in Credentials
- [ ] Update documentId, sheetName, and the Slack channelId
- [ ] Confirm your column names match the Code node references
- [ ] Verify the schedule time and timezone (e.g., Asia/Tokyo)

Troubleshooting

- **Parser errors on AI output**: ensure the model response is JSON-only; reduce temperature or simplify the schema if needed.
- **Only some batches run**: check the batch size in Loop Over Batches and ensure each sentiment bucket actually contains responses.
- **No output to Sheets/Slack**: verify credentials, IDs, and required fields; confirm permissions.

Security & Template Notes

- Do not include credentials in the template file; users add their own after import.
- Use Sticky Notes to document Overview, Setup, Processing Logic, and Output choices. This template already includes guideline-friendly notes.
by isaWOW
Description

An AI-powered workflow that analyzes any website to identify missing pages that would improve user experience and business performance. Submit a URL, and the system detects existing pages, researches competitors using Perplexity, and generates a professional gap analysis report with prioritized recommendations, saved directly to Google Docs.

What this workflow does

This automation delivers a complete website page gap analysis:

- **Smart page detection:** Automatically scans the website HTML and identifies 15 common page types (About, Contact, Services, Blog, Portfolio, Pricing, FAQ, Testimonials, Team, Careers, Privacy, Terms, etc.)
- **Business type classification:** Determines whether the site is ecommerce, portfolio/agency, blog, SaaS, or service-based to tailor recommendations
- **Competitor research:** Uses Perplexity to research 5-7 top competitors in the same industry and identify their page structures
- **AI gap analysis:** GPT-4.1-mini with web search compares the website against industry standards and competitor best practices
- **Prioritized recommendations:** Generates High/Medium/Low priority suggestions with business value explanations and actionable content ideas
- **Google Docs report:** Saves a professional gap analysis report ready to share with clients or stakeholders

Setup requirements

Tools you'll need:

- Active n8n instance (self-hosted or n8n Cloud)
- Google Docs with OAuth access
- OpenAI API key (GPT-4.1-mini access with web search)
- Perplexity API key (for competitor research)

Estimated setup time: 15–20 minutes

Step-by-step setup

1. Connect Google Docs

- In n8n: Credentials → Add credential → Google Docs OAuth2 API
- Complete OAuth authentication
- Create a new Google Doc for storing reports (or use an existing one)
- Open the "Save Report to Google Docs" node
- Paste the Google Doc URL in the documentURL field

2.
Add OpenAI API credentials

- Get an API key: https://platform.openai.com/api-keys
- In n8n: Credentials → Add credential → OpenAI API
- Paste your API key
- Open the "OpenAI GPT-4.1 Mini with Web Search" node
- Select your OpenAI credential
- Ensure the model is set to gpt-4.1-mini
- Verify Web Search is enabled in the built-in tools section

3. Add Perplexity API credentials

- Get an API key: https://www.perplexity.ai/settings/api
- In n8n: Credentials → Add credential → Perplexity API
- Paste your API key
- Open the "Perplexity Competitor Research Tool" node
- Select your Perplexity credential

4. Share the form URL

- Open the "Submit Website URL for Analysis" node
- Copy the Form URL from the node settings
- Share this URL with anyone who needs to run website audits
- The form accepts a single field: Website URL

5. Test the workflow

- Open the Form URL in your browser
- Enter a test website: https://example.com
- Submit the form
- Wait 30-60 seconds for the analysis to complete
- Check your Google Docs: the gap analysis report should appear
- Verify that existing pages are correctly detected, the business type is identified, competitor research is included, and recommendations are prioritized and actionable

6. Activate the workflow

- Toggle the workflow to Active at the top
- The form will now accept submissions 24/7
- Each submission generates a new report appended to your Google Doc

How it works

1. URL submission via form

The user opens the form link and submits a website URL they want to analyze. The form triggers the workflow immediately.

2. HTML fetch and extraction

The workflow sends an HTTP request to the submitted URL and retrieves the complete HTML source code of the website's homepage.

3.
Automated page detection

A code node analyzes the HTML to detect 15 common page types:

- **Navigation pages:** Home, About, Contact, Services, Products
- **Content pages:** Blog, Portfolio, Pricing, FAQ
- **Social proof pages:** Testimonials, Team, Careers
- **Legal pages:** Privacy Policy, Terms of Service

The detection works by:

- Scanning all internal links in the HTML
- Matching URL patterns (e.g., /about, /contact-us, /services)
- Searching for navigation keywords in anchor text
- Normalizing URLs (removing query params, anchors, and trailing slashes)

4. Business type classification

The code also identifies the website's business type based on HTML content patterns:

- **Ecommerce:** Detects shopping cart, checkout, and product pages
- **Portfolio/Agency:** Identifies portfolio, case studies, and creative work
- **Blog/Content site:** Finds blog posts, articles, and news sections
- **SaaS/Software:** Detects subscription, cloud platform, and software keywords
- **Service/Agency:** Identifies consulting, marketing, and agency services
- **General:** Default if no specific patterns match

5. Deep competitor research with Perplexity

The AI Agent instructs Perplexity to:

- Crawl the target website thoroughly (navigation, footer, hidden menus)
- Research 5-7 top competitors in the identified business type
- Document each competitor's page structure
- Identify industry-standard pages that successful sites consistently have

Perplexity focuses only on user-facing pages and ignores technical files (sitemap.xml, robots.txt, admin pages).

6.
AI gap analysis and recommendations

GPT-4.1-mini with web search:

- Compares the website's existing pages against competitor structures
- Identifies genuinely missing pages (cross-references with detected existing pages)
- Prioritizes recommendations based on business impact (High/Medium/Low)
- Provides specific business value explanations for each recommendation
- Suggests actionable content for each missing page
- Includes competitor examples showing how others use these pages

Critical filtering rules:

- Never recommends pages already detected
- Excludes technical files and admin pages
- Only suggests pages with clear business value
- Each recommendation must be justified

7. Professional report generation

The AI Agent outputs a structured report containing:

- **Website Overview:** URL, business type, analysis date
- **Pages Confirmed as Existing:** Formatted list of detected pages
- **Gap Analysis Summary:** Total pages analyzed, existing vs. missing count
- **Recommended Pages to Add:** Detailed recommendations with priority, business value, content suggestions, and competitor examples
- **Implementation Priority Summary:** Quick-reference lists by priority level
- **Next Steps:** 2-3 actionable implementation steps

8. Save to Google Docs

The complete report is inserted into your Google Docs document. Each new analysis appends to the same document, creating a running archive of all audits performed.
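The page detection and business type classification described above can be sketched roughly as follows. The pattern lists and function names here are illustrative assumptions, not the template's actual code node (which covers all 15 page types and 6 business categories):

```javascript
// Illustrative page detection: match normalized internal link paths
// against common URL patterns (only a few of the 15 page types shown).
const PAGE_PATTERNS = {
  about: /\/about(-us)?$/i,
  contact: /\/contact(-us)?$/i,
  services: /\/services$/i,
  pricing: /\/pricing$/i,
  blog: /\/blog$/i,
};

function detectPages(links) {
  const found = new Set();
  for (const href of links) {
    // normalize: strip query params, anchors, and trailing slashes
    const path = href.split(/[?#]/)[0].replace(/\/+$/, '');
    for (const [page, re] of Object.entries(PAGE_PATTERNS)) {
      if (re.test(path)) found.add(page);
    }
  }
  return [...found].sort();
}

// Illustrative business type classification from HTML content keywords;
// the first matching category wins, with 'general' as the fallback.
function classifyBusinessType(html) {
  const h = html.toLowerCase();
  if (/(add to cart|checkout|shopping cart)/.test(h)) return 'ecommerce';
  if (/(portfolio|case stud)/.test(h)) return 'portfolio/agency';
  if (/(blog|article|news)/.test(h)) return 'blog/content';
  if (/(subscription|cloud platform|saas)/.test(h)) return 'saas';
  if (/(consulting|marketing|agency)/.test(h)) return 'service/agency';
  return 'general';
}
```

This also shows why non-standard URLs (e.g., /company-info instead of /about) can slip past pattern matching, which is why the workflow additionally checks anchor-text keywords.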
Key features

✅ Automatic page detection: Scans HTML and identifies 15 common page types without manual input, so you don't need to know the site structure beforehand
✅ Business type intelligence: Classifies websites into 6 business categories to provide industry-relevant recommendations
✅ Perplexity-powered research: Deep competitor analysis covering 5-7 top sites in the same niche with complete page inventories
✅ Smart filtering: Never recommends pages that already exist, even if they use different naming conventions or URL structures
✅ Priority-based recommendations: Every suggestion is labeled High/Medium/Low with a clear business justification, so you know what to build first
✅ Actionable content ideas: Not just "add a blog", but specific suggestions for what content should go on each recommended page
✅ Competitor examples: See how successful competitors use each recommended page type on their own sites
✅ Google Docs integration: Professional reports saved automatically; no downloads, no formatting, ready to share
✅ Form-based workflow: Single URL submission; anyone can run audits without touching n8n

Troubleshooting

HTML fetch fails

- **SSL certificate issues:** Some websites block automated requests. Try adding "rejectUnauthorized": false in the HTTP Request options.
- **Cloudflare/bot protection:** Sites with aggressive protection may block the request. Test with a simple site first (like your own).
- **Timeout errors:** Increase the timeout setting in the HTTP Request node to 30-60 seconds for slow-loading sites.

Pages not detected correctly

- **Non-standard URL structure:** The workflow detects pages using common patterns (/about, /contact, etc.). If a site uses unusual URLs, like /company-info instead of /about, manual review may be needed.
- **Single-page websites:** Sites built as SPAs (single-page applications) with JavaScript routing may not have distinct page URLs in the HTML. The workflow works best with traditional multi-page sites.
- **Check detection details:** The code node outputs detection_details showing exactly what was found. Review this to debug false negatives.

Perplexity API errors

- **Rate limits:** Perplexity has usage limits. If you hit them, wait or upgrade your plan.
- **API key invalid:** Verify the key is correct at https://www.perplexity.ai/settings/api
- **Quota exceeded:** Check your Perplexity dashboard for remaining credits.

OpenAI web search not working

- **Web search disabled:** Ensure "Web Search" is enabled in the OpenAI Chat Model node under "Built-in Tools"
- **Search context size:** Set to "Medium" for balanced performance
- **Model compatibility:** Web search only works with GPT-4 models, not GPT-3.5

Report not saving to Google Docs

- **Re-authenticate OAuth:** Go to Credentials → Google Docs OAuth2 API → Reconnect
- **Document URL format:** Ensure the URL is a valid Google Docs link (not a folder or Sheets link)
- **Permissions:** Verify the connected Google account has edit access to the document
- **Document locked:** Check whether the document is open in another tab with unsaved changes

AI recommendations are too generic

- **Perplexity research quality:** The AI relies on Perplexity's research. If competitors have similar structures, recommendations may overlap. Try analyzing a more unique website.
- **Business type misclassification:** Check the detected business_type in the code output. If it's wrong, the recommendations will be off-target.
- **Improve the prompt:** Edit the AI Agent system prompt to emphasize more specific or creative recommendations for your niche.

Use cases

- **Web design agencies:** Audit client websites before proposals. Show exactly which pages are missing compared to competitors, with business justification for each recommendation. Win more projects by demonstrating data-driven insights.
- **SEO consultants:** Include page gap analysis in site audits. Identify missing pages that competitors rank for (FAQ, pricing, testimonials). Provide clients with actionable roadmaps for site expansion.
- **In-house marketing teams:** Analyze competitor websites quarterly to spot new page types or content strategies. Keep your site competitive by identifying gaps before leadership asks about them.
- **Freelance developers:** Offer value-add services to existing clients. Run audits on their sites, identify quick wins (missing FAQ, testimonials, pricing pages), and sell additional development work.
- **Startup founders:** Before hiring a design agency, understand which pages you actually need. Get a competitor-researched report showing industry standards for your business type, saving time and budget.
- **Content strategists:** Identify content opportunities beyond blog posts. See which informational pages (FAQ, case studies, resource libraries) competitors have that you're missing.

Expected results

- **Time savings:** 2-3 hours saved per website audit (manual competitor research eliminated)
- **Analysis speed:** Complete gap analysis in 30-60 seconds vs. hours of manual work
- **Competitor coverage:** Research 5-7 competitors automatically vs. 1-2 manual comparisons
- **Report quality:** Professional, shareable reports vs. rough notes or spreadsheets
- **Actionability:** Prioritized recommendations with business value vs. generic "add more pages" advice
- **Scalability:** Run 50+ audits per day without additional effort

Support

Need help or custom development?
📧 Email: info@isawow.com
🌐 Website: https://isawow.com/
by Lucas
🎶 Add liked songs to a monthly playlist

> This workflow is a port of "Add saved songs to a monthly playlist" from IFTTT.

When you like a song, the workflow saves it to a monthly playlist. E.g.: it's June 2024 and I like a song; the workflow saves it to a playlist called June '24. If this playlist does not exist, the workflow creates it for me.

⚙ How it works

Every 5 minutes, the workflow starts automatically and does three things:

- Gets the last 10 songs you saved in the "Liked Songs" playlist (by clicking the heart in the app) and saves them in a NocoDB table (the workflow avoids creating duplicates, of course).
- Checks whether the monthly playlist has already been created. If not, the playlist is created. The created playlist is also saved in NocoDB to avoid any problems.
- Checks whether the monthly playlist contains all the songs liked this month by getting them from NocoDB. Any that are missing are added to the playlist one by one.

You may wonder why NocoDB is needed. Over the last few weeks/months, I've had duplication problems in my playlists, and some playlists have been created twice because Spotify wasn't returning all the information, only partial information. Having the database means I don't have to rely on Spotify's data but on my own, which is accurate and represents reality.

📝 Prerequisites

You need to have:

- Spotify API keys, which you can obtain by creating a Spotify application here: https://developer.spotify.com/dashboard
- A NocoDB API token

📚 Instructions

Follow the instructions below:

- Create your Spotify API credential
- Create your NocoDB credential
- Populate all Spotify nodes with your credentials
- Populate all NocoDB nodes with your credentials
- Enjoy!

If you need help, feel free to ping me on the n8n Discord server or send me a DM at "LucasAlt".

Show your support

- Share your workflow on X and mention @LucasCtrlAlt
- Consider buying me a coffee 😉
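The monthly naming convention described above (e.g., June '24) can be sketched as a small helper; the function name is hypothetical and this is not the workflow's actual node expression:

```javascript
// Build the monthly playlist name for a given date, e.g. "June '24".
function monthlyPlaylistName(date = new Date()) {
  const month = date.toLocaleString('en-US', { month: 'long' }); // "June"
  const year = String(date.getFullYear()).slice(-2);             // "24"
  return `${month} '${year}`;
}
```

The workflow looks this name up in NocoDB first, so the playlist is only created once per month even if Spotify's own search returns partial results.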