by Julian Kaiser
Startup Funding Research Automation with Claude, Perplexity AI, and Airtable

How it works

This intelligent workflow automatically discovers and analyzes recently funded startups by:

- Monitoring multiple news sources (TechCrunch and VentureBeat) for funding announcements
- Using AI to extract key funding details (company name, amount raised, investors)
- Conducting automated deep research on each company through Perplexity Deep Research or Jina DeepSearch
- Organizing all findings into a structured Airtable database for easy access and analysis

Set up steps (10-15 minutes)

1. Connect your news feed sources (TechCrunch and VentureBeat). The list could be extended; these two were easy to scrape, and this kind of data can be expensive.
2. Set up your AI service credentials (Claude, and Perplexity or Jina, which has a generous free tier).
3. Connect your Airtable account and create a base with the appropriate fields (it can be imported from my base, or see the structure below).

Airtable Base Structure

Funding Round Base

| Field Name | Data Type | Description |
|------------|-----------|-------------|
| website_url | String | URL of the company website |
| company_name | String | Name of the company |
| funding_round | String | The funding stage or round (e.g., Series A, Seed, etc.) |
| funding_amount | Number | The amount of funding received |
| lead_investor | String | The primary investor leading the funding round |
| market | String | The market or industry sector the company operates in |
| participating_investors | String | List of other investors participating in the funding round |
| press_release_url | String | URL to the press release about the funding |
| evaluation | Number | The company's valuation |

Company Deep Research Base

| Field Name | Data Type | Description |
|------------|-----------|-------------|
| website_url | String | URL of the company website |
| company_name | String | Name of the company |
| funding_round | String | The funding stage or round (e.g., Series A, Seed, etc.) |
| funding_amount | Number | The amount of funding received |
| currency | String | Currency of the funding amount |
| announcement_date | String | Date when the funding was announced |
| lead_investor | String | The primary investor leading the funding round |
| participating_investors | String | List of other investors participating in the funding round |
| industry | String | The industry sectors the company operates in |
| company_description | String | Description of the company's business |
| hq_location | String | Company headquarters location |
| founding_year | Number | Year the company was founded |
| founder_names | String | Names of the company founders |
| ceo_name | String | Name of the company CEO |
| employee_count | Number | Number of employees at the company |
| total_funding | Number | Total funding amount received to date |
| total_funding_currency | String | Currency of total funding |
| funding_purpose | String | Purpose or use of the funding |
| business_model | String | Company's business model |
| valuation | Object | Company valuation information |
| previous_rounds | Object | Information about previous funding rounds |
| source_urls | String | Source URLs for the funding information |
| original_report | String | Original report text about the funding |
| market | String | The market the company operates in |
| press_release_url | String | URL to the press release about the funding |
| evaluation | Number | The company's valuation |

Notes

I found that by using Perplexity via OpenRouter, we lose access to the sources, as they are not stored in the same location as the report itself, so I opted to call the Perplexity API directly via an HTTP Request node. For Perplexity and/or Jina you have to configure header auth as described in Header Auth - n8n Docs.

What you can learn

- How to scrape data using sitemaps
- How to extract structured data from unstructured text
- How to execute parts of the workflow as a subworkflow
- How to use deep research in a practical scenario
- How to define more complex JSON schemas
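The Notes above mention calling the Perplexity API directly via an HTTP Request node to keep access to the sources. A minimal Python sketch of that request is below; the model name and the `citations` field follow Perplexity's public API, while the company name and prompt are purely illustrative.

```python
# Hedged sketch of the deep-research call the HTTP Request node makes against
# the Perplexity API with header auth. The key and prompt are placeholders.
import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": "Bearer <perplexity-api-key>"},
    json={
        "model": "sonar-deep-research",
        "messages": [{
            "role": "user",
            "content": "Research the company Acme AI: founders, HQ, business model, total funding.",
        }],
    },
)
resp.raise_for_status()
data = resp.json()

report = data["choices"][0]["message"]["content"]
sources = data.get("citations", [])  # the source URLs that are lost when proxying via OpenRouter
```

Calling the API directly like this is exactly why the sources survive: they travel alongside the report in the same response.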
by Zacharia Kimotho
n8n recently introduced folders, which has been a big improvement for workflow management on top of tags. However, existing workflows need to be moved into folders manually. The simplest idea is to convert the current tags into folders and move all the workflows carrying each tag into the corresponding folder. This assumes the tag name will be used as the folder name.

To note

For workflows that use more than one tag, the workflow will be assigned to the folder of the last tag processed.

How does it work

I took the liberty of simplifying the setup needed on your part so the workflow is beginner-friendly. (A sketch of the underlying API call is shown below.)

1. Copy and paste this workflow into your n8n canvas. You must have existing workflows and tags before you can run this.
2. Set your n8n login details on the node "set Credentials" with the n8n URL, username, and password.
3. Set up your n8n API credentials on the n8n node "get workflows".
4. Run the workflow. This opens a form where you can select the number of tags to move; click Submit.
5. The workflow responds with the number of workflows that were successfully moved.

Read more about the template

Built by Zacharia Kimotho - Imperol
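For the curious, here is a hedged Python sketch of the first step: listing existing workflows and grouping them by tag via n8n's public REST API. The instance URL and API key are placeholders, and field names should be verified against your n8n version; the actual folder move is performed by the workflow itself through n8n's session-based API after logging in with your username and password.

```python
# Sketch: group existing workflows by tag using the n8n public API.
import requests
from collections import defaultdict

N8N_URL = "https://your-n8n-instance.example.com"  # placeholder: your instance URL
HEADERS = {"X-N8N-API-KEY": "<your-api-key>"}

resp = requests.get(f"{N8N_URL}/api/v1/workflows", headers=HEADERS)
resp.raise_for_status()

by_tag = defaultdict(list)
for wf in resp.json()["data"]:
    for tag in wf.get("tags", []):
        by_tag[tag["name"]].append(wf["id"])
        # NOTE: a workflow with several tags lands in several buckets here;
        # the template resolves this by keeping only the last tag processed.

for tag_name, ids in by_tag.items():
    print(f"folder '{tag_name}': {len(ids)} workflow(s)")
```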
by CustomJS
n8n Workflow: HTML to PDF Generator

This n8n workflow converts HTML content into a styled PDF and returns it as a response via a webhook. The workflow receives HTML input, processes it using CustomJS's PDF toolkit (@custom-js/n8n-nodes-pdf-toolkit), and sends the resulting PDF back to the original webhook requester.

Features:

- Webhook Trigger: Accepts incoming requests with HTML content.
- HTML to PDF Conversion: Uses CustomJS to transform HTML into a PDF.
- Response: Sends the generated PDF back as the webhook response.

Requirements:

- Self-hosted n8n instance
- A CustomJS API key for HTML to PDF conversion
- HTML content to be converted into a PDF

Workflow Steps:

1. Webhook Trigger: Accepts incoming HTTP requests with HTML content. This data is passed to the next node for processing.
2. HTML to PDF Conversion: Uses the CustomJS node to convert the incoming HTML into a PDF document. You can customize the HTML content to match the design requirements.
3. Respond to Webhook: Sends the generated PDF as a binary response to the original webhook request.

Setup Guide:

1. Configure CustomJS API
- Sign up at CustomJS.
- Retrieve your API key from the profile page.
- Add your API key as n8n credentials.

2. Design Workflow
- Create a Webhook: Set up a webhook to trigger the workflow when HTML content is received.
- Prepare HTML Content: The incoming request should include the HTML content you wish to convert into a PDF.
- Configure HTML to PDF Node: Use the HTML to PDF node to convert the provided HTML into a PDF. The node uses the HTML input to generate a PDF via the CustomJS API.
- Respond with the PDF: The Respond to Webhook node sends the generated PDF back to the original requester as a binary response.

Example HTML Input:

Hello CustomJS! CustomJS provides the missing toolset for your no-code projects

Result: the generated PDF.
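As a quick test of the finished workflow, you can post HTML to the webhook from any HTTP client. The sketch below assumes the Webhook node reads an `html` field from a JSON body and that the Respond to Webhook node returns the PDF as binary data; the URL path is a placeholder, so adjust both to your own configuration.

```python
# Minimal sketch of calling the workflow's webhook and saving the returned PDF.
import requests

WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/html-to-pdf"  # placeholder path

payload = {  # assumption: the Webhook node expects an "html" field
    "html": "<h1>Hello CustomJS!</h1>"
            "<p>CustomJS provides the missing toolset for your no-code projects</p>"
}
resp = requests.post(WEBHOOK_URL, json=payload)
resp.raise_for_status()

# The Respond to Webhook node returns the PDF as binary data.
with open("output.pdf", "wb") as f:
    f.write(resp.content)
```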
by Muhammad Asadullah
Document Chat Bot with Automated RAG System

This workflow creates a conversational assistant that can answer questions based on your Google Drive documents. It automatically processes various file types and uses Retrieval-Augmented Generation (RAG) to provide accurate answers based on your document content.

How It Works

- Monitors Google Drive for New Documents: Automatically detects when files are created or updated in designated folders
- Processes Multiple File Types: Handles PDFs, Excel spreadsheets, and Google Docs
- Builds a Knowledge Base: Converts documents into searchable vector embeddings stored in Supabase
- Provides a Chat Interface: Users can ask questions about their documents through a web interface
- Retrieves Relevant Information: Uses advanced RAG techniques to find and present the most relevant information

Setup Steps (Estimated time: 25-30 minutes)

1. API Credentials: Connect your OpenAI API key for text processing and embeddings
2. Google Drive Integration: Set up Google Drive triggers to monitor specific folders
3. Supabase Configuration: Configure the Supabase vector database for document storage
4. Chat Interface Setup: Deploy the web-based chat interface using the provided webhook

The workflow automatically chunks documents into manageable segments, generates embeddings, and stores them in a vector database for efficient retrieval. When users ask questions, the system finds the most relevant document sections and uses them to generate accurate, contextual responses. A sketch of this ingestion pipeline is shown below.
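The following Python sketch illustrates that chunk → embed → store pipeline outside of n8n. It assumes an OpenAI API key and a Supabase project with pgvector enabled, plus a `documents` table with `content` and `embedding` columns; these table and column names are assumptions for illustration, not taken from the template.

```python
# Illustrative sketch of the chunk -> embed -> store pipeline.
from openai import OpenAI
from supabase import create_client

openai_client = OpenAI(api_key="<openai-key>")
supabase = create_client("https://<project>.supabase.co", "<service-role-key>")

def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    # Simple fixed-size chunking with overlap, similar in spirit to n8n's text splitter.
    return [text[start:start + size] for start in range(0, len(text), size - overlap)]

document_text = "...full text extracted from the Google Drive file..."
chunks = chunk_text(document_text)

# Embed all chunks in one request, then insert each chunk with its vector.
embeddings = openai_client.embeddings.create(model="text-embedding-3-small", input=chunks)
rows = [
    {"content": chunk, "embedding": item.embedding}
    for chunk, item in zip(chunks, embeddings.data)
]
supabase.table("documents").insert(rows).execute()  # assumed table name
```

At question time, the same embedding model vectorizes the user's query, and a similarity search over `embedding` returns the chunks fed to the chat model as context.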
by Custom Workflows AI
Introduction The "Automatic Weekly Digital PR Stories Suggestions" workflow is a sophisticated automated system designed to identify trending news stories on Reddit, analyze public sentiment through comment analysis, extract key information from source articles, and generate strategic angles for potential digital PR campaigns. This workflow leverages the power of social media trends, natural language processing, and AI-driven analysis to deliver curated, sentiment-analyzed news opportunities for PR professionals. Operating on a weekly schedule, the workflow searches Reddit for posts related to specified topics, filters them based on engagement metrics, and performs a deep analysis of both the content and public reaction. It then generates comprehensive reports that include story opportunities, audience insights, and strategic recommendations. These reports are automatically compiled, stored in Google Drive, and shared with team members via Mattermost for immediate collaboration. This workflow solves the time-consuming process of manually monitoring social media for trending stories, analyzing public sentiment, and identifying PR opportunities. By automating these tasks, PR professionals can focus on strategy development and execution rather than spending hours on research and analysis. Who is this for? This workflow is designed for digital PR professionals, content marketers, communications teams, and media relations specialists who need to stay on top of trending stories and public sentiment to develop timely and effective PR campaigns. It's particularly valuable for: PR agencies managing multiple clients across different industries In-house PR teams needing to identify media opportunities quickly Content marketers looking for trending topics to create timely content Communications professionals monitoring public perception of industry news Users should have basic familiarity with n8n workflows and the PR strategy development process. While technical knowledge of the integrated APIs is not required to use the workflow, some understanding of Reddit, sentiment analysis, and PR campaign development would be beneficial for interpreting and acting on the generated reports. What problem is this workflow solving? Digital PR professionals face several challenges that this workflow addresses: Information Overload: Manually monitoring social media platforms for trending stories is time-consuming and often results in missed opportunities. Sentiment Analysis Complexity: Understanding public perception of news stories requires reading through hundreds of comments and identifying patterns, which is labor-intensive and subjective. Content Extraction: Visiting multiple news sources to read and analyze articles takes significant time. Strategic Angle Development: Identifying unique PR angles that leverage trending stories and public sentiment requires synthesizing large amounts of information. Team Collaboration: Sharing findings and insights with team members in a structured format can be cumbersome. By automating these processes, the workflow enables PR professionals to quickly identify trending stories with PR potential, understand public sentiment, and develop strategic angles based on comprehensive analysis, all while maintaining a structured approach to team collaboration. 
What this workflow does

Overview

The workflow automatically identifies trending posts on Reddit related to specified topics, analyzes both the content of linked articles and public sentiment from comments, and generates comprehensive PR strategy reports. These reports include story opportunities, audience insights, and strategic recommendations based on the analysis. The final reports are compiled, stored in Google Drive, and shared with team members via Mattermost.

Process

1. Topic Selection and Reddit Search: The workflow starts with a list of topics specified in the "Set Data" node. It searches Reddit for posts related to these topics, and filters them based on upvotes and other criteria to focus on trending content (see the sketch at the end of this description).
2. Comment Analysis: For each post, the workflow retrieves comments and extracts the top 30 by score. Using Claude AI, it analyzes the comments to understand overall sentiment, dominant narratives, audience insights, and PR implications.
3. Content Analysis: The workflow extracts the content of the linked article using Jina AI and analyzes it to identify core story elements, technical aspects, narrative opportunities, and viral elements.
4. PR Strategy Development: Based on the combined analysis of comments and content, the workflow generates first-mover story opportunities, trend-amplifier story ideas, priority rankings, an execution roadmap, and strategic recommendations.
5. Report Generation and Distribution: The workflow compiles comprehensive reports for each post. Reports are converted to text files, all files are compressed into a ZIP archive, the archive is uploaded to Google Drive, and a link to it is shared with team members via Mattermost.

Setup

To set up this workflow, follow these steps:

1. Import the Workflow: Download the workflow JSON file and import it into your n8n instance.
2. Configure API Credentials:
- Reddit: Add a new "Reddit OAuth2 API" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/reddit/
- Anthropic: Add a new "Anthropic Account" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/anthropic/
- Google Drive: Add a new "Google Drive OAuth2 API" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/google/oauth-single-service/
3. Configure the "Set Data" Node: Set the topics you're interested in (one per line) and add your Jina API key (obtain one from https://jina.ai/api-dashboard/key-manager).
4. Configure the Mattermost Node: Update your Mattermost instance URL, and set your Webhook ID and Channel. Follow the guide at https://developers.mattermost.com/integrate/webhooks/incoming/ for webhook setup.
5. Adjust the Schedule (Optional): The workflow is set to run every Monday at 6am. Modify the "Schedule Trigger" node if you need a different schedule.
6. Test the Workflow: Run the workflow manually to ensure all connections are working properly, and check the output to verify the reports are being generated correctly.

How to customize this workflow to your needs

This workflow can be customized in several ways to better suit your specific requirements:

- Topic Selection: Modify the topics in the "Set Data" node to focus on industries or subjects relevant to your PR strategy. Add multiple topics to cover different client interests or market segments.
- Filtering Criteria: Adjust the "Upvotes Requirement Filtering" node to change the minimum upvotes threshold, and modify the filtering conditions to include or exclude certain types of posts.
- Analysis Parameters: Customize the prompts in the "Comments Analysis," "News Analysis," and "Stories Report" nodes to focus on specific aspects of the content or comments. Adjust the temperature settings in the Anthropic Chat Model nodes to control the creativity of the AI responses.
- Report Format: Modify the "Set Final Report" node to change the structure or content of the final reports. Add or remove sections based on your specific reporting needs.
- Distribution Method: Replace or supplement the Mattermost notification with email notifications, Slack messages, or other communication channels. Add additional storage options beyond Google Drive.
- Schedule Frequency: Change the "Schedule Trigger" node to run the workflow more or less frequently, or set up multiple triggers for different topics or clients.
- Integration with Other Systems: Add nodes to integrate with your CRM, content management system, or project management tools. Create connections to automatically populate content calendars or task management systems.
by Oskar
This workflow with an AI agent is designed to navigate through a page to retrieve a specific type of information (in this example: social media profile links). The agent is equipped with 2 tools:

- text tool: to retrieve all the text from the page,
- URLs tool: to extract all possible links from the page.

💡 You can edit the prompt and the JSON schema connected to the agent in order to return other data than social media profile links.

👉 This workflow uses Supabase as storage (input/output). Feel free to change it to any other database of your choice.

🎬 See this workflow in action in my YouTube video.

How it works?

The workflow uses the input URL (website) as a starting point to retrieve the data (e.g. example.com). Using the "URLs tool", the agent is able to retrieve all links from the page and navigate to them. For example, if you want to retrieve contact information, the agent will try to find a subpage that might contain this information (e.g. example.com/contact) and extract it using the text tool. A sketch of both tools is shown below.

Set up steps

1. Connect a database with input data (website addresses) or pin sample data to the trigger node.
2. Configure the crawling agent to retrieve the desired data (e.g. modify the prompt and/or parsing schema).
3. Set credentials for OpenAI.
4. Optionally: split the agent tools into separate workflows.

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
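A minimal Python sketch of what the two agent tools do, using `requests` and BeautifulSoup, is shown below. It illustrates the idea rather than the workflow's actual implementation.

```python
# Sketch of the agent's two tools: link extraction and plain-text extraction.
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def extract_links(url: str) -> list[str]:
    # The "URLs tool": collect every absolute link so the agent can decide
    # which subpage (e.g. /contact) to visit next.
    html = requests.get(url, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")
    return sorted({urljoin(url, a["href"]) for a in soup.find_all("a", href=True)})

def extract_text(url: str) -> str:
    # The "text tool": strip markup and return the visible text of the page.
    html = requests.get(url, timeout=15).text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

print(extract_links("https://example.com"))
```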
by Gregor
Awork currently does not support checking for open subtasks or open dependencies when setting a task status to done. This workflow offers a simple workaround that adds this functionality to Awork and notifies users when triggered. Multiple configuration options are available.

How it works

- Triggered via an Awork webhook call on status changes of tasks
- If a task is marked as done, its subtasks and/or dependent tasks are checked for their status
- If unfinished tasks are found, a rollback to the previous status is performed and the user gets notified

Set up steps

1. Add the webhook call to Awork
2. Configure Awork API credentials
3. Set up the workflow configuration via the setup node, e.g. user notification text, restricting to subtask/dependency checks, etc.
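Conceptually, the rollback looks like the Python sketch below. Note that the Awork endpoint paths shown here are hypothetical placeholders; consult the Awork API documentation for the real routes and payload shapes before adapting anything.

```python
# Conceptual sketch of the rollback logic with *hypothetical* Awork routes.
import requests

AWORK_API = "https://api.awork.com/api/v1"  # Awork REST base URL
HEADERS = {"Authorization": "Bearer <awork-api-key>"}

def has_unfinished_subtasks(task_id: str) -> bool:
    # Hypothetical route; adjust to the actual subtask listing endpoint.
    subtasks = requests.get(f"{AWORK_API}/tasks/{task_id}/subtasks", headers=HEADERS).json()
    return any(t["taskStatus"]["type"] != "done" for t in subtasks)

def handle_status_change(task_id: str, previous_status_id: str) -> None:
    # Called from the webhook when a task is set to done.
    if has_unfinished_subtasks(task_id):
        # Roll the task back to its previous status; the workflow then notifies the user.
        requests.post(
            f"{AWORK_API}/tasks/{task_id}/setstatus",  # hypothetical route
            headers=HEADERS,
            json={"taskStatusId": previous_status_id},
        )
```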
by Joseph
Note: Now includes an Apify alternative to RapidAPI. (Some users can't create new accounts on RapidAPI, so I have added an alternative for you. But as soon as you are able to get access to RapidAPI, please use that option, as it returns more detailed data.) Scroll to the bottom for the Apify setup guide.

This n8n workflow automates LinkedIn lead generation, enrichment, and activity analysis using Apollo.io, RapidAPI, Google Sheets, and Mail.so. Perfect for sales teams, founders, B2B marketers, and cold outreach pros who want personalized lead insights to drive better conversion rates.

⚙️ How This Workflow Works

The workflow is broken down into several key steps, each designed to help you build and enrich a valuable list of LinkedIn leads:

1. 🔑 Lead Discovery (Keyword Search via Apollo)
- Pulls leads using Apollo.io's API based on keywords, industries, or job titles.
- Saves lead name, title, company, and LinkedIn URL to your Google Sheet.
- You can replace the form trigger node with a webhook, WhatsApp, Telegram, etc.: any way for you to send your query variables in to initiate the workflow.

2. 🧠 Username Extraction (from LinkedIn URL)
- Extracts the LinkedIn username from profile URLs using a simple script node (see the sketch after this section).
- This is required for further enrichment via RapidAPI.

3. ✉️ Email Lookup (via Apollo User ID)
- Uses the Apollo User ID to retrieve the lead's verified work email.
- Ensures high-quality leads with reliable contact info.
- To double-check that the email is currently valid, we use the Mail.so API and filter out emails that fail the deliverability and MX-record checks. We don't want to risk sending emails to addresses that no longer exist, right?

4. 🧾 Profile Summary Enrichment (via RapidAPI)
- Queries the LinkedIn Data API to fetch a lead's profile summary/bio.
- Gives you a deeper understanding of their background and expertise.

5. 📰 Recent Activity Collection (Posts & Reposts)
- Retrieves recent posts or reposts from each lead's profile.
- Great for tailoring outreach with reference to what they're currently talking about.

6. 🗂️ Leads Database Update
- All enriched data is written to the same Google Sheet.
- New columns are filled in without overwriting existing data.

✅ Smart Retry & Row Status Logic

Every subworkflow includes a fail-safe mechanism to ensure:

- ✅ Each row has status columns (e.g., done, failed, pending).
- 🕒 A scheduled retry workflow resets failed rows to pending after 2 weeks (customizable).
- 💬 This gives failed enrichments another chance to be processed later, reducing data loss.

📋 Google Sheets Setup

- Template 1: Apollo Leads Scraping & Enrichment
- Template 2: Enriched Leads Database

Make a copy to your Drive and use it. Columns will be filled in as each subworkflow runs (email, summary, interests, etc.).

🔐 Required API Keys

To use this workflow, you'll need the following credentials:

🧩 Apollo.io
Sign up and get your key here: Apollo.io API Keys
⚠️ Important: Toggle the "Master API Key" option to ON when generating your key. This ensures the same key can be used for all Apollo endpoints in this workflow.

🌐 RapidAPI (LinkedIn Data API)
Subscribe to the API here: LinkedIn Data API on RapidAPI
Use the key in the x-rapidapi-key header in the relevant nodes.

✉️ Mail.so
Sign up and get your key here: Mail.so API

> 💡 For both APIs, set up the credentials in n8n as "Generic Credential" types. This way, you won't need to reconfigure the headers in each node.

🛠️ Customization Options

- Modify the Apollo filters (location, industry, seniority) to target your ideal customers.
- Change the retry interval in the scheduler (e.g., weekly instead of every 2 weeks).
- Connect the database to your email campaign tool, like Mailchimp or Instantly.ai.
- Replace the AI nodes with your desired AI agents and customize the system messages further to get the desired results.

🆕 Apify Setup Guide

To use the Apify alternative, do the following:

1. Log in to Apify, then open this link: https://console.apify.com/actors/2SyF0bVxmgGr8IVCZ/
2. Click on Integrations and scroll down to API Solutions, then select "Use API endpoints".
3. Scroll to "Run Actor synchronously and get dataset items" and copy the actor endpoint URL, then paste it into the placeholder inside the HTTP node of the Apify alternative flow ("apify-actor-endpoint").

That's it, you are set to go.

I am available for custom n8n workflows. If you like my work, please get in touch with me by email at joseph@uppfy.com
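The username extraction from step 2 is simple enough to show in full. A Python equivalent of the "simple script node" might look like this (the workflow's actual node runs JavaScript, so treat this as an illustration of the logic):

```python
# Pull the public username out of a LinkedIn profile URL for RapidAPI enrichment.
import re

def extract_linkedin_username(profile_url: str) -> str | None:
    # Matches the segment after "linkedin.com/in/", stopping at /, ?, or #.
    match = re.search(r"linkedin\.com/in/([^/?#]+)", profile_url)
    return match.group(1) if match else None

print(extract_linkedin_username("https://www.linkedin.com/in/satyanadella/"))  # -> "satyanadella"
```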
by Alex Kim
This workflow leverages n8n to perform automated Google Maps API queries and manage data efficiently in Google Sheets. It's designed to extract specific location data based on a given list of ZIP codes and categories.

Features

- Queries the Google Maps API for location data using predefined ZIP codes and subcategories.
- Filters, de-duplicates, and organizes data into structured rows in Google Sheets.
- Implements exponential backoff retries to handle API rate limits (see the sketch below).
- Logs and updates statuses directly in Google Sheets for easy tracking.

Prerequisites

- Google OAuth Credentials: A configured Google Cloud project with Google Maps API and Sheets API access.
- Google Sheets: A sheet with ZIP codes and categories defined (e.g., "AZ Zips").
- n8n Setup: A running instance of n8n with credentials configured for Google OAuth.

Setup Instructions

1. Prepare Google Sheets
- Add the ZIP codes to the "AZ Zips" sheet.
- Define subcategories in another sheet (e.g., "Google Maps Categories").
- Provide the sheet's URL in the Settings node of the workflow.

2. Configure API Access
- Set up Google OAuth credentials for the Maps and Sheets APIs in n8n.
- Ensure your API key has access to the places.searchText endpoint.

3. Workflow Customization
- Modify the textQuery parameters in the GMaps API node to match your query needs.
- Adjust trigger intervals as required (e.g., manual or scheduled execution).

4. Run the Workflow
- Execute the workflow manually or schedule periodic runs to keep your data updated.

Notes

- This workflow includes robust error handling to retry failed API calls with exponential backoff.
- All data is organized and logged directly in Google Sheets for easy reference and updates.
- For more information or issues, feel free to reach out!
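The exponential backoff mentioned above follows a standard pattern: on a rate-limit response, wait, double the delay, and retry. Here is a hedged Python sketch against the Places API `places:searchText` endpoint; the workflow implements the same idea with n8n nodes, and the API key, field mask, and query below are illustrative placeholders.

```python
# Sketch of exponential-backoff retries around the Places API searchText endpoint.
import time
import requests

API_KEY = "<google-maps-api-key>"
URL = "https://places.googleapis.com/v1/places:searchText"

def search_places(query: str, max_retries: int = 5) -> dict:
    headers = {
        "X-Goog-Api-Key": API_KEY,
        "X-Goog-FieldMask": "places.displayName,places.formattedAddress",
    }
    for attempt in range(max_retries):
        resp = requests.post(URL, headers=headers, json={"textQuery": query})
        if resp.status_code == 429:      # rate limited: back off and retry
            time.sleep(2 ** attempt)     # 1s, 2s, 4s, 8s, ...
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("Exhausted retries against the Places API")

print(search_places("plumber in 85001"))
```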
by JPres
👥 Who Is This For?

Content creators, marketing teams, and channel managers who need to streamline video publishing with optimized metadata and scheduled releases across multiple videos.

🛠 What Problem Does This Solve?

Manual YouTube video publishing is time-consuming and often results in inconsistent descriptions, tags, and scheduling. This workflow fully automates:

- Extracting video transcripts via Apify for metadata generation
- Creating SEO-optimized descriptions and tags for each video
- Setting videos to private during initial upload (critical for scheduling)
- Implementing scheduled publishing at strategic times
- Maintaining consistent branding and formatting across all content

🔄 Node-by-Node Breakdown

| Step | Node | Purpose |
|------|------|---------|
| 1 | Every Day (Scheduler) | Trigger workflow on a regular schedule |
| 2 | Get Videos to Harmonize | Retrieve videos requiring metadata enhancement |
| 3 | Get Video IDs (Unpublished) | Filter for videos that need publishing |
| 4 | Loop over Video IDs | Process each video individually |
| 5 | Get Video Data | Retrieve metadata for the current video |
| 6 | Loop over Videos with Parameter IS | Set parameters for processing |
| 7 | Set Videos to Private | Ensure videos are private (required for scheduling) |
| 8 | Apify: Get Transcript | Extract video transcript via Apify |
| 9 | Fetch Latest Videos | Get most recent channel content |
| 10 | Loop Over Items | Process each video item |
| 11 | Generate Description, Tags, etc. | Create optimized metadata from transcript |
| 12 | AP Clean ID | Format identifiers |
| 13 | Retrieve Generated Data | Collect the enhanced metadata |
| 14 | Adjust Transcript Format | Format transcript for better processing |
| 15 | Update Video's Metadata | Apply generated description and tags to video |

⚙️ Pre-conditions / Requirements

- n8n with YouTube API credentials configured
- Apify account with API access for transcript extraction
- YouTube channel with upload permissions
- Master templates for description formatting
- Videos must be initially set to private for scheduling to work

⚙️ Setup Instructions

1. Import this workflow into your n8n instance.
2. Configure YouTube API credentials with proper channel access.
3. Set up the Apify integration with an appropriate actor for transcript extraction.
4. Define scheduling parameters in the Every Day node.
5. Configure description templates with placeholders for dynamic content.
6. Set default tags and customize tag generation rules.
7. Test with a single video before batch processing.

🎨 How to Customize

- Adjust prompt templates for description generation to match your brand voice.
- Modify tag selection algorithms based on your channel's SEO strategy.
- Create multiple publishing schedules for different content categories.
- Integrate with analytics tools to optimize publishing times.
- Add notification nodes to alert when videos are successfully scheduled.

⚠️ Important Notes

- Videos MUST be uploaded as private initially: the Publish At logic only works for private videos that haven't been published before (see the sketch below).
- Publishing schedules require videos to remain private until their scheduled time.
- Transcript quality affects metadata generation results.
- Consider YouTube API quotas when scheduling large batches of videos.

🔐 Security and Privacy

- API credentials are stored securely within n8n.
- Transcripts are processed temporarily and not stored permanently.
- Webhook URLs should be protected to prevent unauthorized triggering.
- Access to the workflow should be limited to authorized team members only.
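The scheduling mechanism the Important Notes describe relies on the YouTube Data API's `status.publishAt` field, which only takes effect on private, never-published videos. Here is a hedged Python sketch of that call; the token file, video ID, and timestamp are placeholders.

```python
# Sketch: schedule a private video's release via the YouTube Data API v3.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file("token.json")  # placeholder token file
youtube = build("youtube", "v3", credentials=creds)

youtube.videos().update(
    part="status",
    body={
        "id": "VIDEO_ID",  # placeholder: the unpublished video to schedule
        "status": {
            "privacyStatus": "private",           # must stay private until publish time
            "publishAt": "2025-07-01T15:00:00Z",  # RFC 3339 timestamp for the release
        },
    },
).execute()
```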
by Davide
Voiceflow is a no-code platform that allows you to design, prototype, and deploy conversational assistants across multiple channels, such as chat, voice, and phone, with advanced logic and natural language understanding. It supports integration with APIs, webhooks, and even tools like Twilio for phone agents. It's perfect for building customer support agents, voice bots, or intelligent assistants.

This workflow connects n8n and Voiceflow with tools like Google Calendar, Qdrant (vector database), OpenAI, and an order tracking API to power a smart, multi-channel conversational agent. There are 3 main webhook endpoints in n8n that Voiceflow interacts with:

- n8n_order: receives user input related to order tracking, queries an API, and responds with the tracking status.
- n8n_appointment: processes appointment bookings, reformats the date input using OpenAI, and creates a Google Calendar event.
- n8n_rag: handles general product/service questions using a RAG (Retrieval-Augmented Generation) system backed by Google Drive document ingestion, a Qdrant vector store for search, and OpenAI models for context-based answers.

Each webhook is connected to a corresponding "Capture" block inside Voiceflow, which sends data to n8n and waits for the response.

How It Works

This n8n workflow integrates Voiceflow for chatbot/voice interactions, Google Calendar for appointment scheduling, and RAG for knowledge-based responses. Here's the flow:

- Trigger: Three webhooks (n8n_order, n8n_appointment, n8n_rag) receive inputs from Voiceflow (chat, voice, or phone calls). Each webhook routes requests to specific functions:
  - Order Tracking: Fetches order status via an external API.
  - Appointment Scheduling: Uses OpenAI to parse dates, creates Google Calendar events, and confirms via WhatsApp (see the sketch below).
  - RAG System: Queries a Qdrant vector store (populated with Google Drive documents) to answer customer questions using GPT-4.
- AI Processing:
  - OpenAI Chains: Convert natural language dates to Google Calendar formats and generate responses.
  - RAG Pipeline: Embeds documents (via OpenAI), stores them in Qdrant, and retrieves context-aware answers.
  - Voiceflow Integration: Routes responses back to Voiceflow for multi-channel delivery (chat, voice, or phone).
- Outputs:
  - Confirmation messages (e.g., "Event created successfully").
  - Dynamic responses for orders, appointments, or product support.

Setup Steps

Prerequisites:

- APIs: Google Calendar & Drive OAuth credentials.
- Qdrant vector database (hosted or cloud).
- OpenAI API key (for GPT-4 and embeddings).

Configuration:

1. Qdrant Setup: Run the "Create collection" and "Refresh collection" nodes to initialize the vector store. Populate it with documents using the Google Drive → Qdrant pipeline (embeddings generated via OpenAI).
2. Voiceflow Webhooks: Link Voiceflow's "Captures" to n8n's webhook URLs (n8n_order, n8n_appointment, n8n_rag).
3. Google Calendar: Authenticate the Google Calendar node and set event templates (e.g., summary, description).
4. RAG System: Configure the Qdrant vector store and OpenAI embeddings nodes. Adjust the Retrieve Agent's system prompt for domain-specific queries (e.g., electronics store support).
5. Optional: Add Twilio for phone-agent capabilities, and customize the OpenAI prompts for tone/accuracy.

PS: You can import a Twilio number and assign it to your agent to turn it into a Phone Agent.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
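For reference, the appointment branch boils down to an `events.insert` call against the Google Calendar API once OpenAI has normalized the spoken date into a structured format. A hedged Python sketch follows; the token file, calendar ID, summary, and times are placeholders, not values from the template.

```python
# Sketch of the appointment step: create the Google Calendar event.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file("token.json")  # placeholder token file
calendar = build("calendar", "v3", credentials=creds)

event = {
    "summary": "Customer appointment",                # placeholder event template
    "description": "Booked via the Voiceflow agent",
    "start": {"dateTime": "2025-07-01T10:00:00", "timeZone": "Europe/Rome"},
    "end":   {"dateTime": "2025-07-01T10:30:00", "timeZone": "Europe/Rome"},
}
created = calendar.events().insert(calendarId="primary", body=event).execute()
print(created["htmlLink"])  # confirmation link returned to Voiceflow
```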
by PUQcloud
Overview

The Docker NextCloud WHMCS module leverages a sophisticated workflow for n8n, designed to automate the comprehensive deployment, configuration, and management processes for NextCloud and NextCloud Office services. Through its intuitive API interface, the workflow securely receives commands and orchestrates predefined tasks via SSH on your Docker-hosted server, ensuring streamlined operations and efficient management.

Prerequisites

- You must deploy your own dedicated n8n server to manage workflows effectively. Alternatively, you may opt for the official n8n cloud-based solutions accessible via the n8n Official Site.
- Your Docker server must be accessible via SSH with the necessary permissions.

Installation Steps

Install the Required Workflow on n8n

You can select from two convenient installation options:

- Option 1: Use the Latest Version from the n8n Marketplace. The latest workflow templates are continuously updated and available on the n8n marketplace. Explore all templates provided by PUQcloud directly here: PUQcloud on n8n
- Option 2: Manual Installation. Each module version includes a bundled workflow template file. Import this workflow file directly into your n8n server manually.

n8n Workflow API Backend Setup for WHMCS

Configure API Webhook and SSH Access

1. Create a secure Basic Auth credential for Webhook API interactions within n8n.
2. Create an SSH credential within n8n to securely communicate with the Docker host.

Modify Template Parameters

Adjust and update the following critical parameters to match your deployment specifics:

- server_domain: Set this to the domain of your WHMCS Docker server.
- clients_dir: Specify the directory where user data and related resources will be stored.
- mount_dir: The standard mount point for container storage (recommended to remain unchanged).

Do not alter the following technical parameters to avoid workflow disruption: screen_left, screen_right.

Deploy-docker-compose Configuration

Fine-tune Docker Compose configurations tailored specifically for these critical operational scenarios:

- Initial service provisioning and setup
- Service suspension and subsequent unlocking
- Service configuration updates
- Routine service maintenance tasks

nginx Configuration Management

Enhance and customize proxy server configurations using the dedicated nginx workflow element:

- main: Define specialized parameters within the server configuration block.
- main_location: Set custom headers, caching policies, and routing rules for the root location.

Bash Script Automation

Automate Docker container management and related server tasks through dynamically generated Bash scripts within n8n. Scripts execute securely via SSH and provide responses in JSON or plain text formats for easy parsing and logging (see the sketch below). Scripts are conveniently linked directly to the SSH action elements. You retain complete flexibility to adapt or extend these scripts as necessary to meet your precise operational requirements.
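To illustrate the SSH-driven script execution, here is a conceptual Python sketch using paramiko. The host, user, key path, script path, and JSON reply shape are all placeholders; in the real module the Bash scripts are generated dynamically by n8n and invoked from its SSH action elements.

```python
# Conceptual sketch: run a generated script on the Docker host and parse its JSON reply.
import json
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("docker-host.example.com", username="root", key_filename="/path/to/key")

# The module generates Bash scripts dynamically; here we just invoke one by path.
stdin, stdout, stderr = client.exec_command("bash /opt/puq/provision_nextcloud.sh")
result = json.loads(stdout.read().decode())

print(result)  # e.g. {"status": "success", "container": "nextcloud_user42"} (illustrative)
client.close()
```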